AEA Conference 2022

Preconference Workshop: Transformative Mixed Methods: Supporting Equity & Justice by Donna Mertens

A key reason that I decided to take this pre-conference workshop was that I wanted to learn from Donna Mertens. I really like her writing and wanted to have a chance to learn from her in person. She did not disappoint! While I didn't find there was much about mixed methods, per se, there was a lot about transformation, equity, and justice. Here are some things I learned/re-learned:

  • In France there is a law against the government collecting data on race. It comes from WWII, when government data on Jewish people facilitated sending them to concentration camps.
  • Ethics is the start of every decision we make in evaluation.
  • If you are not challenging oppressive structures, you are complicit in the status quo.
  • You can challenge your client/commissioner – e.g., if they ask for a survey for a summative evaluation, you can ask them if that’s really going to be transformative.
  • mixed methods – what is the synergy between the quant and the qual? what do you gain by bringing quant and qual in dialogue with each other?
  • transformative paradigm
    • axiology (i.e., nature of ethics & values) – culturally responsive, promotes social/environmental/economic justice and human rights, address inequities, reciprocity (what do you leave the community so they can sustain the change when the evaluator leaves?), resilience, interconnectedness (living & non-living), relationships
    • ontology (i.e., nature of reality) – reality is multi-faceted, historically situated, consequences of privilege
    • epistemology (i.e., the nature of knowledge & relationship between knower and that which would be known) – interactive, trust, coalition building
    • methodology (i.e., nature of systematic inquiry) – transformative, dialogic, culturally responsive, mixed methods, policy change as part of methodology
  • transformative mixed methods design:
    • Build relationships
      • often historical experiences of research/evaluation that are extractive and oppressive; researchers need to earn trust
      • identify existing community action groups and understand the history of their efforts; identify formal & informal leaders; identify community needs/gaps/strengths/assets
    • Contextual analysis
      • cultural, historical, political, environmental, legislative, power mapping
      • policy analysis (what's written and unwritten; what's written but not enacted)
    • Pilot interventions
      • collect data, make mid-course corrections
    • Implement intervention
      • collect data for process evaluation
      • collect data on unintended/unanticipated outcomes
    • Determine effectiveness
      • outcome evaluation
    • Use findings for transformative purposes
      • include in contract the importance of working with the community – from relationship building at the start all the way through to sharing the findings at the end
      • if the community is involved throughout the evaluation, they will already know the findings and will not need to wait for the final report to find out (also, reporting findings along the way will make sure you are reporting data back to the community in a reasonable time)
  • You can say that you'll work with an expected goal of reducing inequities and increasing justice and that you'll work in respectful ways; you can't guarantee that you'll make things better and can't guarantee you won't cause harm, because we don't know what will happen
  • http://transformativeresearchandevaluation.com/

Opening Plenary: Re(Shaping) Evaluation: Decolonization, New Actors, & Digital Data. Edgar Villanueva interviewed by Nicky Bowman.

Villanueva wrote the book Decolonizing Wealth. I will admit that I have this book in my big pile of books to read but haven't gotten around to reading it! After hearing this keynote, I'm even more excited to read it. Here are some things he said during the keynote that resonated with me:

  • we learned the names of the colonizers’ ships (the Nina, the Pinta, the Santa Maria, the Mayflower), but not the names of the Indigenous lands and people
  • colonization is like a virus that wipes out anything that is not like the dominant culture
  • the US is working on Truth & Reconciliation legislation re: Indian Boarding Schools
  • none of us has ever lived in a world that wasn’t actively being colonized. It can be violent and it can be subtle.
  • we can’t collectively heal without acknowledging how we got here
  • we need to change 4 things:
    1. people: more diversity of perspectives in leadership
    2. resources: who has them and who makes decisions about them. who has the microphone.
    3. stories: need to shift away from the deficit mindset, see the strengths
    4. rules: spoken & unspoken policies, need more equitable policies, but also need to become aware of and change the unspoken rules that limit our work

Concurrent Session: Walking the talk: Bringing Ontological Thinking into Evaluation Practice by Jennifer Billman and Eric Einspruch

Journal article: Framing Evaluation in Reality: An Introduction to Ontologically Integrative Evaluation

Thursday Plenary: Co-creation of Strategic Evaluations to Shift Power Moderator: Ayesha Boyce Speakers: Elizabeth Taylor-Schiro/Biidabinikwe, Gabriela Garcia, Melanie Kawano-Chiu, and Subarna Mathes 

Here are some things that the panelists said that resonated with me:

  • Ayesha Boyce:
    • equity is context-specific
  • Gabriela Garcia:
    • equity is not enough. The next step is collective liberation
    • At Beyond, they use a culturally-responsive evaluation framework, start all evaluations in a visioning session, ensure the evaluation is grounded in community values
  • Elizabeth Taylor-Schiro/Biidabinikwe:
    • communities striving for collective liberation don’t have power and that’s the problem. Power is needed to draw on their strengths, move toward sustainability and self-determination
    • it should be the community leading the evaluation, supported by evaluators, rather than ‘co-creating’ the evaluation
  • Melanie Kawano-Chiu:
  • Subarna Mathes:
    • rigour = degree of confidence that the program has led to an outcome [different than rigour in the post-positivist sense]
    • we need to push against the view of rigour that is narrowly defined, that prioritizes a worldview of “one reality” or “objectivity”

Concurrent session: Interactive tool to promote responsible use and understanding of culturally responsive and equity-focused evaluation by Blanca Guillen-Woods, Felisa Gonzales, Katrina Bledsoe, Kantahyanee Murray

  • https://slp4i.com/the-eval-matrix/ is an online tool that helps you to choose from various different equity-focused/culturally-responsive evaluation approaches
  • 7 key principles, 3 focus areas (individual, interpersonal, structural levels)
  • This tool is really cool and I’m definitely going to share it with my students, as they often ask how to choose an approach (or approaches) when designing an evaluation

Concurrent Session: Design Sprint: How Researchers Can Share Power with Communities Involved in Evaluations by Gloriela Iguina-Colón and Brit Henderson

These presenters took us through a workshop on power sharing. Here are some things that they talked about that resonated with me:

  • power is often thought of in the sense of authority, control – power over other people or things
  • MLK described power as "Power properly understood is nothing but the ability to achieve purpose. It is the strength required to bring about social, political, and economic change."
  • can have power with (collaborate with others to find common ground), power to (believe in people’s ability to shape their own lives), and power within [which I didn’t catch the meaning of in my notes so I just Googled it and found this: “the sense of confidence, dignity and self-esteem that comes from gaining awareness of one’s situation and realizing the possibility of doing something about it.”]
  • power levers:
    • resources
    • access
    • opportunities
  • power sharing = recognizing the power levers that you have and actively choosing to leverage these to build collective strength
  • positionality – “how our social identities and experiences influence the choices we make in the research process and how those factors shape the way others see us and give us power and/or insight in a specific research context.”
    • consider experiences (interactions with the topic; lived experience of the topic), social identities (it’s context-dependent which are valued or not valued), perspectives (about the topic; understanding systems of oppression); identifying these can provide helpful insights (e.g., when you share an identity with participants) and biases (e.g., when you don’t share an identity (or an intersection of identities) and have assumptions/biases)
      • in addition to individual positionality, think about team positionality
  • reflexivity – "an attitude of attending systematically to the context of knowledge construction, especially to the effect of the researcher, at every step of the research practice."
    • examination, attitude, process related to the topic; not just about identifying these things, but also what insights this will give me and where I might have knowledge gaps
  • opportunity spaces: "points in the [evaluation] process during which you can apply power levers to facilitate meaningful participation among and share decision-making power with [community members]"
    • each of the steps of the evaluation process is an opportunity for meaningful participation: evaluation design, data collection, data analysis, interpretation of results, and dissemination
  • facilitation – provide enough structure so everyone can be heard; be mindful of different views of evaluation
  • when considering the key people/groups in an evaluation, ask:
    • who has the most power/privilege in this context?
    • who will be most impacted by the evaluation?

Concurrent Session: Ethics for Evaluation: Can We Go Beyond Doing No Harm to Tackle Bad and Do Good? by Penny Hawkins, Donna Mertens, and Tessie Catsambas

Concurrent Session: Equitable Evaluation Discussion Guide by Maggie Jones, Natasha Arora, and Elena Kuo

  • Centre for Community Health & Evaluation at Kaiser Permanente (Seattle) [seemed quite similar to CHEOS]
  • equity-focused conversation about the evaluation design with someone from their organization who is not part of the project to get a different perspective
  • they created a guide that includes pre-work before the meeting, then a meeting where you do a consultation with reviewers
  • helped them to think from multiple angles (not just “what’s in the RFP?”)
  • helped them to discuss assumptions and implications
  • articulate what they can and cannot do to address equity
    • might not be able to do something in the current evaluation, but if you don't identify ideas, you won't ever do them – so it may be something to put in the next proposal if it's too late to do it in this one
  • they tell funders that the EDI review is part of their process (i.e., we will develop the plan, put it through the EDI review, and may come back to the funder with new ideas)
  • ultimately they would like to have a systematic follow-up process where people document what they do (trying to document changes that happened due to the EDI review process) to build evidence about whether this process makes a difference

Concurrent Session: Identifying Gaps in the Research on Professionalizing Evaluation: What Do We Need? by Amanda Sutter, Esther Nolton, Rebecca Teasdale, Rachael Kenney, Dana Wanzer

Concurrent Session: Creative Practices for Evaluators by Chantal Hoff & Susan Putnins

  • reminded me a lot of Jennica & May from ANDImplementation

Concurrent Session: Who Are We? Studies on Evaluators’ Values, Ethics and Ontologies by John M. LaVelle, Michael Morris, Clayton Stephenson, Scott I Donaldson, Justin Hacket, Paidamoyo Chikate, Jennifer Billman

  • VOPEs have ethics, standards, and competencies, but we as evaluators interpret them through our own lenses
  • values = a set of goals and motivations that serve as a guiding group of principles, affect decisions/attitudes/behaviours, come from many sources, influence our practice

Concurrent Session: Mapping Distinctions in the Implementation of Learning Health Systems (LHS) by Anna Perry & Doug Easterling

  • from the National Academy of Medicine, but the concept is too high-level and ambiguous to guide the actual work of becoming an LHS
  • in the US, electronic health records were adopted in the early 2000s; the Affordable Care Act required the use of data to inform the health system
  • Academic Health Centres were not early adopters of LHS because they were focused on research to build knowledge vs. continuous improvement type stuff
  • hypothesis is that LHS is supposed to improve patient care, patient outcomes, and staff satisfaction (since they are more engaged)

Concurrent Session: Who are We? Studies of Evaluator Beliefs, Identity, and Ethics by Rachel Kenney, Bianca Montrosse-Moorhead, Amanda Sutter, Christina Peterson, Rachel Ladd, Betty Onyura, Abigail Fisher, Qian Wu, Shrutikaa Rajkumar, Sarick Chapagain, Judith Nassuna, Latika Nirula

  • Ladd & Peterson discussed consensual qualitative analysis
  • Tin Vo presented on behalf of Betty Onyura, who was not able to attend. Talked about how the commodification of evaluation work is in tension with trying to support equity and social justice
  • an audience member suggested the word “constituent” instead of “stakeholder” [as a lot of us are trying to find a word to replace “stakeholder”]

Concurrent Session: Playing with Uncertainty: Leaning into Improv for Effective Evaluation by Daniel Tsin, Libby Smith, and Tiffany Tovey

  • improv as reflection-in-action
  • improv as a mindset – every idea matters
  • thinking on your feet, using a different part of your brain, building on ideas, chance to be brave – can all be useful in evaluation
  • activity: Zip Zap Zop – toss a ball and say “zip,” “zap”, “zop” in that order and when someone drops the ball, we all cheer “woop!”
    • a chance to experience failure and turn it into a celebration
    • shared experience of a group
    • a plan for when we messed up
    • have to pay attention the whole time – not planning what to do, but being present, acknowledging what is being said/done
    • facilitator is not in control
  • activity: Yes, and…
    • “and” is generative, while “but” feels more like you are shutting someone down
    • you will notice "but" in everyday life when you could have used an "and"
    • sometimes you want to be generative and sometimes you want to prioritize (e.g., don’t want to keep “and”ing when building a program ToC and end up with trying to do everything).
  • adrienne maree brown's Emergent Strategy

Concurrent Session: What Should I Do? Examining Uncertainty, Decision Points, and Pushback in Evaluation Practice by Rebecca Teasdale, Tiffany Tovey, Grettel Arias Orozco, Julianne Zemaitis, Onyinyechukwu Onwuka, Cherie Avent, Christina Peterson, Allison Ricket, Mandy White, Kelli Schoen, Daniel Kloepfer, Natalie Wilson

  • evaluations require interpersonal skills, but these are not taught in evaluation courses or in evaluation texts
  • it's a human tendency to be defensive, and defensiveness will increase as a conversation proceeds, since what a person hears can become a distortion of the intended message
  • Kahlke, 2014 – Generic Qualitative Approaches: Pitfalls and Benefits of Methodological Mixology
  • Braun & Clarke, 2022
  • evaluators are always dealing with uncertainty
  • different people have different level of tolerance for uncertainty (and an evaluator’s tolerance might be different than that of the people they work with)
  • aspects of uncertainty:
    • probability – quant representation about the amount of uncertainty
    • ambiguity – different ways to interpret findings
    • vagueness – how detailed the language is
  • talk with the client before you start – what is the stake of the decision? what is their tolerance for uncertainty – how certain do they need to be? This can inform choice of methods, etc.
  • uncertainty can be leveraged to drive transformational change by creating dialogue about the unknown and asking more interesting questions about the unknown (e.g., if data is not available, ask why there is no data)

My post-conference "to do" list


Webinar: Introduction to Thematic Analysis: Understanding, conceptualising, and designing (reflexive) TA for quality research

Date: 29 November 2022

Offered by NVivo

Presenters: Virginia Braun and Victoria Clarke

Summary

"The process is not the purpose" – this quote really resonated for me, as did their note of "fitting method to purpose". They aren't trying to say that everyone should do reflexive TA. They are saying that you should know what type of TA you are using and choose it purposely for what you are trying to achieve. And then do the analysis in a thoughtful way, a way that aligns (your ontology/epistemology should be consistent with your methods). I quite enjoyed this webinar and I think I should check out their book!

Detailed Notes


  • thematic analysis is an approach, not a single method, more like a family of things
  • family differences:
    • paradigmatic differences – what are we (conceptually) doing here? (e.g., describing/uncovering a single reality, co-creating knowledge, etc.)
    • what paradigm are we operating in?
      • post-positivist – "small q" (not using numbers, but still using a post-positivist understanding of the world)
      • non-positivist – “big Q” or “fully qualitative”
      • view of subjectivity – a threat (as understood in postpositivism, subjective seen as leading to bias) or a resource (as understood in Big Q)?
    • research practice differences:
      • conceptual (discovery vs production)
      • practical (identifying themes vs. developing analysis; themes as inputs or outputs)
        • in reflexive TA they don’t talk about “emerging themes”, since they aren’t thinking that the knowledge is being discovered, it’s being produced
      • what is a theme?
        • united by focus/topic?
        • united by shared core concept?
  • Braun & Clarke’s way of clustering these approaches:
    • coding reliability
    • codebook versions (e.g., framework analysis)
    • reflexive versions (Braun & Clarke’s version is one of the most well-known of these)
    • other versions
  • Findlay’s differentiation
    • scientifically descriptive
    • artfully interpretative
  • TA is about developing, analyzing, and interpreting patterns across a qual dataset, involves systematic processes of data coding to develop themes
    • methods, not methodology (but you do still have a worldview/paradigm you are operating in when you choose and use a method)
    • focus on patterns of meaning aka themes across a dataset (but what's a pattern?)
    • processes of coding –> themes
    • reporting ‘themes’
  • Reflexive TA
    • conceptualizing of analysis: Research Question + Researcher + Data are embedded within our disciplinary training & scholarly knowledge, sociocultural meanings, and values
    • Big Q/artfully interpretative
    • research subjectivity value –> reflexivity is essential
    • coding is open and organic (codes as analytic ‘entity’)
    • themes as analytic ‘output’
    • multiple ways to do reflexive TA (theoretical alignments, etc.)
    • six phase process to do reflexive TA:
      • familiarization
      • coding
      • generating/constructing (initial) themes
      • theme development and review
      • refining, defining, and naming themes
      • writing/stopping (it’s never “complete”, so you need to pick a point to end)
      • NB. The process is not the purpose, nor a guarantor of quality.
      • NB. It’s not a linear process. Can go back to any phase at any time. Open & recursive.
  • Take home message: there is a diversity of TA; understand what type of TA you are using!
  • Common problems in published TA:
    • misunderstanding/misrepresenting (lack of diversity)
      • e.g., saying they are doing TA when they aren’t; aren’t adequately rationalizing why TA is used; “swimming (unknowingly) into the waters of positivism”
      • e.g., saying there is no guidance for TA (when there’s lots in the literature)
      • e.g., a paper saying it’s reflexive TA but then says used reflexivity to “manage their bias”
      • inadequate description (e.g., just saying “followed the 6 phases of…” but not how you did it)
      • too many themes – thin/fragmented
      • deploying theoretically incoherent quality standards (e.g., intercoder reliability, which is not a coherent strategy for reflexive TA; it would be appropriate for a coding reliability version of TA)
    • mismatches:
      • conceptual
      • methodology (practice-based)
      • reporting
      • quality criteria
    • Become a more knowing practitioner:
      • don’t treat TA as a single method
      • talk about what version of TA you used
      • make choices thoughtfully & appropriately and show you made choices
      • engage in conceptual and design thinking
    • conceptual thinking
      • research values (awareness)
        • ontological
        • epistemological assumptions
      • design thinking: design coherence/methodological integrity (Levitt et al., 2017)
    • 10 fundamentals of reflexive TA (for conceptualization & design coherence) (Braun & Clarke, 2022 paper – go read it!)
      • coding quality doesn't depend on multiple coders
      • analysis can’t be accurate or objective, but can be weaker/stronger
      • good quality coding/themes come from depth of engaging and distancing (the value of time!)
      • assumptions underpinning analysis need to be acknowledged – they don’t like “saturation” (they wrote a paper on this – a lot of qualitative approaches use this concept, but their paper talks about underlying assumptions of it)
    • 5 key challenges
      • fitting method to purpose (claims and practice)
      • working in a team and using reflexive TA coherently
      • time (tensions and pressures)
      • reporting (challenges in style, length, and from reviewers & editors)
      • choosing appropriate quality criteria (e.g., in health, COREQ is often seen as the way to go, but it has assumptions embedded in it)
    • quality and being a reflexive (TA) practitioner:
      • you are not a robot or a mechanic
      • you are an adventurer
        • values-led
        • reflexive
        • active
        • positioned
        • thoughtFULL (aka, don’t just think of this as “rules to follow”)
    • Q&A:
      • content analysis vs. TA – there are different versions of content analysis, just as there are different versions of TA. They wrote a paper comparing TA to content analysis, grounded theory, and something else.
    • Twitter: @ginnybraun @drviciclark

AEA Coffee Break: Five Core Processes for Enhancing the Quality of Qualitative Evaluation 

Presenter: Jennifer Jewiss

Date: 25 October 2022

The presenter had reflective questions for the audience, so I figured I’d put mine here, along with my notes from the webinar.

Reflective question 1: When I think of qualitative approaches to evaluation, the following words come to mind:

  • open
  • emergent
  • unexpected
  • nuance
  • deep
  • devalued by some
  • harder than people think

They put together a book on qualitative methods in evaluation with chapters authored by many evaluators, then identified themes of what makes up quality in qualitative inquiry:

  1. acknowledging who you are and what you bring to the work (you)
    • positionality
    • how do facets of your identity, history, etc. intersect
    • how does it enrich and limit your work as an evaluator?
      • what blind spots do you have? what learning do you need to do?
  2. building and maintaining trusting relationships (us)
    • throughout the entire evaluation
  3. employing sound & explicit methodology (process)
    • a wide array of things that can be done in qualitative inquiry
  4. staying true to the data (what we find)
    • hearing and representing the voices and perspectives of participants
    • be really conscious of what you might be bringing to bear on the data (our own priorities, biases, desires) – monitor that to “keep it in its proper place”
  5. fostering learning (what we learn)
    • helping everyone involved learn, including ourselves
    • open-ended learning helps people to surface that tacit knowledge
  • these things are not unique to qualitative
  • a cycle, not linear. They wanted a spiral/dynamic diagram, but publisher suggested a cycle would be more clear

Reflective question 2: how might one use this model to inform qualitative evaluation practice?

  • presenter suggested that each of these elements could be a prompt for reflective writing or reflective art (drawing, collages, etc.)

Webinar Notes: Using Evaluation in Context: Multicultural Validity and Cultural Competence in Evaluation

Date: 14 April 2022

Hosted by: AEA, Government Accountability Office

Speakers: Karen Kirkhart, Kathryn Newcomer, Giovanni Dazzo, Nicole Bowman, Terell Lasane (moderator)

This was a really great webinar. I furiously took notes of as many of the insightful things the panelists said as I could. The notes are imperfect (I tried to catch some direct quotes inside quotation marks, but some of this is paraphrased – any errors are my own!). If you are really interested in this topic, you can check out a recording of the webinar here, and there are a bunch of resources that the speakers shared at the end of this posting.

Karen Kirkhart:

  • multicultural validity is a “call to broaden the kinds of evidence that are considered in validity conversations”
  • limited views of "validity" promote social injustice – they silence
  • 5 sources of intersecting validity evidence
    1. methodological validity – the stuff we usually think of re: quant and qual (insufficient as the only source of evidence)
    2. theoretical evidence – “insights from social sciences and humanities and professions”, Indigenous wisdom; program theories (examine these for bias towards deficits and disadvantage)
    3. relational evidence – "how people relate to one another, to our planet, to the universe"; "how power is exercised in relationships"; collaborative and participatory approaches position relationships as positive, but this is not always true – e.g., "inclusion" can "twist" into a settler invitation to assimilate
    4. experiential evidence grounds our understanding in the lives of the community members; “calls evaluators to spend time with communities, upon being invited”
    5. consequential evidence – brings accountability to our work; "examine what happens or fails to happen as a result"; if "evaluation does not move the needle towards social justice, what does that tell us about the accuracy and adequacy of our prior understandings?"

Kathryn Newcomer:

  • multicultural validity is a lens through which we should view our claims (e.g., claims of
  • "evidence-based policy making" has been embraced in OECD countries, with a focus on RCTs as being the way to demonstrate evidence
  • likes the term “impactees” rather than “beneficiaries” (because you don’t know if they are benefiting!)
  • concerned with standards used in various registries to judge research
  • working on an advanced set of evidence standards – broadening view of causation, context, equity
  • fit methods to the questions
  • 3 books influential in her thinking on cultural humility and in understanding racism, sexism, and classism (Stamped from the Beginning by Ibram X. Kendi; White Trash: The 400-Year Untold History of Class in America by Nancy Isenberg; Invisible Women: Data Bias in a World Designed for Men by Caroline Criado Perez)

Giovanni Dazzo

  • evaluations often based on the opinions of what is "rigorous" according to the funder and the evaluator, but not necessarily of the people who the program is supposed to serve
  • we as evaluators often term sticky note activities as "participatory", but is that what the community considers to be the ways they participate?
  • if we enact oppressive ways of “participating”, we are robbing people of their identities
  • how can our practices restore our humanity as evaluators?
  • “an expertise that privileges distance (another word for “objectivity”)”
  • co-constructed a reflective framework
  • “the extractive nature of inquiry” vs. a way to restore
  • “restorative validity”
  • seek to heal and restore rather than to “prove a point”

Nicole Bowman

  • storytelling as valid and impactful
  • scientific and policy and academic humility to add to the idea of cultural humility
  • we must understand history in our context to walk together in a good way
  • our experiences matter, how we got here today
  • Braiding Sweetgrass by Robin Wall Kimmerer
    • braiding requires tension – “tension is respected and expected”
    • the more tension in the knot, the stronger it is – intersectionalities
  • think of the 575 tribes as a nation state
  • refers to self as a “blue collar scholar”
  • “we have to get upriver”
  • not much has changed despite many years of reports, etc.
  • lets bring some wisdom into that work, instead of just “evidence”
  • learn about sovereign nations
  • build capacity, competency, and skills on how to work nation-to-nation
  • how do you make RFP policies so that we can build things differently and start piloting and testing things to look for better outcomes
  • who owns the data – how we publish
  • if we are trying to learn how to do things differently together, we need to dedicate more time and resources to do that
  • think about who is here and who is not here

Q&A

  • KK:
    • There is not one "evaluation community" – only a small proportion of those doing evaluation are members of evaluation associations
    • much evaluation is done by outside contractors
    • "social impact investing" folks are not part of the evaluation community but do a lot of evaluation work
    • lots of people have not had training in evaluation, let alone training in culturally responsive evaluation, cultural humility
    • some foundations (like Kellogg) and organizations like CREA that have been doing this stuff for a long time
    • the Urban Institute – lots of free materials you can download
    • cultural humility is so important – you can never fully understand another community/culture, you don’t just do a training on cultural humility/responsiveness and say you are done
  • KK
    • cultural competence is a stance – it’s infused across the AEA competencies, not a single “competency”
    • cultural competence implies an "end point" – that term may have outlived its usefulness
  • NB
    • legal political aspects – Tribal Nations are the only groups within “cultural responsive” that have this status
  • KK
    • there’s been work on cultural responsive evaluation for a long time (e.g., growth of TIGs, diversity work in AEA)
    • intersectionality theory has had a huge impact – “it messes everything up. which is a good thing”
    • things that disrupt and shake us up are [a good thing]
    • within society at large, the pandemic has raised awareness of inequities and the anger and outrage of the murder of Black citizens
    • and recognition that historic “solutions” have not been working
  • TL
    • if you codify things into law, it changes society
    • The “evidence act”
    • current administration released something talking about the importance of Indigenous wisdom
  • NB
    • younger generations do not see disciplinary and other lanes, "everything is related"; they don't see boundaries, they see opportunities, "putting together this beautiful quilt"
    • e.g., government TIG reaches out to Indigenous TIG all the time
    • we need to braid this together
  • TL:
    • I teach and our discussions show that students are thinking critically about how evaluations have not met the mandate because they are not considering cultural
  • KN:
    • qualitative and mixed methods are more and more becoming the body of research and evaluation, we may have reached a tipping point
    • many of the standards of evidence are "canonized" with positivist notions of "validity", but more qualitative researchers are coming to the fore to challenge this; KN's new standards are in a manuscript she's working on
  • GD:
    • "we are more concerned with being 'scientific enough' than we are about being relevant"
    • this is demonstrated by where the money flows – to quant research – so those researchers hold more power and control
    • in participatory, community-based, it is assumed that “participation is good” , but as KK mentioned, it’s not always so
    • we have to ask why people are being asked to participate, are they being compensated? do they have time? often funders give excuses as to why “we can’t pay individuals”
    • processes often silence minoritized or under-resourced communities
    • people often showcase the “participatory method” as the end goal, as opposed to how the method promotes mutual understanding, without that we don’t get to relational evidence, liberation
  • NB
    • there’s an Indigenous data sovereignty network
    • they are publishing in the data science literature too
    • data = power
    • “I need courageous, compassionate, and curious people”
    • mostly white males and females fill these positions that have the power and privilege
    • we have to talk about power and privilege and capitalism, uncomfortable things
    • red, white, yellow, black are all the colours on our medicine wheel, all working together
    • we have no business making policy on things we know nothing about
    • we are all learning different things – e.g., “I don’t have experience in LGBT+, but have been invited into the work because I know Native stuff and they know that I will come in a humble way”
    • you can learn about communities based on what they are posting in social media
    • we learn, unlearn, and relearn together
  • GD
    • we have to think of where the money is going
    • evaluation work is contract based
    • we’ve broadened our thinking about how we do evaluation funding. Learning about how communities do things rather than funding projects for evaluators to go in and say “tell us everything you know”
  • KN
    • book on inclusive engagement
    • we think “engagement” is saying “we are having a meeting on Wed at 7 pm so we can tell you what we are going to do to you” – that’s not engagement
    • what are communities getting from this?
    • need to think of inclusion at the design stage, not just at the evaluation stage
    • evaluators come into projects too late to do a lot of this work sometimes
    • the term “rigour” is interesting – has a specific ontological assumption that there is a truth that evaluations have to find; probably not how to think about it. tends to compete with ideas of multicultural sensitivities. A very rigid view of rigour
  • KK
    • rigour is often invoked against multicultural sensitivities – my answer is that nothing is more rigorous than triangulating multiple sources of data
  • NB
    • if your "rigour" is working, why are Native people still experiencing such high levels of diabetes, suicide, and lower rates of graduation from high school and university – your rigour is not working, since we are not getting the outcomes
    • "let's go beyond 'do no harm' and be a good relative"
  • TL:
    • when you get a "significant" result in an evaluation saying there's a difference, people often don't ask "does it work well for everyone? does it work well in different contexts?"
  • GD:
    • there are courses on decolonizing methodologies
    • where is the money going?

Resources

There were a tonne of resources suggested during the workshop. Here are some that I’m planning on checking out:


Webinar Notes: The “Coin Model of Privilege and Critical Allyship”

Title: The “Coin Model of Privilege and Critical Allyship”: Orienting Ourselves for Accountable Action on Equity

Speaker: Dr. Stephanie Nixon, University of Toronto

Hosted by: Simon Fraser University, Faculty of Health Sciences

  • Dr. Nixon asked us to jot down our thoughts on the following three questions:

What are new insights?

  • the coin model = privilege (unearned advantages) and oppression (unearned disadvantages)
  • we have words for those people whose health is affected by oppressions: "marginalized", "vulnerable", "at risk", "target population" – but we don't have any words for those people who are on the other side of the coin. We frame equity as solely about those on the bottom of the coin – and we thus limit our thinking of possible solutions to this "problem" of the bottom of the coin – we disappear those on the "top of the coin" – we disappear the coin altogether
  • we frame the privileged as neutral instead of as complicit in the oppression
  • when is EDI used to avoid actually dealing with oppression?

What feels important but is still muddy?

What do I feel as I lean into reflecting on privilege? body, emotions. (“We cannot think our way out of oppression.”)

Other notes:

  • I’ve seen the original version of this experiment, and appreciated this updated version. When they did the reveal, I felt my stomach fall – I missed something that should be so obvious again! I also appreciated Dr. Nixon’s use of this as a metaphor for privilege: e.g., those who don’t experience oppression not only don’t see it, they don’t believe it when others tell them that they experience it and gaslight them by saying that what they have experienced did not happen.
  • strengths that helped me get to my level of education: parents who supported me to pursue higher education, availability of student loans; barriers: cost of tuition and living as a student without an income, not having role models in my family who had done higher education before
  • the people on the “bottom” of the coin are the experts on how oppression affects them – those on the privileged side of the coin can’t see the ways in which they are privileged (it’s like the gorilla!)
  • white supremacy – the view that white is “normal”, the “default”
  • people on one side of the coin are not homogeneous – e.g., if we think about colonialism, the people on the oppressed side are Indigenous, and there are many different Indigenous groups; similarly, the group on the privileged side of the coin of colonialism are settlers and they are also not homogeneous
  • education on antiracism, anti-oppression is not enough – it doesn’t change the material conditions that people experience, it doesn’t dismantle the systems of oppression
  • what is my work to do on “EDI”?
    • when you are on the top of a coin, you need to work in solidarity with the people who are experiencing the oppression
    • it is not about the person with privilege “saving” or “fixing” the populations experiencing the oppression
    • when privilege is unchecked it leads to an irrational sense of neutrality
    • when you are on “top” of the coin, you need to understand your position as having unearned privilege (and even recognizing there is a coin) and that you are not the expert

Dr. Nixon’s article on this model: https://bmcpublichealth.biomedcentral.com/articles/10.1186/s12889-019-7884-9


Webinar Notes: Ethical Storytelling

Panelists: Amy Costello & Frederica Boswell

Hosted by: Nonprofit Quarterly

Video of the webinar can be viewed here.

  • Tiny Spark podcast
  • Sophie Otiende, Activist and Advocate, HAART Kenya:
    • non-profits “parade and exploit” the people they are claiming to help
    • e.g., asking someone who has been assisted by an NPO to share their story – the organization holds power over the victim – can that survivor give proper consent about telling their stories?
    • “survivor porn” – why do we need a person to come and tell us that these horrible things are bad?
    • people don't talk to survivors about the risks and impacts of telling your story. People live in an ideal world where they think that if they tell their story, people will be compassionate. But that's not true – some people will abuse those who tell their stories, or we just forget about the person and move on to getting the next survivor's story
  • we are interested in the whole person -not just their trauma
  • not everyone wants to be called “survivor” or “person who formerly experienced homelessness” or “recovering addict” – how does the person whose story is being told want to be represented?
  • the person whose story it is should be a full partner in the storytelling
    • ensure they are in the loop at all developments in the storytelling and being extra sure at every step that they are comfortable with any details that are shared
    • never want to surprise someone with details about their story being made public
  • don’t want to engage in trauma porn – just sharing the trauma in isolation
    • figure out what the message is – e.g., in a story on the Me Too movement in the charitable sector, the message was that serial predators are hiding in the charitable sector and their institutions are protecting them
    • figure out what the purpose of telling the story is – things like holding organizations to account or highlighting resilience
  • when conducting interviews, establish trust and intimacy
    • be fully present in the interview
    • ask follow up questions, based on really listening to them, rather than just following the interview guide in order
    • don’t drive the interview – the interviewee should have autonomy and control. The story is hers, not the interviewer’s
    • interviewer’s job is to help the interviewee feel safe
  • we should let people know what their rights are – that they can say “no” to answering our questions
  • interviewing “experts” (e.g., professors who study a topic)
    • isn’t someone who has years of experience living with homelessness an expert on the subject?
    • “professional” “experts” are often well rehearsed when you interview them – you have to push them to be real, rather than just being on auto-pilot
  • think about the stereotypes you may be perpetuating with your storytelling


Webinar Notes: Beyond the Board Statement: How Can Boards Join the Movement for Racial Justice?

Sheila Matano, VP of the board of the BC Chapter of the Canadian Evaluation Society (CESBC) and chair of our Diversity, Equity, and Inclusion (DEI) committee, told me about this two-part webinar series. Like many boards, we want to do better when it comes to doing our work in an inclusive way, and we didn't want to just put out a board statement that says "Black Lives Matter" and then go on operating the way that we always have. So I was excited to check out this series for some concrete ideas about how we can do this well. And I was not disappointed!

Panelists: Robin Stacia (RS) https://sageconsultingnetwork.com/meet-our-ceo/ and  Vernetta Walker (VW).

Hosted by: Nonprofit Quarterly

Part 1: Date: June 22, 2020

Watch part 1 here. Watch part 2 here.

Here are my notes from the webinars.

My takeaways:

  • board statements need to state a commitment to what you are going to do
  • it’s not about waiting out the uprising until you can go “back to business”
  • how can boards use their influence in a way that aligns with their mission?
  • the work needs to be done by the whole board – it's not to be left to the one Black person on your board to own this work. It can be retraumatizing for them. And Black people are tired from fighting for centuries – white people need to step up.
  • look at your board composition – we need a diverse board and a coalition of all of us

Understanding our history:

  • we are a post-colonial society – there was a narrative that “natives” were “savage” –> white supremacy –> allowed white people to enslave Black people
  • slavery did not end – it just evolved
  • there is still a presumption of danger re: Black and brown people
  • truth and reconciliation/justice/reparation are sequential – the truth must come first
  • as boards, we need to tell the truth about what we’ve ignored, overlooked, and benefitted from

Debunking Myths

Myth: “It’s just a few bad actors”

  • RS: this myth "minimizes the centuries-long struggle that Black, brown, indigenous people have experienced"
  • it is a system of racism:
    • restricts every aspect of life for Black, brown, and indigenous people (healthcare, criminal justice, politics, education, wealth – everything)
    • institutional policies/practices/laws/regulations designed to benefit and create advantages for white people and oppress and disadvantage Black, brown, and indigenous people
    • exists no matter your age, location, socioeconomic status
  • VW: we have a lot to unlearn
    • we’ve been socialized to not talk about race
    • boards should talk about why they are so uncomfortable talking about race
    • boards should learn about unconscious bias
    • do you have authentic relationships with Black and brown people? Because we’ve been separated
    • COVID-19 and this uprising = perfect storm, because people had time to reflect and feel the pain
    • we can't show up effectively for the board work if we haven't done the individual work

Myth: People try to replace “Black Lives Matter” with “All Lives Matter”

  • VW: saying “Black Lives Matter” is not saying “only Black Lives Matter” – it’s saying “Black Lives Matter too”
  • there is violence against Black bodies, often by state actors
  • lots of people have heard that “race is a social construct”, but they don’t get it. They think there are differences between the races that justify the violence, but there are not.
  • "waking up Black" has a level of stress that is measurable – decreased life expectancy, gaps in educational achievement, maternal mortality, criminal justice system involvement – bias and systemic racism leads to all of this
  • RS: people misunderstand "racial equity" – it means the state where my racial identity doesn't have an impact on me – e.g., I can go to the bank or go birdwatching and my racial identity does not dictate the outcome

A board statement alone is not enough

  • When they polled the webinar audience, about 3/4 said that their board had issued a statement in the wake of the BLM protests, but only 1/4 said that their board had an in-depth conversation about the issues
  • VW: some statements just say something to the effect of "we stand with you", but nothing about what they will actually do
    • good statements will say what they are doing and what they commit to doing
    • there was a backlash if you didn’t put out a statement, and there was also a backlash if your statement didn’t have any teeth – it shows that people are paying attention
    • but putting out a statement for the sake of public perception is not good

Questions to ask if and when you do speak out:

These are taken verbatim from their slide:

  1. How does your statement acknowledge the historical injustices of structural and systemic racism?
  2. How do you use the document to bring about awareness concerning systemic and structural racism to your audiences?
  3. How does the statement align with your organization’s mission?
  4. Is your organization willing to be an ally in supporting the work? If so, how?
  5. What is the call to action and commitment to the work? Examples can include:
    1. How do you plan to alleviate barriers and create access to opportunities to bring about equitable and just outcomes?
    2. How do you plan to leverage the various forms of capital that are at your disposal to address the issues?

Source: Robert L. Dortch, Jr. Vice President, Programs & Innovation, Robins Foundation

As I look at these questions, I think that not only are they useful for our work on the CESBC board, but they can also be helpful for me to think about how I do my teaching.


Evaluator Competencies Series: Evaluation Topics and Questions

Since it's been a while since I last wrote a blog posting in this series, and since I stopped in the middle of the "technical competencies" domain, let's review where we are at. The first competency in the "Technical Domain" was about figuring out the purpose and scope of an evaluation – what is the evaluation trying to do and what ground is it going to cover (and what is it not going to cover)? The next competency was about figuring out if a program is in a state in which it is ready to be evaluated, and the third competency was about making program theories explicit. This brings us to the fourth competency in the technical domain:

2.4 Frames evaluation topics and questions

Questions

People often get confused when we say "evaluation questions", thinking that we are referring to a question you might ask in an interview or survey (like "were you satisfied with the services you received?"). But the "evaluation questions" we are referring to here (sometimes called "Key Evaluation Questions" (KEQs)) are higher-level than that; they are an overarching question (or a few questions) that guide the development of the evaluation.

An important thing to remember about evaluation questions is that they should be evaluative. Not just “what happened as a result of this program?” but “how “good” were the things that happened from the program?” (where “good” needs to be fleshed out – e.g., what do we consider “good”? how “good” is good enough to be considered “good?”).

The Better Evaluation website gives us some useful tips on developing KEQs:

  • they should be open-ended (not something that you can answer with “yes” or “no”)
  • they should be “specific enough to help focus the evaluation, but broad enough to be broken down into more detailed questions to guide data collection”
  • they should relate to the intended purpose of the evaluation
  • 7 +/- 2 is a good number to have
  • you should work with your stakeholders to develop them

I think it’s really important to think about who gets to decide on what the evaluation questions are. Since the rest of the evaluation will be built based on the questions, whoever gets to decide on the questions holds a lot of power. This could be a whole blog posting topic on its own, but in the interest of actually getting this posted, I think I will leave that for another day.

Resources

A nice resource on working with your stakeholders to develop evaluation questions is Preskill & Jones’ A Practical Guide for Engaging Stakeholders in Developing Evaluation Questions. The CDC’s Good Evaluation Questions Checklist can also be helpful in thinking through/improving your evaluation questions.

Image source: Posted on Flickr with a Creative Commons license.


I’m back to blogging

Over on my personal blog, I’ve decided to try blogging every day in the spirit of November as National Blog Posting Month (NaBloPoMo) – that was a thing years ago when blogs were more popular. The idea is to blog every single day during the month of November. That got me thinking that it had been a while since I blogged here… and it turns out that has been more than a year!

I remembered that I had been doing a series on evaluator competencies where I wrote one blog posting a week on each of the Canadian Evaluation Society (CES) evaluator competencies and that I had decided to take a “short break” when stuff was getting busy with the courses I was teaching. So “short” may have not been the right word there. In my defence, the world was turned rather upside down for most of that time, what with a global pandemic and reckoning on racism.

My other issue with actually getting things up on here is my battle with perfectionism. During these pandemic-y times I've been doing a fair bit of professional development1 and from the various webinars and online workshops I've attended, I've started many, many blog postings as a way to capture notes from these events. But then I think "Oh, I need to summarize this better/come up with a good conclusion/figure out what actions I should take from what I've learned/find a good Creative Commons licenced photo to go with this/provide links to the webinar recording/etc./etc." and then it sits in my drafts folder for ever and ever.

So here's my new plan. I'm going to re-start my evaluator competency series – I'll post once a week on that. And I'm going to work through my drafts folder and actually get my notes from each of these events into reasonable, but not perfect, shape, and post those too. Or I'll decide that I didn't get enough value from a given webinar or workshop and hit the "delete" button. I won't blog every day – but I'm going to aim for two blog postings per week in addition to my evaluator competency one, for the month of November.

Progress, not perfection
Sticky note I have above my desk in my home office. Though in these pandemic-y times, some days it's more survival than progress that I hope to achieve.

Footnotes
1 As presentations and workshops have had to move online in response to the pandemic, it’s resulted in a lot of events that otherwise might have been just held locally being available anywhere in the world. And with a reckoning on racism bringing more attention to the work of BIPOC (Black, Indigenous, and people of colour) activists, scholars, and organizations, webinars on anti-racism and reconciliation have been amplified.

CES Webinar Notes: Retrospective Pretest Survey

These are my rough notes from today’s CES webinar.

Speaker: Evan Poncelet

  • was asked "are retrospective pretests (RPTs) legit?", so he did some research on them
  • you can’t always do a pre-test (e.g., evaluator brought on after program has started; providing a crisis service, you can’t ask someone to do a pre-test first)
  • "response shift bias" – "you don't know what you don't know". Respondents have a different understanding of the survey topic before and after an intervention. So they might rate their knowledge high before an intervention, then learn more about the topic during the intervention and realize that they didn't actually know as much as they thought they did. So afterwards, they rate their knowledge lower (or rate it the same as before the intervention, but only because, while they learned a lot, they also now know more about what they still don't know). So respondents have a different internal standard before and after the intervention that they are judging themselves against. [See the numeric sketch after this list.]
  • a brief history of RPTs
    • emerged in the literature in the 1950s (not much research on them – more "if you can't do pre/post, do RPT")
    • 1963 – suggested as an alternative to pre/post or a supplement (if you do both a pre-test and an RPT, you can detect historical effects)
    • 1970s-80s – suggested as a supplement to pre-test; research on RPTs (as a way to detect response shift bias)
    • now – typically used in place of pre-test; common in pro-D workshops (e.g., a one-day workshop)
  • what do they look like?
  • e.g., give a survey after a webinar with an item like this, rated once for now and once retrospectively:

    Item: "I'm confident in designing RPTs"
      Now:                Agree / Neutral / Disagree
      Before the webinar: Agree / Neutral / Disagree
  • But if you have the pre- and post-ratings next to each other on the same survey, it's very easy to give a socially desirable answer or to have answers affected by effort justification (i.e., people say there was an improvement to justify the time they spent taking part in the program)
  • give separate surveys for pre and post (to reduce the social desirability bias)
  • research shows that separate surveys do show reduced bias and more validity
  • another option: perceived change, where you rate where you are now plus your improvement attributable to the webinar:

    Item: "Your confidence in designing RPTs"
      Now:                                                Low / Med / High
      Rate your improvement attributable to the webinar:  None / A little / Some / A lot
  • research shows this option is subject to social desirability bias
  • not a lot of research (could probably use more research)
  • advantages of RPTs
    • addresses response shift bias
    • provides a baseline (e.g., if missing pre-data)
    • research supports validity and reliability (e.g., an objective test of skill is compared with results of these surveys)
    • can be anonymous (don’t have to match pre- and post-surveys via an ID)
    • convenient and feasible
  • disadvantages of RPTs
    • motivation biases (e.g., social desirability bias, effort justification bias, implicit theory of change (you expect a change to happen, so you report a change has happened))
    • can use a “lie scale” (e.g., include an item in your survey that has nothing to do with the intervention and see if people say they got better at that thing that wasn’t even in your intervention – detect people over inflating the effect of the workshop)
    • memory recall (so be very specific in your questions – e.g., "since you began the program in September…"). If you have long interventions, it may be really hard to recall
    • program attrition – missing data from dropouts (could actively try to collect data from the dropouts)
    • methodological preferences of the audience (what will your audience consider credible? RPTs are not well known and some may not consider them a credible source)
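To make response shift bias concrete, here is a minimal sketch with made-up numbers (my own illustration, not from the webinar) comparing the change score you'd compute from a traditional pre/post design with the one you'd get from a retrospective pretest:

```python
# Hypothetical self-ratings on a 1-5 scale for one participant (illustrative only).
traditional_pre = 4    # before the workshop: overrated ("you don't know what you don't know")
post = 4               # after the workshop: knows more, but judges against a new internal standard
retrospective_pre = 2  # asked after the workshop: "thinking back, where was I really before?"

# Traditional pre/post comparison: the gain is masked by response shift bias.
naive_change = post - traditional_pre   # 0 -> looks like the workshop did nothing

# RPT "then/now" comparison: both ratings use the same internal standard.
rpt_change = post - retrospective_pre   # 2 -> reflects the perceived gain

print(f"traditional pre/post change: {naive_change}")
print(f"RPT then/now change: {rpt_change}")
```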

Other Considerations

  • triangulate data with other methods and sources (a good general principle!)
  • do post-test first, followed by RPT (research shows this gives respondents an easier frame of reference – it’s easier to rate how they are now, and then think about before)
  • type of information being collected:
    • if you want to see absolute change (frequency, occurrence) – do traditional pre/post test (it can be hard to remember specific counts of things later)
    • changes in perception (emotions, opinions, perceived knowledge) – do RPT

Slides and recording from this webinar will be posted (accessible to CES members only) at https://evaluationcanada.ca/webinars
