CES Webinar Notes: Retrospective Pretest Survey

These are my rough notes from today’s CES webinar.

Speaker: Evan Poncelet

  • was asked “are retrospective pretests (RPTs) legit?”, so he did some research on them
  • you can’t always do a pre-test (e.g., evaluator brought on after program has started; providing a crisis service, you can’t ask someone to do a pre-test first)
  • “response shift bias” – “you don’t know what you don’t know”. Respondents have a different understanding of the survey topic before and after an intervention. They might rate their knowledge highly before an intervention; then, having learned more about the topic during the intervention, they realize they didn’t actually know as much as they thought. So afterwards, they rate their knowledge lower (or the same as before, but only because, while they learned a lot, they are also now aware of more things they still don’t know). In other words, respondents judge themselves against a different internal standard before and after the intervention.
  • a brief history of RPTs
    • emerged in the literature in the 1950s (not much research on them – more “if you can’t do pre/post, do an RPT”)
    • 1963 – suggested as an alternative to pre/post or a supplement (if you do both pre test and an RPT, you can detect historical effects)
    • 1970s-80s – suggested as a supplement to pre-test; research on RPTs (as a way to detect response shift bias)
    • now – typically used in place of a pre-test; common in professional development workshops (e.g., a one-day workshop)
  • what do they look like?
  • e.g., give a survey after a webinar with two rating columns per item – “Now” and “Before the webinar” – so each statement (e.g., “I’m confident in designing RPTs”) is rated twice on the same scale (e.g., “Agree”): once for now, once retrospectively for before
  • But if you have the pre next to post on the same survey, very easy to give a socially desirable answer or to have answer affected by effort justification (i.e., people say there was an improvement to justify the time they spent taking part in the program)
  • give separate surveys for pre and post (to reduce the social desirability bias)
  • research shows that separate surveys do show reduced bias and more validity
  • another option: perceived change – instead of two ratings, each item (e.g., “Your confidence in designing RPTs”) is rated once under a heading like “Rate your improvement attributable to the webinar”, on an improvement scale (e.g., from “A little” to “A lot”)
  • research shows this format is subject to social desirability bias
  • not a lot of research (could probably use more research)
  • advantages of RPTs
    • addresses response shift bias
    • provides a baseline (e.g., if missing pre-data)
    • research supports validity and reliability (e.g., an objective test of skill is compared with results of these surveys)
    • can be anonymous (don’t have to match pre- and post-surveys via an ID)
    • convenient and feasible
  • disadvantages of RPTs
    • motivation biases (e.g., social desirability bias, effort justification bias, implicit theory of change (you expect a change to happen, so you report that a change has happened))
    • can use a “lie scale” (e.g., include an item in your survey that has nothing to do with the intervention and see if people say they got better at that thing that wasn’t even in your intervention – detects people over-inflating the effect of the workshop)
    • memory recall (so be very specific in your questions – e.g., “since you began the program in September…”). If the intervention is long, the recall burden may be really high
    • program attrition – missing data from dropouts (could actively try to collect data from the dropouts)
    • methodological preferences of the audience (what will your audience consider credible? RPTs are not well known, and some may not consider them a credible source)
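The “lie scale” idea above can be sketched as a toy check: flag respondents who report improvement on a control item the workshop never covered. A minimal sketch – the item names, data, and flagging rule are all hypothetical, not from the webinar:

```python
# Toy lie-scale check for a retrospective pretest (RPT) survey.
# Each response holds (now, before) ratings on a 1-5 scale, including one
# control item ("control") that the workshop did not address at all.

def flag_inflators(responses, control_item="control"):
    """Return IDs of respondents whose control-item rating 'improved'.

    A reported gain on an unrelated item suggests the respondent may be
    over-reporting the workshop's effect (social desirability, effort
    justification, etc.).
    """
    flagged = []
    for rid, ratings in responses.items():
        now, before = ratings[control_item]
        if now > before:  # "improvement" on an item the workshop never covered
            flagged.append(rid)
    return flagged

# Hypothetical data: respondent "r2" claims a gain on the control item.
responses = {
    "r1": {"design_rpts": (4, 2), "control": (3, 3)},
    "r2": {"design_rpts": (5, 1), "control": (5, 2)},
}
print(flag_inflators(responses))  # → ['r2']
```

In practice you would treat a flag as a reason for caution (or a sensitivity analysis excluding flagged respondents), not as proof of bad faith.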

Other Considerations

  • triangulate data with other methods and sources (a good general principle!)
  • do post-test first, followed by RPT (research shows this gives respondents an easier frame of reference – it’s easier to rate how they are now, and then think about before)
  • type of information being collected:
    • if you want to see absolute change (frequency, occurrence) – do traditional pre/post test (it can be hard to remember specific counts of things later)
    • changes in perception (emotions, opinions, perceived knowledge) – do RPT

Slides and the recording from this webinar will be posted (accessible to CES members only) at https://evaluationcanada.ca/webinars
