Webinar Notes: “Cross Boundary Insights: Plans, Critical Incidents, and Outcomes” by Jonathan Morell

As part of EvalYear 2015, the School of Public Affairs at the University of Colorado Denver is holding a webinar series called "Practical Applications of Systems to Conduct Evaluation: Cases and Examples". They have a great list of 30-minute webinars running from January to June related to the use of systems thinking in evaluation. Systems thinking is something I've been reading up on lately, so these webinars are very well timed from my perspective!

Here are the notes I took during the webinar:

Webinar: Cross Boundary Insights: Plans, Critical Incidents, and Outcomes
Speaker: Jonathan Morell   jamorell.com/
Date: February 11, 2015

  • This example is not an entire, self-contained evaluation method, but a way to do some of the analysis – you still need to do other things
  • started as a simple exercise looking at timelines, but he looks at things in terms of complexity
  • had a project plan
  • added critical incidents1, along with their desirable and undesirable consequences, to the timeline
  • colour-coded the items that were unexpected
  • things never go according to plan
  • mapped out the “actual project plan” and compared it to the original timeline (see the sketch after this list)
  • there are good reasons why plans don’t work out:
    • people are optimistic – we overestimate our abilities and underestimate the things that will get in the way
  • should look at how long similar projects have taken
  • looked at why the delays happened by examining the critical incidents – not sufficient on its own, but useful – and seeing how they affected the timeline
  • he also looked across the different lanes
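The webinar itself involved no code, but here's a minimal sketch (in Python, with entirely made-up milestones and incidents) of how I might represent the idea of overlaying critical incidents on a planned-versus-actual timeline:

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class Milestone:
    name: str
    planned: date   # date from the original project plan
    actual: date    # when it actually happened

@dataclass
class Incident:
    when: date
    description: str
    desirable: bool  # desirable vs undesirable consequence
    expected: bool   # Morell colour-coded the unexpected ones

# Hypothetical data, invented purely for illustration
milestones = [
    Milestone("hire staff", date(2014, 1, 15), date(2014, 3, 1)),
    Milestone("pilot launch", date(2014, 4, 1), date(2014, 6, 10)),
]
incidents = [
    Incident(date(2014, 2, 5), "key partner withdrew",
             desirable=False, expected=False),
    Incident(date(2014, 5, 20), "extra funding arrived",
             desirable=True, expected=False),
]

# Compare the actual timeline to the plan, and line up the
# critical incidents that fell between plan and reality
for m in milestones:
    delay = (m.actual - m.planned).days
    print(f"{m.name}: slipped {delay} days")
    for i in incidents:
        if m.planned <= i.when <= m.actual:
            flag = "UNEXPECTED" if not i.expected else "expected"
            print(f"  incident ({flag}): {i.description}")
```

The point is just the data shape: each milestone carries both dates, and each incident carries flags for desirable/undesirable and expected/unexpected, so you can line incidents up against the slippage.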


  • Used the timeline to try to understand the uncertainty
  • Used the critical incidents to try to understand what happened and how it affected timeline
  • Used desirable and undesirable consequences to understand effects
  • Suggests that collecting a little bit of data often is better than collecting a lot of data less frequently (see the toy sketch after this list)
    • if you see something interesting, can do a deeper dive (but if you only collect data infrequently, you might miss stuff)
    • can have a short conversation with people involved – “What’s going on?”
    • keep your ears open, ask simple questions
  • you also need to monitor the outside world – it affects your stuff too
  • when asked how he sells this type of work to an evaluation client who just wants to know whether they achieved their intended outcomes (and is wary of talk of complexity), he said you don't need to tell the client that you are thinking about complexity
  • his view on evaluation capacity is that people should: “Respect data. Trust judgment” (his tagline)
  • people wouldn't hire you as an evaluator unless they believed that what they are doing was going to work
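Nothing in the talk was code either, but his point about collection frequency is easy to make concrete with a toy simulation (all numbers invented): a short-lived "interesting" event is far more likely to be caught by frequent small check-ins than by rare big collections:

```python
import random

random.seed(0)

# A toy program year: one transient "interesting" event lasting
# 10 days, starting at a random point (numbers invented).
def simulate(check_days):
    start = random.randrange(0, 365 - 10)
    event_days = set(range(start, start + 10))
    return any(day in event_days for day in check_days)

frequent = range(0, 365, 7)      # a quick check-in every week
infrequent = range(0, 365, 90)   # a big collection every quarter

trials = 10_000
caught_frequent = sum(simulate(frequent) for _ in range(trials))
caught_infrequent = sum(simulate(infrequent) for _ in range(trials))

print(f"weekly check-ins caught the event in {caught_frequent / trials:.0%} of trials")
print(f"quarterly collection caught it in {caught_infrequent / trials:.0%} of trials")
```

With these made-up numbers, the weekly check-ins catch the 10-day event every time, while the quarterly collection catches it only about one run in nine.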

Host: Danielle Varda

Other webinars from this series that I attended:

Footnotes
1 Determine critical incidents by asking “what happened that really affected the program for better or worse?” or “what do you think will happen that would affect the program for better or worse?”
