Complexity eStudy Notes – Session #1

Given my interest in complexity and evaluation, I decided to take the American Evaluation Association (AEA) eStudy course being facilitated by Jonny Morell. I’ve seen Jonny speak at conferences before and have learned some useful things, so figured I could learn a few things from him in this more extended session. Sadly, the live presentation of the eStudy conflicts with other meetings that I have, so I’m only going to be able to see part of the presentations live and will have to watch the other parts of the presentations after the fact from the session recording.

This is one of those blog postings that is probably more useful for me as a brain dump than it is for other people to read.


It was recommended that we check out a few of Jonny’s blog postings on complexity before the presentations.

Here’s one quote from his posting on complexity having awkward implications for evaluators that jumped out at me:

Contrast an automobile engine [not complex] with a beehive, a traffic jam, or an economy [complex]. I could identify each part of the engine, explain its construction, discuss how an internal combustion engine works, and what role that part plays in the operation of the engine. The whole engine may be greater than the sum of its parts, but the unique role of each part remains. The contribution of each individual part does not exist with beehives, traffic jams, or economies. With these, it may be possible to identify the rules of interaction that have to be in place for emergence to manifest, but it would still be impossible to identify the unique contribution of each part.

Source (emphasis mine)

This really is a challenge for evaluators! Imagine being hired to evaluate a program – your job is to answer “what happens as a result of this program?”, but you know that your program is just one part of a larger, complex system, so you can never really definitively say “this program, and this program alone, caused X, Y, and Z”, as you know that outcomes are affected by so many things that are outside of the control of the program. That is the situation that we evaluators find ourselves in all the time. That’s not to say that we can’t do anything, but just that we need to be thoughtful in how we try to determine what results from a program in the context of everything else in the system. Learning about complexity and systems thinking can help us do that.


I had a conflicting meeting during the first session, held on July 9, 2019, so I watched the recording afterwards. Here are my notes:

Rugged landscapes
  • people seem to think complexity is “mysterious” and “magic” – Jonny feels it is not
  • he feels that “most of the time you won’t have to use it at all”
  • if you learned thematic analysis or regression, you’d say “cool method, I’ll use it when it is needed and I won’t use it when I don’t need it”. He thinks complexity should be the same – use when it’s needed.
  • Two modes:
    • you might use complexity instead of another method (like you might say “thematic analysis is better than how I’ve been analyzing open-ended survey data. I will use thematic analysis instead of what I was doing before”)
    • but you could also think about it like this: it can change how you conceptualize the problem and your data analysis strategies – “you begin to think differently about the world”
  • people seem to think that you need to use fancy new tools to apply complexity – and sometimes you do, but often you don’t – you can use familiar methods while applying complexity concepts
  • there’s no agreed upon definition of complexity – but he doesn’t worry about that
    • “systems” is a huge area (but he’s not that interested in it – though he did plug the AEA Systems TIG)
    • “complexity” also a huge area – and he thinks lots of the concepts are useful to evaluators
  • “I don’t know what complex systems are, but I know what complex systems do. I can work with that” – we can use that to make practical decisions on models, on methods, data interpretation, how to conceptualize the program.
  • He thinks that complexity is popular in evaluation today because there is a sense that programs aren’t successful and evaluators are the messenger (and people are shooting the messenger). And people think that maybe complexity can help explain why programs aren’t working.
  • “The fact that everything is connected to everything else is true, but useless.” He wants to help us learn the “art” of getting a sense of which connections are worth dealing with and which aren’t. We need to “discern meaning within the fact that everything is connected to everything else.”
  • Cross-cutting themes in complexity science
    • pattern
    • predictability – what can we predict and how well can we predict it
    • how change happens
  • Complex behaviours that might be useful in evaluation (not everything you’ll read about complexity is useful in evaluation):
    • attractors
    • emergence
    • sensitive dependence
    • unpredictable outcome chains
    • network effects among outcomes
    • joint optimization of uncorrelated outcomes
  • It’s hard to talk to people (like evaluation stakeholders) about complexity
    • if we show people a logic model or theory of change, they can understand how things they do in their program are believed to lead to outcomes they are interested in
    • but talking about things like how a program might benefit a few people a lot and most people not at all, or about network effects – these aren’t things we’re used to discussing with evaluation stakeholders
    • it’s difficult to say to people that we might not be able to show “intermediate outcomes” on the way to long-term outcomes (because results aren’t so linear)
    • your program may have negative effects in the broader system (programs are siloed, so you are only working within your own scope and aren’t concerned, or incentivized to be concerned, about stuff outside of your program). If we throw all of our financial and intellectual resources into HIV, we’d make a lot of improvements with respect to HIV. But that pulls resources away from prenatal care, palliative care, primary care, etc., etc., etc. You are “impoverishing” the environment for every other program – and those programs will have to adapt to that.
  • preferential attachment – e.g., snowflakes – the odds of a molecule attaching to a big clump are higher than of attaching to a little clump; same thing with business – you are more likely to attach to a bigger centre of money than to a small one (a rough sketch of this dynamic is below)
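
To make the rich-get-richer idea concrete, here’s a minimal sketch of my own (not from the session) of preferential attachment: new units join existing clusters with probability proportional to cluster size. The number of clusters and steps are arbitrary choices for illustration.

```python
import random

# Toy "rich get richer" simulation: at each step a new unit (a molecule
# joining a snowflake, a dollar flowing to a business) attaches to one of
# several existing clusters, with probability proportional to cluster size.
random.seed(42)
clusters = [1, 1, 1, 1, 1]  # five clusters, all starting the same size

for _ in range(1000):
    # pick a cluster with probability proportional to its current size
    pick = random.choices(range(len(clusters)), weights=clusters)[0]
    clusters[pick] += 1

print(sorted(clusters, reverse=True))
# Typically one or two clusters end up far larger than the rest,
# even though all five started out identical.
```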
Bee & Beehive
  • emergence is NOT “the whole is greater than the sum of the parts” – it’s about the WAY that the whole is greater than the sum of the parts. An engine is greater than the sum of its parts. But I could explain what the contribution of each of the parts is to the engine. That’s not the same for complex systems (like traffic jams, beehives, or economies) – you can’t explain the whole economy based on the contribution of each of its parts. Not just because we haven’t studied these enough – but because it is “theoretically impossible” to do so.
  • “Ignoring complexity can be rational, adaptive behaviour”
    • stovepipes are efficient ways to get things done
    • different programs have different time horizons
    • different organizations have different cultures
    • it takes resources to coordinate different programs/systems/organizations
  • Even if our stakeholders don’t buy into complexity, it’s still important for evaluators to think about and deal with
    • “if program designers build models that do not incorporate complex behaviour, they will:
      • miss important relationships
      • not be able to advocate effectively
      • not be effective in making changes to improve their programs
      • misunderstand how programs operate and what they may accomplish
    • these problems cannot be fixed in an evaluation, but it is still possible to evaluate the complex behaviours in their models”
    • e.g., he showed a logic model and talked about how, if you have a bunch of arrows leading into an outcome, those arrows could be “AND” or “OR” (i.e., do you need all of the outputs to happen to lead to that outcome, or do you only need one? Or only some combination?). He also added unintended consequences and network effects to the model. (A toy illustration of the AND/OR question is sketched after these notes.)
    • the evaluator can still look at these complex behaviours – look for the data to support it. You can superimpose a complex model on top of the traditional logic model. You can do this even if the program stakeholders only see the logic model. You can show them the data interpreted based on their logic model, and then also show them how the data relates to the model that includes complexity (that might be what it takes to incorporate it).
    • He thinks most unintended consequences are undesirable, and there are methods for measuring unintended consequences within the scope of an evaluation.
    • Jonny hates the “butterfly effect” because, in his world, he doesn’t see big changes happening super easily. He sees people making lots of policy/program changes, but the outcomes don’t change! His take on sensitivity to initial conditions is that you can run the same program multiple times and get different results each time because there are differences in the context where it’s implemented, so you can’t necessarily replicate the outcome chain. But if the program is operating within an attractor, you might be able to get to the same ultimate outcome.
    • E.g., if you roll a boulder down a hill, you won’t be able to predict its exact path (e.g., it might hit a pebble, the wind might move it), but we know it will end up at the bottom of the hill because there is an attractor (gravity). (A toy simulation of this is sketched after these notes.)
    • He’s not arguing to not measure intermediate outcomes, but we should think about these concepts [and maybe not be too overconfident in what we think we know about the outcome chain?]
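
Two small sketches of my own (not from the session) to make a couple of these points concrete. First, the AND/OR question about arrows in a logic model: the same data can support or fail to support an outcome depending on how you read the arrows feeding into it. The output names here are hypothetical.

```python
# Hypothetical logic model fragment: three outputs have arrows into one
# outcome. Whether the data "support" the outcome depends on whether the
# arrows are read as AND (all needed), OR (any one is enough), or some
# combination rule (e.g., at least two of three).
outputs = {
    "training_delivered": True,
    "materials_distributed": False,
    "partnerships_formed": True,
}

achieved = list(outputs.values())

and_reading = all(achieved)        # every arrow must "fire"
or_reading = any(achieved)         # any single arrow is enough
two_of_three = sum(achieved) >= 2  # a middle-ground combination rule

print("AND reading:", and_reading)      # False
print("OR reading:", or_reading)        # True
print("2-of-3 reading:", two_of_three)  # True
```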
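
Second, the boulder-and-attractor idea: each run below takes noisy “downhill” steps, so the exact path differs every time, but all runs end up near the bottom of the valley. The landscape and noise level are arbitrary assumptions for illustration.

```python
import random

# Toy version of the boulder-and-hill example: each run takes noisy
# "downhill" steps toward the bottom of a valley at x = 0. The exact path
# differs from run to run (pebbles, wind), but every run ends up near the
# bottom, because the bottom acts as an attractor.

def roll_downhill(start=10.0, steps=200, noise=0.5):
    x = start
    for _ in range(steps):
        downhill = -0.1 * x                   # pull toward the bottom (x = 0)
        bump = random.uniform(-noise, noise)  # small random perturbations
        x += downhill + bump
    return x

random.seed(1)
final_positions = [roll_downhill() for _ in range(5)]
print([round(x, 2) for x in final_positions])
# The five runs take different paths, but all final positions cluster near 0,
# far from the starting point of 10.
```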
