Developmental Evaluation – Chapter One Notes

Chapter One: Developmental Evaluation Defined & Positioned

Key Take Home Messages:

  • Developmental Evaluation is not a magic bullet – it is one type of evaluation that is suited to particular situations (i.e., complex, innovative)
  • the role of the developmental evaluator is “to infuse team discussions with evaluative questions, thinking and data, and to facilitate systematic data-based reflection and decision making in the developmental process” (pp. 1-2)
  • senior executives need the capacity to use the data to which they have access


Systems Thinking

  • complex system: “characterized by a large number of interacting and interdependent elements in which there is no central control; self-organizing and emergent behaviors based on sophisticated information processing generate learning, evolution, and development” (p. 1)
  • rather than looking for a cookie-cutter approach, we need “effective principles that can inform practice and minimum specifications (min specs) that can be adapted to local context.” (p. 26) – we often hear from the front lines “but we are different here” when talking about standardization; perhaps we need to think about things in this way (min specs) to allow for, and work within, these differences in local context

Role of the Developmental Evaluator
  • the developmental evaluator’s primary function: “to infuse team discussions with evaluative questions, thinking and data, and to facilitate systematic data-based reflection and decision making in the developmental process” (pp. 1-2) – I already play a role like this, especially at our Public Health planning meetings, but also in most meetings that I attend. It’s not as formal or as thorough as in Dev Eval, but it’s there
  • “evaluation is ultimately about reality testing, getting real about what’s going on, what’s being achieved – examining both what’s working and what’s not working.” (p. 5) – virtually all of my evaluations ask this: what’s working, what’s not, and what can we do better (and how can we do those things better)? Thinking back to the Research Methods course I’ve taught, where I cover some philosophical worldviews that underpin research approaches, Developmental Evaluation is underpinned by a pragmatist worldview.
  • research shows that “how we decide what to do is far from rational.” (p. 15)
  • “We operate within and see the world through paradigms [another name for “worldviews”]. A paradigm is a worldview built on implicit assumptions, accepted definitions, comfortable habit, values defended as truths, and beliefs projected as reality. As such, paradigms are deeply embedded in the socialization of adherents and practitioners. Paradigms tell us what is important, legitimate, and reasonable.” (p. 15)
    • paradigms tell us what to do without having to go through long, drawn-out philosophical discussions
    • strength: allows us to take action
    • weakness: hides the reason for the action through unquestioned assumptions (and what if those assumptions turn out to be wrong? or at least wrong in the situation at hand?)
  • “There is no one best way to conduct an evaluation.” (p. 15)
    • evaluators need to match the evaluation approach/design to the situation, to “achieve the intended uses.” (p. 15)
    • thus, must be able to analyze the situation

Developmental Evaluation
  • DE is a subtype of Utilization-Focused Evaluation – where evaluations are designed specifically (and evaluated on their ability) to be actually used. “Use concerns how real people in the real world apply evaluation findings and experience the evaluation process.” (p. 13). “In DE, the intended use is development.” (p. 14)
    • evaluations are more likely to be used if the intended users:
      • understand the evaluation
      • feel ownership of it
      • more likely to do the above two if they are actively involved in the evaluation process
  • two niches:
    • “to support exploration and innovation before there is a program model to improve and summatively test” – i.e., it is “preformative, but can lead to the generation of a model that is subsequently evaluated formatively and summatively” (p. 17)
    • “for those dynamic situations […] where program staff and funders expect to keep developing and adapting the program, so they never intend to conduct a final summative evaluation of a standardized and hypothesized best practice model. This niche is nonsummative […]” (p. 17)
  • 5 purposes/uses
    1. ongoing development
    2. adapting effective general principles to a new context
    3. developing a rapid response
  • analogy:
    • formative evaluation: cook tastes the soup
    • summative evaluation: guests taste the soup
    • developmental evaluation: “begins when the chef goes to market to see what vegetables are freshest, what fish has just arrived, and meanders through the market considering possibilities, thinking about who the guests will be, what they were served last time, what the weather is like, and considers how adventurous and innovative to be with the meal […]. If the chef decides to attempt a new creation, innovate, and develop a new dish especially well suited for these particular guests in the context of this particular evening, then the situation opens up the possibility for creativity and developmental evaluation. And when a guest and cook create and concoct a soup together, that co-creation is developmental.” (p. 27)

Things That Work
  • great companies engage in “vision-directed reality testing: no rose-colored glasses, no blind spots, no positive thinking” (p. 7)
  • “the human brain’s hardwired need for order, meaning, patterns, sense making, and control, ever feeding our illusion that we know what’s going on” (p. 8; emphasis mine)
  • “Our repeated tendency to go for the short-term quick fix rather than to examine, come to understand, and take action to change how a system is functioning that creates the very problems being addressed.” (p. 11) – This really resonates with me as a Public Health professional. Health promotion and population health are all about addressing the underlying causes rather than treating the outcomes (e.g., addressing the systemic barriers to healthy eating & physical activity to prevent obesity, rather than providing bariatric surgery for obesity and quadruple bypasses for heart disease; addressing the need for safe and affordable housing, violence prevention, etc. to prevent mental illness and addictions, rather than trying to treat (or ignore) mental illness and addictions down the road). These approaches are not quick fixes; they are challenging, but ultimately they can have great effects if we do them well.
  • single loop learning: “problem detection-and-correction process” (p. 11)
  • double loop learning: “a second loop that involves questioning the assumptions, policies, practices, values, and system dynamics that led to the problem in the first place and intervening in ways that involve the modification of the underlying system relationships and functioning” (p. 11)

Real Time Results
  • “In evaluation situations, real time typically means getting results to intended users in a day or two, or at most a couple of weeks rather than in months or on a routine schedule of standard quarterly reports (a common information system reporting time frame).” (p. 12) – I think this is especially important given that decisions have to be made really quickly in health care.

Role of Senior Executive
  • conclusion from a study of the performance of business organizations (from Sutcliffe & Weber, 2003): “the way senior executives interpret their business environment is more important for performance than the accuracy of data they have about their environment.” … i.e., “there was less value in spending a lot of money increasing the marginal accuracy of data available to senior executives compared to the value of enhancing their capacity to interpret what data they had.” [emphasis Patton’s] (p. 12) – senior executives need the abilities to “think evaluatively and critically, and be able to appropriately interpret findings to reach reasonable and supportable conclusions” (p. 13)
  • “the role of senior managers isn’t just to make decisions; it’s to set direction and motivate others in the face of ambiguities and conflicting demands. In the end, top executives must manage meaning as much as they must manage information” (pp. 12-13)