Trying to avoid falling into yet another rabbit hole of reading (this time on “Implementation Science”; there’s another rabbit hole of “Program Science” awaiting me as well!), but here are notes from a couple of papers I’ve read trying to get the lay of the land on this.
Implementation Matters
Diffusion or Technology Transfer = “the spread of new ideas, technologies, manufactured products […] or […] programs” (p. 327)
- Phases of program diffusion:
- dissemination: “how well information about a program’s existence and value is supplied to” end users
- adoption: whether an end user “decides to try the new program”
- implementation: “how well the program is conducted during a trial period”
- sustainability: “whether the program is maintained over time” (p. 327)
8 aspects of implementation:
- fidelity: “the extent to which the innovation corresponds to the originally intended program (a.k.a. adherence, compliance, integrity, faithful replication)”
- dosage: “how much of the original program has been delivered (a.k.a. quantity, intervention strength)”
- quality: “how well different programs have been conducted”
- participant responsiveness: “degree to which the program stimulates the interest or holds the attention of participants”
- program differentiation: “extent to which a program’s theory and practices can be distinguished from other programs (a.k.a. program uniqueness)”
- monitoring of control/comparison conditions: “describing the nature and amount of services received by members of these groups (treatment contamination, usual care, alternative services)”
- program reach: “rate of involvement and representativeness of program participants (participation rates, program scope)”
- adaptation: “changes made in the original program during implementation (a.k.a. program modification, reinvention)” (p. 329)
In order to evaluate whether a program –> outcomes, you need to monitor:
- how the program is being implemented:
- so you know what you are actually evaluating
- because negative results could occur because you didn’t actually implement as planned (and if you don’t monitor what is actually implemented, you would come to the incorrect conclusion that the program doesn’t work)
- because positive results could come from an innovation that was implemented instead of what was planned (and if you don’t monitor what is actually implemented, you would come to the incorrect conclusion that the program works and would miss out on being able to sustain/spread the innovation that actually does work)
- what the comparator group is actually getting (so you know what you are actually comparing your program to)
It’s important to find the right mix of fidelity and adaptation: although fidelity can –> improved outcomes, no program is implemented with 100% fidelity, and some adaptation to local context can also improve outcomes. Importantly, you need to “specify the theoretically important components of interventions, and to determine how well these specific components are delivered or altered during implementation. This is because core program components should receive emphasis in terms of fidelity. Other less central program features can be altered to achieve a good ecological fit.” (p. 341)
Source: Durlak, J. A., & DuPre, E. P. (2008). Implementation matters: A review of research on the influence of implementation on program outcomes and the factors affecting implementation. Am J Community Psychol 41: 327-350.
Making sense of implementation theories, models and frameworks
- implementation science came from struggles with getting research into practice; attempts to implement evidence-based practice were often not based on an explicit strategy/theory, and it was hard to “understand and explain how and why implementation succeeds or fails, thus restraining opportunities to identify factors that predict the likelihood of implementation success and develop better strategies to achieve more successful implementations.” (p. 1)
- in response, researchers have created a lot of theories and used some from other disciplines and now people find it difficult to pick a theory to use
- Implementation Science: “the scientific study of methods to promote the systematic uptake of research findings and other EBPs into routine practice to improve the quality and effectiveness of health services and care” (p. 2)
- diffusion – dissemination – implementation continuum
- diffusion – practices spread through passive, untargeted, unplanned mechanisms
- dissemination – practices spread through active mechanisms/planned strategies
- implementation – “the process of putting to use or integrating new practices within a setting” (p. 2)
- theory – “a set of analytical principles or statements designed to structure our observation, understanding and explanation of the world” (p. 2) – usually described as “made up of definitions of variables, a domain where the theory applies, a set of relationships between the variables and specific predictions. A “good theory” provides a clear explanation of how and why specific relationships lead to specific events” (p. 2)
- model – “a deliberate simplification of a phenomenon or a specific aspect of a phenomenon. Models need not be completely accurate representations of reality to have value” (p. 2)
- not always easy to distinguish between a “model” and a “theory” – “Models can be described as theories with a more narrowly defined scope of explanation; a model is descriptive, whereas a theory is explanatory as well as descriptive” (p. 2)
- framework – “a structure, overview, outline, system or plan consisting of various descriptive categories, e.g. concepts, constructs or variables, and the relations between them that are presumed to account for a phenomenon. Frameworks do not provide explanations; they only describe empirical phenomena by fitting them into a set of categories” (p. 2)
- in implementation science:
- “theory usually implies some predictive capacity […] and attempts to explain the causal mechanisms of implementation”
- models “are commonly used to describe and/or guide the process of translation research into practice […] rather than to predict or analyse what factors influence implementation outcomes”
- frameworks “often have a descriptive purpose by pointing to factors believed or found to influence implementation outcomes” (p. 3)
- models and frameworks are typically checklists and don’t specify mechanisms of change
- the paper groups theories, models and frameworks into five categories (process models, determinant frameworks, classic theories, implementation theories, and evaluation frameworks), and there is overlap among these five categories
- “the use of a single theory that focuses only on a particular aspect of implementation will not tell the whole story. Choosing one approach often means placing weight on some aspects (e.g. certain causal factors) at the expense of others, thus offering only partial understanding. Combining the merits of multiple theoretical approaches may offer more complete understanding and explanation, yet such combinations may mask contrasting assumptions regarding key issues. […] Furthermore, different approaches may require different methods, based on different epistemological and ontological assumptions.” (p. 9)
- research is needed to determine if use of theories/models/frameworks does, in fact, improve implementation
Source: Nilsen, P. (2015). Making sense of implementation theories, models and frameworks. Implementation Science 10:53.