I recently completed an online learning module about the RE-AIM framework for program planning and evaluation offered by the Center for Training and Research Translation (TRT) at the Center for Health Promotion and Disease Prevention at the University of North Carolina at Chapel Hill. Here are some notes (with my more extensive raw notes after the jump).
RE-AIM is a framework for program planning and evaluation that goes beyond just asking “is this program effective?” to consider who the program reaches, how well it is adopted, the fidelity with which it is implemented, and whether it is sustained over the long term.
The elements of the framework are Reach, Effectiveness, Adoption, Implementation, and Maintenance. Collectively, these elements determine the overall Public Health Impact of the initiative.
Reach = (# of people actually exposed to/served by the initiative) ÷ (# of people who could be exposed/served in an ideal world)
- we are especially interested in whether those who are being exposed to/served by the initiative are those with the highest risk/most in need of the service, so we also want to compare those who are actually reached by the program with the overall group of people who could be reached on any relevant characteristics (e.g., does the program reach all genders? does it reach people of the different cultural backgrounds in the population? people with different levels of ability? people who don’t speak English? people of differing income levels?)
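A minimal sketch of how the Reach calculation and the representativeness check might look in practice (all numbers and group labels here are hypothetical, not from the course):

```python
# Illustrative Reach calculation -- all numbers are hypothetical
served = 180     # people actually exposed to/served by the initiative
eligible = 1200  # people who could be exposed/served in an ideal world

reach = served / eligible
print(f"Reach: {reach:.1%}")  # 15.0%

# Representativeness check: compare those reached against the eligible
# population on a relevant characteristic (here, a made-up low-income share)
low_income_share_eligible = 0.40  # share of the eligible population
low_income_share_served = 0.22    # share of those actually served
if low_income_share_served < low_income_share_eligible:
    print("Low-income people appear under-represented among those reached")
```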
Effectiveness = “how well your initiative effects a change in intended outcomes and whether or not there are any unanticipated outcomes”
- consider effects on the primary outcomes of interest, but also other outcomes
- unanticipated outcomes can be positive or negative
- are outcomes consistent across different subgroups?
- how confident are you that the benefits outweigh the adverse consequences?
- when planning – look for existing evidence (research, evaluations), be clear on the outcomes you are trying to achieve (e.g., create a logic model!), and think about how the evidence relates to your specific context
- for evaluations – looking at the outcomes of your program
Adoption = (# of settings that actually participate) ÷ (# of settings that could participate in an ideal world)
- like Reach, but at the organizational level
- “setting” can be things like schools, daycares, health units, community-based agencies, etc.
- may be multiple levels of settings (e.g., school districts, then schools)
- are there differences between the settings that participate and those that don’t?
- adoption and reach work together (e.g., if you have 5 settings that each serve 50 people: reach = 250 people; if you have 1 setting that serves 250 people: reach = 250 people)
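The arithmetic behind that trade-off, as a quick sketch (numbers are illustrative):

```python
# Adoption and reach work together: the same total reach can come from
# many small settings or one large one (illustrative numbers)
reach_many_small = 5 * 50  # 5 settings, each serving 50 people
reach_one_large = 1 * 250  # 1 setting serving 250 people

assert reach_many_small == reach_one_large == 250
```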
Implementation = “extent to which your intervention is delivered as intended or designed”
- implementation fidelity
- “core elements” = “components of the intervention that are critical to the effectiveness of the intervention” – based on theory/logic/main strategies; can be adapted, but not changed
- are all the components being implemented as planned?
- is it implemented the same way by all staff?
- do staff change the way they implement over time?
- what is the time and money cost of delivery?
- design process measures for each core element – if your evaluation does not describe implementation, then you don’t really know what you are evaluating
Maintenance = “what are the long-term effects of the initiative and is it sustainable?”
- both individual and organizational level
- does your initiative produce lasting effects? (at individual and/or setting level)
- individual: “the long-term effects of the intervention on both targeted outcomes and quality-of-life indicators”
- setting/org: “extent to which a program is sustained (or modified or discontinued) over time”
Putting it all together:
- our goal is to:
- reach more people
- have more settings adopt the initiative
- have it implemented as intended
- so that it is effective
- over the long term (maintained)
- all together → Public Health Impact
- need to attend to all 5 dimensions
Here’s my certificate of completion for the course:
My more extensive raw notes that I took while going through the module (which I’m sure no one other than me would ever be interested in) are after the jump.
RE-AIM Framework – Online Course – Notes
- “The overall goal of the RE-AIM framework is to encourage program planners, evaluators, readers of journal articles, funders, and policy-makers to pay more attention to essential program elements that can improve the sustainable adoption and implementation of effective, evidence-based health promotion programs.”
- “RE-AIM is an evolving framework to help translate research into practice”
- created for evaluating health promotion activities – to go beyond just “effectiveness”
- close gap between research studies and practice
- Abrams et al. defined Public Health Impact as Impact = Reach × Effectiveness, but RE-AIM extends that to include Adoption, Implementation, and Maintenance (see the sketch below)
- RE-AIM can be used for:
- developing new/adapting existing interventions
- choosing between different interventions
- assessing interventions as part of QI
- evaluating interventions; framing evaluation questions
The framework:
- “to maximize overall impact”, must “perform well across all five elements”
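A minimal numeric sketch of the Abrams et al. formula (the numbers are made up; the course doesn’t give a formula for combining all five RE-AIM elements, so this only shows the original Reach × Effectiveness product):

```python
# Abrams et al.: Impact = Reach x Effectiveness (hypothetical numbers)
reach = 0.15          # proportion of the eligible population actually served
effectiveness = 0.30  # proportion of participants achieving the target outcome

impact = reach * effectiveness
print(f"Population impact: {impact:.1%}")  # 4.5% of the eligible population

# Note the trade-off: halving reach while doubling effectiveness
# gives the same population impact
print(f"Half reach, double effectiveness: {0.075 * 0.60:.1%}")  # also 4.5%
```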
Reach
Reach = (# of people actually exposed) ÷ (# of people ideally exposed)
- “# & % of people affected by an initiative or people whose health may be improved as a result of the initiative”
- measurement:
- ideally, how many people would be exposed/served?
- how many are actually exposed/served?
- are those exposed/served representative of the target population?
- are the individuals most at risk among those who are reached?
- “extent to which a program attracts its intended audience”
- access, barriers
- who can benefit the most?
- especially interested in whether those most in need/at highest risk are reached
- need to identify a denominator of “# of people ideally exposed” (can be challenging)
- tools available to help estimate (www.re-aim.org/2003/commleader.html)
- numerator = “# of people actually served”
- measured as:
- absolute number & proportion of people willing to participate
- essentially, how many people can you recruit & retain
- also consider representation of the population
- questions to ask:
- how many people are eligible to participate? (denominator)
- how many actually participate & to what extent? (numerator)
- are they representative of the population? (compare people actually exposed/served vs. those ideally exposed/served)
- are those most at risk the ones reached?
- Factors that influence reach:
- available resources
- perceived benefits vs. costs
- meet the needs of the targeted population
- mandatory vs. voluntary policy
- if you aren’t getting good reach (e.g., you aren’t reaching the most at risk, or levels of participation are low):
- look at characteristics of those who participate and those who don’t – what’s the difference?
- ask those who don’t participate why they don’t
- may need to adapt the program and/or recruitment methods to meet their needs, make it accessible/acceptable to them, lower the costs to participate
Effectiveness
- effectiveness = “how well your initiative effects a change in intended outcomes and whether or not there are any unanticipated outcomes”
- unanticipated outcomes may be positive or negative
- effects on primary program outcomes of interest
- measurement:
- will your initiative achieve the outcomes you intend?
- when planning: look for existing evidence – the stronger the evidence, the more confident you can be that your initiative will be effective
- be clear about the outcomes you are trying to achieve – create a logic model!
- are the outcomes consistent across different subgroups of the population?
- are there any unanticipated consequences?
- how confident are you that the benefits outweigh any adverse consequences?
- “measuring improvement on intervention targets and impact on quality of life”
- any unanticipated consequences (including adverse consequences)
- how it worked across sub-groups of the population (e.g., men vs. women; children vs. adults; low income vs. higher income)
- is it consistent across sub-groups?
- “temporally appropriate outcomes” – those that can reasonably be expected in a given timeframe
- logic models can help illustrate temporally appropriate outcomes
- the most reliable evidence comes from research and evaluation, though often we don’t have the resources to do that
- other forms of evidence: indirect/parallel evidence from similar programs, theory or logic of the program, expert opinion
- plan for evaluation upfront (logic models can be very useful)
- identify primary outcome of interest, consider multiple outcomes, use valid & reliable measures of change, measure participation in the intervention, and track the implementation of the intervention (fidelity)
- build an evidence base for the intervention before it is designed (i.e., is there direct evidence that an existing intervention works? what strategies and techniques are known to be effective? theories? – use these to build your intervention)
- factors that influence effectiveness:
- use evidence-based resources (may need to adapt to your setting, but can serve as guidance)
- strength of implementation (influenced by resources available)
- improvement strategies
- develop documentation and tracking systems
- allow for ongoing input from target population
Adoption
Adoption = (# of settings that actually participate) ÷ (# of settings that could participate)
- like Reach, but at the organizational level
- “participation rate among potential settings and the representativeness of these settings”
- “setting” can be, e.g., schools, daycares, community-based agencies, etc.
- may be multiple levels for settings (e.g., administration of organization, staff)
- place where you will be able to reach the people you want to reach
- can the program “be adopted by most settings, esp those having few resources” (i.e., not just as part of a funded study)
- need to identify a denominator of “eligible settings” (can be challenging)
- tools available to help estimate (www.re-aim.org/2003/commleader.html)
- measured as:
- participation rate among possible settings
- also consider representativeness of those settings
- adoption = numerator/denominator
- questions to ask:
- how many settings and/or staff are eligible to participate? (denominator)
- how many actually participate & to what extent? (numerator)
- are there differences between those that adopt the intervention and those that don’t? (to determine the “representativeness” of the settings that do participate)
- differences in size & capacity of the sites
- differences in demographics of the people in the setting
- do they have previous experience with this type of intervention?
- factors that affect adoption:
- does it fit with organizational mission? (e.g., if it’s a school, frame it as educational, fit with curriculum)
- political will (urgency) re: the issue
- organizational capacity (resources & expertise) to do the intervention
- how complex is the intervention?
- is there evidence that the intervention is effective?
- do they think they can succeed at implementation?
- you need to meet with the setting to assess capacity, training needs, etc.
- adoption & reach work together (e.g., if you have 5 settings that each serve 50 people: reach = 250 people; if you have 1 setting that serves 250 people: reach = 250 people)
Implementation
- a.k.a. “intervention fidelity”
- “the extent to which your intervention is delivered as intended or designed”
- is each section/component implemented as planned?
- is it implemented the same by all staff?
- do staff change how they implement over time?
- what is the time & cost of delivery of the intervention?
- core elements: “components of the intervention that are critical to the effectiveness of the intervention” – based on theory/logic/main strategies
- can be adapted, but not changed
- include: method of delivery, dose & intensity (e.g., # and length of sessions or exposures to the intervention)
- changing essential elements of a program can change the outcomes
- need to think about what the “core element” is – e.g., core element might be “do a dietary assessment”, so if you don’t do any dietary assessment, you aren’t implementing with fidelity. But the dietary assessment itself might be modified (e.g., including more culturally relevant foods)
- also concerns about whether the intervention is delivered consistently across different staff members and the “extent to which programs are adapted or modified over time”[1]
- measurement:
- what activities are required to implement your initiative?
- “required components are the key components that must be completed for your initiative to be effective” – may be based on theory, a logical sequence of activities, evidence-based strategies
- are those activities occurring as intended?
- what is the cost (time & money) of your initiative?
- what adaptations, if any, were made during the initiative?
- what is the acceptability of your initiative?
- design process measures for each core element – if your evaluation does not describe implementation, then you don’t really know what you are evaluating (see the sketch at the end of this section)
- factors influencing implementation:
- how complex is the intervention?
- specific types and level of staff
- what are the costs (including staff time, materials, but also cost to participants to participate)? what type(s) of staff are required to implement (e.g., does it require an RN?)
- improvement strategies:
- get stakeholder input BEFORE you start,
- adapt materials to local culture, literacy, and social norms, without diluting the required activities
- pilot the intervention
- provide staff training, technical assistance & protocols (even with detailed protocols, still need training; may set up ongoing training)
- monitor implementation through data collection & adapt initiative if necessary
- you need:
- clear implementation protocols
- staff training
- tracking system for implementation (data collection, observations)
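To make “design process measures for each core element” concrete, here is a minimal sketch of what a fidelity tracking system might record (the core elements and session data below are invented for illustration, not taken from the course):

```python
# Hypothetical fidelity tracking: one process measure per core element,
# recorded for each delivered session, then summarized as a delivery rate
core_elements = ["dietary_assessment_done", "all_modules_delivered", "followup_scheduled"]

sessions = [  # one dict of observations per session (made-up data)
    {"dietary_assessment_done": True,  "all_modules_delivered": True,  "followup_scheduled": True},
    {"dietary_assessment_done": True,  "all_modules_delivered": False, "followup_scheduled": True},
    {"dietary_assessment_done": False, "all_modules_delivered": True,  "followup_scheduled": True},
]

for element in core_elements:
    delivered = sum(session[element] for session in sessions)
    print(f"{element}: delivered in {delivered}/{len(sessions)} sessions")
```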
Maintenance
- “what are the long-term effects of the initiative and is it sustainable?”
- measurement:
- does your initiative produce lasting effects? (at individual and/or setting level)
- look at targeted outcomes from your logic model
- is there consistent support from the organizations involved?
- examine how staff, settings and partners are involved
- e.g., include initiative activities in staff job descriptions, build on existing infrastructure, provide ongoing support to the setting
- is there adequate funding for maintenance of your initiative?
- determine strategies for funding (e.g., identifying all the costs, identifying staff who can apply for funding, financial planning early on)
- was your initiative adapted over the long term, and if so, how?
- both individual participant level and setting/organizational level
- individual: “the long-term effects of the intervention on both targeted outcomes and quality-of-life indicators”
- setting/org: “extent to which a program is sustained (or modified or discontinued) over time”
- maintenance depends on making long-term changes
- factors that influence maintenance:
- individual level: continued social support, policy & enviro supports
- individual & setting level: benefit relative to cost
- setting level: amount of training, technical assistance & support to staff, settings & partners, level of ongoing funding, ongoing engagement of partners
- plan for sustainability early on, not just at the end
- improvement strategies:
- design initiative with resource limitations in mind
- design initiative with low complexity or can be adapted over time
- contact those exposed periodically
- encourage social support groups
- institute policy & incentive supports
- health marketing: “to protect and promote health of diverse populations” (CDC)
- 4 Ps of the marketing mix:
- product: program/services, practices, products
- price: cost of product, cost associated with making behaviour change
- place: where product reaches the individual
- promotion: ads, PSAs, flyers, newsletters, coupons, etc.
- the different dimensions depend on each other
- R & A work together
- E depends on I
- effectiveness depends on the fidelity of implementation
- often issues with evaluation are not about outcomes, but about not evaluating implementation – if you don’t implement as planned, you aren’t actually assessing the outcome of the program
- M happens last
Putting it all together:
- our goal is to:
- reach more people
- have more settings adopt the initiative
- have it implemented as intended
- so that it is effective
- over the long term (maintained)
- need to attend to all 5 dimensions
[1] I see a distinct difference in philosophy here (e.g., that the program is “correct” and any changes to it will weaken it) vs. a complexity approach (e.g., we need to take into account context (what works for whom, when, and for how long) and that things change over time, so even if something works well now, it might not later).