Webinar Notes: Using Randomized Controlled Trial Designs in Community Settings

Presenter: Tim Aubry, U of Ottawa
Date: 26 March 2014
CES Webinar

  • RCTs are often referred to as the “gold standard”, but not everyone agrees
  • “methodological pluralist pragmatists” – choose methods to fit the program’s stage of development and its context, vs. treating RCTs as the gold standard
  • evaluation is messy and doesn’t always lend itself to RCTs – evaluators need to be nimble and flexible

The “At Home/Chez Soi” Demonstration Project

  • “action research on how to support people with severe mental illness to exit homelessness”
  • funded by Mental Health Commission of Canada
  • 85% funding into services, 15% into research
  • intervention = subsidized housing + support (ACT or ICM) – provided separately
    • housing provided immediately, private market units, people hold their own lease, max of 30% income on rent (rest subsidized), funding not tied to particular housing unit
    • Assertive Community Treatment (ACT) or Intensive Case Management (ICM) teams
    • both well known in community mental health

Methods:

  • protocol paper published in BMJ Open
  • multi-site (Vancouver, Winnipeg, Toronto, Montreal, Moncton)
  • non-blind parallel group RCT
  • effectiveness and cost-effectiveness
  • 2 fidelity assessments & 2 implementation evaluations
  • high needs (ACT) & moderate needs (ICM) vs. usual care
  • inclusion: mental health (with or without co-existing substance use), homeless or “precariously housed”, adults
  • intent to treat analysis
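The intent-to-treat point above can be sketched in a few lines of Python. The record format is hypothetical, not the study's actual data model; the idea is that participants are analyzed in the arm they were randomized to, even if they never engaged with the service.

```python
def itt_groups(records):
    """Group participants by the arm they were randomized to
    (intent-to-treat), NOT by whether they actually received or
    used the service. Record format is hypothetical."""
    groups = {"intervention": [], "control": []}
    for r in records:
        groups[r["assigned_arm"]].append(r)  # ignore r["received_service"]
    return groups
```

Analyzing only those who actually used the service ("per protocol") would re-introduce the selection bias that randomization was meant to remove.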

Challenges

  • top-down initiation of the project, initial resistance from the community
    • each city was involved in adaptations for its “version” (e.g., Aboriginal group in Winnipeg, cultural community group in Toronto, rural project in Moncton)
    • presented as a “new & innovative” addition to services, not a removal of existing ones
    • emphasis on credibility of research for Housing First
  • ethical concern: people feel that those in the intervention group are getting something better that is being withheld from the control group; the concern is compounded when working with marginalized groups
    • emphasize that we don’t actually know the intervention works (studies elsewhere may suggest it does, but they are limited or were conducted in contexts that may not be relevant to yours)
    • a service is added for some, but nothing is withdrawn from anyone (e.g., if the project were not happening, those in the control group would be in the same situation they are in now)
    • you can argue that it would be unethical to provide services that haven’t been evaluated
    • would have liked to use a “waiting list” design, where those in the control group are first in line for the services if the intervention is found to be significantly better
  • integrity of randomization – needs to be truly random assignment, avoid selection bias
    • referrals went to research personnel, who determined eligibility and did the randomization (so service providers couldn’t introduce selection bias – e.g., a provider who believes the intervention works might be tempted to steer particularly vulnerable clients into the intervention group)
    • randomization occurred after first interview
    • look at the characteristics of the groups to see if they are equivalent
  • sample attrition
    • especially in a transient population
    • could get differential attrition when one group connected to a service and the other not
    • oversampled to get needed power
    • actively tracked participants (data-collection check-ins every 3 months; interviews every 6 months); get the participant’s permission up front to contact family members/service providers to track them down if contact is lost; rely on personal connections; pay an honorarium for interviews
  • ensuring fidelity of the intervention
    • complex psychosocial intervention – worry about it being implemented correctly at the start, but also about program “drift” over time
    • significant training & ongoing technical support throughout the project
    • 38 fidelity standards – did fidelity assessments
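The concealed-randomization step described above can be sketched as follows. This is a minimal illustration with hypothetical participant IDs and a simple balance check on one covariate, not the project's actual procedure:

```python
import random

def randomize(participant_ids, seed=None):
    """Centrally assign eligible participants to an arm, keeping
    the assignment out of service providers' hands."""
    rng = random.Random(seed)
    return {pid: rng.choice(["intervention", "control"])
            for pid in participant_ids}

def mean_by_arm(assignments, covariate):
    """Rough balance check: compare the mean of one covariate
    (e.g., age) across arms after randomization."""
    values = {"intervention": [], "control": []}
    for pid, arm in assignments.items():
        values[arm].append(covariate[pid])
    return {arm: sum(v) / len(v) for arm, v in values.items() if v}
```

With a reasonable sample size, the arm means of any pre-randomization characteristic should come out close to each other; a large gap is a flag to investigate.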
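The oversampling point is one line of arithmetic: recruit enough extra participants that, after expected dropout, the analyzable sample still meets the power target. The attrition rate below is an illustrative assumption, not the project's figure:

```python
import math

def inflate_for_attrition(n_needed, expected_attrition):
    """Inflate the recruitment target so the sample remaining after
    expected dropout still meets the power requirement.
    expected_attrition is a proportion, e.g. 0.25 for 25%."""
    return math.ceil(n_needed / (1.0 - expected_attrition))

# e.g., needing 400 analyzable participants and expecting 25% attrition:
# inflate_for_attrition(400, 0.25) -> recruit 534
```

Note this only protects statistical power; it does nothing about *differential* attrition between arms, which the active-tracking measures above are meant to limit.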
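The drift worry could be monitored with something as simple as comparing ratings per standard across the two fidelity assessments. The standard names, rating scale, and threshold below are all assumptions for illustration; the project's own instrument had 38 standards:

```python
def flag_drift(first, second, threshold=0.5):
    """Flag fidelity standards whose rating dropped from the first
    assessment to the second by more than `threshold` – a crude way
    to spot program drift between assessment waves."""
    return sorted(s for s in first
                  if first[s] - second.get(s, first[s]) > threshold)
```

A standard missing from the second assessment is treated as unchanged rather than as a drop.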

Outcomes

  • bigger improvements in housing, quality of life, community ability, and substance use in the intervention group vs. control (both groups improved, but the intervention produced bigger gains)
  • there is some regression to the mean (people are recruited when they are having a rough time), so some improvement happens naturally; without an RCT, one could conclude that the intervention caused all of the improvement, even though some of it would have happened anyway
  • cost analysis – every $10 invested in the intervention yielded $9.60 in savings (comprehensive costs – health care, justice, etc.)
  • influenced budgeting policy – big cities have to put 2/3 of funding towards Housing First
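The cost figure is simple arithmetic and can be written out directly. The per-$10 savings value is the one reported in the webinar; the function itself is just illustrative:

```python
def cost_offset(invested, savings_per_10_dollars=9.60):
    """For the reported result – every $10 invested produced $9.60 in
    comprehensive savings (health care, justice, etc.) – compute total
    savings and net cost for a given investment."""
    savings = invested * (savings_per_10_dollars / 10.0)
    return {"savings": savings, "net_cost": invested - savings}
```

So the intervention is not quite cost-neutral on this measure: each $10 spent has a net cost of about $0.40 once offsets elsewhere in the system are counted.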

 
