I’m in Halifax for the Knowledge Translation Canada conference. Figured my blog would be a good place to collect my notes. There was also a hashtag of #KTCanada15 where people were tweeting their thoughts (and often tweeting out links to resources that speakers were referring to, which is super helpful, because then I don’t have to search them out myself – especially when I don’t catch the full reference.)
Opening Plenary: Richard Lilford, University of Warwick “Reconciling Scientific Rigour with Service Need”
- there’s a tension between the rigour of science and the needs of the health service (the service needs answers now, and that timeline doesn’t coincide with the timelines of science)
- controlled before & after studies – you need both pre- and post-measurements to account for baseline differences
- stepped wedge study – different sites get the intervention at different times, but the order of the sites is picked by randomization (I’ve put a toy schedule sketch after these stepped wedge notes)
- can even do a stepped wedge nested within a stepped wedge (e.g., hospitals are stepped within a region and the regions themselves are stepped)
- pros: logistical, ethical & political (it’s hard to go to clusters and tell them they will be a control group, especially when the intervention is promising; it’s often not easy to intervene at all places at the same time; and it’s a fair way to distribute the resource across sites); allows you to explore the interaction of time and intervention effect (e.g., often the effect weakens as time goes on); suitable for large-scale roll-out; often more statistically precise than the alternatives
- cons: analytically complex; a problem if the randomization order is broken (e.g., if one site isn’t ready and you skip it, it screws up the stats); plus the disadvantages that apply to all cluster studies (e.g., recruiting individuals into a study and putting them in a cluster is not as good as when the cluster is a whole population)
- the use of the stepped wedge design is on the increase
- non-randomized stepped wedge – you need to worry about whether the sites are getting the intervention in conjunction with something else (e.g., if you are giving the intervention to the sites deemed most in need first)
- my current project will be a non-randomized stepped wedge design – should we be collecting data at the non-intervention sites (at least some subset of data) during the pre-intervention phase, not just the baseline at 3 months prior to their go-live?
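Not from the talk, but to make the design concrete for myself: a minimal Python sketch of what generating a randomized stepped wedge schedule could look like. The site names, the number of periods, and the seed are all invented for illustration – a real trial would use a proper randomization procedure.

```python
import random

def stepped_wedge_schedule(sites, n_periods=None, seed=None):
    """Randomize the order in which sites cross over to the intervention.

    In a stepped wedge design every site starts in the control condition
    and ends up exposed; only the crossover time differs, and that order
    is what gets randomized.
    """
    rng = random.Random(seed)
    order = sites[:]
    rng.shuffle(order)  # randomized crossover order
    n_periods = n_periods or len(sites) + 1
    schedule = {}
    for step, site in enumerate(order, start=1):
        # 0 = control period, 1 = intervention period
        schedule[site] = [0] * step + [1] * (n_periods - step)
    return order, schedule

order, schedule = stepped_wedge_schedule(
    ["Site A", "Site B", "Site C", "Site D"], seed=42)
for site in order:
    print(site, schedule[site])
```

Each row of output is one site’s exposure across the measurement periods; the 0→1 flip marks that site’s randomized go-live step, and every site contributes both control and intervention periods.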
- need Bayesian Epistemology
- method 1: mental integration alone – consider the systematic review, theoretical knowledge, pilot data, and multi-level/multi-method observation, then decide on an estimate and a distribution (toy sketch below)
- method 2: Bayesian Causal Network Analysis: I think this was coming up with estimates for each of the steps in the system
- but sometimes you need a quantitative estimate (e.g., if your question is “is this intervention worth the cost?”)
- you may need to look at qualitative studies for some things
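Again not from the talk – just a toy illustration of what “decide on an estimate and a distribution” could look like formally, as a conjugate Beta-Binomial update in Python. The prior parameters and the pilot numbers are entirely made up.

```python
from scipy import stats

# Prior "mental integration": suppose the literature, theory, and informal
# observation together suggest the intervention succeeds in roughly 60% of
# cases, with real uncertainty. A Beta(6, 4) prior has mean 0.6 and is
# deliberately vague. (All numbers here are invented.)
prior_a, prior_b = 6, 4

# Hypothetical pilot data: 14 successes out of 20 patients.
successes, n = 14, 20

# Conjugate Beta-Binomial update: just add the counts to the prior.
posterior = stats.beta(prior_a + successes, prior_b + (n - successes))

lo, hi = posterior.interval(0.95)
print(f"posterior mean = {posterior.mean():.2f}, "
      f"95% credible interval = ({lo:.2f}, {hi:.2f})")
```

The appeal for decision-making is that the output is exactly what the method calls for: an estimate plus a distribution around it, rather than a bare point estimate.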
- triangulation – look at the pattern of the data (e.g., if your data suggest it works, the literature suggests it works, and the qualitative data point in the right direction, that gives you some confidence); you can (and should) look at the literature for all the steps along the way
- but sometimes you don’t have those direct measures
- you often have data on the generic intervention (e.g., CPOE) and the outcome (e.g., adverse events)
- you can measure process things (activities, outputs) and outcomes
- when you are at the level of generic processes (e.g., should we change the nurse:patient ratio?) there are many possible outcomes (e.g., adverse events, quality of life, patient satisfaction, death) – sometimes called “an intervention with diffuse effects” – and the outcomes aren’t just at the patient level, but also at the level of service processes and clinical processes (his diagram is like a logic model!)
- classifying health interventions
policy (e.g., national/provincial) –> generic service process (e.g., policy in a health org) –> targeted service process (make the process better, e.g., guidelines) –> clinical process (e.g., drug trial) –> patient outcomes (multiple end points)
- maintaining independence
- some people are uncomfortable with the above: those who have too much confidence in quantitative methods (who feel that the above isn’t objective enough) and those who come from a qualitative tradition (who feel that you can’t reduce things to a number like this)
- this is about science for decision-making (instead of science for hypothesis testing)
- he evaluated the “Safer Patients Initiative” – IHI project to make hospitals safer – found huge improvements in outcomes, but seen in both the intervention *and* control hospitals
- study where participants were given 2 studies (one with good methods, one with bad methods) that came to opposite conclusions on whether capital punishment is a deterrent for homicide. They asked participants their opinion of capital punishment and then asked them to assess the methods, and people based their assessment of the methods on which conclusion they agreed with (not on the actual methods) (Source)
- he thinks that different people should do the formative evaluation (they work with managers and may become invested in the program) and the summative evaluation (they would be more objective)
intervening vs. encouraging innovation
- closed-frame = describe what you need to do to intervene with fidelity
- open-frame = tell people to do what they need to do in their context
- science is about abstracting from the detail, not about the details themselves
- e.g., you can compare the new surgery vs. “usual care” and it’s OK if “usual care” is different at each site. In order to decide whether you should use the new surgery, do you really need the exact details of what was done in each usual-care instance, or do you just care whether the new treatment is better than whatever was being done before?
Oral Presentations
- Decision Regret = negative emotion involving distress/remorse following a health-related decision; an important patient-reported outcome measure
- Theoretical Domains Framework (TDF-2) – [look up this model]
- James Lind Alliance Priority Setting Process
- developed in UK in 2004
- premise is to engage those not usually involved in research prioritization
- an organized 4-step process
- bring together patients, carers, and clinicians to identify and prioritize the treatment uncertainties/unanswered questions that they agree need to be addressed
- 4-steps:
- identify partners – those who can help you reach out to the population of interest; range of participation (e.g., steering committee, help you recruit, participate in final priority setting workshop)
- gather uncertainties – used open-ended questions in a survey (online/paper/in-person interviews) + literature/clinical practice guidelines (to look for listed uncertainties)
- process & collate submitted uncertainties – removed those out of scope, sorted the rest into categories, rolled those into “summary questions”, and had people rank them
- final priority setting workshop – they did a 1:1 clinician:patient ratio; people were asked to rank the 30 questions that had been created; facilitated group meeting – started with the big group to explain what they were doing; asked people to talk about their top 3 and bottom 3 questions (and why they ranked them as they did); gave people the background to the questions (e.g., how many submissions from step 2 fed into each question); then explored where there were major discrepancies and why people ranked as they did; now shopping their top 10 around to, e.g., Alberta Health Services and anyone else who might be able to advocate for these questions to be researched.
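As far as I know the JLA process doesn’t prescribe a particular formula for combining everyone’s rankings, but a simple mean-rank (Borda-style) aggregation is one plausible way to turn individual rankings into a shortlist – here’s a toy Python sketch with made-up participants, questions, and ranks.

```python
from statistics import mean

# Each participant ranks the summary questions (1 = highest priority).
# Participants, questions, and ranks are all invented for illustration.
rankings = {
    "patient 1":   {"Q1": 1, "Q2": 3, "Q3": 2, "Q4": 4},
    "patient 2":   {"Q1": 2, "Q2": 4, "Q3": 1, "Q4": 3},
    "clinician 1": {"Q1": 1, "Q2": 2, "Q3": 4, "Q4": 3},
    "clinician 2": {"Q1": 3, "Q2": 1, "Q3": 2, "Q4": 4},
}

def aggregate(rankings, top_n=3):
    """Order questions by mean rank across participants (lower = higher priority)."""
    questions = next(iter(rankings.values())).keys()
    avg = {q: mean(r[q] for r in rankings.values()) for q in questions}
    return sorted(avg.items(), key=lambda kv: kv[1])[:top_n]

for question, avg_rank in aggregate(rankings):
    print(f"{question}: mean rank {avg_rank:.2f}")
```

Computing the mean ranks separately for patients and clinicians would also surface the major discrepancies that the facilitators explored in the workshop.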
Panel Discussion
There was quite a bit of discussion about “what is a patient?” Some participants talked about having their voices silenced because they were seen as “too educated/informed” about health/healthcare – that they aren’t “authentic patients” because of this.
One person noted that asking patients to fill out a CV to be a “knowledge user” on a research grant sends the wrong message about what patients are bringing (it’s not about them bringing their education/occupation, but rather about them bringing their lived experiences/values/…)
To do:
- find an online course in Bayesian statistics
- read up on stepped wedge design (e.g., this article by Lilford)
- look back at notes from my priority setting course in my MBA