Pragmatic Science

Another posting that was languishing in my drafts folder. Not sure why I didn’t publish it when I wrote it, but here it is now!

  • Berwick (2005) wrote an interesting commentary called “Broadening the view of evidence-based medicine” in which he describes how “scholars in the last half of the 20th century forged our modern commitment to evidence in evaluating clinical practices” (p. 315), and though it was seen as unwelcome at the time, they brought the scientific method to bear on the clinical world, and over time the randomized controlled trial (RCT) became the “Crown Prince of methods […] which stood second to no other method” (p. 315). And while there has been a huge amount of benefit from this, he says “we have overshot the mark. We have transformed the commitment to “evidence-based medicine” of a particular sort into an intellectual hegemony that can cost us dearly if we do not take stock and modify it” (p. 315). He points out that there are many ways of learning things:
  • “Did you learn Spanish by conducting experiments? Did you master your bicycle or your skis using randomized trials? Are you a better parent because you did a laboratory study of parenting? Of course not. And yet, do you doubt what you have learned?” (p. 315)
  • “Much of human learning relies wisely on effective approaches to problem solving, learning, growth, and development that are different from the types of formal science […and …] some of those approaches offer good defences against misinterpretation, bias, and confounding.” (p. 315).

  • He warns that limiting ourselves to only RCTs “excludes too much of the knowledge and practice that can be harvested from experience, itself, reflected upon” (p. 316)
  • “Pragmatic science” involves (see the sketch after this list):
    • “tracking effects over time (rather than summarizing with stats)
    • using local knowledge in measurement
    • integrating detailed process knowledge into the work of interpretation
    • using small sample sizes and short experimental cycles to learn quickly
    • employing powerful multifactorial designs (rather than univariate ones focused on “summative” questions)” (p. 316)
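As a loose illustration of the first point (tracking effects over time rather than summarizing with a single statistic), here’s a minimal run-chart sketch in Python. The weekly rates, the measure, and the week-6 change point are all made up purely for illustration:

```python
# A toy run chart: track a measure over time instead of collapsing it
# into one summary statistic. All numbers here are invented for illustration.
import matplotlib.pyplot as plt

weeks = list(range(1, 13))
rate = [4.1, 3.8, 4.0, 3.5, 3.6, 3.1, 2.9, 3.0, 2.6, 2.4, 2.5, 2.2]
mean = sum(rate) / len(rate)  # what a single summary stat would report

fig, ax = plt.subplots()
ax.plot(weeks, rate, marker="o", label="weekly rate")
ax.axhline(mean, linestyle="--", label=f"overall mean = {mean:.2f}")
ax.axvline(6, linestyle=":", color="gray", label="change introduced (week 6)")
ax.set_xlabel("week")
ax.set_ylabel("events per 1,000 patient-days")
ax.set_title("Tracking effects over time (run chart)")
ax.legend()
plt.show()
```

The single mean hides exactly the thing a quality-improvement team cares about: whether the measure started moving after the change was introduced.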
explanatory trials vs. pragmatic trials

Definition
  • explanatory: evaluating efficacy (how well does it work in a tightly controlled setting); clinical trials that test a causal research hypothesis in an ideal setting
  • pragmatic: evaluating effectiveness (how well does it work in “real life”); trials that help users decide between options

Validity
  • explanatory: high internal validity
  • pragmatic: high external validity

Test sample & setting
  • explanatory: focus on homogeneity
  • pragmatic: focus on heterogeneity
  • explanatory and pragmatic are not a dichotomy; most trials are not purely one or the other – there is a spectrum between them
  • Thorpe et al. (2009) created a tool called PRECIS to help people designing clinical trials determine where on that pragmatic-explanatory continuum their trial falls; it involves looking at 10 domains (see the table below), with scores on these criteria placed on a 10-spoke wheel to give you a spider-diagram type of picture (a minimal plotting sketch follows the table)
PRECIS criteria: explanatory trials vs. pragmatic trials

participant eligibility
  • explanatory: strict
  • pragmatic: everyone with the condition of interest can be enrolled

experimental intervention – flexibility
  • explanatory: strict adherence to protocol
  • pragmatic: highly flexible; practitioners have leeway on how to apply the intervention

experimental intervention – practitioner expertise
  • explanatory: narrow group, highly skilled
  • pragmatic: broad group of practitioners in a broad range of settings

comparison group – flexibility
  • explanatory: strict; may use placebo instead of “usual practice”/”best alternative”
  • pragmatic: “usual practice”/”best alternative”; practitioner has leeway on how to apply it

comparison group – practitioner expertise
  • explanatory: standardized
  • pragmatic: broad group of practitioners in a broad range of settings

follow-up intensity
  • explanatory: extensive follow-up & data collection; more than would routinely occur
  • pragmatic: no formal follow-up; use administrative databases to collect outcome data

primary trial outcome
  • explanatory: outcome known to be a direct & immediate result of the intervention; may require specialized training
  • pragmatic: clinically meaningful to participants; special tests/training not required

participant compliance with intervention
  • explanatory: closely monitored
  • pragmatic: unobtrusive (or no) monitoring of compliance

practitioner compliance with study protocol
  • explanatory: closely monitored
  • pragmatic: unobtrusive (or no) monitoring of adherence

analysis of primary outcome
  • explanatory: intention-to-treat analysis usually used, but usually supplemented with a “compliant participants” analysis to answer the question “does this intervention work in the ideal situation?”; analysis focused on narrow mechanistic questions
  • pragmatic: intention-to-treat analysis (includes all patients regardless of compliance); meant to answer the question “does the intervention work in ‘real world’ conditions, with all the noise inherent therein” (Thorpe et al., 2009)
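To make the “spider diagram” idea concrete, here is a minimal sketch of plotting PRECIS-style domain scores on a polar (radar) chart in Python. The domain names come from the table above, but the 0 (most explanatory) to 4 (most pragmatic) scale and the scores themselves are hypothetical, not taken from any real trial or from the PRECIS paper:

```python
# A toy PRECIS-style spider diagram: one score per domain on a polar axis.
# The 0 (most explanatory) to 4 (most pragmatic) scale and the scores
# below are hypothetical, for illustration only.
import numpy as np
import matplotlib.pyplot as plt

domains = [
    "participant eligibility",
    "intervention – flexibility",
    "intervention – practitioner expertise",
    "comparison – flexibility",
    "comparison – practitioner expertise",
    "follow-up intensity",
    "primary trial outcome",
    "participant compliance",
    "practitioner compliance",
    "analysis of primary outcome",
]
scores = [3, 2, 4, 3, 2, 1, 4, 3, 2, 3]  # hypothetical ratings

# Space the 10 domains evenly around the circle; repeat the first point
# so the polygon closes.
angles = np.linspace(0, 2 * np.pi, len(domains), endpoint=False).tolist()
angles += angles[:1]
values = scores + scores[:1]

fig, ax = plt.subplots(subplot_kw={"projection": "polar"})
ax.plot(angles, values, marker="o")
ax.fill(angles, values, alpha=0.25)
ax.set_xticks(angles[:-1])
ax.set_xticklabels(domains, fontsize=7)
ax.set_ylim(0, 4)
ax.set_title("Explanatory (hub) to pragmatic (rim)")
plt.show()
```

The closer the trial’s polygon sits to the rim of the wheel, the more pragmatic it is on those domains; a polygon hugging the hub describes a tightly controlled explanatory trial.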

I also came across this article in Forbes magazine: Why We Need Pragmatic Science, and Why the Alternatives are Dead-Ends. It’s a short read, but it succinctly summarizes an argument I find myself often making: science is a powerful tool for understanding and explaining the world. It’s not the only tool (philosophy and the other humanities, for example, are great tools for different purposes), but it’s certainly the best one for certain purposes and it’s a fantastic one to have in our toolbox!

References:

Berwick, D.M. (2005). Broadening the view of evidence-based medicine. Quality & Safety in Health Care. 14: 315-316.

Thorpe, K.E., Zwarenstein, M., Oxman, A.D., Treweek, S., Furberg, C.D., Altman, D.G., Tunis, S., Bergel, E., Harvey, I., Magid, D.J., & Chalkidou, K. (2009). A pragmatic-explanatory continuum indicator summary (PRECIS): a tool to help trial designers. Canadian Medical Association Journal. 180(10): E47-E57.
