Another posting that was languishing in my drafts folder. Not sure why I didn’t publish it when I wrote it, but here it is now!
- Berwick (2005) wrote an interesting commentary called “Broadening the view of evidence-based medicine” in which he describes how “scholars in the last half of the 20th century forged our modern commitment to evidence in evaluating clinical practices” (p. 315). Though it was seen as unwelcome at the time, they brought the scientific method to bear on the clinical world, and over time the randomized controlled trial (RCT) became the “Crown Prince of methods […] which stood second to no other method” (p. 315). And while this has produced a huge amount of benefit, he says “we have overshot the mark. We have transformed the commitment to ‘evidence-based medicine’ of a particular sort into an intellectual hegemony that can cost us dearly if we do not take stock and modify it” (p. 315). He points out that there are many ways of learning things:
- “Did you learn Spanish by conducting experiments? Did you master your bicycle or your skis using randomized trials? Are you a better parent because you did a laboratory study of parenting? Of course not. And yet, do you doubt what you have learned?” (p. 315)
- “Much of human learning relies wisely on effective approaches to problem solving, learning, growth, and development that are different from the types of formal science […and …] some of those approaches offer good defences against misinterpretation, bias, and confounding.” (p. 315)
- He warns that limiting ourselves to only RCTs “excludes too much of the knowledge and practice that can be harvested from experience, itself, reflected upon” (p. 316)
- “Pragmatic science” involves (a toy illustration of the first point follows this list):
- “tracking effects over time (rather than summarizing with stats)
- using local knowledge in measurement
- integrating detailed process knowledge into the work of interpretation
- using small sample sizes and short experimental cycles to learn quickly
- employing powerful multifactorial designs (rather than univariate ones focused on “summative” questions)” (p. 316)
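To make the first point a bit more concrete, here is a minimal sketch (my own, not from Berwick) of what “tracking effects over time” can look like in practice: a run chart of a weekly measure around a small change, with the median as a reference line. The clinic wait-time numbers are invented purely for illustration.

```python
# Toy run chart: tracking an effect over time rather than summarizing it
# with a single statistic. All data below are invented for illustration.
import matplotlib.pyplot as plt
import statistics

weeks = list(range(1, 13))                    # 12 weekly observations
wait_times = [42, 40, 44, 38, 35, 36, 31,     # hypothetical clinic wait
              30, 28, 29, 26, 27]             # times, in minutes

median = statistics.median(wait_times)

plt.plot(weeks, wait_times, marker="o")
plt.axhline(median, linestyle="--", label=f"median = {median} min")
plt.axvline(6.5, color="grey", linestyle=":", label="change introduced")
plt.xlabel("Week")
plt.ylabel("Average wait time (min)")
plt.title("Run chart: tracking an effect over time")
plt.legend()
plt.show()
```

The point of a chart like this is exactly what the quote describes: with small samples and short cycles, you watch the pattern over time rather than waiting for a single summative test at the end.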
| | Explanatory trials | Pragmatic trials |
| --- | --- | --- |
| Definition | Measure efficacy: the benefit a treatment produces under ideal, tightly controlled conditions | Measure effectiveness: the benefit a treatment produces in routine, real-world practice |
| Validity | Prioritize internal validity (minimizing bias and confounding) | Prioritize external validity (applicability to usual care) |
| Test sample & setting | Highly selected, homogeneous participants in specialized research settings | Broad, representative participants in everyday clinical settings |
- explanatory and pragmatic are not a dichotomy; most trials are not purely one or the other – they fall along a spectrum between the two
- Thorpe et al. (2009) created a tool (called PRECIS) to help people designing clinical trials figure out where on that pragmatic-explanatory continuum their trial falls; it involves rating the trial on 10 domains (see table below), with the scores placed on a 10-spoke wheel, one spoke per domain, to give you a spider-diagram type of picture (a plotting sketch follows the table)
| Criteria | Explanatory trials | Pragmatic trials |
| --- | --- | --- |
| Participant eligibility | Highly selected participants thought most likely to respond and comply | All comers with the condition of interest, regardless of risk, comorbidity, or past compliance |
| Experimental intervention – flexibility | Intervention applied according to a strict protocol | Practitioners apply the intervention flexibly, as they would in routine care |
| Experimental intervention – practitioner expertise | Delivered only by experienced practitioners, closely monitored | Delivered by the full range of practitioners in usual settings |
| Comparison group – flexibility | Restricted comparison (often placebo) applied under strict protocol | Usual practice or the best available alternative, applied flexibly |
| Comparison group – practitioner expertise | Delivered only by experienced practitioners | Delivered by the full range of practitioners in usual settings |
| Follow-up intensity | Frequent, intensive follow-up with extensive data collection | No follow-up beyond routine care; outcomes may come from administrative data |
| Primary trial outcome | Often a surrogate or short-term outcome that may require special expertise to measure | A clinically meaningful outcome that does not require special training to assess |
| Participant compliance with intervention | Closely monitored; compliance may be a prerequisite (e.g., a run-in period) and is actively encouraged | Measured unobtrusively or not at all; no special strategies to improve it |
| Practitioner compliance with study protocol | Closely monitored and enforced | Measured unobtrusively or not at all |
| Analysis of primary outcome | May be restricted to compliant participants (per-protocol), answering “can it work under ideal conditions?” | Intention-to-treat analysis including all participants, answering “does it work under usual conditions?” |
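Since PRECIS presents these ratings as a picture, here is a minimal sketch (my own, not from Thorpe et al.) of plotting a 10-spoke PRECIS-style wheel as a radar chart with matplotlib. The domain scores are invented for illustration, with 0 standing for the explanatory end (hub) and 5 for the pragmatic end (rim).

```python
# Toy PRECIS-style "wheel": one spoke per domain, scored from
# explanatory (centre) to pragmatic (rim). Scores are hypothetical.
import numpy as np
import matplotlib.pyplot as plt

domains = [
    "Participant eligibility",
    "Intervention flexibility",
    "Intervention expertise",
    "Comparison flexibility",
    "Comparison expertise",
    "Follow-up intensity",
    "Primary outcome",
    "Participant compliance",
    "Practitioner compliance",
    "Primary analysis",
]
scores = [3, 4, 2, 4, 3, 2, 4, 3, 2, 4]  # invented: 0 = explanatory, 5 = pragmatic

# One angle per domain; repeat the first point to close the polygon.
angles = np.linspace(0, 2 * np.pi, len(domains), endpoint=False).tolist()
angles += angles[:1]
values = scores + scores[:1]

fig, ax = plt.subplots(subplot_kw={"projection": "polar"})
ax.plot(angles, values, linewidth=1.5)
ax.fill(angles, values, alpha=0.25)
ax.set_xticks(angles[:-1])
ax.set_xticklabels(domains, fontsize=8)
ax.set_ylim(0, 5)
ax.set_title("PRECIS wheel (hypothetical trial)")
plt.tight_layout()
plt.show()
```

Read this way, a trial whose polygon hugs the rim on every spoke is highly pragmatic, while one collapsed toward the hub is highly explanatory; most real trials land somewhere in between, which is the point of the continuum.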
I also came across this article in Forbes magazine: Why We Need Pragmatic Science, and Why the Alternatives are Dead-Ends. It’s a short read, but it succinctly summarizes an argument I find myself often making: science is a powerful tool for understanding and explaining the world. It’s not the only tool (philosophy and the other humanities, for example, are great tools for different purposes), but it’s certainly the best one for certain purposes and it’s a fantastic one to have in our toolbox!
References:
Berwick, D.M. (2005). Broadening the view of evidence-based medicine. Quality & Safety in Health Care, 14, 315-316.
Thorpe, K.E., Zwarenstein, M., Oxman, A.D., Treweek, S., Furberg, C.D., Altman, D.G., Tunis, S., Bergel, E., Harvey, I., Magid, D.J., & Chalkidou, K. (2009). A pragmatic-explanatory continuum indicator summary (PRECIS): a tool to help trial designers. Canadian Medical Association Journal, 180(10), E47-E57.