More About Measuring Errors and Adverse Events

The Measurement of Active Errors

  • in quality improvement, we often need to measure things for comparative purposes:
    • to compare organizations/clinicians with each other
    • to draw cause and effect conclusions about how something (e.g., a policy, a process) affected safety/quality
  • to be able to do this, we need data that is measured accurately and precisely

Measurement of Outcomes vs. Processes

  • outcomes include:
    • mortality
    • physical morbidity
    • psychological well-being
    • satisfaction with services
  • the latter two can be sensitive to quality of care, but the former two often are not (healthcare services are often provided to those who are unwell and/or at risk of something bad happening to them, so morbidity and mortality can result from things other than quality of care). So if you want to use an outcome as a measure of quality, you need to be sure it is actually a good indicator of quality (and not biased by something else)!
  • when comparing differences in outcomes, we need to adjust for prognosis (otherwise poorer outcomes might be due to the patients in that group being more sick, rather than getting less effective/poorer care)
  • however, risk adjustment has 2 pitfalls:
    1. overadjustment: when “the quality related factor [is] associated with the risk factor so adjustment obscures real differences” – e.g., if age is a risk factor for death and older people are also given poorer care than young people, then adjusting for age also adjusts away the real difference in quality of care
    2. underadjustment: when we don’t have sufficient data to actually adjust for all the “relevant prognostic variables”; arguably the more frequent of the two
  • “identifiable processes are one of many factors that affect outcome: the signal (outcome due to process) cannot be distinguished from the noise (outcome due to other factors)”
  • some will argue that we should measure quality based on outcome, but “maximising […] outcomes cannot be achieved by misattributing cause and effect”
  • “Not only does a system of punishment and reward based on outcome run a high risk of penalising and favouring the wrong providers, it also has little potential to improve health.”
    • if you identify a clinician or organization that is an outlier (e.g., higher rates of poor outcome than other clinicians/organizations), you can only identify a few, but if you identify an inadequate process, you can “shift the whole performance curve” (including even making the best ones better than they currently are)
    • [I think this may be an explanation of a key difference between evaluation and quality improvement (QI), which are oftentimes difficult to differentiate. QI focuses on improving existing processes, tries to “shift the performance curve”. Process evaluation (which I think is the type of evaluation that is most similar to QI) often focuses on whether activities are being implemented as intended, and why or why not. There’s definitely overlap here, of course.]
  • Another benefit of measuring processes is that the sample sizes needed to demonstrate effectiveness are smaller than for outcome measures (a toy power calculation follows this list).
    • errors that cause severe harm are (thankfully) rare, so though they are generally easy to measure, they don’t affect error rates much
    • more common errors don’t cause much (if any) harm, and thus are not easily noticed/not commonly reported
  • 3 reasons to measure process (i.e., clinical quality/active error rates):
    • clinical outcomes are not a good reflection of quality of care
    • allow you to make bigger gains by shifting the whole performance curve (vs. outcomes, which tend to focus on outliers)
    • errors are much more common than adverse events
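
To make the sample-size point concrete, here is a toy power calculation. The numbers are invented for illustration (they are not from the paper): a common process error at a 20% baseline rate vs. a rare severe adverse outcome at 1%, each to be reduced by 25% relative.

```python
# Toy power calculation (illustrative numbers only, not from the paper):
# per-group sample size needed to detect a 25% relative reduction in
# (a) a common process error vs. (b) a rare severe adverse outcome.
from scipy.stats import norm

def n_per_group(p1, p2, alpha=0.05, power=0.80):
    """Standard two-proportion sample size approximation (per group)."""
    z_alpha = norm.ppf(1 - alpha / 2)  # two-sided critical value (~1.96)
    z_beta = norm.ppf(power)           # power critical value (~0.84)
    p_bar = (p1 + p2) / 2              # pooled proportion
    return 2 * (z_alpha + z_beta) ** 2 * p_bar * (1 - p_bar) / (p1 - p2) ** 2

# (a) common process error: 20% -> 15%
print(round(n_per_group(0.20, 0.15)))     # ~907 per group
# (b) rare severe adverse outcome: 1.0% -> 0.75%
print(round(n_per_group(0.010, 0.0075)))  # ~21,800 per group
```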

Active Errors

  • active errors = “errors in patient care itself rather than in the system that may predispose to such errors”
  • since we are interested in comparing (i.e., making inferences), we are interested in measuring rates
  • reporting systems for errors do not give rates – they just give counts (i.e., the numerator; you don’t know the denominator)
  • error rates can be measured by looking at:
    • documentary (including electronic) data
      • retrospective: looking back at charts
      • prospective: completing a pro forma at the time
    • observations:
      • real time
      • retrospectively (video)
  • two methods for assessing quality of care:
    • explicit (a.k.a. criterion-based assessment): assesses care compared to predetermined criteria
      • pro: doesn’t rely on expert judgment; “protects against bias by expressing error rates in terms of the maximum number of errors possible in a data set” (a sketch of this approach follows the list)
      • con: misses out on diversity of errors that aren’t in the algorithm
    • implicit (a.k.a. holistic judgement): based on expert judgment, not constrained by predetermined criteria
      • pro: can pick up more diversity of errors that aren’t in the algorithm
      • con: poorly standardized; expensive (requires a lot of time and skill)
  • “We hypothesize that explicit measurement of predefined error will be much more reliable than implicit assessment, but that it will miss more errors”
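
To make the explicit (criterion-based) approach concrete, here is a minimal sketch; the criteria, chart fields, and thresholds are entirely hypothetical. Each predefined criterion is checked against the record, and the error rate is expressed against the maximum number of errors possible (one per criterion).

```python
# Minimal sketch of explicit (criterion-based) assessment; the criteria and
# chart fields below are hypothetical. The error rate is expressed as errors
# over the maximum number of errors possible (one per predefined criterion).
criteria = {
    "aspirin_given":   lambda chart: chart["aspirin"],
    "hba1c_checked":   lambda chart: chart["hba1c_measured"],
    "bp_below_target": lambda chart: chart["systolic_bp"] < 140,
}
chart = {"aspirin": True, "hba1c_measured": False, "systolic_bp": 150}

errors = sum(1 for name, check in criteria.items() if not check(chart))
rate = errors / len(criteria)  # denominator = maximum possible errors
print(f"{errors}/{len(criteria)} criteria failed ({rate:.0%})")  # 2/3 (67%)
```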

Bias in Measurement

  • measuring errors (as opposed to just adverse events) is still subject to case mix “because different patients have different opportunities for error” – two approaches to this problem (illustrated in the sketch after this list):
    • express errors as a % of “opportunities for error” (rather than errors per patient, since some patients represent more opportunities for error than others)
      • however, this requires us to determine beforehand what counts as an “opportunity for error”
    • express errors as a % of patients (or patient days) with statistical adjustment for case mix (though, as discussed above, this isn’t perfect)
  • information bias = “the diligence with which information is recorded may influence the ‘visibility’ of errors” – e.g., someone who is more diligent about recording information in the chart may appear to have more errors on chart review than someone who is less diligent
    • “a particular type of information bias arises when an intervention designed to reduce error interacts with the measurement method. e.g., computer systems designed to improve care may affect the recording of information in case notes and hence the proportion of errors that are detected” by chart audit.
  • observer bias: it can be difficult/expensive to prepare case notes so that they can be assessed blind; a practical mitigation is to blind observers to the hypothesis being tested
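
A minimal sketch (with made-up numbers) of why the denominator matters: a patient with many “opportunities for error” (e.g., many drug doses) can have more errors per patient yet a lower error rate per opportunity.

```python
# Made-up numbers illustrating errors per patient vs. errors per opportunity.
patients = {
    # patient: (errors observed, opportunities for error, e.g. doses given)
    "A": (1, 5),   # 1 error in 5 doses
    "B": (2, 40),  # 2 errors in 40 doses
}

for name, (errors, opportunities) in patients.items():
    print(f"{name}: {errors} error(s) per patient, "
          f"{errors / opportunities:.0%} of opportunities")
# A: 1 error(s) per patient, 20% of opportunities
# B: 2 error(s) per patient, 5% of opportunities  <- more errors, lower rate
```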

Sensitivity & Specificity

  • many errors are not reported (when using error reporting systems) or not captured in the patient chart (when doing chart audits), so these are not sensitive methods (i.e., they miss a lot of errors; a toy sensitivity/specificity calculation follows this list)
  • prospective methods are more sensitive (i.e., they pick up more errors), but are subject to bias (e.g., if you ask clinicians to fill out a pro forma on errors before and after an intervention, the clinicians are “both the subject of the change and observers of the effect of that change”)
  • “unobtrusive direct observations made with appropriate consent by third party observers blind to the hypothesis being tested” is probably ideal, but it is very expensive!
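
For concreteness, here is a toy 2×2 calculation (invented counts) of what sensitivity and specificity mean for an error-detection method, e.g., a chart audit judged against a reference standard such as direct observation.

```python
# Toy 2x2 calculation (invented counts) for an error-detection method
# compared against a reference standard.
true_positives  = 30   # errors the method detected
false_negatives = 70   # errors the method missed
true_negatives  = 880  # non-errors correctly left unflagged
false_positives = 20   # non-errors wrongly flagged as errors

sensitivity = true_positives / (true_positives + false_negatives)
specificity = true_negatives / (true_negatives + false_positives)
print(f"sensitivity = {sensitivity:.0%}")   # 30% -- misses most errors
print(f"specificity = {specificity:.1%}")   # 97.8%
```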

More Research Is Needed

  • new methods to measure errors that are sensitive and specific need to be developed
  • perhaps there could be a way to combine scores from different methods to create a composite score (a speculative sketch follows below)? More research is needed!
  • “the study of measurement of error is in its infancy”
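
A purely speculative sketch of what such a composite might look like; the methods, normalized scores, and weights below are all invented for illustration.

```python
# Speculative sketch of a composite error score: combine normalized scores
# from several measurement methods with weights reflecting how much each
# method is trusted. All names and numbers here are invented.
scores = {   # method: error score, each already normalized to 0-1
    "chart_review": 0.30,
    "error_reports": 0.10,
    "direct_observation": 0.55,
}
weights = {  # hypothetical trust weights; must sum to 1
    "chart_review": 0.3,
    "error_reports": 0.2,
    "direct_observation": 0.5,
}
composite = sum(scores[m] * weights[m] for m in scores)
print(f"composite error score: {composite:.3f}")  # 0.385
```
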
Lilford, R.J., Mohammed, M.A., Braunholtz, D., & Hofer, T.P. (2003). The measurement of active errors: methodological issues. Qual Saf Health Care, 12(Suppl 2), ii8–ii12.

Measuring Errors and Adverse Events in Health Care

  • this paper reviews the pros and cons of 8 different methods of measuring errors & adverse events and suggests a model for choosing which one(s) to use in a given situation
  • error = includes “mistakes, close calls, near misses, active errors, and latent errors”; do not necessarily harm patient
  • adverse events = includes “terms that usually imply patient harm, such as medical injury and iatrogenic injury”; harms patient
  • latent errors = “include system defects such as poor design, incorrect installation, faulty maintenance, poor purchasing decision, and inadequate staffing.”; “difficult to measure because they occur over broad ranges of time and space and they may exist for days, months, or even years before they lead to a more apparent error or adverse event directly related to patient care”
  • active errors = “occur at the level of the frontline provider […] and are easier to measure because they are limited in time and space”
  • the 8 methods and their pros & cons:
    • Morbidity & mortality conferences and autopsies
      • pros: can suggest latent errors
      • cons: cannot provide error rates (too few/nonstandard examples); reporting bias; hindsight bias
    • Malpractice claims analysis
      • pros: can suggest latent errors; many perspectives (e.g., patients, providers, lawyers)
      • cons: cannot provide error rates (nonstandard examples); reporting bias; hindsight bias
    • Error reporting systems
      • pros: can suggest latent errors; many perspectives over time; can be part of routine operations
      • cons: underreporting (e.g., people afraid to report, too busy to report, or unaware an error occurred); hindsight bias; reporting bias
    • Administrative data analysis
      • pros: data readily available; inexpensive
      • cons: data may be incomplete/inaccurate; data separated from clinical context
    • Chart review
      • pros: data readily available
      • cons: data incomplete (not all errors/AEs in chart); judgements about AEs not reliable; hindsight bias; expensive
      • note: this paper was written before the Global Trigger Tool was published
    • Electronic health record review
      • pros: data readily available; integrates multiple data sources; real-time monitoring; inexpensive (after initial setup)
      • cons: data incomplete (not all errors/AEs in chart); expensive to set up; not useful for detecting latent errors
    • Observation of patient care
      • pros: accurate & precise; more comprehensive than other methods for measuring active errors
      • cons: expensive & time consuming; requires lots of training; concerns about confidentiality; Hawthorne effect; hindsight bias; not good for detecting latent errors
    • Clinical surveillance
      • pros: accurate & precise
      • cons: expensive; not good for detecting latent errors
  • hindsight bias – we are influenced by knowing the outcome (e.g., if we know the patient died, we are more likely to say that there were errors/issues with quality of care, even if the care given was identical to another case where the patient didn’t die)
  • reporting bias – whether an event gets reported depends on who notices it and what they consider worth reporting, so reported events are not a representative sample of all events
  • Hawthorne effect – people change their behaviour when they know they are being watched (so what you observe is not what would have happened if there were no observer present)
  • their model for choosing which method to use is based on the idea that the methods “exist on a continuum that illustrates the relative utility of each method for measuring latent errors as compared with active errors and adverse events”
Thomas, E.J., & Petersen, L.A. (2003). Measuring errors and adverse events in health care. J Gen Intern Med, 18, 61–67.