What is the “Global Trigger Tool”?

If we want to make our healthcare system safer, it’s important to be able to identify and measure adverse events – we need to know how much “harm from care” is occurring in our healthcare system and to be able to measure whether our efforts to reduce that harm are effective. Relying on voluntary reporting of errors and adverse events by healthcare staff, which is how this has traditionally been done, is less than ideal because it requires:

  • that staff notice an error occurred (and given that most errors do not cause adverse events, it’s reasonable that errors might go unnoticed)
  • that staff feel safe to report an error or adverse event (which they may not if the culture of the organization is one in which they feel they will be blamed, shamed, or punished for the error – there are valid concerns about legal liability, both for the healthcare provider and the organization, and about losing one’s job)
  • that staff make the effort to report errors or adverse events (which might not be seen as a priority when healthcare staff are busy providing care, especially if they don’t see reports of errors and adverse events being used to improve the system)

Research shows that voluntary reporting captures ~10-20% of errors and that 90-95% of errors do not cause harm (IHI, n.d.).

Since reducing errors is one of the goals of the project I’m evaluating, I’ve been reading up on potential ways to measure errors/adverse events – because how will I know if the project is reducing errors if I can’t measure how many happen, both before and after implementation?

One of the methods I’ve been reading about is “trigger tools”.

  • “Trigger tools” are a method that can be used to identify adverse events (i.e., harm) and track them over time.
  • 2 approaches (IHI, n.d.):
    1. “monitor the overall level of harm as a dashboard item” – e.g., IHI Global Trigger Tool
    2. “track harm in a specific topic or area” – e.g., IHI Trigger Tool for Measuring Adverse Drug Events
  • The IHI Global Trigger Tool (GTT):
    • organization-wide measure
    • uses retrospective review of records of adult inpatients in acute care
    • tool includes a list of known adverse event “triggers”, a protocol for selecting records, forms for data collection, etc.
    • harm vs. error
      • “medical errors are failures in processes of care and […] have the potential to be harmful”
      • “events of harm are clear clinical outcomes” (IHI, 2009)
      • can be useful to detect and analyze errors (to find ways to improve the system to reduce/mitigate the errors)
      • but looking at harms shifts focus from “individual blame for errors to comprehensive system redesign that reduces patient suffering” (IHI, 2009)
      • IHI GTT’s definition of harm: “unintended physical injury resulting from or contributed to by medical care that requires additional monitoring, treatment, or hospitalization, or that results in death” (IHI, 2009)
        • only includes harm from “active delivery of care (commission)”, not from absence of care or “substandard care (omission)”
        • does not include psychological harm
        • does not matter whether the harm was deemed preventable or not: “One could argue that today’s ‘unpreventable events’ are only an innovation away from being preventable.” And since the GTT measures harmful events over time, if you judged a type of event as non-preventable (and thus not counted) now, but the same type of harm as preventable (and thus counted) next year, it would look like an increase in harmful events when really there was none (IHI, 2009). “If an adverse event occurred it is, by definition, harm” (IHI, 2009).
        • severity ratings adapted from the National Coordinating Council for Medication Error Reporting and Prevention (NCC MERP) Index for Categorizing Errors – except it includes only those categories related to adverse events (not errors that did not result in harm) and includes all physical harm, not just medication-related harm
    • process:
      • 2 primary reviewers (with clinical backgrounds and understanding of the hospital records and care processes) review the files independently and then compare their findings to come to consensus
      • physician validates the consensus of the 2 primary reviewers; reviews their notes, not the original records (unless needed) (physician is final arbiter).
      • sampling: 10 patient records randomly sampled from entire population of discharged adults every 2 weeks
        • should be a truly random sample
        • select a few extra records in case one of the ones you chose doesn’t meet criteria (but only review them if that’s the case)
        • in small sites with fewer than 10 inpatients per 2 weeks, review them all
        • can do more, but no added value beyond 40 every two weeks
        • records should be from patients discharged more than 30 days prior (as readmission within 30 days is a trigger)
        • retrieve records of hospital admission before and after the index record (but only review to check if trigger is associated with readmission – do not do a full review on the before/after charts)
    • sample size is small, but aggregation over time improves precision
    • chart data on run charts to see patterns over time
    • selection criteria:
      • closed & completed record (all coding done)
      • length of stay at least 24 hrs, formally admitted to hospital
      • age >= 18 years
      • exclude inpatient psychiatric and rehabilitation patients
    • GTT contains 6 modules:
      • Cares and Medications – reflect adverse events anywhere in the hospital
      • Surgical, Intensive Care, Perinatal, Emergency – specific to those departments
    • review record for presence of triggers (don’t need to review entire record) – experienced reviewers have found this order the most useful:
      • discharge codes (esp. infections, complications, certain diagnoses)
      • discharge summary
      • med admin record
      • lab results
      • prescriber orders
      • operative record
      • nursing notes
      • physician progress notes
      • if time permits, other areas of the record
    • no more than 20 minutes per patient record (the GTT is not meant to identify every single adverse event in a record – the 20-minute rule was created because reviewers had a propensity to review shorter records, which biases the data)
    • if a trigger is noted, review the pertinent portions of the record (documented in proximity to the trigger) to determine if there was an adverse event (not all triggers have an associated adverse event – triggers are just a clue that an adverse event may have occurred)
    • sometimes adverse events will be noted in the absence of a trigger – they still count as adverse events
    • some triggers are, by definition, adverse events (e.g., nosocomial infection, 3rd or 4th degree laceration), so when you see those triggers, you’ve found the adverse event
    • something is an adverse event if it is an “unintended harm from the viewpoint of the patient”
    • if an adverse event is present on admission to the hospital, it still counts (remember, it has to meet the definition of “harm related to medical care” – e.g., if medical care at a doctor’s office led to a harm that caused the patient to go to a hospital, that counts). Record it as such (as it’s useful to know harm that occurred in hospital vs. harm that occurred somewhere else), but the key issue is that this measure is of “what the patient experienced, not what happened in the hospital”
    • when adverse event is identified, assign it a severity level
    • report:
      • report as run charts for:
        1. adverse events per 1,000 patient days
        2. adverse events per 100 admissions
        3. percent of admissions with an adverse event
      • report a bar chart of the distribution of harm by category
      • can also report data by type of adverse event (infections, medications, procedural complications) and by whether the harm occurred in hospital vs. was present on arrival
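The record selection criteria, the sampling rule, and the three run-chart rates above lend themselves to a quick sketch in code. This is just an illustration of the arithmetic, not part of the GTT itself – the record fields (`record_closed`, `length_of_stay_hours`, `service`) and function names are all hypothetical:

```python
import random

def eligible(record):
    """Apply the GTT record selection criteria described above."""
    return (
        record["record_closed"]                  # closed & completed record (all coding done)
        and record["length_of_stay_hours"] >= 24  # formally admitted, LOS >= 24 hrs
        and record["age"] >= 18                   # adults only
        and record["service"] not in {"psychiatry", "rehabilitation"}
    )

def draw_sample(records, n=10, extras=2, seed=None):
    """Randomly draw n records every 2 weeks, plus a few spares.

    Spares are reviewed only if a primary record turns out not to
    meet the selection criteria. Small sites review everything.
    """
    rng = random.Random(seed)
    pool = list(records)
    if len(pool) <= n:
        return pool, []  # fewer than n discharges: review them all
    picked = rng.sample(pool, min(n + extras, len(pool)))
    return picked[:n], picked[n:]

def harm_rates(admissions_reviewed, patient_days, adverse_events, admissions_with_ae):
    """Compute the three GTT run-chart measures."""
    return {
        "ae_per_1000_patient_days": 1000 * adverse_events / patient_days,
        "ae_per_100_admissions": 100 * adverse_events / admissions_reviewed,
        "pct_admissions_with_ae": 100 * admissions_with_ae / admissions_reviewed,
    }
```

For example, a review period with 10 admissions totalling 62 patient days, in which 3 adverse events were found across 2 admissions, would chart as roughly 48.4 adverse events per 1,000 patient days, 30 per 100 admissions, and 20% of admissions with an adverse event.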

So that’s the Global Trigger Tool. I’ve read about some other studies that measured errors/adverse events, but those will have to be in another blog posting!

