Will be working on creating some indicators, so I figured it was a good time to brush up on them! Read two items – a guide and a journal article – and my notes from them are below.
The Good Indicators Guide
- Indicators are “succinct measures that aim to describe as much about a system in as few points as possible” and “help us understand a system, compare it and improve it” (p. 5)
- 3 key roles of measurement:
- “for understanding: to know how a system works and how it might be improved (research role)
- for performance: monitoring if and how a system is performing to an agreed standard (performance/managerial/improvement role)
- for accountability: allowing us to hold ourselves up to patients, the government, and taxpayers and be openly scrutinised as individuals, teams, and organizations (accountability/democratic role)” (p. 5)
- It’s important to remember that indicators merely indicate – they don’t give you a definitive answer, but rather, they “suggest the next best question to ask that ultimately WILL give the answer required” (p. 5). For example, if a hospital has a high death rate, it indicates that you should look at what’s going on in that hospital. Maybe it is because things are being done poorly there, but maybe it’s because it’s a hospital where all the sickest patients are sent. Thus, it is important to understand an indicator in context. (Think of your car’s dashboard – there’s a warning indicator light that tells you something is wrong with your car. When it flashes, you stop the car and investigate what the problem is).
- Indicators force us to think through what it is we are trying to achieve.
- Indicators require you to think about numbers – whether they are absolute values, ratios, etc.
- Indicators shouldn’t be associated with “fault-finding” – they are meant to help us identify “high performers (from whom we can learn) and systems (or parts of systems) that may warrant further investigation and intervention” (p. 6)
- Measurement, on its own, won’t lead to improvement (“You can’t make pigs fatter just by measuring them!” (p. 7)). Measurement helps us understand where to look and then we need to figure out what we need to do in order to improve things.
- The first thing you need to do is “gain clarity over what the system is aiming to do” (p. 7). Getting everyone on the same page about what you are trying to do is really valuable – often, there is a “lack of shared understanding” that causes “inefficiencies in a system” (p. 7)
The (Short) Anatomy of an Indicator
Example (from p. 9):
| title (metadata) | definition (metadata) | data |
| --- | --- | --- |
| infant mortality rate | # of deaths of children aged < 1 yr for every 1,000 live births in that community in the same year | 56 deaths of children aged < 1 yr in a community with 4,963 live births ≈ 11 deaths per 1,000 live births |
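To make the arithmetic in the example explicit, here’s a minimal sketch (the numbers are the guide’s example; the function name is my own):

```python
def rate_per_1000(events, denominator):
    """Convert a raw count into a rate per 1,000."""
    return events / denominator * 1000

# The guide's example: 56 infant deaths among 4,963 live births
imr = rate_per_1000(56, 4963)
print(f"Infant mortality rate: {imr:.1f} per 1,000 live births")  # ~11.3
```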
- the metadata will help you decide if an indicator is:
- “important and relevant to you
- “able to be populated with reliable data
- likely to have a desired effect when communicated well” (p. 10)
- 10 key questions to ask to help you create metadata for an indicator (or judge if an existing indicator’s metadata is good for your purposes) (from p. 10)
- what is being measured?
- why is it being measured?
- how is the indicator actually measured?
- who does it measure? (e.g., ages? sex/gender? everyone in a population group or some subset? if a subset, how is the subset chosen?)
- when does it measure? (e.g., what day/month/year? are there seasonal effects to worry about?)
- does it measure absolute numbers or proportions? (which is most appropriate? or do you need both to get a good understanding?)
- where does the data actually come from?
- how accurate and complete will the data be?
- Are there any caveats/warnings/problems? (e.g., potential errors in collection, collation, and interpretation, such as under-sampling of certain ethnic groups, young people, homeless people, migrants, and travellers)
- Are particular tests needed, such as standardization, significance tests, or statistical process control to test the meaning of the data and the variation they show? (see below)
- You want to make the most appropriate indicator that you can and populate it with the highest quality data possible. But there is a “trade-off between what is convenient (and possible) to collect, and what you ideally want”. It’s also important to remember that front-line staff are extremely busy, so minimize any additional data collection you ask them to do – and when you absolutely must have them collect data, spend some time talking to them about why you are collecting it and what you will do with it – “aim to nurture some active ownership of the data and indicators with frontline staff” (p. 11). E.g., ask staff “how the service works; what, if anything, they want to change about it; what barriers they face; what information they already collect; what they consider the fairest measure of their work process and its outcome” (p. 12)
Statistical Process Control
- SPC involves distinguishing between:
- common cause variation – “normal, everyday, inevitable (and usually unimportant) variation which is intrinsic and natural to any system”
- special cause variation – “which is indicative of something special happening and which calls for a fuller understanding and often action” (p. 13)
- SPC can be used “within a single system (e.g., an institution) over time or […] to analyze variation between different institutions” (p. 14)
- common mistake: failure to see common cause variation and special cause variation as fundamentally different, resulting in:
- wasting resources investigating an “outlier” when that value is really within the acceptable range (i.e., treating common cause variation as if it were special cause variation)
- wasting resources changing a whole system that is working well overall because of one true outlier, instead of focusing on that outlier (i.e., treating special cause variation as if it were common cause variation)
- SPC can help you to see if you have:
- a system where average performance is acceptable, with no outliers – ideal!
- a system where average performance is acceptable but with outliers – address the outliers (figure out what’s going on with that outlier and what to do about it)!
- a system where average performance is not acceptable (regardless of variation) – focus on the whole system rather than individuals within the system
- note that just because there isn’t special cause variation, it doesn’t mean the system is performing well – it could be that the whole system is underperforming. It’s important to define what an “acceptable” level of performance is before you get data, so you know if the performance is, in fact, acceptable. Also – think “acceptable to whom?” (e.g., if accreditation or a funding agency mandates a specific level of performance, then that would be the level of performance that’s acceptable to them)
- Check out my previous blog postings on run charts and control charts for more info on SPC
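Those posts cover the details, but as a refresher, here’s a minimal sketch of the standard XmR (individuals) control-chart calculation – the average moving range, scaled by the conventional 2.66 constant, gives approximate 3-sigma limits (the data are made up for illustration):

```python
def xmr_limits(values):
    """Control limits for an XmR (individuals) chart.

    The centre line is the mean; sigma is estimated from the average
    moving range (|x[i] - x[i-1]|) via the standard constant
    2.66 (= 3 / 1.128, the d2 value for subgroups of size 2).
    """
    mean = sum(values) / len(values)
    moving_ranges = [abs(b - a) for a, b in zip(values, values[1:])]
    avg_mr = sum(moving_ranges) / len(moving_ranges)
    return mean - 2.66 * avg_mr, mean, mean + 2.66 * avg_mr

# Made-up monthly counts: points outside the limits suggest special
# cause variation worth investigating; points inside are common cause.
counts = [12, 14, 13, 15, 12, 14, 13, 30, 14, 13]
lcl, centre, ucl = xmr_limits(counts)
print(f"centre {centre:.1f}, limits {lcl:.1f} to {ucl:.1f}")
for month, x in enumerate(counts, start=1):
    if not lcl <= x <= ucl:
        print(f"month {month}: {x} – special cause? investigate")
```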
Indicators on their own are not enough!
- It’s important to be able to communicate to get people to change
- 4 principles for changing the way people think:
- think about the audience: how can you present the information in a way that the audience (a) understands and (b) feels they can do something about it
- presentation matters: make it clear (use labels, text, and colour to make things readable), don’t oversimplify, but don’t let it be so complicated that you can’t read it
- test your approach: show the presentation to someone from your target audience to see if they understand it
- appeal to emotions: find the story in the data and tell the audience that story
Criteria for good indicators and good indicator sets
- no indicator is perfect for all purposes
- no indicator will be perfect on all of the following questions – but make sure you ask these questions, be systematic in your assessments, decide what compromises you can accept, and make explicit any compromises you are willing to make
- first ask:
- Does the indicator(s) address something important?
- indicators must:
- measure key parts of the process and/or outcome
- relate to the objectives of the system
- if considering a set of indicators – is it a balanced set? (i.e., “all important things are covered without undue emphasis on any one area” (p. 24))
- Is the indicator(s) scientifically valid?
- does the indicator measure what it is claiming to measure?
- If you answer “no” to either of those, do not proceed with those indicators
- If you answer “yes” to both, then ask:
- Is it possible to populate the indicator with meaningful data?
- are there sufficiently reliable data available at the right time, for the right organizations, with the appropriate comparators? If not, is it worth the extra effort/cost to collect the data? (If the results you get are likely to change a decision you need to make, it may be worth it. If it is just a “nice to know”, then probably not.)
- What is the meaning? What is the indicator telling you and how much precision is there in that?
- Once you populate the indicator with data, will you understand what it means?
- the indicator needs to identify issues that need further investigation (but not issues that don’t need further investigation – we want signal, not noise!)
- Will you be able to judge the acceptable limits of the value of the indicator (i.e., will you be able to tell when something is an outlier and so you need to do something about it?)
- can you understand the indicator and what it means in terms of the reasons for the results? If you don’t understand how the indicator is constructed well enough to know what you can do with the results, it will not be useful.
- What are the implications (i.e., what are you going to do about the results?)
- do you understand the system well enough to know how to act (or be willing to invest the time/resources in researching how to act) once you have results that suggest you need to do something about them?
- is the indicator something that people are likely to “game”? (i.e., you don’t want people to change superficial things in order to make the indicator results look good – you want them to use the results to get to the root of any problems and fix them!)
- does the timeframe of the data for the indicator work for your purposes? e.g., your system needs to be responsive enough that you’ll catch problems early, but you need to be aware that it will take time for the indicator to respond to any changes you make
Some Final Thoughts
- “Indicators exist to prompt useful questions, not to offer certain answers. Promoting a healthy uncertainty and stimulating the right degree of unbiased, informed debate, are what indicators are all about” (p. 28)
- No indicator is perfect, so “the real question is: are the data good enough for the purpose in hand?” (p. 28)
- “Indicators only indicate; they are no more diagnostic than a screening test” (p. 29)
The (Full) Anatomy of an Indicator (from pp. 35–36)

| Field | Notes/examples |
| --- | --- |
| Indicator name | |
| Indicator definition | be specific (e.g., if using a proportion, specify the numerator and denominator; specify units; specify the timeframe) |
| Geography | what area does the indicator data come from? |
| Timeliness | how often are the data collected? |
| What this indicator purports to measure | |
| Why this indicator is important (rationale) | i.e., why this topic is important |
| Reason to include this particular indicator | i.e., what you will do with the results (e.g., to inform program changes; to demonstrate a need for preventative actions) |
| Policy relevance | |
| Interpretation: what does a high/low value mean? | e.g., an increased value for a diagnosis could mean there is an increasing number of people with a disorder, but it could also mean that it is being diagnosed more now than before |
| Interpretation: potential for error due to measurement method | e.g., is there potential for “gaming” the system? |
| Interpretation: potential for error due to bias and confounding | e.g., are some subgroups over- or under-represented? |
| Confidence intervals | describe the “uncertainty around a point estimate of a quantity” (see the sketch below) |
Source: The Good Indicators Guide: Understanding how to use and choose indicators. National Health Service Institute for Innovation and Improvement
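On the confidence-interval row above: as a rough illustration of putting an interval around the infant mortality example from earlier, here’s a sketch using a normal approximation to a Poisson count (my own simplification – for small counts an exact Poisson interval would be preferable):

```python
import math

def poisson_rate_ci(events, denominator, per=1000, z=1.96):
    """Approximate 95% CI for a rate, treating the count as Poisson.

    Uses the normal approximation count +/- z * sqrt(count); rough
    for small counts, where an exact Poisson interval is preferable.
    """
    half_width = z * math.sqrt(events)
    lo = (events - half_width) / denominator * per
    hi = (events + half_width) / denominator * per
    return lo, hi

# The guide's example again: 56 infant deaths, 4,963 live births
lo, hi = poisson_rate_ci(56, 4963)
print(f"95% CI: {lo:.1f} to {hi:.1f} deaths per 1,000 live births")
# roughly 8.3 to 14.2 – the uncertainty around the ~11.3 point estimate
```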
Making Sense of Indicators
- indicator = “a single measure (usually expressed in quantitative terms) that captures a key dimension of health [or] various determinants of health […] or key dimensions of the health care system” (p. 24)
- indicators can capture what is happening, but not why it is happening
- “indicator chaos”
- overwhelming amount of data collected
- lack of a coordinated plan across the health system on what to collect and how to interpret/use those data
- can lead to:
- duplication of effort → wasting scarce resources
- developing programs/services that aren’t actually needed and/or useful (if data are being interpreted incorrectly and then used to inform program/service decisions) → waste and potentially harm (if a program is making things worse instead of helping)