I recently started a new job where I will spend the next few years evaluating the implementation of a major health information system across several health organizations. So, naturally, I’ve been researching how other organizations have evaluated major health IT projects. This will likely be the first of many blog posts sharing notes from my literature research.
Towards an Evaluation Framework for Electronic Health Records Initiatives:
A Proposal For an Evaluation Framework (March 2004)
- paper is ten years old now (!), but there doesn’t seem to be an updated version
- evaluations of health information systems traditionally focused on:
- “technical & system features that affect system use
- cost-benefit analysis
- user acceptance
- patient outcomes” (p. iii)
- pre-post design is the “most widely agreed upon approach”, and randomized control trials (RCTs) are “problematic in the evaluation of complex health information systems” (p. vi)
- “3 general types of rationale for why evaluation is conducted in the field of health information systems:
- to insure accountability for expenditure of resources
- to develop and strengthen performance of agencies, individuals and/or systems
- to develop new knowledge” (p. vi)
- should engage stakeholders in discussion re: which of these three things you want to do, as you have limited evaluation resources to deploy – so what do you hope to learn from an evaluation?
- seven steps for conducting an evaluation [extremely generic – I would think you would use these in any evaluation – and I disagree with a few of them – will put my thoughts inside square brackets]
- Step 1: Identify Key Stakeholders in Each Jurisdiction
- Step 2: Orient Key Stakeholders To the Electronic Health Record Initiative and Reach Agreement on Why an Evaluation Is Needed
- determine stakeholders’ “expectations of the EHR initiative” and “views on what an evaluation plan should address” (p. 17)
- “given the diversity of key stakeholders involved […] it is highly likely that they will identify different rationales for conducting evaluations” (p. 17)
- Step 3: Agree on When To Evaluate
- should be longitudinal
- recommend 3 or more time points:
- baseline (pre-system implementation)
- during implementation
- post implementation (preferably multiple measures at 6, 12, and 18 months post-implementation; see the sketch below)
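To make the recommended time points concrete, here’s a minimal sketch (Python, with entirely made-up numbers for a hypothetical satisfaction indicator) of storing repeated measures and comparing each time point against baseline. None of this is from the paper; it’s just how I’d picture the data.

```python
import statistics

# Hypothetical repeated measures of one indicator (say, mean user
# satisfaction on a 1-5 scale) at the recommended time points.
measurements = {
    "baseline": [3.1, 2.9, 3.4, 3.0],   # pre-implementation
    "during": [2.6, 2.8, 2.5, 2.7],     # the usual mid-implementation dip
    "post_6mo": [3.3, 3.5, 3.2, 3.6],
    "post_12mo": [3.8, 3.7, 3.9, 4.0],
    "post_18mo": [3.9, 4.1, 4.0, 4.2],
}

baseline_mean = statistics.mean(measurements["baseline"])
for time_point, scores in measurements.items():
    m = statistics.mean(scores)
    print(f"{time_point:>10}: mean={m:.2f} change_from_baseline={m - baseline_mean:+.2f}")
```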
- Step 4: Agree on What To Evaluate [Not sure why they have “what” after “when”. I would think “what” you are evaluating would inform “when” you needed to evaluate those things]
- endless numbers of things you could evaluate
- “it is very important that each jurisdiction feels that it is gaining the maximum benefit it can from the investment of scarce resources in evaluation.” (p. 17)
- “a priority setting exercise with key stakeholders is one way to (a) identify the questions that it is important to answer (versus the questions that it is easy to answer) and (b) insure that all key stakeholders have an investment in the evaluation of projects which are undertaken” (p. 17) [a toy version of such an exercise is sketched below]
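Here’s that toy version of the priority-setting exercise (my own sketch, not from the paper): each stakeholder rates candidate evaluation questions on importance and on ease of answering, and we rank by importance so that important-but-hard questions don’t lose out to easy wins. Question wording and ratings are invented.

```python
# Hypothetical stakeholder ratings (1-5) of candidate evaluation
# questions on two axes: importance and ease of answering.
ratings = {
    "Does the EHR reduce duplicate lab tests?":   {"importance": [5, 4, 5], "ease": [2, 3, 2]},
    "How many clinicians logged in last month?":  {"importance": [2, 3, 2], "ease": [5, 5, 5]},
    "Did medication errors change post-rollout?": {"importance": [5, 5, 4], "ease": [2, 2, 3]},
}

def mean(xs):
    return sum(xs) / len(xs)

# Rank by importance, not ease -- the point of the exercise is to
# surface important-but-hard questions, not just easy wins.
ranked = sorted(ratings.items(), key=lambda kv: mean(kv[1]["importance"]), reverse=True)
for question, r in ranked:
    print(f"importance={mean(r['importance']):.1f} ease={mean(r['ease']):.1f}  {question}")
```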
- Step 5: Agree on How To Evaluate
- rationale for evaluating and specific evaluation questions “have implications for the methods chosen” (p. 17)
- recommendation: “undertake an evaluation which
- focuses on a variety of concerns
- uses multiple methods
- is modifiable
- is longitudinal
- includes both formative and summative approaches” (p. 18)
- “the current thinking around evaluation of complex health information systems leans towards evaluation geared to performance enhancement and knowledge development, and away from accountability, particularly costing approaches to net benefits assessment. However, accountability remains a strong value in Canadian society in general and increasingly in the health and technology sector, and therefore we recommend that some type of accountability question be included in the evaluation approaches in each jurisdiction” (p. 18) [a toy sketch of such a multi-method plan follows]
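And here’s that toy sketch (mine, not the paper’s) of what a plan honouring the Step 5 recommendation might look like: each question gets both a formative and a summative method, the whole thing is plain data so it stays modifiable, and one accountability/costing question is included per the quote above. All question and method names are invented.

```python
# Hypothetical multi-method evaluation plan: each question gets more
# than one method, tagged formative (improve as you go) or summative
# (judge the end result). Plain data, so it is easy to modify.
plan = {
    "Is the system usable for clinicians?": [
        ("usability walkthroughs during rollout", "formative"),
        ("System and Use Survey at 6/12/18 months", "summative"),
    ],
    "Did duplicate lab tests decrease?": [
        ("monthly lab-order audits during rollout", "formative"),
        ("baseline vs. 18-month comparison", "summative"),
    ],
    # per the accountability recommendation, at least one costing question
    "What did implementation cost relative to measurable benefits?": [
        ("budget tracking during rollout", "formative"),
        ("cost-benefit summary at 18 months", "summative"),
    ],
}

for question, methods in plan.items():
    print(question)
    for method, kind in methods:
        print(f"  - [{kind}] {method}")
```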
- Step 6: Analyze and Report [They forgot to mention actually collecting the data!]
- given the multiple questions, “the evaluation effort will consist of several sub-components which are in fact separate evaluation projects, including different methods and disciplines. We recommend that findings from each evaluation project within the evaluation initiative be shared with those key stakeholders identified in Step 1 […] this will permit fuller discussion of the interpretation and implications of the results obtained through different projects, or through the use of multiple methods within each project.” (p. 18)
- Step 7: Agree on Recommendations and Forward Them to Key Stakeholders [engage stakeholders in recommendations to make recommendations more feasible and useful]
- engage the key stakeholders in “generating the recommendations which arise from the findings of the evaluation” (p. 18)
- there may be different interpretations given the different perspectives, but there is “a greater likelihood that common stances […] will be found if those involved are:
- familiar with the main issue from the start
- aware of the different perspectives each team member brings to the discussion
- comfortable that the variety of methods used in the evaluation produced the most unbiased results possible” (p. 19)
Canada Health Infoway
- a federally funded non-profit
Canada Health Infoway’s Benefits Evaluation Framework
Canada Health Infoway Benefits Evaluation Indicators – Technical Report (version 2.0 – April 2012)
- a large list of potential indicators that we might want to use
- the basic conceptual model they use is: [figure from the report, not reproduced here]
- and here’s an example: [figure from the report, not reproduced here]
- I think this document will be a really useful resource to go back to when we are ready to decide on indicators; a simple structure like the one sketched below could help track whatever we pick
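For example, a hypothetical sketch of a small indicator registry: each chosen indicator is tagged with a framework dimension, a data source, and its measurement time points (per Step 3 above). The specific indicators here are invented, and the dimension labels follow the Benefits Evaluation framework’s broad categories as I understand them.

```python
from dataclasses import dataclass

@dataclass
class Indicator:
    name: str          # what we measure
    dimension: str     # Benefits Evaluation framework dimension
    method: str        # data source / collection method
    time_points: list  # when to measure (per Step 3 above)

# Invented examples for illustration only -- the technical report's
# indicator list is the place to pick real ones.
chosen = [
    Indicator("system response time", "System Quality",
              "automated logs", ["baseline", "post_6mo", "post_12mo"]),
    Indicator("clinician satisfaction score", "User Satisfaction",
              "System and Use Survey", ["baseline", "post_6mo", "post_12mo", "post_18mo"]),
    Indicator("duplicate lab test rate", "Net Benefits (Productivity)",
              "lab information system", ["baseline", "post_12mo", "post_18mo"]),
]

for ind in chosen:
    print(f"{ind.dimension:<28} {ind.name:<30} via {ind.method}")
```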
A Framework and Toolkit for Managing eHealth Change: People and Processes
This document is about change management, but I focused on the sections related specifically to evaluation.
- “successful implementation of change is achieved when the systems, processes, tools and technology of the change initiative are embedded in the new way health care providers do their work.” (p. 3)
- research shows that “poorly managed change” can lead to:
- “turnover of valued employees
- lower productivity
- resistance in all forms
- disinterested, unengaged, detached employees
- increased absenteeism
- cancellation of projects
- slow or non-adoption of new methods and procedures
- little or negative return on investment (ROI)” (p. 9)
- Change Management Working Group (CMWG) assessed change projects across Canada and identified some best practices, the following of which interface with evaluation:
- “demonstrating early results based on comprehensive data” [requires evaluation to be conducted with timely reporting of data]
- “continuous quality improvement cycles should be applied” [again, timely evaluation]
- “initiatives that demonstrate clinical value will be supported and those that do not include clinical adoption from the beginning will struggle or fail to be adopted” [benefits/outcomes need to be relevant] (p. 11)
- “Monitoring and evaluation [M&E] provides opportunity to identify risk, […] opportunities to improve process, to identify gaps or to recognize success, […] to understand and manage progress toward the future state. Lessons learned and process improvements need to be integrated in real time, to avoid repeated mistakes” (p. 14)
- M&E extends “throughout the lifecycle of eHealth projects and into the operational life of the solution” (p. 32)
- Infoway’s System and Use Survey is “an available survey tool that facilitates evaluation and analysis of use and user satisfaction” (p. 32) [a toy analysis of fake survey data is sketched below]
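Since the survey yields use and satisfaction data, a first pass at analysis could be as simple as summarizing satisfaction by respondent group. A minimal sketch with fabricated responses follows; the real survey’s question format isn’t shown in the document, so the data shape here is purely my assumption.

```python
from collections import defaultdict
from statistics import mean

# Fabricated survey responses: (respondent_group, satisfaction 1-5)
responses = [
    ("physician", 4), ("physician", 3), ("physician", 2),
    ("nurse", 4), ("nurse", 5), ("nurse", 4),
    ("clerk", 3), ("clerk", 3),
]

by_group = defaultdict(list)
for group, score in responses:
    by_group[group].append(score)

for group, scores in sorted(by_group.items()):
    print(f"{group:<10} n={len(scores)} mean_satisfaction={mean(scores):.2f}")
```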
A recent blog post on Harvard Business Review, “Convincing Employees to Use New Technology”
- “the true ROI of their digital investments: collaboration among actively engaged users, smarter decision making, increased sharing of best practices and, over time, sustained behavior change.”
- 3 related problems as to why new tech often does not get used:
- “CIOs and technical leaders too often take a limited “tech-implementation” view and measure success on deployment metrics like live sites or licenses. They consider business adoption someone else’s job, but in fact no one is made accountable for it.”
- “platform vendors often oversell the promise of instant change through digital technology. They make their money by selling products and software, rarely by getting them used at scale.”
- “user adoption programs cost money.”
- suggestions:
- focus on investing in technologies you believe will really offer a benefit and which you can feasibly get done
- “plan for adoption from the start” – includes learning, communications, change management, and evaluation/having the right metrics
- leaders need to lead by example
- identify and support influential front-line staff who will champion the initiative
- align incentives/reward systems with the behaviours you want to see