Canadian Evaluation Society 2016 conference recap

I recently spent a week at the Canadian Evaluation Society (CES)’s 2016 national conference in St. John’s, NL. I’ve been to the CES national conference twice before – 2010 in Victoria, BC and 2014 in Ottawa, ON – as well as the CES BC & Yukon chapter’s provincial conference for the past two years, and in all cases I’ve learned a tonne and had a great time. There’s something very special about spending time with people who do the thing you do, so I was glad to have a chance to engage with fellow evaluators, both in the formal workshops and presentations and in the informal networking and socializing times.

I’m one of the program co-chairs for the CES 2017 national conference being held in Vancouver next year, so I thought it was extra important for me to go this year, and I certainly saw the conference through a different lens, jotting down notes about conference organization and logistics along with the notes I was taking on content throughout the sessions. I took a tonne of notes, as I generally do, but for this blog posting I’m going to summarize some of my insights, in addition to cataloguing all the sessions that I went to [1]. So rather than present my notes session by session, I’m going to present them by topic area, and then present the new tools I learned about [2]. Where possible [3], I’ve included the names of people who said the brilliant things that I took note of, because I think it is important to give credit where credit is due, but I apologize in advance if my paraphrasing is not as elegant as what people actually said.

Evaluation

There isn’t a single definition of evaluation. Some of the ones mentioned throughout the conference included:

  • Canadian Evaluation Society’s definition: “Evaluation is the systematic assessment of the design, implementation or results of an initiative for the purposes of learning or decision-making.” [4]
  • Carol Weiss’s definition: “Evaluation is the systematic assessment of the operation and/or outcomes of a program or policy, compared to a set of explicit or implicit standards, as a means of contributing to the improvement of the program or policy” [5]
  • Australasian Evaluation Society’s definition: “systematic collection and analysis of information to make judgements, usually about the effectiveness, efficiency and/or appropriateness of an activity […including…] many types of initiatives, not just programs, but any set of procedures, activities, resources, policies and/or strategies designed to achieve some common goals or objectives.” [6]

Evaluative Thinking

  • Emma Williams used a lot of interesting analogies in her workshop on Evaluative Thinking, one of which was the meerkat. People these days are working with their noses to the grindstone – like a meerkat down on all fours running like the wind – but it’s important every so often to make like the meerkat, who stops, stands up, and looks around to see what’s going on. We as evaluators can encourage people to stop, look around, and reflect. I like this image of the meerkat as a reminder of that.
  • Also from Emma Williams: evaluators combine the worst of a 4-year-old (always asking “Why? Why? Why?”) and the worst of a skeptical teenager (think: arms folded, saying “That’s what you think! Prove it!”).

Evaluation Reasoning

  • Evaluation is about judging the merit or worth of a program. Researchers tend to be uncomfortable with making judgements, whereas that is what evaluators do.
  • Evaluation reasoning involves:
    • deciding what criteria you will use to judge the program
    • deciding what standards the program must meet for you to judge it as good (or not)
    • collecting data to make those judgments
    • making a “warranted” argument that links the evidence to your claims
  • If you have a program theory, use that to develop your criteria and compare your evidence to your theory.

The Evaluand

The Evaluand is the thing that you are evaluating. When you say that “it worked” or “it didn’t work”, the evaluand is the “it”.

  • Evaluating strategy. A strategy is a group of (program and/or policy) interventions that are all meant to work towards a common goal. We don’t learn in evaluation education how to evaluate a strategy. Robert Schwartz gave an interesting talk on this topic – he suggested that strategies are always complex (including, but not limited to, there being multiple interventions, multiple partners, interactions and interactions among those interactions, feedback loops, non-linearity, and subpopulations) and we don’t really have a good way of evaluating all of this stuff. He said he wasn’t even sure it is possible to evaluate strategies, “but can we get something from trying?” I thought this was an interesting way to approach the topic and I did think we learned some things from his work.
  • Evaluating complexity. Jonathan Morell did an interesting expert lecture on this topic [7]. Some of the key points I picked up from his talk:
    • Our program theories tend to just show the things that are in the program being evaluated (e.g., inputs, activities), but there are many things around the program that affect it as well, and some of those things we do not and cannot know.
    • We can draw on complexity science (a) instrumentally and (b) metaphorically.
    • Science cares about what is true, while technology cares about what works. If we think of evaluators as technologists (which it seems Morell does), then he’s in favour of invoking complexity in any way that works (e.g., if using it metaphorically helps us think about our program/situation, then do that and don’t worry that you aren’t using “complexity science” as a whole). He notes that “science begins to matter when technology stops working”.
    • Some of the concepts of complexity include:
      • small changes can lead to big changes
      • small changes can cascade through a system
      • there can be unintended outcomes, both positive and negative, of a system
      • attractors – “properties toward which a system evolves, regardless of starting conditions”
    • The NetLogo Model Library contains many different models of agent-based social behaviours (a toy agent-based sketch in the same spirit follows this list).
    • We might not even evaluate/measure things that seem “simple” (e.g., if we don’t understand that feedback loops can cause unpredictable things, then we won’t look for or measure those things).
    • There is no grand unified theory of complexity – it comes from many roots [8] – and it’s a very different way of looking at the world (compared to thinking about things as being more linear: input -> activity -> output).
    • Program designers tend to design simple programs – it’s very hard to align with all the other programs out there, each with its own culture, processes, etc.; it would take so long to do that that no one would ever get anything done. (People know that system-level interventions are needed, but they can only do what’s in their scope to do.)
    • Implications for evaluation – need to be close to the program to observe small changes, as they can lead to large effects; and because you can’t always predict what feedback loops there may be, you need to be there to observe them.
    • Even if the program doesn’t recognize the complexity of their situation, evaluators can use complexity concepts to make a difference.
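As an aside, it struck me how little machinery is needed to see some of these complexity ideas in action. Below is a purely hypothetical toy sketch in Python (nothing from Morell’s talk – the ring network, neighbourhood size, and threshold are all invented for illustration): agents adopt a behaviour once two of their neighbours have, and adding a single extra seed agent flips the outcome from a stalled cascade to full adoption – “small changes can lead to big changes” in a couple of dozen lines.

```python
# Hypothetical toy threshold-cascade model (illustration only, not from the talk).
# 100 agents sit on a ring; a non-adopter adopts a behaviour once at least 2 of
# its 4 nearest neighbours have adopted. One seed agent goes nowhere; adding a
# single adjacent seed tips the whole system -- a small change with a big effect.

def run_cascade(n_agents=100, threshold=2, seeds=(0,), max_steps=200):
    adopted = [False] * n_agents
    for s in seeds:
        adopted[s] = True
    for _ in range(max_steps):
        changed = False
        for i in range(n_agents):
            if adopted[i]:
                continue
            # two neighbours on each side of agent i, wrapping around the ring
            neighbours = [(i + d) % n_agents for d in (-2, -1, 1, 2)]
            if sum(adopted[j] for j in neighbours) >= threshold:
                adopted[i] = True
                changed = True
        if not changed:
            break
    return sum(adopted)

print("one seed adopter: ", run_cascade(seeds=(0,)))    # cascade stalls at 1
print("two seed adopters:", run_cascade(seeds=(0, 1)))  # cascades to all 100
```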

Data Collection, Analysis, and Interpretation

  • “Data literacy” = “the ability to understand and use data effectively to inform decisions (e.g., why to collect data, how to collect data, how to interpret data, how to turn data into information)”
  • Anytime you expect program staff (as opposed to evaluators) to collect data (or even to participate in things like being observed by an evaluator for data collection), you have to remember that collecting data takes away time and attention from direct service provision. A staff member will think “I can fill out this data collection sheet or I can save another life.” You have to make sure that staff understand the importance of the data and what it is going to be used for (e.g., to improve the program, or to secure future funding for the program/help insulate the program against potential funding cuts by having evidence that the program is having an effect) if you expect them to put effort towards collecting it.
  • Anyone who is going to be entering data (even if it’s data that’s collected as part of providing service but which will also be used for evaluation) needs to understand the importance of data quality. For example, do staff understand that if they put a “0” when they actually mean that the data is not available, that 0 will erroneously decrease the average you calculate from that data set? (A small worked example follows this list.)
    • Make sure data entry protocols are very clear about what exactly the data collector needs to do and *why* they need to do it, and that you include a data dictionary – you’d be surprised how differently people can interpret things.
  • What the data “says” vs. what the data “means”. It is very possible to misinterpret data, so it’s important to think about your data, your methods, and their limitations. For example, if you have survey data that tells you everyone loves your program, but the survey response rate was 5% or the survey questions were all biased, the data may “say” that everyone loves your program, but it really only means that the 5% who responded love your program, or that the biased questions gave you positive results – you don’t actually know what people thought about your program. Another example: if rates of errors went up after an intervention (what the data says), does it mean that more errors actually occurred, or that the new system is better at detecting errors?
  • Campbell’s Law: “The more any quantitative social indicator is used for social decision-making, the more subject it will be to corruption pressures and the more apt it will be to distort and corrupt the social processes it is intended to monitor.” [9]
  • Unanticipated consequences – we all talk about them, but few evaluations explicitly include looking for them (including budgeting for and doing the necessary open inquiry on site, which is the only way to get at unintended consequences)
  • Consequential validity – everything we do has consequences. In terms of questions/measures, consequential validity = “the aftereffects and possible social and societal results from a particular assessment or measure. For an assessment to have consequential validity it must not have negative social consequences that seem abnormal. If this occurs it signifies the test isn’t valid and is not measuring things accurately.” [10] E.g., if a test shows that a subgroup consistently scores lower, it could be the result of the test being biased against them (and thus the test is not validly measuring what it purports to be measuring).
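To make the “0 vs. missing” point above concrete, here’s a tiny sketch (the numbers are invented for illustration): coding a missing measurement as 0 drags the average down, while flagging it as missing and excluding it does not.

```python
# Invented example: five clients were supposed to be weighed; one measurement
# wasn't taken. Coding the gap as 0 distorts the mean; flagging it as missing
# (None) and excluding it gives the mean of the people actually measured.
weights_zero_coded = [72, 68, 0, 80, 75]       # 0 actually means "not measured"
weights_with_missing = [72, 68, None, 80, 75]  # None explicitly flags missing

mean_zero_coded = sum(weights_zero_coded) / len(weights_zero_coded)
valid = [w for w in weights_with_missing if w is not None]
mean_excluding_missing = sum(valid) / len(valid)

print(mean_zero_coded)         # 59.0  -- misleadingly low
print(mean_excluding_missing)  # 73.75 -- reflects the measured clients
```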

Implementation

  • “Implementation” = “a specific set of activities designed to put into practice an activity or program of known dimensions” (Cunning et al)
  • effective interventions × effective implementation × enabling contexts = socially significant outcomes (that is, you need interventions that work, you need to implement them well, and the context has to enable that – see the illustrative note after this list)
  • there is a growing evidence base of ‘what works’ in implementation – we should be evidence-based in our attempts to implement things
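One way to read the multiplicative form (my own illustrative numbers, not the presenter’s): if each factor were scored from 0 to 1, then 0.9 × 0.9 × 0.2 ≈ 0.16 – even an excellent intervention, implemented well, yields little in the way of socially significant outcomes when the context barely enables it, because any weak factor drags the whole product toward zero.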

Quality Improvement

  • Hana Saab discussed working as an evaluator in a healthcare environment where people tend to make assumptions like: PDSA cycle = evaluation (even though quality improvement projects are rarely rigorously evaluated and the reasons for achieving results are often not understood); better knowledge = improved practice (even though there are many steps between someone attending an education session and actually using what they learned in practice); and that contexts are homogeneous (which they aren’t!). She also noted that sometimes people conclude a program “didn’t work” without differentiating between “the program was implemented as intended but didn’t lead to the intended outcomes” and “the program wasn’t implemented as intended in the first place” [and, I would add, you could also conclude a program “worked” when it actually worked because staff adapted it into something that did work – in which case you’d wrongly credit the original program design – or the program might be fine in other contexts, just not in this one].
  • Realist evaluation allows the integration of process & outcome evaluation and focuses on “what works for whom and in what circumstances”

Aboriginal Evaluation

  • The CES has issued a statement on Truth & Reconciliation in response to the Truth & Reconciliation Commission of Canada’s report in which they resolved to:
    • include reconciliation in the existing CES value of “inclusiveness”
    • include reconciliation explicitly in the evaluation competencies
    • strengthen promotion of and support for culturally responsive evaluation,
    • implement consideration for reconciliation in its activities
  • Culturally-responsive evaluation:
    • There are many ways of knowing – Western ways of knowing are privileged, but all ways of knowing have strengths and limitations.
    • It’s important to recognize power differentials and inequities.
    • A bottom-up, strengths-based approach is advocated.
    • The 4 Rs (Kirkness & Barnhardt, 1991):
      • Respect
      • Relevance
      • Reciprocity
      • Responsibility
    • Reciprocal Consulting, whose presentation on this I attended, provides a great description of the 4 Rs on their website.

Creativity

  • The opening keynote speaker showed a video clip of an activity where they had a group of people line up by birthday without talking. People tend to go with the first right answer they find, which is why we end up with incremental improvements, rather than going on to find other right answers, some of which could be truly innovative.
  • We need spaces to be creative. Cubicles and offices and meeting rooms with whiteboard markers that don’t work are not conducive to being spontaneous, to rapid prototyping, or to other ways of being creative. It doesn’t cost that much to set up a creative space – room for people to get together, put up some foam boards or flip chart papers that you can write on or put sticky notes on, have a stock of markers and random prototyping supplies.

Communication

  • “The great enemy of communication, we find, is the illusion of it.” – William H. Whyte [11]. Don’t assume that just because you wrote a report and sent it to someone that (a) it has been read, or (b) it has been understood.
  • We make meaning from stories. Stories + stats are far more memorable, and more likely to drive people to action, than stats alone.

The Credentialed Evaluator designation

The CES created a professional designation program for evaluators – the only one in the world, in fact. Of the 1696 members of CES, 319 people (19%) currently hold this designation [12], with a further 140 in the process of applying. The society has put a lot of work into creating the designation, getting it off the ground, and optimizing the infrastructure to sustain it [13]. But the CE designation, I learned at this conference, is not without controversy.

  • Kim van der Woerd asked an astute question in a session I was in [14] on quality assurance for evaluation. The idea being discussed was that one might include having Credentialed Evaluator(s) working on the evaluation as a criterion for a high quality evaluation. Kim pointed out that doing that would privilege and give power to those people holding a CE designation, as well as the ways of knowing and evaluating that are dominant and thus included in the evaluation credentialing process. What about other evaluators? [15]

Meta-Evaluation

Meta-evaluation is evaluation of evaluation. How do we know if we are doing good quality evaluations? Moreover, how do we know if our evaluations are making a difference?

    • One study in New Zealand found that only 8 of 30 evaluations they assessed met their criteria for a “good” evaluation. A study by the UK National Audit Office found only 14 of 34 evaluations were sufficient to draw conclusions about the effects of the intervention [16].
    • The Alberta Centre for Child, Family, and Community Research is working on a quality assurance framework for evaluation. It’s not done yet, but when it is it will be published on their website, so I’ve made a mental note to go look for it later.
    • We don’t actually have a good evidence base that evaluation makes a difference. A project by EvalPartners contributed to that evidence base by showcasing 8 stories of evaluations that truly made a difference. They provided a visual that I found helpful in thinking about this (I’ve recreated the image and annotated it with the key points).

[Image: Evaluations making a difference]

[Image: 8 ways to enhance evaluation impact]
  • One audience member in a presentation I was in used an analogy of auditors for accounting – an auditor doesn’t *do* your accounting for you, but rather they come in and verify that you did your accounting well (according to accounting standards). But imagine if an auditor came in and you hadn’t done any accounting at all! That’s like bringing in an external evaluator to a program and saying “evaluate the program” when you haven’t set up anything for evaluation!
  • Meta-evaluation isn’t just something we should do at the end of an evaluation to see if that was a good evaluation we did. You should engage in meta-evaluation throughout the project, while you still have the opportunity to strengthen the evaluation!

Miscellaneous:

  • Several people referred to Eval Agenda2020, the global agenda for evaluation for 2016-2020, created by EvalPartners.
  • The Canadian Evaluation Society has a new(ish) strategic plan.

  • Context-driven evaluation approach [17] – having an overarching evaluation framework and tools (e.g., shared theory of change, outcome measures, reporting structure, database), but with the ability to adapt to local, organizational, & community contexts (as people adapt their programs at local sites)
  • “Deliverology” was the new buzzword this year – it was defined in one presentation as an approach to public services that prioritizes delivering results to citizens. Apparently it’s been talked about a lot in the federal public service.
  • Several people also mentioned that the Treasury Board Secretariat has a new evaluation-related policy on the horizon.
  • In his closing keynote, relating to the conference theme of “Evaluation On the Edge”, Michael Quinn Patton asked the audience to reflect on “Where is your edge?” My immediate thought on this was a reflection I’ve had before – that when I look back on the things I’ve done in my life so far that I thought were really amazing accomplishments – doing a PhD, playing a world record 10-day long game of hockey, doing a part-time MBA while working full-time – I started each one of them feeling “Oh my god! What am I doing? This is too big, too much, I won’t be able to do it!” I felt truly afraid that I’d gotten too close to the edge and was going to fall – not unlike how I felt when I did the CN Tower Edgewalk. But in each case, I decided to “feel the fear and do it anyway!” [18] and while all of those things were really hard, I did accomplish them and they are some of the best things I’ve ever done. I also remember having that same feeling when I took on my current job to evaluate a very big, very complex, very important project – “oh my god! It’s too big, it’s too much, what if I can’t figure it out??” But I decided to take the plunge and I think I’m managing to do some important work [19]. I think the lesson here is that we have to push ourselves to the edge – and have the courage to walk there – to make great breakthroughs.

Tips and Tools

  • Use the 6 Thinking Hats to promote evaluative thinking. I’ve used this activity in a teaching and learning context, and seen it used in an organization development/change management context, which now that I think of it were examples of evaluative thinking being applied in those contexts. I’ve usually seen it done where the group is split up so that some people are assigned the blue hat perspective, some the red hat perspective, etc., but Emma suggested that the way it is intended to be used is that *everyone* in the group is supposed to use each hat, together in turn.
  • Don’t use evaluation jargon when working with stakeholders. You don’t need to say “logic model” or “program theory” when you can just say “we are going to draw a diagram that illustrates how you think the program will achieve its goals” or “let’s explain the rationale for the program.” Sometimes people use professional jargon to hide gaps in their knowledge – if you really understand a concept, you should be able to explain it in plain English.
  • Backcasting: Ask stakeholders what claims they would like to be able to make/what they want to “prove” at the end of the evaluation and then work backwards: “What evidence would you need to be able to make that claim?” and then “How would we collect that evidence?”
  • Thinking about “known contribution” vs. “expected contribution” in your program theory. Robert Schwartz raised this in the context of IPCA for evaluating strategy, but I think it is useful for program logic models as well. I’ve thought about this before, but never actually represented it on any of my logic models.
  • Wilder Collaboration Factors Inventory, a “free tool to assess how your collaboration is doing on 20 research-tested success factors”
  • Adaptation Framework to adapt existing survey tools for use in rural, remote, and Aboriginal communities available from Reciprocal Consulting.
  • Treasury Board Secretariat’s Infobase – “a searchable online database providing financial and human resources information on government operations”
  • “Between Past and Future” by Hannah Arendt – has six exercises on critical thinking.
  • The Mountain of Accountability

Sessions I Presented:

  • Workshop: Accelerating Your Logic Models: Interactivity for Better Communication by Beth Snow & Nancy Snow
  • Presentation: Quick wins: The benefits of applying evaluative thinking to project development by M. Elizabeth Snow & Joyce Cheng

Sessions I Attended:

  • Workshop: Building Capacity in Evaluative Thinking (How and Why It is Different from Building Evaluation Capacity) by Emma Williams, Gail Westhorp, & Kim Grey.
  • Keynote Address: Silicon Valley Thinking for Evaluation by Catherine Courage
  • Presentation: Evaluating the Complex with Simulation Modeling by Robert Schwartz
  • Presentation: Blue Marble Evaluators as Change Agents When Complexity is the Norm by Keiko Kuji-Shikatani
  • Presentation: Organizational Evaluation Policy and Quality Assessment Framework: Learning and Leading by Tara Hanson & Eugene Krupa.
  • Presentation: Exemplary Evaluations That Make a Difference by Rachel Zorzi
  • CES Annual General Meeting
  • Presentation: Evaluation: Pushing the boundaries between implementing and sustaining evidence-based practices and quality improvement in health care by Sandra Cunning et al.
  • Presentation: Indigenous Evaluation: Time to re-think our edge by Benoit Gauthier, Kim van der Woerd, Larry Bremner.
  • Presentation: Drawing on Complexity to do Hands-on Evaluation by Jonathan Morell.
  • Presentation: Navigating the Unchartered Waters of INAC’s Performance Story: Where program outcomes meet community impacts by Shannon Townsend & Keren Gottfried.
  • Presentation: Supporting decision-making through performance and evaluation data by Kathy Gerber & Donna Keough.
  • Presentation: Utilizing Change Management and Evaluation Theory to Advance Patient Safety by Hana Saab; Rita Damignani
  • Closing Keynote: The Future: Beyond here there be dragons. Or are those just icebergs? by Michael Quinn Patton


Footnotes

1. There were a lot of presentations being held at every session, so I didn’t get to go to half of the ones that I wanted to, but that seems to be the case with every conference and I’m not sure how any conference organizer could solve that problem, short of recording and posting every session, which would be prohibitively expensive.
2. Or old tools that I know of but I haven’t thought about using in an evaluation context.
3. Where by “possible” I mean, when (a) I wrote down who said something in my notes, and (b) I can read my own writing.
4. See the source of this for further elaboration on the pieces of this definition.
5. I googled to get this definition, which was alluded to in the workshop I went to, and found it on this site.
6. Source: AES Ethical Guidelines [pdf]
7. His slide deck, which was from a longer workshop that he did previously (so he didn’t cover all of this in his lecture) is available here.
8. Check out this map of the roots of “complexity” science
9. Source
10. Source
11. The keynote speaker, Catherine Courage, had a slide with a similar quote that she attributed to George Bernard Shaw (“The single biggest problem in communication is the illusion that it has taken place.”), but when I googled to confirm that – because I know that most people don’t actually look for sources of quotes – I found out that George Bernard Shaw never said this. Shame though – I like the wording of the way people think (erroneously) Shaw said it better than the way that Whyte actually did say it.
12. Full disclosure: I am one of these Credentialed Evaluators.
13. e.g., the process by which you apply, as well as developing educational materials to ensure that CEs have options for education, as they have to do a certain number of education hours to maintain their credential.
14. She asked many, in fact, but in this instance I’m talking about a specific one that struck me.
15. I don’t think she mentioned this specifically in that session, but I was thinking about evaluations I’ve seen with community members as part of the evaluation team, where they were specifically included because they had lived experiences and relationships within the community that were invaluable to the project, but they did not have the things deemed necessary by CES to get CE designation. I would think their inclusion in the evaluation would make for a higher quality evaluation than if they had been excluded.
16. Source.
17. Cunning et al
18. That’s the definition of courage, right?
19. I guess only time will really tell!

One week until the 2016 Canadian Evaluation Society conference

One week from today, I’ll be on the opposite side of Canada, attending the Canadian Evaluation Society’s 2016 conference.

I’m presenting in two sessions at the conference: one pre-conference workshop and one conference presentation.

On June 5, my sister and I are giving a workshop based on a project we’ve been working on:

Accelerating Your Logic Models: Interactivity for Better Communication by Beth Snow and Nancy Snow

Logic models are commonly used by evaluators to illustrate relationships among a program’s inputs, activities, outputs, and outcomes. They are useful in helping intended users develop programs, communicate a program’s theory of change, and design evaluations. However, a static logic model often does not allow us to convey the complexity of the interrelationships or explore the potential effects of altering components of the model.

In this workshop, we will explore and create interactive logic models that will allow you to more easily demonstrate the logic within a complex model and to explore visually the implications of changes within the model. In addition, participants will be introduced to information design principles that can make their logic models – even complex ones – easier for intended users to understand and use.

Bring a logic model of your own that you would like to work on, or work with one of ours, to get some hands-on practice at accelerating your logic model.

You will learn:

  • to create an interactive logic model in a virtual environment

  • to speak and write in a more informative way about the visual representations in your logic models

  • to apply information design-based principles when generating logic models

On June 6, I’ll be giving a presentation based on my main project at work:

Quick wins: The benefits of applying evaluative thinking to project development by Beth Snow and Joyce Cheng

The Clinical & Systems Transformation (CST) project aims to transform healthcare in Vancouver by standardizing clinical practice and creating a shared clinical information system across 3 health organizations. Ultimately, the system will be used by 40,000 users at 40 hospitals, residential care homes, etc. The project includes an evaluation team tasked with answering the question “Once implemented, does CST achieve what it set out to achieve?” By being engaged early in the project, the evaluation team has been able to use evaluative thinking and evaluation tools to influence non-evaluators to advance the project, long before “the evaluation” itself is implemented. This presentation will explore the ways in which the early work of the evaluation team has influenced the development of the project — including facilitating leadership to articulate goals and helping the project use those goals to guide decisions — at the levels of individuals, project subteams, and the project as a whole.

There’s still time to register if you are interested!


Reflections from the Canadian Evaluation Society 2014 Conference

I had the good fortune of being able to attend the 35th annual conference of the Canadian Evaluation Society, held at the Ottawa Convention Centre from June 16-18, 2014. I’d only been to one CES conference previously, when it was held in Victoria, BC in 2010, and I was excited to be able to attend this one as I really enjoyed the Victoria conference, both for the things I learned and for the connections I was able to make. This year’s conference proved to be just as fruitful and enjoyable as the Victoria one and I hope that I’ll be able to attend this conference more regularly in the future.

Disappointingly, the conference did not have wifi in the conference rooms, which made the idea of lugging my laptop around with me less than appealing (I’d been intending to tweet and possibly live blog some sessions, but without wifi, my only option would have been my phone and it’s just not that easy to type that much on my phone). So I ended up taking my notes the old fashioned way – in my little red notebook – and will just be posting highlights and post-conference reflections here [1].

Some of the themes that came up in the conference – based on my experience of the particular sessions that I attended – were:

  • The professionalization of evaluation. The Canadian Evaluation Society has a keen interest in promoting evaluation as a profession and has created a professional designation called the “Credentialed Evaluator,” which allows individuals with a minimum of two years of full-time work in evaluation and at least a Master’s degree to complete a rigorous process of self-reflection and documentation to demonstrate that they meet the competencies necessary to be an evaluator. Upon doing so, one is entitled to put the letters “CE” after their name. Having this designation distinguishes you as qualified to do the work of evaluation – as otherwise, anyone can call themselves an evaluator – and so it can help employers and agencies wishing to hire evaluators to identify competent individuals. I am proud to say that I hold this designation – one of only ~250 people in the world at this point. At the conference there was much talk about the profession of evaluation – in terms of CES’s pride at having created the first – and practically the only [2] – designation of this type in the world, as well as distinguishing between research and evaluation [3].
  • Evidence-based decision making as opposed to opinion-based policy making or “we’ve always done it this way” decision making [4]. This brought up topics such as: the nature of knowledge, what constitutes “good” or “appropriate” evidence, and the fallacy of the hierarchy of evidence [5].
  • Supply side and demand side of evaluation. The consensus I saw was that Canada is pretty good at the supply side – producing evaluators and providing professional development for them – but could stand to do more work on the demand side – getting more decision makers to understand the importance of evaluations and the evidence they can provide to improve decision making.
  • “Accountability vs. Learning” vs. “Accountability for Learning”. One of the purposes for evaluation is accountability – to demonstrate to funders/decision makers/the public that a program is doing what it is intended to do. Another purpose is to learn about the program, with the goal of, for example, improving the program. But some of the speakers at the conference talked about re-framing this to be about programs being “accountable for learning”. A program manager should be accountable for noticing when things aren’t working and for doing something about it.
  • If you don’t do it explicitly, you’ll do it implicitly. This came up for me in a couple of sessions. First, in a thematic breakfast where we were discussing Alkin & Christie’s “evaluation theory tree”, which categorizes evaluation theories under “use,” “methods” or “valuing”, we talked about how each main branch is just an entry point, but all three areas still occur. For example, if you are focused on “use” when you design your evaluation (as I typically do), you still have to use methods and there are still values at play. The question is, will you explicitly consider those (e.g., ask multiple stakeholder groups what they see as the important outcomes, to get at different values) or will you not (e.g., just use the outcomes of interest to the funder, thereby only considering their values and not those of the providers or the service recipients)? The risk, then, is that if you don’t pay attention to the other categories, you will miss out on opportunities to make your evaluations stronger. The second time this theme came up for me was in a session distinguishing evaluation approach, design, and methods. The presenter, from the Treasury Board Secretariat, had evaluated evaluations conducted in government and noted that many discussed approach and methods, but not design. They still had a design, of course, but without having explicitly considered it, they could easily fall into the trap of assuming that a given approach must use a particular design, discounting the possibility of other designs that might have been better for the evaluation. “Rigourous thinking about how we do evaluations leads to rigourous evaluations.”

One of the sessions that particularly resonated for me was “Evaluating Integration: An Innovative Approach to Complex Program Change.” This session discussed the Integrated Care for Complex Patients (ICCP) program – an initiative focused on integrating healthcare services provided by multiple healthcare provider types across multiple organizations, focused on providing seamless care to those with complex care needs. The project was remarkably similar to one that I worked on – with remarkably similar findings. Watching this session inspired me to get writing, as I think my project is worth publishing.

As an evaluator who takes a utilization-focused approach to evaluation (i.e., I’m doing an evaluation for a specific purpose(s) and I expect the findings to be used for that purpose(s)), I think it’s important to have a number of tools in my tool kit so that when I work on an evaluation I have at my fingertips a number of options for how best to address a given evaluation’s purpose. At the very least, I want to know about as many methods and tools as possible – their purposes, strengths, weaknesses, and the basic idea of what it takes to use the method or tool – as I can always learn the specifics of how to do it when I get to a situation where a given method would be useful. At this year’s conference, I learned about some new methods and tools, including:

  • Tools for communities to assess themselves:
    • Community Report Cards: a collaborative way for communities to assess themselves [6].
    • The Fire Tool: a culturally grounded tool for remote Aboriginal communities in Australia to assess and identify their communities’ strengths, weaknesses, services and policies [7].
  • Tools for Surveying Low Literacy/Illiterate Communities:
    • Facilitated Written Survey: Options are read aloud, respondents provide their answer on an answer sheet that has either very simple words (e.g., Yes, No) or pictures (e.g., frowny face, neutral face, smiley face) on it that they can circle or mark a dot beside. You may have to teach the respondents what the simple words or pictures mean (e.g., in another culture, a smiley face may be meaningless).
    • Pocket Chart Voting: Options are illustrated (ideally photos) and pockets are provided to allow each person to put their vote into the pocket (so it’s anonymous). If you want to disaggregate the votes by, say, demographics, you can give different coloured voting papers to people from different groups.
  • Logic Model That Allows You To Dig Into the Arrows: the presenters didn’t actually call it that, but since they didn’t give it a name, I’m using that for now. In passing, some presenters from the MasterCard Foundation noted that they create logic models where each of the arrows – which represent the “if, then” logic in the logic model – is clickable, and when you click it, it takes you to a page summarizing the evidence that supports the logic for that link. It’s a huge pet peeve for me that so many people create lists of activities, outputs, and outcomes with no links whatsoever between them and call that a logic model – you can’t have a logic model without any logic represented in it, imho. One where you actually summarize the evidence for each link would certainly hammer home the importance of the logic needing to be there. Plus it would be a good way to test whether you are just making assumptions as you create your logic model, or whether there is good evidence on which to base those links. (A minimal sketch of this idea follows this list.)
  • Research Ethics Boards (REBs) and Developmental Evaluation (DE). One group noted that when they submitted a research ethics application for a developmental evaluation project, they addressed the challenge that REBs generally want a list of focus group/interview/survey questions upfront, but DE is emergent. To do this, they created a proposal with a very detailed explanation of what DE is and why it is important, and then created very detailed hypothetical scenarios describing how they would shape the questions in those scenarios (e.g., if participants in the initial focus groups brought up X, we would then ask questions like Y and Z). This allowed the reviewers to have a sense of what DE could look like and how the evaluators would do things.
  • Reporting Cube. Print out key findings on a card stock cube, which you can put on decision makers’ desks. A bit of an attention-getting way of disseminating your findings!
  • Integrated Evaluation Framework [LOOK THIS UP! PAGE 20 OF MY NOTEBOOK]
  • Social Return on Investment (SROI) is about considering not just the cost of a program (or the cost savings you can generate), but also the value created by it – including social, environmental, and economic value. It seemed very similar to Cost-Benefit Analysis (CBA) to me, so I need to go learn more about this!
  • Rapid Impact Evaluation: I need to read more about this, as the presentation provided an overview of the process, which involves expert and technical groups providing estimates of the probability and magnitude of effects, but I didn’t feel like I really got enough out of the presentation to see how this was more than just people’s opinions about what might happen. There was talk about the method having high reliability and validity, but I didn’t feel I had enough information about the process to see how they were calculating that.
  • Program Logic for Evaluation Itself: Evaluation -> Changes Made -> Improved Outcomes. We usually ask “did the recommendations get implemented?”, but we also need to ask “if yes, what effect did that have? Did it make things better?” (and, more challengingly, “Did it make things better compared to what would have happened had we not done the evaluation?”)
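Riffing on the MasterCard Foundation’s clickable-arrow idea mentioned above, here’s a minimal, hypothetical sketch (the structure and example content are mine, not theirs) of representing a logic model so that every arrow has to carry its supporting evidence – which makes evidence-free assumptions easy to spot:

```python
# Hypothetical sketch of the "clickable arrows" idea: a logic model as nodes
# plus explicit links, where every link (the "if, then" arrow) carries the
# evidence that justifies it. Names and citations are invented.
logic_model = {
    "nodes": ["training workshops", "staff knowledge", "practice change"],
    "links": [
        {
            "from": "training workshops",
            "to": "staff knowledge",
            "evidence": ["post-workshop knowledge test results (internal report)"],
        },
        {
            "from": "staff knowledge",
            "to": "practice change",
            "evidence": [],  # an empty list makes the untested assumption visible
        },
    ],
}

# Flag any arrow that is pure assumption -- no evidence attached.
for link in logic_model["links"]:
    if not link["evidence"]:
        print(f'Assumption to test: {link["from"]} -> {link["to"]}')
```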

A few other fun tidbits:

  • A fun quote on bias: “All who drink of this remedy recover in a short time, except those whom it does not help, who all die. Therefore, it is obvious that it fails only in incurable cases.” -Galen, ancient Greek physician
  • Keynote speaker Dan Gardner mentioned that confidence is rewarded and doubt is punished (e.g., people are more likely to vote for someone who makes a confident declaration than one who discusses nuances, etc.). An audience member asked what he thought about this from a gender lens, as men are more often willing to confidently state something than women. Gardner’s reply was that we know that people are overconfident (e.g., when people say they are 100% sure, they are usually right about 70-80% of the time), so whenever he hears people say “What’s wrong with women? How can we make them be more confident?”, he thinks “How can we make men be less confident?”
  • A great presentation from someone from the Treasury Board Secretariat provided a nice distinction between:
    • evaluation approach: the high-level conceptual model used in undertaking an evaluation in light of the evaluation objectives (e.g., summative, formative, utilization-focused, goal-free, goal-based, theory-based, participatory) – not mutually exclusive (you can use more than one approach)
    • evaluation design: tactic for systematically gathering data that will assist evaluators in answering evaluation questions
    • evaluation methods: actual techniques used to gather & analyze data (e.g., survey, interview, document review)
    approach – strategic – evaluation objectives
    design – tactical – evaluation questions
    methods – operational – evaluation data
  • In addition to asking “are we doing the program well?”, we should also ask “are we doing the right thing?” Relevance is a question that the Treasury Board seems to focus on, but I think I haven’t given it much thought. Something to consider more explicitly in future evaluations.
  • Ask not just “how can I make my evaluations more useful?” but also, “how can I make them more influential?”
  • In a presentation on Developmental Evaluation, the presenter showed a diagram something like this (I drew it in my notebook and have now reproduced it for this blog), which I really liked as a visual:

[Image: diagram of decision-making through feedback loops]
It shows how we are always making decisions on what actions to take based on a combination of knowledge and beliefs (we can never know *everything*), but we can test out our beliefs, feed the results back in, and repeat – and over time we’ll be basing our actions more on evidence and less on beliefs.

Footnotes

1. Which, in truth, is probably better for any blog readers than the giant detailed notes that would have ended up here otherwise!
2. Apparently there is a very small program for evaluation credentialing in Japan, but it’s much smaller than the Canadian one.
3. Which is a very hot topic that leads to many debates, which I’ve experienced both at this conference and elsewhere.
4. Or, as a cynical colleague of mine once remarked she was involved in: decision-based evidence making.
5. Briefly, there is a hierarchy of evidence pyramid that is often inappropriately cited as being an absolute – that higher levels of the hierarchy are absolutely and in all cases better than lower levels – as opposed to the idea that the “best” evidence depends on the question being asked, not to mention the quality of the specific studies (e.g., a poorly done RCT is not the same as a properly done RCT). I’ve also had this debate more than once.
6. The presentation from the conference isn’t currently available online – some, but not all presenters, submitted their slide decks to the conference organizers for posting online – but here’s a link to the general idea of community report cards. The presentation I saw focused on building report cards in collaboration with the community.
7. Again, the presentation slide deck isn’t online, but I found this link to another conference presentation by the same group which describes the “fire tool”, in case anyone is interested in checking it out.

Intro to Philosophy – Week 7 – Time Travel

  • this module focused on the paradoxes of time travel and some ways to defend the logical possibility of backwards time travel (mostly from a David Lewis paper)
  • time travel involves:
    • external time = “time as it is registered by the world at large” – e.g., the movement of the tides, the rotation of the Earth; “time as it is registered by the majority of the non-time-travelling universe”
    • personal time = “time as it is registered by a particular person or a particular travelling object” – e.g., your hair greying, the accumulation of your digestive products
    • normally, external time = personal time
    • but for time travel, the two diverge
  • forward time travel – “external time and personal time share the same direction, but have different measures of duration”
  • backward time travel – “external time and personal time diverge in direction” and duration (in that you are travelling, for example, -50 years of external time while personal time goes forward)
  • Einstein’s Special Theory of Relativity says that if you travel fast enough, forward time travel does occur (because of time dilation)
  • backward time travel is more speculative – it’s debated whether physics supports the notion of backward time travel, though the General Theory of Relativity “seems to predict that under certain circumstances” (e.g., enormous mass, enormous speed of mass) “it is possible to create circumstances where personal time and external time diverge in duration and direction”
  • Lewis provides an argument that backward time travel is “logically possible” – not that it is physically possible
  • the grandfather paradox is basically that backward time travel is not possible because:
    • “if it was possible to travel in time it would be possible to create contradictions.
    • it is not possible to create contradictions.
    • Therefore, it is not possible to travel backwards in time”
  • e.g., if you could travel backwards in time, you could kill your grandfather before he fathers your parent, which would prevent you from ever being born – but if you didn’t exist, how could you go back in time to kill your grandfather?
  • another example: you can’t go back in time to kill Hitler in 1908 because you already know that Hitler lived until 1945, so if you did travel into the past, you are guaranteed not to succeed in killing Hitler. So your actions in the past are restricted, but that’s not the same as saying it’s impossible that you travelled back in time
  • Lewis agrees that contradictions can’t occur, but argues that time travel need not necessarily create contradictions
  • compossibility: possible relative to one set of facts may not be possible relative to another set of facts.
    • e.g., relative to the fact that I have a functioning voice box, it’s possible for me to speak Gaelic; but relative to the fact that I’ve never learned it, it’s not possible
  • so it’s “compossible” to kill Hitler in the past (he was mortal, I am physically capable of shooting a gun), but relative to the fact that Hitler was alive in 1945, it’s not “possible” for him to be killed in 1908
  • two senses of change:
    • replacement change: e.g., if I knock a glass off a table, I’d replace whole glass with a pile of glass fragments
    • counterfactual change: “the impact that you have assessed in terms of what would have happened (counterfactually) if you hadn’t been present”
      • e.g., my alarm clock going off this morning changed the course of my day (relative to if it hadn’t gone off)
  • Lewis thinks replacement changes can happen to concrete objects, but not to time
  • he also says that time travellers could cause a counterfactual change – i.e., the time traveller can affect things in the past (compared to if they hadn’t been there) – but they don’t cause a replacement change (i.e., it’s not like the past happened one way and then changed to another way – it always happened only one way)
  • causal loops are “a chain of events such that an event is among its own causes”; they aren’t paradoxes, but they do “pose a problem for the intelligibility of backward time travel”
  • e.g., imagine you travel back in time with a 2012 copy of Shakespeare’s complete works and give them to the young Shakespeare, who then claims them as his own – well, the only reason the 2012 copy exists is because you gave it to Shakespeare – but who wrote it? where did the information in it come from?
  • [or you could become your own grandfather by sleeping with your grandmother in the past – but you could only do that if you existed, and you couldn’t exist unless you’d fathered your own parent, which you couldn’t do if you didn’t exist first.]
  • Lewis agrees that causal loops are strange, but they aren’t impossible
  • there are 3 possible chains of events:
    • infinite linear chain: every event has a prior cause, so you can never get an answer to what the first cause was because you can always ask “but what caused that?”
    • finite linear chain: the first event in the chain has no cause – e.g., the Big Bang wasn’t just the first event in time, it was the beginning of time – no time existed before that (as Hawking says, asking “what happened before the Big Bang?” is like asking “what’s north of the North Pole?”) – so you still have the problem of “where does the information come from?”
    • finite non-linear chain: (causal loops) – again, we still have no explanation of where the information originally came from, but it’s no more problematic than the other two
  • there are other questions that philosophers think about with respect to time travel:
    • how can you bilocate? i.e., how can the you from the future be standing next to the you from the present?
    • what physical laws govern time travel?
  • there’s also the idea of branching histories – you could go back to the past and kill Hitler, but you’d have killed Hitler in only one version of history, while the version of history you came from still had a Hitler who lived until 1945 (which raises the question: is this really time travel if you travelled to what is really a different history?)
  • another “interesting question is whether the mechanisms from time travel that general relativity may permit, and the time travel mechanisms that quantum mechanics may permit, will survive the fusion of general relativity and quantum mechanics into quantum gravity”
  • Hawking has posed another challenge to the “realistic possibility of time travel” – if time travel is possible, where are all the time travellers? Why haven’t we seen them?
  • a “closed time-like curve” is “a path through space and time that returns to the very point whence it departed, but that nowhere exceeds the local speed of light. It’s a pathway that a physically possible object could take, that leads backward in time.” – it’s debated whether this is realistic
  • but if it’s true, you could only access history once a closed time-like curve has been generated (e.g., if one is generated in 2017, then people in the future can travel back only as far as 2017) – so perhaps we haven’t seen time travellers yet because no one has yet generated a closed time-like curve

Intro to Philosophy – Week 6 – Are Scientific Theories True?

  • this module was not what I expected – I was expecting to learn about the philosophy of science (e.g., positivism, post-positivism, etc.), but instead the whole module was about the debate between scientific realism vs. scientific anti-realism – a debate about the aims of science (rather than a debate on a specific scientific topic)
  • two main aims of science seem to be:
    • science should be accurate and provide us with a good description and analysis of the available experimental evidence in a given field of inquiry. We want “scientific theories to save the phenomena”
    • science is not just about providing an accurate account of the available experimental evidence and saving the phenomena, but should “tell us a story about those phenomena, how they came about, what sort of mechanisms are involved in the production of the experimental evidence, etc.”
    • [I don’t fully understand what “save the phenomena” means – the instructor in the lecture just says it like we should understand it. Some further elaboration was given in the elaboration on the quiz that appeared in the first lecture, where the course instructor wrote that “saving the phenomena” is also known as “saving the appearances”: providing a good analysis of scientific phenomena as they appear to us, without any commitment to the truth of what brings about those phenomena or appearances” ]
  • Ptolemaic astronomers described the motion of planets as being along small circles that were rotating along larger circles; they didn’t necessarily believe this to be what was actually happening – rather, it was a mathematical contrivance that “saved the phenomena” – that is, as long as the calculations agreed with observations, it didn’t matter if they were true (or even likely)
  • Galileo, however, “replaced the view that science has to save the appearances, with the view that science should in fact tell us a true story about nature”
  • scientific realism = the “view that scientific theories, once literally construed, aim to give us a literally true story of the way the world is”
    • a semantic aspect to this idea: “once literally construed” means that “we should assume that the terms of our theory have referents in the external world” (e.g., planets are planets. Electrons are electrons.)
    • an epistemic aspect to this idea: “literally true story” – “we should believe that our best scientific theories are true, namely that whatever they say about the world, or better about those objects which are the referents of their terms, is true, or at least approximately true”
  • the “No Miracles Argument” suggests that unless we believe that scientific theories are at least approximately true, the “success” of science at “making predictions later confirmed by observation, explaining phenomena, etc.” would be very unlikely
  • “constructive empiricism” agrees with the semantic aspect of scientific realism (i.e., we should take the language of science at face value), but disagrees with the epistemic aspect (i.e., it holds that a theory does not need to be true to be good). Constructive empiricists think “Models must only be adequate to the observable phenomena, they are useful tools to get calculations done, but they do not deliver any truth about the unobservable entities” (e.g., atoms, protons, etc. that we cannot observe with the naked eye) – so the theory does not need to be “true”, it just needs to be “empirically adequate”. They think that science is successful because the theories that survive turned out to be the “fittest” (survival of the fittest) – the ones that best “saved the phenomena” over time.
  • Constructive empiricists view the “metaphysical commitment” necessary for scientific realism to be “risky”. If we discover later that something in our theory was non-existent, it would make scientific realism wrong, but not constructive empiricism.
  • The scientific realist would counter that the theories that survive do so because they are true (and those that fail do so because they are false).
  • Another issue is the distinction between observed vs. unobserved. E.g., observing with the “naked eye” and observing with scientific instruments. Why should we believe one more than the other?
  •  Philip Kitcher and Peter Lipton say that “we are justified to believe in atoms, electrons, DNA and other unobservable entities because the inferential path that leads to such entities is not different from the inferential path that leads to unobserved observables”
  • e.g., we know about dinosaurs from fossil evidence – we didn’t observe the dinosaurs ourselves, but can infer from the fossils. Similarly, we can infer Higgs Bosons from the evidence we get from the Large Hadron Collider.
  • “inference to the best explanation” = “we infer the hypothesis which would, if true, provide the best explanation of the available evidence”

Intro to Philosophy – Week 5 – Should You Believe What You Hear?

  • This week we are talking about “whether and in what circumstances you can believe what other people tell you”
  • will talk about the Enlightenment (1700-1800) – where reason, science, and liberal democracy were on the rise and religion and monarchy were in decline
  • intellectual autonomy was an ideal/virtue in the Enlightenment
  • David Hume is well known for his naturalistic philosophy – no appeal to God/supernatural in his philosophical explanations
  • Hume concluded that “you should never believe a miracle occurred on the basis of testimony”
    • testimony = “any situation in which you believe something on the basis of what someone else asserts, either verbally or in writing”
    • a lot of what we believe is based on the testimony of others (we can’t directly experience everything – so we believe lots of things based on what others say or write)
    • “you should only trust testimony when you have evidence that the testifier is likely to be right”
    • evidentialism: “a wise man… proportions his belief to the evidence”
  • miracle: “an exception to a previously exceptionless regularity” – e.g., someone rising from the dead – we’ve never seen that happen before
  • by definition, miracles are unlikely, and since we shouldn’t trust testimony unless there is evidence that the testifier is right, we shouldn’t trust a testimony of miracle
  • as well, people are often wrong when they testify (either intentionally (lying) or unintentionally (mistaken)).
  • so Hume concludes that you should never trust a testimony that a miracle occurred
  • Thomas Reid was a minister who challenged Hume’s argument
  • Hume and Reid both noted that we don’t demand evidence that our senses are likely to be right before we trust them
  • so Reid argued that trusting testimony is like trusting your senses: we shouldn’t demand that we only trust testimony when we have evidence that it’s likely to be right (since we don’t demand that of our senses)
  • Hume & Reid both believed that we are hardwired to think in certain ways – e.g., that we are hardwired to believe our senses
  • Reid further believed that we are hardwired to trust other people’s testimony – he thought we had an innate “principle of credulity”, which he defined as “a disposition to confide in the veracity of others and to believe what they tell us”
  • Reid noted that small children are very much disposed to believe what people tell them (even more so than adults) (so basically, he thinks it’s natural to believe people, since kids do it the most, and it gets constrained by experience) – he argues that “if our trust in testimony were based on experience (as Hume claims) it would be weakest in children”, but it’s not – “therefore, the principle of credulity is innate and not based on experience”
  • but Hume is talking about what people ought to do – he would say that children should not trust other people without evidence, whereas Reid is talking about what children actually do
  • Reid also believed in a “principle of veracity” = “a principle to speak the truth … so as to convey our real sentiments”, and so “lying is doing violence to our nature”
    • basically, Reid thinks we are naturally trusting, naturally honest beings
  • Hume noted many ways that people testify falsely: sometimes we have motives to lie; people enjoy believing what they are told because we “find the feelings of surprise and wonder agreeable”; sometimes we lie because we get pleasure from telling news (even if it’s not true) [also, one might testify falsely because they are mistaken – they believe what they say is true, but they are actually wrong]
  • Immanuel Kant, German philosopher, wrote: “Enlightenment is man’s emergence from his self-incurred immaturity. Immaturity is the inability to use one’s own understanding without the guidance of another. […] The motto of the enlightenment is therefore: Have courage to use your understanding.”
    • Kant felt that not trusting another person’s testimony is a virtue, which he called intellectual autonomy – e.g., don’t believe something just because an authority or a religion tells you to
    • Kant said you should obey what authorities tell you to do, but not what they tell you to think
  • Hume would be a fan of intellectual autonomy; it’s OK to trust other people, but it’s not OK to trust other people blindly
  • Reid held intellectual solidarity  (because we are “social thinkers: our beliefs and opinions are naturally guided by other people”) to be a virtue (rather than intellectual autonomy)
  • Kant appeals to the Latin motto “sapere aude” = “dare to be wise” or, slightly less literally, “dare to know” – he argues that “if you base your beliefs on testimony, they will not amount to knowledge”
  • a philosophical tradition, going back to Plato, that says that “Genuine or real knowledge requires what Plato called the ability to “give an account”: the ability to explain, or to situate that knowledge in some broader body of information.” – you can’t get that from testimony
  • so, the value of intellectual autonomy comes from the fact that knowledge/understanding/wisdom is only possible for an intellectually autonomous person
  • another way to look at it is that our beliefs/opinions tend to be passed on from parents to kids, and from people around you (your community) to you
  • Reid would view this as a good tendency, but Hume would be skeptical that this is a good thing
  • if you value progressive/innovative ideas breaking with tradition, you’ll side with Hume, but if you value conservation of your community’s beliefs and don’t like radical breaks from tradition, you’ll side with Reid

Intro to Philosophy – Week 4 – Morality

  • the first lecture explored the “status of morality” – not “is this moral statement correct?” but rather “what is it that we are doing when we make moral statements? are moral statements objective facts? or are they relative to culture or personal perspective? are they expressions of emotion?”
  • empirical judgments are things that we can discover by observation (e.g., the earth revolves around the sun; electricity has positive and negative charges; the Higgs boson exists; it was sunny today)
  • moral judgments are things that we judge to be right/wrong, good/bad (e.g., it is good to give to charity; parents are morally obliged to take care of their children; Pol Pot’s genocidal actions were morally abhorrent; polygamy is morally dubious)
  • 3 questions to ask about these judgments
    1. are they the kinds of things that can be true/false or are they merely opinions? (empirical judgments can be true/false, and some philosophers think that moral judgments are merely opinion, though others disagree)
    2. if moral judgments can be true/false, what makes them true/false?
    3. if they are true, are they objectively true? (or only true relative to a culture/personal approach)
  • three broad approaches that philosophers have taken to these questions: objectivism, relativism, and emotivism

Objectivism

  • “our moral judgments are the sorts of things that can be true or false, and what makes them true or false are facts that are generally independent of who we are or what cultural groups we belong to – they are objective moral facts”
  • in this approach, if people disagree about morality of something, they are seen as disagreeing over some objective fact about morality
  • e.g., genocide is morally abhorrent – this seems to be something that can be true/false, and seems to be objectively true (if someone disagreed, we’d probably think they are wrong!)
  • e.g., polygamy is morally dubious – but many cultures practice it – perhaps it isn’t objectively true – so this example argues against objectivism
  • objection to objectivism: how can we determine what the empirical truth of a moral claim is? We can’t observe it like we do with empirical judgements.
    • potential responses to the objection: if you take the position that what is right is what maximizes overall happiness, then you can observe which option maximizes overall happiness to make your moral judgements. Or you can say that there are mathematical facts that we can know without observing them in the physical world – instead, you reason your way to them – so we can do the same with morals.

Relativism

  • “our moral judgments are indeed things that can be true or false, but they are only true or false relative to something that can vary between people”
  • e.g., the statement “one must drive on the left side of the road” – is true in Britain, but false in the US (so it’s a statement that can be true or false, but whether it is true or false is relative to where you are)
  • e.g., polygamy is morally dubious – can be true or false, but depends on your culture
  • e.g., Oedipus sleeping with his mother was morally bad – (remember, he didn’t know it was his mother) – if you consider incest wrong, is it wrong across the board or only wrong if you know?
  • subjectivism: a form of relativism where “our moral judgments are indeed true or false, but they’re only true or false relative to the subjective feelings of the person who makes them” “X is bad” = “I dislike X”
    • subjectivism has a hard time explaining disagreements
  • cultural relativism: a form of relativism where “our moral judgments are indeed true or false, but they’re only true or false relative to the culture of the person who makes them.” “X is bad” = “X is disapproved of in my culture”
  • objection to relativism: it seems like there is moral progress (e.g., people used to think that slavery was morally OK, but now we’ve progressed to say that slavery is morally wrong; however, under a relativist view, you’d say that slavery was morally acceptable relative to that time and culture). So relativism does not allow for moral progress.
    • potential answer to the objection: cultures overlap – so, for example, if you consider “America” a culture

Emotivism

  • “moral judgments are neither objectively true/false nor relatively true/false. They’re direct expression of our emotive reactions”
  • objection to emotivism: we reason our way to moral conclusions – e.g., you might say “it’s wrong for Oedipus to sleep with his mother,” but then someone says “But he didn’t know it was his mother” and then you reason “OK, he can’t be held morally responsible since he didn’t know.” But emotivism says that moral judgments are only based on emotions
    • potential answer to the objection: some evaluations are subject to reason – e.g., if you prefer A to B and prefer B to C, but then prefer C to A, that’s irrational. So we do use reason when it comes to emotions/preferences.
  • some people in the class discussion asked questions like “Can’t there be a universal principle that unites objectivism and relativism?” E.g., a relativist might say “Women should wear headscarves in some cultures but not others,” but an objectivist could say the principle is “When in Rome, do as the Romans do” – which would work out to “Women should wear headscarves in those cultures where that is what is expected and not in other cultures where it is not”. Another discussion point was that we could agree on a moral judgment but disagree on the reason for it (e.g., we agree that kicking dogs is morally wrong, but one person might think it’s because you are causing pain to the dog, while another thinks it’s because it desensitizes the person doing it to cruelty)
  • “Objective” can mean moral principles independent of us, or it can mean moral principles apply to everyone equally (relativists would just object to the latter).
  • Another question from the class was whether objectivism could be right for some moral principles, relativism for others, and emotivism for yet others. Philosophers talk about “agent neutrality” – the reasons that morality provides for whether something is moral are independent of the individual – and they talk about morality as overriding. If this is correct, you’d expect there to be a unified domain of morality.
  • Probably none of these theories is right as it stands – they all need some work to figure out which, if any, is correct.

Intro to Philosophy – Week 3 – Philosophy of the Mind

  • Cartesian dualism: the body is made of material stuff (i.e., stuff that has “extension” (i.e., takes up space)) and the mind is made of immaterial stuff (i.e., does not have extension)
  • Princess Elizabeth of Bohemia was a student of Descartes who brought up the following problem: how can an immaterial mind affect a material body? Our thoughts cause us to do things, but how does the immaterial interact with the material?
  • Another problem is how does the ingestion of a material substance (e.g., psychoactive drugs) affect an immaterial mind (i.e., hallucinations)?
  • Physicalism = “all that exists is physical stuff”
  • Identity theory = one view of physicalism in which “mental phenomena, like thoughts and emotions, etc. are identical with certain physical phenomena”
    • e.g., the mental state of “pain” is identical to a particular type of nerve cell firing
    • a reductionist view – i.e., reduces mental states to physical processes
    • token = an instance of a certain type (e.g., Fido and Patches are two tokens of the type “Basset hound”)
    • token identity theory = each instance of a mental phenomenon (e.g., a particular pain that I am feeling) is identical to a particular physical state that I’m in
    • type-type identity theory = types of mental phenomena (like “pain” or “sadness”) are identical to types of physical phenomena (e.g., a particular cocktail of neurotransmitters, hormones, etc.)
      • type identity theory is a stronger claim than token identity theory
  • problem with type-type identity theory:
    • a human, an octopus, and an alien can all feel pain, but have very different brain states
    • Hilary Putnam raised this issue of “multiple realisability” in 1967 – the same mental state can be “realised” by different physical states
    • similarly – currency can be coins & paper in one place, but shells in another place – so currency is “multiply realisable”. It doesn’t matter what they are made of – what matters is how they function.
  • Functionalism = “we should identify mental states not by what they’re made of, but by what they do. And what mental states do is they are caused by sensory stimuli and current mental states and cause behaviour and new mental states”
    • e.g., the smell of chocolate (a sensory stimulus) causes a desire for chocolate (a mental state), which may cause the thought (another mental state) “where is my coat?” and the behaviours of putting on a coat and going to the store; but if I have a belief that there is chocolate in the fridge, the desire for chocolate could instead lead to the behaviour of getting the chocolate out of the fridge
    • functionalism gets away from the question of “what are mental states made of?” and instead focuses on what mental states do
  • philosophers often use the computer as a metaphor for mind – a computer is an information processing machine and it doesn’t matter what it’s made of, it only matters what it does
  • this is a computational view of the mind
  • Turing Test – you ask an entity questions and you don’t know if you are talking to a person or a computer. If we can build a computer that can fool the person asking questions into thinking they are human, we have built a computer that is sufficiently complex to say that it can “think” or it has a “mind”
    • some problems with the Turing test:
      • it’s language-based, so a being that can’t use our language couldn’t pass it
      • it’s too anthropocentric – what about animal intelligence? or aliens
      • it does not take into account the inner states of a machine – e.g., a machine that calculates 8 + 2 = 10 is going through a process, but a machine that just has a huge database of files and pulls the answer 10 out of its “8 + 2” file is not – we wouldn’t want to say that the latter is “thinking” (see the toy sketch after this list)
  • John Searle’s Chinese Room Thought Experiment
    • You are in a room where you get slips of paper with symbols on them delivered to you through an “input” hole in the wall, and you have a book that tells you what symbols to write in response to those symbols, which you write down on a slip of paper and pass through the “output” slot in the wall. As it turns out, the symbols are Chinese characters, and the book is written in such a way that you are giving intelligent answers to the person sending the questions to you. When they receive your “answers”, they are convinced you are a being with a mind that is answering their questions – but you have no idea that they are questions and no idea what you are saying in response, because you can neither read nor write Chinese. This is how computers work – they get an input and they are programmed with a list of rules to produce a certain output. But we shouldn’t say that the computer is “thinking” any more than we’d say the person in the room understands Chinese. There is no understanding going on within a computer – it doesn’t have a “mind”, and if it passes the Turing test, it’s just a really good simulation.
    • syntactic properties = physical properties, e.g., shape
    • semantic properties = what the symbols means/represents
    • a computer only operates based on syntactic properties – it is programmed to respond to the syntactic property of a given symbol with a given response – it does not “understand” its semantic properties
    • aboutness of thought – thoughts are “about” something – they have meaning
  • some problems with the computational view of the mind
    • doesn’t allow us to understand how we can get “aboutness of thought”
    • the “gaping hole of consciousness”
    • the hard problem of consciousness: what makes some lumps of matter have consciousness and others don’t have consciousness?
  • a lot of philosophers were writing when computers were becoming a big deal, so perhaps their thinking was limited by thinking of minds as computers – perhaps we should step away from computational analysis as a metaphor for the mind because it’s limiting our thinking?
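
To make that “inner states” objection concrete, here’s a toy sketch of my own (in Python – this is my illustration, not something from the course): two functions that give identical answers to “what is 8 + 2?”, so they’re indistinguishable from the outside, even though only one of them does anything resembling a calculation.

```python
# Toy illustration of the "inner states" objection to the Turing test
# (my own sketch, not from the course). Both "machines" answer addition
# questions correctly, so an outside questioner can't tell them apart.

# Machine 1: actually carries out a calculation (a process happens inside).
def compute_sum(a, b):
    return a + b

# Machine 2: just looks the answer up in a giant pre-built table -- the
# equivalent of pulling "10" out of an "8 + 2" file. (In the thought
# experiment the table is simply given; here we have to build it somehow.)
LOOKUP_TABLE = {(a, b): a + b for a in range(100) for b in range(100)}

def lookup_sum(a, b):
    return LOOKUP_TABLE[(a, b)]

# From the questioner's point of view, the two are indistinguishable:
print(compute_sum(8, 2))  # 10
print(lookup_sum(8, 2))   # 10
```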

 

Follow-up discussion

  • most philosophers use the phrase “intentionality”, which the prof of this session avoided when she talked about “aboutness of thought” because it comes with a lot of philosophical “baggage” that she didn’t want to get into
  • in the discussion forum of the class, people were asking things like “do animals have minds? and how could we know if animals have minds?”
    • one school of philosophy says that you need to have language to have thoughts and since animals don’t have language (as far as we know), they don’t have thoughts
    • but others don’t think this is a fair argument – e.g., if a dog is barking at a squirrel in a tree, just because it might not have as “rich” a concept of squirrel as humans do (e.g., a squirrel is a mammal with a bushy tail, etc.), we can still infer from its behaviour that it is “thinking” something we can roughly describe as “the dog thinks there’s a squirrel in the tree”
    • she suggests checking out Peter Carruthers’ work on the animal mind for more information
  • someone in the discussion said that the Turing test doesn’t test whether a machine is conscious, but rather it tests at what point humans are willing to attribute conscious states to other things (similarly, at what point do infants start to think of other people as having consciousness?)

 


Intro to Philosophy – Week 2 – Epistemology

Epistemology

  • studying and theorising about our knowledge of the world.
  • we have lots of information, but how do we tell good information from bad information?
  • “propositional knowledge” = knowledge that a certain proposition is the case
  • “proposition” = what is expressed by a declarative sentence, i.e., a sentence that declares that something is the case.
    • e.g., “the cat is on the mat” is a sentence about how the world is
    • it can be true or false
  • not all statements are declarative (e.g., “Shut that door” is not a sentence that declares how the world is. It cannot be true or false).
  • “ability knowledge” = know-how (e.g., knowing how to ride a bike)
  • two conditions for propositional knowledge
    • truth condition – if you know something is the case (e.g., you know that the cat is on the mat), then it has to be true (e.g., the cat really is on the mat)
      • you cannot know a falsehood
      • you can think you know a falsehood, but you cannot actually know it
        • we are interested in when you actually know something, not just when you think you know
    • belief – knowledge requires that you believe it to be true (e.g., if you don’t believe Paris is the capital of France, you cannot have the knowledge that Paris is the capital of France)
      • someone might say “I don’t just believe that Paris is the capital of France, I know that Paris is the capital of France.” But this doesn’t mean that belief in a proposition is different in kind from knowledge of that proposition – just that we don’t merely believe it, but also take ourselves to know it, which indicates that a knowledge claim is stronger than a belief claim (i.e., knowledge at the very least requires belief).
  • this doesn’t mean you have to be infallible or certain, but if you are wrong about the fact, then you didn’t really know it (you just thought you did)
  • also, when we talk about propositional knowledge, we aren’t talking about knowledge that something is likely or probably true – we are talking about something that either is or is not true
    • we do sometimes “qualify” or “hedge” our knowledge claims (perhaps because we are unsure), but we are really concerned with actual truth
  • knowledge isn’t just about getting it right – it also requires getting to the truth in the right kind of way
    • e.g., imagine a trial where the accused is, in fact, guilty. One juror decides that the accused is guilty based on considering the evidence/judge’s instructions/the law, while another juror decides the accused is guilty based on prejudice without listening to any of the evidence. Although they both “got it right” (i.e., what they believe is true), the first juror knows the accused is guilty, but the second juror does not know it
  • there are two intuitions about the nature of knowledge:
    • anti-luck intuition – it’s not a matter of luck that you ended up at the right answer; you actually formed your belief in the right kind of way (e.g., considering the evidence, making reasoned arguments), not that you got to the truth randomly/by chance
    • ability intuition – you get to the truth through your ability (e.g., the juror who used prejudice and happened to get the right answer did not get to the right answer by their abilities)

The Classical Account of Knowledge and the Gettier Problem

  • knowledge requires more than just truth and belief – but what is it that is required?
  • the classical account of knowledge (a.k.a., the tripartite account of knowledge):
    1. the proposition is true
    2. one believes it
    3. one’s belief is justified (i.e., you can offer good reasons in support of why you believe what you do)
  • until the mid-1960s, this classical account of knowledge was accepted by most people
  • but in 1963, Edmund Gettier published a 2.5-page paper that demolished this account – he showed counter-examples of situations that fit the three criteria above, but where people don’t actually know – they come to their belief by luck
  • his examples were very complicated, but here are some simple counter examples (we can call them Gettier-style cases)
  • e.g., the stopped clock example:
    • you come downstairs and see the clock says 8 am and you believe, based on the justification that this clock has always been reliable, that it is 8 am. And it happens to be 8 am. So you have a justified true belief (i.e., it satisfies the classical account of knowledge). But imagine the clock stopped 12 hours ago, but you just happened to look at the clock when it was 8 am – so you got it right by luck. So you cannot actually know the time based on looking at a stopped clock!
  • e.g., the sheep case
    • a farmer looks out his window, sees what looks like a sheep, and believes there is a sheep in the field. There is a sheep in the field, but it is hidden behind a sheep-shaped rock, which is what the farmer actually saw. So his belief is true (i.e., there is a sheep in the field) and he has a justification (he sees what looks like a sheep in the field), but he got it right only by the sheer luck that there was a sheep hidden behind that rock. If there had not been a sheep hidden behind that rock, he would believe there was a sheep in the field and he would be wrong. So he does not actually know there is a sheep in the field (he just thinks he knows, and happens to be right by luck)
  • people try to attack Gettier-style cases – e.g., asking “does the farmer really believe that there is a sheep in the field or do they believe that the rock is a sheep?” because if it is the latter, then they have a false belief (i.e., the rock is not a sheep) and thus it does not violate the classical account of knowledge – but this is just attacking a single case – to knock down Gettier-style cases in general, you’d need to think about Gettier-style cases as a whole and find a way to blow up the whole thing
  • there is a general formula for constructing Gettier-style cases
    • take a belief that is formed in such a way that it would usually result in a false belief, but which is still justified (e.g., looking at something that looks like a sheep, or looking at a stopped clock)
    • make the belief true, for reasons that have nothing to do with the justification (e.g., the hidden sheep, happening to look at the stopped clock at the right time)
  • at first, people thought there would be some simple fix (e.g., adding a fourth condition onto the classical account), but after much trying, no one has found a way to do this
  • one example of how someone tried
    • Keith Lehrer proposed adding a fourth condition that says the subject isn’t basing their belief on any false assumptions (a.k.a. “lemmas”)
    • this sounds like a reasonable approach
    • but what do we mean by “assumptions”?
    • a narrow definition of “assumptions” = something that the subject is actively thinking about (but you don’t look at the clock and actively think “I assume the clock is working” – you just believe it is without actively thinking about that assumption)
    • a broad definition of “assumptions” = a belief you have that is in some sense germane to the target belief in the Gettier-style case (e.g., you do believe the clock is working even though you aren’t actively thinking that) – but this is so broad that it will exclude genuine cases of knowledge, because of all the things we believe, some may be false
  • two questions raised by Gettier-style cases
    1. is justification even necessary for knowledge?
    2. how does one go about eliminating knowledge-undermining luck?
  • so, it really is not that obvious what knowledge is

Do We Have Any Knowledge?

  • radical skepticism contends that we don’t know nearly as much as we think we know – and in its most extreme form suggests that we can’t know anything
  • skeptical hypotheses are scenarios that are indistinguishable from normal life, so you can’t possibly know they aren’t occurring
    • e.g., brain-in-a-vat – if you were a brain in a vat being fed the necessary nutrients to stay alive and being fed fake experiences
    • there is no way to know this isn’t true because any “evidence” you can provide against it (e.g., I can feel objects around me, I can have a conversation with you) could be explained by the situation of being a brain in a vat (e.g., your brain is being fed signals that make it appear that you can feel objects or have a conversation)
    • note that radical skepticism isn’t saying you are a brain-in-a-vat or even that it’s likely that you are a brain-in-a-vat. It’s just asking “How would you know that you aren’t a brain-in-a-vat?” And really, you can’t know.

Report on “Delivering the Benefits of Digital Health Care”

A report on “Delivering the benefits of digital health care” from the Nuffield Trust in the UK recently came across my desk. It covers a bigger scope of technology than the project I’m working on (which is a project about transforming clinical care and implementing an electronic health record across three large health organizations to support that transformation, but which does not include telehealth and some of the other IT “solutions” talked about in this report), but some of the “lessons learned” that they share resonate with what we are doing.

Some highlights:

“Clinically led improvement, enabled by new technology, is transforming the delivery of health care and our management of population health. Yet strategic decisions about clinical transformation and the associated investment in information and digital technology can all too often be a footnote to NHS board discussions. This needs to change. This report sets out the possibilities to transform health care offered by digital technologies, with important insight about how to grasp those possibilities and benefits from those furthest on in their digital journey” (p. 5, emphasis mine)

  • this report suggests that rather than focusing on the technology with an eye to productivity gains, “the most significant gains are to be found in more radical thinking and changes in clinical working practices” (p. 5).
    • it’s “not about replacing analogue or paper processes with digital ones. It’s about rethinking what work is done, re-engineering how it is done and capitalising on opportunities afforded by data to learn and adapt.” (p. 6)
    • This reminds me of what my IT management professor in my MBA program liked to say: “If you automate a mess, all you get is an automated mess”. It’s much better to focus on getting your processes right, and then automating them, rather than just automating what you have.
    • “It’s fundamentally not a technology project; it’s fundamentally a culture change and a business transformation project” (Robert Wachter, UCSF) (p. 22)
  • in a notable failure, the NHS in the UK spent 9 years and nearly £10 billion and failed to digitise the hospital and community health sectors with reasons for the failure being “multiple, complex, and overlapping” including “an attempt to force top-down change, with lack of consideration to clinical leadership, local requirements, concerns, or skills” (p. 14)
  • it is noted that implementing an electronic health record (EHR) [which is what the project I’m working on is doing] is particularly challenging
  • they also note that things take longer than you expect:
    • “The history of technology as it enters industries is that people say ‘this is going to transform everything in two years’. And then you put it in and nothing happens and people say ‘why didn’t it work the way we expected it to?’… And then lo and behold after a period of 10 years, it begins working.” (Robert Wachter, University of California San Francisco (UCSF)) (p. 20)
  • and they note that “the technologies that have released the greatest immediate benefits have been carefully designed to make people’s jobs or the patient’s interaction easier, with considerable investment in the design process.” (p. 20)
  • poorly designed systems, however, can really decrease productivity
  • getting full benefit of the system “requires a sophisticated and complex interplay between the technology, the ‘thoughtflow’ (clinical decision-making) and the ‘workflow’ (the clinical pathway)” (p. 21)
  • systems with automated data entry (e.g., bedside medical device integration, where devices that monitor vital signs at the bedside automatically enter their data into the EHR, without requiring a clinician to do it manually) really help maximize the benefits

Seven Lessons Learned

  1. [Clinical] Transformation First
    • it’s a “transformation programme supported by new technology” (p. 22)
  2. Culture change is crucial
    • “many of the issues faced […] are people problems, not technology problems” (p. 23)
    • you need:
      • “a culture that is receptive to change
      • a strong change management process
      • clinical champions/supporting active staff engagement” (p. 23)
  3. User-centred design
    • you need to really understand the work so that you design the system to meet the needs of the clinician
    • “the combination of a core package solution with a small number of specialist clinical systems is emerging as the norm in top-performing digital hospitals” (p. 8)
  4. Invest in analytics
    • data analytics allows you to make use of all the data you collect as a whole (in addition to using it for direct clinical care)
    • requires “analytical tools available to clinicians in real time” (p. 8)
  5. Multiple iterations and continuous learning
    • you aren’t going to get it right the first time, no matter how carefully you plan [this is something that our new Chief Clinical Information Officer is always reminding us of] and so you will need “several cycles – some quite painful – before the system reaches a tipping point where all of this investment starts to pay off” (p. 26)
  6. Support interoperability
    • to provide coordinated care,  you need to be able to share data across multiple settings
    • “high-performing digital hospitals are integrating all their systems, to as low a number as possible, across their organisation” (p. 9)
  7. Strong information governance
    • when you start to digitize patient information, the size and scope of privacy issues change (i.e., while there is a risk that an unauthorized person could look at a patient’s paper record, or that paper records could be lost when being transported between places, with digitized records there is a risk that all of your patients’ records could be accessed by an unauthorized person, and it is much easier to search electronic records for a specific person, condition, etc.)
    • you need “strong data governance and security” (p. 9)

Seven Opportunities to Drive Improvement

  1. More systematic, high-quality care
    • health care “often falls short of evidence-based good practice” (p. 31)
    • “technologies that aid clinical decision-making and help clinicians to manage the exponential growth in medical knowledge and evidence offer substantial opportunities to reduce variation and improve the quality of care” (p. 31)
    • integrated clinical decision support systems and computerized provider order entry systems:
      • reduce the likelihood of med errors (they cite a review paper (Radley et al., 2013)) [which I have now obtained to check out what methods the papers they reviewed used to measure med errors]
      • reduced provider resource use
      • reduced lab, pharmacy & radiology turnaround times
      • reduced need for ancillary staff (p. 32)
    • at Intermountain Healthcare, “staff are encouraged to deviate from the standardised protocol, subject to clear justification for doing so, with a view to it being refined over time” (p. 34) – “hold on to variation across patients and limit variation across clinicians” (p. 35), as “no protocol perfectly fits each patient” (p. 35)
    • need to avoid alert fatigue – by only using them sparingly (or else they will get ignored and the really important ones will be missed) and targeting them to the right time (e.g., having prescribing alerts fire while the provider is prescribing)
    • be on the lookout for over-compliance – “Intermountain Healthcare experienced problems where clinicians were too ready to adopt the default prescribing choice, leading to inappropriate care in some cases” (p. 37)
  2. More proactive and targeted care
    • “patient data can be used to predict clinical risk, enabling providers to target resources where they are needed most and spot problems that would benefit from early intervention” (p. 38)
    • drawing on patient data, computer-based algorithms “can generate risk scores, highlighting those at high risk of readmission and allowing preventative measures to be put in place” (p. 39)
    • “it may also have a role in predicting those in the community who are likely to use health care services in the near future” (p. 39)
    • “monitoring of vital signs, [which are then] electronically recorded, [can be used to] calculate early warning scores [and] automatically escalate to appropriate clinicians [and] combine these data with laboratory tests to alert staff to risks of sepsis, acute kidney injury or diarrhoeal illness” (p. 39) (see the toy scoring sketch after this list)
      • Cerner estimates using early warning system for sepsis “could reduce in-hospital patient mortality by 24% and reduce length of stay by 21%, saving US$5,882 per treated patient” (p. 41)
    • there’s also the opportunity to “check a patient’s status from [a] remote location within the hospital, as well as facilitating handover between staff and task prioritisation using electronic lists” (p. 39)
    • monitoring of vital signs throughout the whole hospital is best to maximize benefits
    • predictive analytics is only as good as the quality of the data you put into the system
    • lots of data is unstructured – need to find ways to use these data (e.g., natural language processing)
  3. Better coordinated care
    • coordinated care leads to a better care experience, reduces risk of duplication or neglect
    • “if all health care professionals have access to all patient information in real time, there is significant potential to reduce waste (e.g., duplication of tests). It can help make sure things are done at the right time, at the right place and not overdone” (p. 45)
    • “chasing a report or a result […is…] an inefficient use of time, effort and energy and doesn’t really give confidence to the patient and carers” (p. 47)
    • but note that “systems to share results/opinions digitally can remove the opportunity for informal exchange of views and advice across teams, which often enrich and improve clinical decision-making” (p. 48), so alternative ways of doing this may need to be provided.
  4. Improved access to specialist expertise
    • telehealth (not part of the project I’m working on)
  5. Greater patient engagement
    • this section referred to tools, like wearable tech (e.g., Fitbit) or patient portals that empower patients to take more control of their own health  (not part of the project I’m working on)
    • “patient co-production of data into a hospital EHR will redefine the interaction with care services” (e.g., questionnaires that patients fill out before they even come to the healthcare facility, tracking of long-term data (e.g., blood pressure, weight))
  6. Improved resource management
    • e-rostering (i.e., of staff), patient flow management, business process support (e.g., HR, facilities, billing) all discussed (not relevant to the project I’m working on)
    • ability of staff to remotely access health records “can transform the way that staff in the community deliver care” (p. 66)
  7. System improvement and learning
    • “feeding learning from clinical and non-clinical data back into existing processes is essential to fully realising the benefits of digital technology” (p. 70)
    • Intermountain Healthcare:
      • captures 3 types of data:
        • intermediate & final clinical outcomes
        • cost data
        • patient satisfaction and experience
      • “clinical registries are derived directly from clinical workflows” – it currently has “58 condition-specific registries – tracking a complete set of intermediate and final clinical and cost outcomes by patient” (p. 71)
      • remember that data collection is costly, so only collect data routinely if you are using it for some purpose that adds value (“Intermountain Healthcare does this through small individual projects, before building data collection into existing processes”) (p. 76)
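
Since the report talks about vital-signs-based early warning scores mostly in the abstract, here’s a minimal sketch of the general idea (my own simplified illustration in Python – the vital signs, thresholds, and escalation cut-offs are made up for the example and are not the actual NEWS, Cerner, or report algorithms).

```python
# Simplified early-warning-score sketch (my own illustration, not the report's
# algorithm). Real systems such as NEWS2 or vendor sepsis alerts use validated
# thresholds and more vital signs; these cut-offs are invented for the example.

def score_respiratory_rate(rr):
    # Breaths per minute
    if rr <= 8 or rr >= 25:
        return 3
    if 21 <= rr <= 24:
        return 2
    if 9 <= rr <= 11:
        return 1
    return 0

def score_heart_rate(hr):
    # Beats per minute
    if hr <= 40 or hr >= 131:
        return 3
    if 111 <= hr <= 130:
        return 2
    if hr <= 50 or hr >= 91:
        return 1
    return 0

def score_temperature(temp_c):
    # Degrees Celsius
    if temp_c <= 35.0:
        return 3
    if temp_c >= 39.1:
        return 2
    if temp_c <= 36.0 or temp_c >= 38.1:
        return 1
    return 0

def early_warning_score(vitals):
    """Sum the sub-scores; higher totals trigger escalation to a clinician."""
    total = (score_respiratory_rate(vitals["respiratory_rate"])
             + score_heart_rate(vitals["heart_rate"])
             + score_temperature(vitals["temperature_c"]))
    if total >= 7:
        action = "emergency response"
    elif total >= 5:
        action = "urgent clinician review"
    else:
        action = "routine monitoring"
    return total, action

# Example: vitals captured automatically via bedside medical device integration
print(early_warning_score(
    {"respiratory_rate": 26, "heart_rate": 118, "temperature_c": 35.8}))
# -> (6, 'urgent clinician review')
```

The appeal of automating this kind of scoring (as the report describes) is that the score is recalculated every time a monitored vital sign lands in the EHR, and the escalation to the appropriate clinician happens without waiting for someone to notice a paper chart.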

What could the future look like?

  • operational improvement from:
    • combining the impact of a bunch of small changes [this assumes that (a) the different elements of the system are additive, as opposed to complex, and (b) the “benefits” outweigh the unintended negative consequences]
    • getting the “full benefit” out of all the technologies (i.e., it will take time for people to implement the available technologies and to optimize their use) [this doesn’t even include technologies that are not yet available]
  • “benefits” they expect are most likely to see:
    • “reduced duplication and rework
    • removing unjustified variation in standard clinical processes
    • identifying deteriorating patients and those at risk
    • predicting the probability of an extended stay or readmission
    • cutting out unnecessary steps
    • improving communication and handoffs
    • removing administrative tasks from clinical staff
    • scheduling and improving flow
    • inventory & procurement management
    • rostering, mobile working, and staff deployment
    • patient self-service for administrative tasks such as booking
    • other automation, e.g., robotics in back office” (p. 80-1)
  • redesigning the whole pathway:
    • “reduced variation
    • ability to ensure the most appropriate level of care
    • fitting staffing skill mix to demand more effectively” (p. 81)
  • population health management
    • “early intervention & targeting
    • enabling patient self-management
    • shared decision-making
    • measuring outcomes and value rather than counting activities” (p. 82)
      • all this requires better data and analytics, learning & improvement processes, and supporting patients with self-management and supporting shared decision-making (p. 82)

“Early strategic priorities should be the areas where technology is able to facilitate some relatively easy and significant wins. Most notable are the systematic and comprehensive use of vital signs monitoring and support for mobile working. In the short to medium term, the use of EHRs, telehealth, patient portals and staff rostering apps can also generate savings and improve quality. However, these require sophisticated leadership with support for organisational development and change management to ensure that the full benefits are realised. In the longer term, the really big benefits will come from the transition to a system and ways of working premised on continual learning and self-improvement.” (p. 88, emphasis mine)

Potential unintended consequences mentioned in the report:

  • decreased productivity if the system is poorly designed (e.g., time spent on data entry, time spent responding to unhelpful alerts)
  • “over-compliance” – “Intermountain Healthcare experienced problems where clinicians were too ready to adopt the default prescribing choice, leading to inappropriate care in some cases” (p. 37)
  • “systems to share results/opinions digitally can remove the opportunity for informal exchange of views and advice across teams, which often enrich and improve clinical decision-making” (p. 48)

Limitations:

  • they noted there was little evidence on this type of work in the literature, particularly in terms of return on investment
Imison, C., Castle-Clarke, S., Watson, R., & Edwards, N. (2016). Delivering the benefits of digital health care. Nuffield Trust. [Download the full report.]