Pro: ICU Outcomes or ICU Metrics: Is It Gaming the System?
Video Transcription
presenting a case for how ICU metrics can improve ICU outcomes. I have a couple of disclosures that have nothing to do with what I'm talking about here. My charge today is to try to convince you that ICU metrics can be used to improve ICU outcomes, and that these are not just numbers, or a system, that we're trying to game. I want to start by proposing a few ways in which ICU metrics may improve outcomes, and then look at what data we do or don't have to support that. First, ICU metrics may improve outcomes by helping us identify problems that we didn't previously know about. The other extreme is to bury our heads in the sand and decide that problems we don't know about don't actually exist. But if we don't identify things that are true problems, we are not in a position to try to fix them. That's one thing metrics can help us with. Second, looking at metrics across a system allows us to identify peers who may be doing something well that we can then learn from or mimic. Third, from an individual institution or practitioner standpoint, metrics may allow us to track our progress: if we identify an issue through our metrics and institute practices or policies to address it, we can follow that metric over time to see if we're getting better. And finally, metrics are often brought up as ways that we work against one another, incentivizing ourselves incorrectly; but I personally think that, used intelligently, they can incentivize positive change appropriately, and in particular encourage us to work together to achieve it. Before I go through these, though, I'd like to point out, maybe to preempt Derek's position, that metrics alone are obviously not sufficient.
These were two studies published simultaneously in an issue of JAMA in 2015. Both looked at different cohorts of hospitals that had instituted the National Surgical Quality Improvement Program (NSQIP) at their institutions and compared them to hospitals that had not. In both instances, the transition from not having metrics to having metrics, compared with hospitals that never transitioned, showed no improvement in mortality, complications, or resource use, which may suggest that metrics are useless and we should get rid of them entirely. But I think the most important piece was the editorial written by Don Berwick to accompany these two papers, and I've pulled out some of his sentiments here. As he writes on the left, no doubt some hospitals in every group, both those that instituted NSQIP metrics and those that did not, will have achieved large gains; in particular, some almost certainly used the NSQIP information to do better for their patients, just as hospitals may have done without it. However, as he says on the right-hand side, the most likely explanation for the findings, particularly the lack of impact of metric utilization in these two studies, is that metrics are necessary for improvement but not sufficient. And as he notes in the bottom right-hand corner, which I thought was an interesting proverb for us to remember in this context, all hospitals should take note that measurement alone is not enough for improvement. As an African proverb says, "Weighing a pig does not make the pig fatter." So the idea that having metrics is all we need to do to change outcomes is nonsense, right? It's what we do with them.
Okay, so I'm going to go back to those four main ideas of how I think metrics could improve outcomes, and I want to call your attention to an interesting series of studies. I'm not a pediatrician, so I feel honored to be able to present pediatric data here with my co-speaker. This work came out of the PALISI Network, in particular a project called NEAR4KIDS, essentially a national evaluation of emergency airway management in children. They sought to identify problems in airway management for pediatric patients, then identify peers that were doing well, and institute a process that would allow them to improve outcomes across the cohort. Again, this targets three of our key areas. First, in a 2014 publication they evaluated site-level variance in the incidence of tracheal intubation-associated adverse events across 15 North American PICUs. Thereafter, they published a protocol and a bundle they had put together, which drew heavily on the practices of the hospitals that were doing well, to try to come up with a practice pattern that might work. And finally, they tracked their progress and showed what impact this may have had in the larger healthcare setting. To call your attention to a couple of key points from this series of papers: in their site-variation paper, what we have is the percentage of intubations with tracheal intubation-associated events on the y-axis and the individual sites in their cohort on the x-axis. As you can see, although the median was 20% across the entire cohort, there was wide variability, right? Some hospitals seemed to be having better outcomes associated with tracheal intubation, and so they targeted those hospitals to learn something.
After they created the bundled protocol published in that second study, they reported their results. What you see here is the four-year progression: the period before the new bundle was put in place across all 15 sites, and then the first year and the subsequent year after appropriate adoption of the bundle, defined as 80% or more of all tracheal intubations at a hospital using the bundle. In the unadjusted data, you can see that baseline rates of tracheal intubation-associated events were in the 15 to 17% range before the checklist. The initial year after checklist adoption was not greatly different, but you start to see a trend where, after it has been in place for a while, the rate of adverse events drops. In particular, in a multivariable assessment accounting for patient-level factors, there was a statistically significant improvement in the rate of tracheal intubation-associated events when using this bundled approach. So again, I think this is a really nice example of how we can learn from metrics, identify peers who are doing very well, and then use that information to allow all of us to get better. Fourth on that list was that metrics incentivize. Honestly, when I think about many of those incentives, I think of them as potentially negative incentives, right? As a practitioner, instead of spending time with my patient, I'm too busy documenting things to make sure we meet all of our metrics. Or there have been instances, right, where people put things in the documentation that are not correct just to improve the case-mix index or whatever it might be.
And then, as I try to indicate on the right-hand side, we may all think some of the metrics we're targeting are important, but they may not be the highest priority for our institution at that moment; yet because they are metrics we need to track, they take precedence. An example I can think of is when we all transitioned to lactate, having to obtain a second follow-up lactate four hours after the first, which really became central for patients with sepsis. Was that really the most important thing for us to focus our quality improvement efforts on? So I think there is potential for metrics to incentivize us poorly, but I really do think there is also the possibility that, if we use them correctly and align our incentives appropriately, they can help us. What are examples of that? I thought this was an interesting study, not particularly within critical care: a review published this past year of 287 randomized controlled trials of behavioral interventions intended to change clinician behavior. As I've tried to show with the orange bar, out of those roughly 290 trials, 190 were essentially social comparisons. How am I doing versus Dr. Lansbaugh? How is my hospital doing versus the hospital down the street? Right, there really is something in social science suggesting that seeing how we're doing, and setting up a friendly competition, may be a way to change our behavior for the good. And what about examples of this done for practitioners? This one is outside our critical care realm, but it was a large healthcare system in Hawaii with, I think, 60 sites and over 70,000 patients, and they had two different approaches, a control approach and an intervention approach. In the control approach, all practitioners received a dashboard.
I think it was weekly, and it basically said: these are your metrics, this is your mortality, this is your utilization of different services, sent to you in a passive way. In the intervention arm, you got that, but you were also actively provided information about how you as a practitioner were doing compared to your peers and how your site was doing compared to other sites. As you can see in this forest plot, the measures they looked at run down the left-hand side; their primary measure was a composite quality score. On the x-axis is the percentage-point difference, where results to the right of the dotted line favored the intervention and those to the left favored the control group. For the primary outcome, providers achieved a higher quality score as a result of being in the intervention arm. But as you can also see, many more processes of care were better, in terms of screening, as well as potentially clinical outcomes, right? Blood pressure was better controlled in the intervention group. And the only difference, right, was providing people information about their peers. There are some concerning findings, of course; maybe it's just that kids are different than adults, I'm not sure, but if you look at the pediatric vaccinations, there was actually a signal for harm associated with this intervention. Whether that's a statistical fluke or something we need to worry about is not clear. But I think there is some evidence here, right, that knowing how you're doing compared to your peers, whether it intrinsically helps us understand what we need to do differently, or sets up a little bit of a competition, who knows, may improve clinical outcomes.
And there was actually a study published out of the Mount Sinai School of Medicine in New York City that not only took incentives and tried to make them helpful to practitioners, but actually aligned financial incentives with the metrics the unit was being asked to meet. In particular, starting in 2017 and 2018, they followed the metrics shown on the right-hand side, which they felt were specifically, or at least partly, in the realm of physician control: the standardized infection ratios for central line-associated bloodstream infections and catheter-associated urinary tract infections; the frequency of hand hygiene before entering a patient's room; the standardized utilization ratios (I apologize, these should say central line and urinary catheter, not infections); and adherence to a respiratory care bundle, which you can think of basically as the ABCDEF liberation protocol. They basically said that if your ICU exceeded certain metrics, you received a 7% bonus. What they saw, which again was only temporally coincident, so we can't say it was due to the incentivization, was this: relative to the period before the incentive, which was instituted in 2017, with 2018 as the follow-up year, you can see trends all in a positive direction. And here you can see both hand hygiene and use of the ABCDEF bundle also improved. So again, this suggests that if we incentivize ourselves in a way we all know humans respond to, right, financial incentives, because we are no different as clinicians than other human beings, we can actually use incentivization and metrics to our benefit. Of course, ICU metrics are not all good. I'm not going to pretend otherwise, and this again is maybe a way for me to preempt the speaker following me, but they need to be appropriate and they need to be well-defined.
A good example I think of frequently: I work in a hospital with a lot of cancer patients, many of whom are receiving salvage chemotherapy. If all we're looking at is ICU mortality as our primary outcome, when in fact many of these patients may be transitioning their goals of care to hospice or comfort, that's probably not the right approach, right? So we need to make sure we have a reasonable outcome measure. We also need to make sure the data we're collecting are accurate. I think we've all experienced being sent data that we know is incorrect; it's very hard to trust, right? It's very hard to believe we should follow a metric when the data we're getting don't resonate with what we see in day-to-day practice. I think we shouldn't demand actions with limited or no clear value. As you can see, I obviously don't think a repeated lactate does much in sepsis, I've now said that twice. But whatever your thing is, if you as a clinician don't buy into what's being followed, then showing it to people makes it very hard to gain their trust. And we have to be careful not to divert resources away from things we all believe are more patient-centered. For instance, if I focus so much on mortality that I don't invest in goals-of-care conversations and bringing palliative care consultation into my ICU where that's appropriate, it's probably not in the patient's best interest. So in conclusion, I think we have to choose the best of what's available to us. Two bad options when it comes to ICU metrics are to bury our heads in the sand and pretend that if we don't find a problem, it doesn't exist, and to throw the baby out with the bathwater: just because metrics are imperfect and we have challenges with them, we have to be careful not to conclude that we'd simply rather have no data at all.
And so I think where we are now is a good starting place; we need to do better and we need to improve these metrics, but overall, ICU metrics are to our benefit. And with that, I will, scarily, hand over the podium to my con partner. Thank you.
Video Summary
The speaker presents a case for using ICU metrics to enhance patient outcomes, emphasizing their potential to identify unknown problems, enable learning from successful peers, and track progress over time. They highlight that metrics are necessary but not sufficient for improvement, as shown in studies finding no impact from metric adoption alone. The speaker references successful examples, including pediatric airway management, to demonstrate tangible improvements. They caution, however, against poor implementation, arguing that metrics must be appropriate, accurate, and properly incentivizing without overshadowing patient-centered care, and advocate for refining and improving their application.
Asset Caption
One-Hour Concurrent Session | Pro/Con Debate: ICU Outcomes or ICU Metrics: Is It Gaming the System?
Meta Tag
Content Type
Presentation
Membership Level
Professional
Year
2024
Keywords
ICU metrics
patient outcomes
pediatric airway management
patient-centered care
metric implementation