Implementation Science Concepts in Critical Illness
Video Transcription
Nicole, thank you for that very generous introduction. So I just have a couple of disclosures. I do have some funding from NIGMS and NHLBI related to implementation science. I don't think there are other conflicts related to these other funding sources. So for a moment, let's just frame why it is we're talking here today. My work is largely around sepsis and the care delivery for sepsis. And as we all know in this room, time to antibiotics is a critical determinant of outcomes in sepsis. Multiple large, well-designed, well-controlled, and well-adjusted studies have now demonstrated that for each hour that antibiotics are delayed, there's a 3 to 11% relative increase in the odds of mortality, or about a 1% absolute increase in the odds of death. But despite that data, and despite it being around for quite a long time now, a large proportion of patients don't get timely treatment for sepsis. This is a data set of ours from four EDs that showed that 43% of patients don't get antibiotics within three hours of ED arrival. And three hours, beyond the fact that it's often the number used for mandates in multiple states, does seem to be the inflection point beyond which there is a clear hour-by-hour increase in mortality. Now, we can all talk about how sepsis is difficult to diagnose. It is. Sepsis has multiple presentations, and there are many other conditions that can mimic sepsis. But that said, in other emergency conditions where time to treatment is important, like MI or stroke, we don't say, oh, this was an atypical presentation, it's okay we didn't diagnose the STEMI. Instead we ask, what do we have to do with our systems, and what are the factors in our systems that are preventing us from delivering optimal care?
This is an example, again, from our research from a few years ago in sepsis, where we showed that after adjustment for patient mix, patient characteristics, and severity of illness, there was five-fold variation in physician-level average door-to-antibiotic time for patients who met Sepsis-3 criteria while in the emergency department, suggesting that there are substantial clinician-level factors contributing to suboptimal care for some patients. This has also been shown at the hospital level. This is data from a separate study of about 5,000 inpatients and ED patients with septic shock. The time here is time from hypotension to antibiotic initiation. And we see that at the hospital level there's also substantial variation that is likely clinically important, with the best-performing hospitals having average times to antibiotics that are on the order of two to almost three hours shorter than the worst-performing hospitals, and about a third of hospitals having statistically, and likely clinically, significantly longer average times to antibiotics. This is one example in the critical care space of what's called the know-do gap. A commonly cited statistic says that the time between when there is adequate evidence to change practice and actual widespread adoption of that new practice, or de-adoption in some cases, is about 17 years. Now, the caveat here is that this number is very, we'll call it hand-wavy, for want of a better word. But that said, we can see that there are numerous areas where there are substantial gaps. Take ARDS, for instance. We're all familiar with the ARMA trial. That was published in 2000. But as of 2016, when we did the international LUNG SAFE observational study, only 65% of patients who had ARDS were receiving lung protective ventilation. In sepsis, we've just talked about time to antibiotics, but we also can talk about bundled care.
Again, a complicated topic about exactly what bundled care is and when the evidence base really got strong. But say we attribute the starting point to the Rivers trial; since then we've seen numerous studies showing that bundled care, in whatever form, whatever exactly bundled care means, but systematic approaches to delivery of the multimodal care that sepsis patients need, improves outcomes. But only 65% of patients who had sepsis as of 2017 received bundled care. Sedation: coordinated SATs and SBTs improve outcomes, again, for patients on the ventilator. Our data, and we're addressing this currently with an NHLBI-funded study, suggest only 30 to 50% of patients are receiving that. Transfusions: we all know the threshold of a hemoglobin less than seven grams per deciliter. But in one British study, only 34% of transfusions in the ICU were occurring for patients with hemoglobins less than seven. And then imaging. We talked about adoption; this is de-adoption. We used to do a chest x-ray every day for every ICU patient on the ventilator. I remember being responsible as a resident for ordering the chest x-ray every day for every patient. But even by 2008, we had a pretty strong evidence base that this wasn't benefiting patients. By 2011, the American College of Radiology said it's usually not appropriate to get daily chest x-rays. And then in 2014, it became one of the top five Choosing Wisely items to not order chest x-rays every day. But we see in this study by Hayley Gershengorn that despite these changes, there was only a minimal change in actual observed practice. Each of these lines represents a group of patients from a cohort of 500,000 ventilated patients from the Premier database. On average, 65% of patients across all durations of mechanical ventilation got a daily chest x-ray while intubated. These data show the prevalence, or how many patients, across the different levels.
But we see that among patients who were intubated for seven days, 50% of them, even at the end of 2014, were still receiving a chest x-ray every day. I think we can do better, right? That's what implementation science is all about. Implementation science is the study of the mechanisms by which effective healthcare interventions are adopted, not adopted, or appropriately de-adopted in clinical and community settings. And we have to importantly distinguish between implementation, which is the act of using systematic methods to adopt practices, and implementation science, which is what we're here to talk about today, which seeks to understand how and why adoption occurs or fails and to develop and test novel implementation strategies. The work in this field falls into four big areas, four big types of publications, if you want to think about it that way: theory and frameworks, identification of barriers and facilitators, design of implementation strategies, and evaluation of those strategies' effectiveness. People actually often include the use of theory and frameworks as a defining feature of implementation science. And what this really means is that we use a systematic approach to doing the work we just described. The frameworks used can be divided into five broad areas. Process models talk about what steps are needed for implementation. Determinant frameworks fundamentally talk about what the barriers and facilitators are. They help us systematize our evaluation. And importantly, determinant frameworks emphasize that barriers and facilitators exist at multiple levels, that it's not just about what's happening right at the bedside or right with the specific intervention. Theories move a little more broadly. We have two types of theories. Classic theories are theories drawn from other fields that talk about how clinicians or individuals make decisions, or about how practices are socialized.
Implementation theories, by contrast, are more specific, obviously, to implementation. We're going to talk more about the COM-B theory in particular in a bit. And then evaluation frameworks help us, again, pretty clearly, evaluate what it is we do, the success and failure thereof. We're going to talk a bit more, but there are a lot of frameworks, literally hundreds of theories and models that have been used in the implementation science literature in medicine specifically. It's impossible to talk about them all. One of the big take-homes is to find a framework, or a couple of frameworks, that work for you and meld them. Often you're using frameworks from multiple of those different categories. But a couple that are increasingly popular are CFIR as a determinant framework and RE-AIM as an evaluation framework. And COM-B has become a pretty important, pretty dominant implementation theory in medicine. Many of you have probably been exposed to CFIR. It was first published in 2009 as a framework of frameworks. We're not going to talk about the details of this slide, but it's essentially integrating existing frameworks into a list of five broad domains, within which sit constructs that help us think about the barriers and facilitators to adoption of a given intervention. The update in 2022 had some improvements. It clarified the distinction between the innovation that we're seeking to get people to adopt and the actual implementation process we're using to drive that. It maintained the concept of the outer setting, which is the broadest context in which the implementation is taking place: societal factors, health system factors, potentially hospital factors when you're looking at implementation within an individual unit. The inner setting, by contrast, is the care team, the ICU, that is adopting a new process. And then there are the individuals, of course, who are involved in that.
And that includes both the patients and the providers delivering that innovation. The big changes here: number one, CFIR went from, I'm going to say this wrong, 26 to 42 constructs. So it got way more complicated, and it added 19 subconstructs. It is truly painful. Let's just be honest about that. But these are very thoughtful changes. It's not a direct criticism, but it's complicated. This is not something that you can just pick up one day and use the next. This is something that people spend a lot of time thinking about, a lot of time interacting with, to be able to use effectively. The other big change is in the individuals domain, where CFIR essentially scrapped what it used to use and adopted a new approach. Now, instead of having a number of different constructs, we have the roles construct, which covers all the different people involved in the intervention, including the recipients as well as all the providers, ranging from the highest-level health system directors all the way down to the bedside care delivery folks. And on the other side, you interact that with the behavior change characteristics from the COM-B implementation theory, which brings this framework more directly into how we implement change. This is an example from our work of how we applied the CFIR framework, and this is the older one, for understanding barriers and facilitators to adoption of lung protective ventilation as a major innovation, using computerized decision support to implement it. And you can see here, I'm a little short on time, so I'm not going to go into the details, but each of these individual facilitators and barriers was matched to a construct within the CFIR framework. By contrast, the RE-AIM framework is really about evaluation of an implementation program. It gives five broad areas that we can use to think about how we are measuring the success of an implementation effort.
How many patients, how many target recipients, were reached? What was the effectiveness in driving increased utilization of the innovation in question? How well was the implementation program itself, not the innovation but the implementation, adopted, including how well it was employed and what the adherence to it was? And how well is it maintained over time? This provides a generalized framework for understanding and systematically evaluating our performance. This is an example where we were focused on the effectiveness aspect of this framework, from a paper we have in press at Annals ATS. Again going back to the issue of lung protective ventilation and computerized decision support, we implemented a multimodal intervention to try to increase utilization of the computerized decision support program that we have. And what we saw is that, relative to our pre-intervention period, there was a substantial bump in utilization of this computerized decision support for these patients. So in conclusion, we have a major know-do gap in critical care, and this is true across medicine, but we have a lot of important areas in critical care especially where there's room for improved delivery of evidence-based care. And implementation science uses frameworks and theory to determine why adoption of evidence-based care occurs or doesn't occur, and to develop and test strategies to aid that adoption. Thank you very much.
Video Summary
In this video, Dr. Prescott discusses the importance of implementation science in improving the delivery of evidence-based care in critical care settings. He highlights the issue of delayed antibiotic treatment in sepsis and the need for timely interventions. Dr. Prescott emphasizes that despite the available evidence, many patients still do not receive optimal care for sepsis. He discusses the concept of the "know-do gap," which refers to the time lag between evidence and widespread adoption of new practices. Dr. Prescott also introduces the use of frameworks and theories in implementation science and discusses the CFIR and RE-AIM frameworks as examples. He concludes by highlighting the role of implementation science in closing the gap between evidence and practice.
Asset Subtitle
Research, 2023
Asset Caption
Type: one-hour concurrent | Facilitating Change Management (SessionID 1169225)
Meta Tag
Content Type
Presentation
Knowledge Area
Research
Membership Level
Professional
Membership Level
Select
Tag
Research
Tag
Evidence Based Medicine
Year
2023
Keywords
implementation science
evidence-based care
sepsis
know-do gap
frameworks