Implementation Science - Bringing Evidence to the Bedside - Discovery Research Webcast
Video Transcription
Hello, and welcome to today's webcast, Implementation Science, Bringing Evidence to the Bedside. Today's webcast is brought to you by Discovery, the Critical Care Research Network at the Society of Critical Care Medicine, in collaboration with the CPP section. My name is Komal Pandya, Critical Care Pharmacist and Assistant Adjunct Professor at the University of Kentucky College of Pharmacy. I have no disclosures. Thank you for joining us. A few housekeeping items before we get started. There will be a Q&A session at the end of the presentation today. To submit questions throughout the presentation, type into the question box located on your control panel. This webcast is being recorded. The recording will be available to view on the MySCCM website within five business days. There is no CME associated with this educational program. However, there will be an evaluation sent at the conclusion of the program. The link to that evaluation is also listed in the chat box for your convenience. You only need to complete it once at the end of this webcast. Your opinions and feedback are important to us as we plan and develop future educational offerings. Please take five to 10 minutes to complete the evaluation. And now, I'd like to introduce your speakers for today. Our first presenter for today is Dr. Michelle Ballas, an Associate Professor and Researcher at The Ohio State University College of Nursing. Her research is focused on developing and testing interventions aimed at improving the cognitive, functional, and quality of life outcomes of critically ill older patients. Dr. Ballas has been consistently funded as a PI or co-I on several federal, private foundation, and university-sponsored grants. Most recently, Dr. Ballas' team was awarded an R01 from the NHLBI that supports their continued partnership with the Society of Critical Care Medicine's ICU Liberation Collaborative. Guided by the Consolidated Framework for Implementation Research, the overall objective of this T4 research is to develop multi-level implementation strategies to enhance sustainable adoption of the ABCDEF bundle in routine ICU practice. The team is particularly interested in discovering how various patient, provider, and organization level characteristics and implementation strategies affect ABCDEF bundle adoption. Our second presenter for today is Dr. Molly McNett. Dr. McNett is a Professor of Clinical Nursing in the College of Nursing at The Ohio State University and Assistant Director of Implementation Science at the Helene Fuld Health Trust National Institute for Evidence-Based Practice. Dr. McNett's research centers on the care of critically ill patients after severe neurological injury. In her work, she has led funded multisite national and international trials with diverse research teams. Dr. McNett has extensive experience working with leadership teams in health systems to apply research findings and evidence-based practice recommendations to practice and create infrastructures for ongoing monitoring of outcomes using the implementation science approach. Dr. McNett has led numerous interdisciplinary task forces to integrate key components of research, evidence-based practice, implementation science, and quality improvement initiatives into healthcare settings. And now, I'll turn things over to Dr. McNett. Thank you so much, and thank you everyone for joining us today. 
So, as we get started, I think that one of the important things that we need to note is that one of the guiding principles behind the development of implementation science is that evidence-based practice is certainly important, which I think we can all agree upon, but having an evidence-based approach to implementation is equally important. So, evidence-based practice has to be accompanied by evidence-based implementation in order to be effective. So, the definition of implementation science really highlights its role in generating evidence to support those implementation efforts. So, the definition, therefore, is the scientific study of methods to promote the systematic uptake of evidence-based practice into routine practice in order to improve the quality and the effectiveness of health services and, ultimately, the care that we provide. So, certainly, evidence can come from a variety of sources, some of which is filtered and some of which is unfiltered, and so the field of implementation science really seeks to identify the optimal methods and the strategies for increasing utilization and uptake of this evidence, both from filtered and unfiltered sources, into real-world practice settings, and really investigating the best methods to ensure sustainability of that EBP uptake over time. So, a lot of people ask, you know, why did the field of implementation science emerge? Did we really need that? What does this field entail? And I think we're all familiar with the research-to-practice gap and those estimates that it can take up to 17 years for research to be translated and actually then utilized in practice settings. And that 17-year estimate is really only for the 14% of studies that actually secure the funding and the resources that are needed to successfully complete their studies, and then also build evidence to guide best practices. So, if we think back even to 17 years ago, just to put it into a pop culture context, you know, that was a time when Jennifer Aniston and Brad Pitt were officially a couple. You know, there were a lot of things going on back then. And if you think about just, again, from a pop culture context, how much has changed since then, you know, we really had quite a lot of developments during that time period. So, if you even think back to, again, 17 years ago, one of the top studies in critical care, ironically, was some of the early research evaluating daily interruption of sedation for patients on mechanical ventilation. And so, again, if you think about everything that's happened over that 17-year period of time, we've seen progress towards seeing this evidence be integrated into real-world critical care settings. But it's also been a really long road to try to get us there. And here, even 17 years later, it's not consistently being done across all settings all the time. So, the field of implementation science ultimately emerged in efforts to try to address this research-to-practice gap, not only in healthcare, but in other disciplines as well. It's focused on really a systematic ability to look at facilitators and barriers to implementation of best practices and look at a contextual analysis of not only the clinicians involved in the actual implementation of the practice into clinical settings, but also what factors are present within the settings themselves that can either increase or decrease the ability to use that best practice in our settings. 
It also aims to identify what some of the most effective strategies are to promote implementation of that best practice into our settings and to be able to generalize use of those strategies across hospitals. So, some of the early developments in implementation science actually didn't come out of healthcare; many of them came out of the fields of education, some in public health, and some in the mental health fields. And really, initial developments within the field worked towards establishing a theoretical basis for implementation, and then also identifying a structured nomenclature of strategies. So, using consistent terminology about what some of those methods are for implementation that could then be studied for their effectiveness across settings on the best way to integrate a best practice into clinical settings. So, we'll talk in later slides about what some of those strategies are and what the structured nomenclature is for those in the efforts to advance the science. So, when we look at where implementation science lies along the research to practice or a translation science spectrum using a traditional linear model, it initially resided between T2 and T3 and sometimes even into T4. But as we've seen with the various research spectrum models, some of the later ones and more recent ones really incorporate more of an integrated model. So, different components of the spectrum actually inform one another and can move in various directions. So, instead of just being a solely prescriptive or linear model, we're really seeing a lot more movement between the different areas on the spectrum in terms of knowledge development. So, within this model, you can see where implementation research still falls, typically within how clinical discoveries are translated into practice. But instead of just lying at the end of the spectrum, knowledge from that field is then used to filter back and forth and then inform additional developments in clinical and sometimes, in some cases, even some preclinical or some basic research efforts. And when you look at the NCATS research spectrum, it really displays a true iterative relationship between the various types of research. And certainly, the field of implementation science would reside within the clinical implementation component of this model. So, in the various models, you can see there's always been a place for implementation-based research. And depending on the model and the time at which it was used, you can also see how this type of research has evolved from just being an endpoint to really being a key component that can be used to inform knowledge development and research efforts and other components of the spectrum. So, when we look at what this means in the actual setting of the critical care environment, the need for evidence-based care really becomes even more important. And I know all of us here are certainly probably familiar with these statistics that are listed here on the millions of ICU admissions in the United States annually, and certainly the significant short and long-term implications of the high morbidity and mortality and also cost for patients admitted to a critical care unit. And these estimates don't take into account additional burdens that we experience and we see with family members, the extent of resource use that's needed throughout the critical care period, and then also even just fatigue or burden of clinicians providing this level of care for patients over sometimes a really prolonged period of time. 
And so, with all the resources and access to research information that provides us evidence and recommendations about safe and effective practices aimed at limiting some of these adverse occurrences and events that we see in critical care, it's unfortunate that a significant number of critical care patients still don't receive evidence-based care. And some of those common areas are listed here, including lung protective ventilation, spontaneous awakening trials, spontaneous breathing trials, and the sepsis bundle. And those are just a few of the key ones that we see. And so, one of the questions often posed when we're talking about implementation science is, why do we need implementation science? We've got quality improvement initiatives, and we've got a lot of these in critical care. Many of them are aimed at addressing these evidence-based practices. So, isn't implementation science essentially just quality improvement? And the answer is yes, but no. There's never quite a clear black and white answer, a little bit of gray in between. But we'll talk about that today. So, quality improvement is certainly a very important tool, and it's been shown to be effective in addressing many of the inefficient and wasteful practices. And we can use rapid cycle approaches to really streamline care quickly and reduce or eliminate some variations in practice in the critical care setting. And we've seen many initiatives that have had quite a great deal of success using this approach. But one of the limitations of QI is that it's not always generalizable, and it doesn't always consistently evaluate the most effective strategies across different settings to improve adherence or uptake of a specific evidence-based practice or guideline-directed care that could then be used across many different ICUs. Typically, it's very internal-based, and there are internal specific factors that are evaluated. So certainly, we see QI techniques sometimes integrated as a key component in larger implementation science evaluations. And so certainly, there are areas of overlap between the two fields, but they also have some characteristics that are inherently unique, so we'll talk a little bit about those. So when we do a side-by-side comparison of some of the characteristics of QI and implementation science approaches, specifically when we're considering the origin or the trigger for the initiative, the purpose of the project, the methods, and the overall or direct goal for the project, we can see where some of these differences emerge. So typically, in QI, the initiative is triggered by a specific internal problem, usually within the unit or within the health system, or an inefficiency that's been identified, or practice variations. And typically, pilot strategies or some of the QI methodological approaches are used here. So we see PDSA cycles, we see root cause analysis, fishbone diagrams, you know, some of the lean or Six Sigma approaches are used, and the work really focuses on some rapid cycle change to improve whatever that inefficiency might be or those practice variations. So the focus of QI is often process-related and typically disseminated and used internally, as often it's an internal trigger within that unit or that organization that initiates the work that needs to be done. So in contrast, when you're looking at implementation science, the trigger for implementation science always starts with an established evidence-based practice. 
So it's something that we have research on that shows we need to be integrating it into our practice. And the approach to that is always a systematic, typically theory-based approach that scientifically looks at the effectiveness of different strategies using both quantitative and qualitative methods. So the overarching goal with implementation science is to actually produce generalizable knowledge about which of those strategies are most effective and what are some of the contextual factors that influence uptake of that established evidence-based practice. So because the goal is to contribute to generalizable knowledge, usually findings are disseminated externally so that across hospitals and across health systems, we can start to build knowledge and evidence about what some of those best implementation methods are to increase utilization of that EBP. So another question that often comes up when we're talking about implementation science is, you know, how's it different than comparative effectiveness research and where does it fit within that whole spectrum? So this figure presents really a nice schematic of how some of these different approaches influence clinical care. And so you can see how certainly efficacy trials are used to inform clinical knowledge of some of the therapies. Certainly comparative effectiveness research is used to build clinical knowledge when comparing different therapies to one another. And then you can see how implementation science is really the component of the figure where the focus is on taking situations where we know what the best or most established practice should be based on those prior trials, but recognizing that there's actually low utilization or uptake of that evidence-based practice into a clinical setting. And so implementation science then is the investigation of how we can move from a low uptake situation into a high uptake situation. And so now we'll talk about what some of the core principles of the implementation science approach entail. So typically there's an implementation intervention to facilitate the change in practice. So an example of this would be efforts to change behavior at the patient level, the provider level, or even sometimes the system or policy level. Typically the designs also include evaluation of an implementation strategy, either as an integrated set or a bundle of specific interventions. And again, we'll talk more about what some of those specific interventions and strategies are later in the talk. And then of course, the implementation science design always has to include an evidence-based practice because really that's the crux or the trigger that initiates the work. So one of the key defining features in an implementation science design that's really important to comprehend are the outcomes. These outcomes can vary quite a bit from traditional research outcomes. And there's a seminal article by Enola Proctor that really gives a great initial framework for identifying what are the outcomes that you would expect to see within those implementation science designs. So the outcomes are structured under the domains of implementation-based outcomes, service outcomes, and then also client outcomes. And so you can see some examples of each type of outcome listed here. And you can see the implementation-based outcomes really focus on those characteristics of the evidence-based practice as the team works to integrate it into routine clinical care. 
And these are the implementation outcomes that you would expect to see in most implementation science designs. Not necessarily all of them, but at least a few of them included. And so often in some of these designs, we also see service and/or client outcomes included as well, and most of us are pretty familiar with the ones that are listed here. But again, in implementation science, you're going to expect to see those implementation outcomes interwoven into the project design. So the next few slides give better in-depth definitions of some of those implementation science outcomes. And in the interest of time, I'm not going to spend a lot of time explaining each and every one of these implementation outcomes, but it's important to note they are a discrete set of outcomes that are typically included with an implementation science design. And so when we look at acceptability, for example, it refers to how the evidence-based change is perceived among the stakeholders or among the end users or those people who are actually going to be affected by the practice change. Adoption refers to the initial intention to integrate the best practice into routine clinical care. Appropriateness is the perceived fit for the setting. And then certainly cost is somewhat self-explanatory here, and one that most people are probably somewhat familiar with. And then the remaining implementation outcomes that you'll commonly see with implementation science designs are listed here. So feasibility, or how well the evidence-based change can successfully be integrated into the setting. Fidelity, which is how well it can be implemented as it was originally prescribed or originally investigated in the core studies that helped build the evidence behind that recommendation. And then penetration and sustainability, which really evaluate how well the change integrates into existing structures and then is able to persist or be sustained over time in these clinical settings. And so with that, I'm actually going to turn the next part of the talk over to Dr. Ballas, who will talk to us about some of the evaluation techniques in implementation science design, as well as the models that are used, and provide some examples from some of her work in critical care settings with an implementation science design. Thank you, Dr. McNett. With the remainder of our time, we're actually going to briefly discuss some really critically important topics in implementation science, and I thought we would start with exposing you to some terms that are used to evaluate implementation science work. So starting with evaluation, evaluation in implementation science we know is vital for understanding how interventions, particularly complex interventions like those that we deliver in critical care, function in different settings, including if and why they have different effects or sometimes don't even work at all. Generally speaking, there's three different types of evaluations that are used in implementation science. These are the process evaluations, the formative evaluations, and summative evaluations. All three evaluations are similar in that the overarching aim is to describe the characteristics of the evidence-based practice use, so how that's being used in practice, and the fact that data collection can occur really any time before, during, or after the evidence-based practice is implemented. The major difference in these types of evaluation is how the data is used and when it is presented. 
So in the process evaluations, data is analyzed by the research team without giving feedback to the implementation team. So without giving feedback to the people that are actually, you know, trying to get this evidence-based practice into care. There's no intention to change the process, right? So you're evaluating it as it occurs in real life. The formative evaluations are a little different in that data is still being analyzed by the research team, but after that analysis is done, it is fed back. It's fed back to the people that are implementing the practice or the staff during the study intentionally in order to adapt and improve the entire implementation process. And in the summative evaluations, data in that type of evaluation is normally presented at the end of the study. So this type of evaluation is typically done to assess the impacts of the evidence-based practice on processes of care or to characterize the economic impact of an implementation strategy or its effect. In implementation science studies, the data that we use in implementation science work and the evaluations that we just briefly discussed is derived from various sources. It's a little bit different than clinical research. This data can come from patients, providers, big healthcare systems, or even broader environmental factors such as community policy or economic indices. The evaluations in implementation science could use either qualitative or quantitative data. Usually they use both. Quantitative data comes from sources such as structured surveys or tools, administrative data sets, the fidelity measures that we use to follow whether or not interventions are being delivered the way they should be. And qualitative data would include maybe methods using, say, semi-structured interviews, focus groups, sometimes direct observation of the clinical processes, document reviews. So various sources. So what's a question that we frequently get? What about theory in implementation science? Is it important? You betcha. And here's why. Anybody who's tried to move evidence into clinical practice knows it's hard work. A lot of the strategies and evidence-based practices are, as Dr. McNett said, multi-component. These multi-component interventions must be adapted to meet the needs of the people that are delivering them. We have multiple providers often delivering an evidence-based intervention and a wide variety of differences in the settings, in intensive care unit settings, populations, delivery modes, and things like that. So really, implementation science does require theory because we need a clear, collective, consistent way of capturing data to figure out what works where and why. So the field of implementation science itself is guided by numerous theories. Great paper here for anybody that's interested in the topic. Numerous theories, models, and conceptual frameworks. My experience as a teacher has been that when the terms theory, model, frameworks are used in the classroom setting, you know, students' heads begin to nod. I don't know if anybody would admit to doing that. My experience as an ICU nurse tells me that when these terms are used in the clinical setting or in clinical discussions, most clinicians will need to leave the discussion due to, you know, an unexpected emergency or something that, you know, unexpectedly comes up. I get it. 
Theory, and research, unfortunately, is really, to some, not the most exciting topic, but it is, again, so important for generating that new knowledge. The theoretical approaches that are used in implementation science generally have three major overarching goals, as you can see on this slide. The first goal is to describe and/or guide the process of translating research into practice. So process models; in nursing, I think even in undergraduate nursing, we often are exposed to theories such as the Stetler model or the Iowa model. The second major goal is understanding and/or explaining what influences implementation outcomes. And again, there's really three different classes of theories that fit into this category. We have the determinant frameworks. These specify the types of determinants which act as barriers or facilitators that influence implementation outcomes. Examples of that include the Consolidated Framework for Implementation Research. The classic theories, as Dr. McNett mentioned, as implementation science has developed over the years, a lot of theories are borrowed from other disciplines such as psychology or sociology that are used to provide an understanding or an explanation for the different aspects of the implementation process. And then finally, the implementation-specific theories. So again, it's a relatively new field, but these theories are developed by implementation researchers to guide the implementation process. And then finally, probably, I'm going to guess, the ones that are most familiar to researchers in critical care are really the evaluation frameworks. So these theories are used as a way of evaluating the success of implementation. So models such as the RE-AIM model, the PRECEDE-PROCEED model, and things like that. So I'm just going to give you an example of two of the most common theories used in implementation science research. The purpose is not to give a theoretical discussion, but to illustrate some of the concepts that are really believed to be important and common barriers and facilitators to evidence-based adoption, particularly in the ICU setting, some of the things that you should consider when designing these projects, and also an example of an evaluation theory. So the first example of an implementation science theory really isn't a theory, it's more of a framework, but it's the one that I've used most often in the work that I've done with the ABCDEF bundle to date. This slide shows you the five major domains that are captured in the Consolidated Framework for Implementation Research. These major domains are the things that are believed to affect adoption of evidence or to serve as barriers or facilitators in terms of getting that evidence into practice. So the first thing to consider when implementing a project is the actual characteristics of the intervention itself, right? It matters what the evidence-based practice is that you're trying to get adopted. One of the major considerations, and I think, again, Molly did a great job of explaining the importance of the practice being based on strong evidence, is the evidence strength and quality. Is what you're trying to move into practice based on solid evidence? Is the intervention complex? We know that with more complex interventions, it's more difficult to increase optimal use in practice. Is the intervention, the evidence-based practice, costly? How are you designing and delivering that package? The next domain in the CFIR is the outer setting. 
Now, we don't see this too much yet in some of the critical care literature, but these are the external factors that could influence implementation, right? So when we think about it, I like to think about it as some of those external policies or incentives to adopt. So we have mandates that come from the Joint Commission or from other quality and safety committees that are maybe putting pressure on ICUs to influence uptake or de-implement harmful practices. So those things that come outside of what you would typically consider the normal place of delivery. The inner setting domain, I think a lot of people have heard of some of these parts of the inner setting domain, that culture. How often do we hear, when we're trying to move any quality improvement project or implementation science project forward, the importance of culture, but also considering implementation climate. Is now the best time to initiate a new evidence-based practice initiative? Maybe not. So right now in the ICU, you know, considering the tremendous amount of stress we're under with the COVID pandemic, is it a good time to change? Are they ready to change? Do the partners that you're working with in the clinical setting have the leadership engagement and the resources that they need to change? Taking those contextual factors into account. The next is the characteristics of the individuals. Clearly we need to consider these characteristics. We need to consider the needs, desires, and goals of the people that will be implementing the evidence-based practice. Do they know the evidence supporting the intervention? What do they think about it? Do they believe it's an effective intervention and worthy of changing practice for? Are they motivated to change? What would be the best way to provide education related to the intervention? So focusing on those people that are involved in the delivery of the intervention. And finally, and most importantly, and probably the biggest part of this lecture that you're getting today where we're compressing hours of implementation science data into a 45-minute presentation, really thinking about that process domain. Really thinking about how the move to integrate the evidence-based practice is delivered. How much time do you need to plan for the process? It's kind of like when you do a home remodel and you always need to overestimate costs by 20 to 50%. It takes a long time to plan. Who are the people that you need to engage in the change initiative? The champions, or the people that are opinion leaders? How is the project going to be executed? Do you have the resources that you need to do that? How is data going to be collected? And most importantly, how is the change initiative going to be evaluated and, importantly, fed back to the people that have helped you make this important change? Probably, again, one of the most familiar models in terms of D&I theories is the RE-AIM model. It came about around 1999, used initially to evaluate prevention and health behavior change programs. The broad goals of the RE-AIM model, and again, remember this is an evaluation model, are to have adoption occur broadly and sustain implementation at a reasonable cost. There's really no point in trying to make major changes to clinical practice if those efforts aren't going to be sustained at a reasonable level. You want to reach large numbers of people in this change initiative, those people that are important to the success. And also, again, you want it to be able to be repeated. 
You want that success to transfer to other units, maybe in your health system, and have a long-lasting impact on the outcomes. The R in the RE-AIM model, again, symbolizes reach. Pragmatically, who is intended to benefit and who actually participates or is exposed to the initiative? In critical care, most often, the population we want to reach is the critically ill. The people that are going to be participating in the change initiatives are ICU providers, maybe extended out to administration. But really, have a clear definition of who you want to reach with this initiative. Effectiveness. What is the most important benefit you're trying to achieve through the change initiative? And this is sometimes hard, right? But there needs to be a clear operational understanding of what it is. What is that most important clinical outcome you want to have occur? So, for example, if we're talking about something like the ABCDEF bundle, maybe the most immediate concrete goal would be to improve survival, decrease mortality. Or it could be something more easily operationally defined than, you know, our goal of saving patients' brains. Both are important. You could choose probably as many as you want, as long as the team is aware of why this is happening. Adoption. Where is the program going to be applied and who is applying it? I think that's pretty self-explanatory. Implementation. Kind of similar to the CFIR, we really want to know, and this is one of the things that I've noticed isn't always expressed or published as often as it should be, the fidelity to the program. How consistently was the program delivered? How was it adapted by the clinicians? Costs. Why did those results come about? So again, that focus on those implementation outcomes that Dr. McNett previously described. And finally, again, that maintenance. When was the program operational? How long were the results sustained? That follow-up is often important. But again, it hasn't been the focus, not only in critical care, but in implementation science; that maintenance or the sustainability hasn't been studied as much, but we're getting there now with some of the more effective projects that have been occurring. In terms of designs that are used in implementation studies, most still do use controlled implementation trial designs. They're probably the most common, right? The parallel group and the interrupted time series. The parallel group designs that are used in implementation science are probably familiar to most health services researchers, similar to the work that's been done in other clinical trials. One parallel group design that's becoming more frequent in implementation science trials is the stepped wedge design. So in that design, all the participating sites will receive implementation support, but when they get it is staggered in time. Other examples include SMART randomization. I think we actually had a webinar on that at one point from the SCCM. And also some other really cool creative stuff people in implementation science are doing. Another frequently used controlled implementation design is interrupted time series. So interrupted time series is often used in the real world of policy rollouts or program changes where parallel group designs aren't feasible, right? For either pragmatic, political, or other reasons. In such situations, interrupted time series designs can actually be extremely helpful. 
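To make the interrupted time series idea a bit more concrete, one common way to analyze that kind of design is a segmented regression: one term for the pre-existing trend, one for the immediate level change when the practice is introduced, and one for the change in trend afterward. The sketch below is only a minimal illustration of that general approach; the monthly bundle-compliance numbers, the change point, and the variable names are all hypothetical and are not drawn from the studies discussed in the webcast.

```python
# Minimal segmented-regression sketch for an interrupted time series
# (hypothetical monthly compliance data; the practice change starts at month 12).
import pandas as pd
import statsmodels.formula.api as smf

df = pd.DataFrame({
    "month": range(24),
    "compliance": [52, 54, 51, 55, 53, 56, 54, 55, 57, 56, 58, 57,   # pre-implementation
                   66, 68, 70, 72, 71, 74, 75, 77, 76, 78, 79, 80],  # post-implementation
})
df["post"] = (df["month"] >= 12).astype(int)            # immediate level change
df["months_since"] = (df["month"] - 12).clip(lower=0)   # change in slope after the start

# Pre-existing trend (month), level jump (post), and trend change (months_since)
# are the three standard segmented-regression terms.
model = smf.ols("compliance ~ month + post + months_since", data=df).fit()
print(model.params)
```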
So we're seeing some more publications come out in terms of that. One of the more exciting advances is this blending of effectiveness trials and implementation trials. There's a key paper here for those that are interested if you want to learn more. But the blending of effectiveness research and implementation research designs, again, is generally collapsed into three different categories. Hybrid type one designs focus on determining the effectiveness of an intervention. So hybrid type one designs are useful when we still have some remaining questions regarding the effectiveness of the evidence-based practice itself, right? We're still not really super sure if this treatment is effective. So for example, there may be a ton of data supporting SATs and SBTs in the adult population. But the evidence for using those interventions in, say, the pediatric population isn't as well-developed. A hybrid type one design would be helpful in that situation in that you will be mainly focusing on determining, do those two interventions work, while at the same time gaining a better understanding of some of the context of implementation. So with that type one, it still leans more towards the effectiveness. The type three designs are kind of the other extreme. Type three, the evidence is clear. This evidence needs to get into clinical practice. Type three designs are almost solely focused on testing an implementation strategy, while at the same time gathering some kind of clinical intervention outcomes, right? The aim is focusing on the implementation strategy, not the evidence-based practice. And type two is kind of a blend of both, where you'll be doing either through co-aims or through a primary and a secondary aim, focusing on both effectiveness and implementation outcomes. So those types of designs you'll be seeing more and more frequently, I think, used in studies of critical care. So evidence-based implementation strategies. The evidence base for implementation strategies is still steadily developing. One of the major accomplishments in the last few years was this compilation, the completion of the ERIC project. And again, for those interested, you can look at this particular study. The results of the ERIC project led to the classification, description, and definition of 68 different—that's a lot, right?—68 different implementation strategies. And some of these evidence-based implementation strategies may be more familiar to you than others. For example, the use of audit and feedback, or using computerized reminders, you know, separate implementation strategies. Interestingly, if you look at the effectiveness of these evidence-based implementation strategies, it's interesting to see how that science has developed. Initially, most of the research focused on looking at the effect of a single implementation strategy, right, say audit and feedback on delivery of something in the primary care setting. A lot of great success with that particular implementation strategy. Because they were successful, those single approaches were tested in subsequent studies, but didn't seem to have the same effect as they did in prior work, all right? That kind of just throwing a single implementation strategy at different clinical problems was based on the assumption that if it was effective in one, it was probably as effective in the other. That really kind of did lead to rather limited success. 
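As a small illustration of what a single strategy like audit and feedback usually amounts to in practice, the audit piece is often just a summary of adherence data that gets handed back to the clinical team. The sketch below is hypothetical; the unit names, months, and column names are made up for illustration and are not data from the webcast.

```python
# Hypothetical audit-and-feedback sketch: summarize adherence to an
# evidence-based practice (here, spontaneous awakening trials) by unit and
# month so the result can be fed back to the clinicians delivering the care.
import pandas as pd

audits = pd.DataFrame({
    "unit":     ["MICU", "MICU", "SICU", "SICU", "MICU", "SICU"],
    "month":    ["2020-01", "2020-01", "2020-01", "2020-01", "2020-02", "2020-02"],
    "sat_done": [True, False, True, True, True, False],  # SAT delivered on an eligible day?
})

# Percent of eligible patient-days on which the practice was delivered,
# grouped so each unit sees its own monthly adherence for feedback.
report = (audits.groupby(["unit", "month"])["sat_done"]
                .mean()
                .mul(100)
                .round(1)
                .rename("adherence_pct"))
print(report)
```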
Next people took the approach of, well, if one implementation strategy is good, what about if we used a bunch of different implementation strategies to increase adoption of evidence into practice? And it's kind of interesting. I mean, that's intuitive, right? While the use of multifaceted, many different implementation strategies is intuitive and does have considerable face validity, the evidence regarding the superiority of multi-component strategies versus single component ones is really kind of mixed. For example, there was one systematic review of 25 systematic reviews that found there was really not compelling evidence that taking that multifaceted approach to implementation interventions was any better than using a single strategy, right? So using the kitchen sink versus a single strategy, there's still debate on which works. The current thinking in general is that the selection of the implementation strategy, which of the 68 implementation interventions you're going to use, really needs to be based on an understanding of the causes of the quality of care gap. What are those barriers or facilitators? So for example, if we have 25 studies showing that early mobility in the ICU is hard because there's not enough staffing and that's well-documented, then the selection of an implementation strategy to improve adoption should probably be one that focuses on, you know, maybe getting those new resources. So that's kind of where we are there. There are a number of great resources if you're interested in learning more about the effects of all these different types of implementation strategies; the Cochrane group's done great work. There's I think currently like 132 different systematic reviews on the effectiveness of these interventions. So it may be beneficial to take a look at those if you're more interested in figuring out, you know, which one you should try using in your work. The last thing I'm going to touch upon is evaluation, because I think this is important, particularly for beginning implementation science researchers or those interested in developing a career in implementation science and critical care. There are standard approaches for evaluating implementation science grants, okay? And this next slide is a summary of those recommendations. So here are the things that reviewers are going to be looking at when evaluating your proposal. There needs to be a clear description of the care/quality gap. You need to prove to the reviewers that we have this evidence, but it's not being adopted in clinical practice. You also need a really good justification for the evidence that you're selecting to be implemented. And the justification, really, when it's being evaluated, comes down to, again, that science base. Is this effort worth it? Is there enough evidence out there to support moving it into clinical practice? Use of the conceptual models and the theoretical justification like we touched upon, that's why we did that in this lecture, you know, this work needs to have some kind of grounding in these models and some reason that you're choosing the implementation strategy that you are. Also the demonstration that this is a priority to the key stakeholders and that the people that you'll be involving in your implementation science trial are engaged and really want to do this work, that they're ready to adopt new services or treatment programs, that that momentum is out there to do that. 
Clear description of the implementation strategy and the process. You betcha, if you get a great implementation science reviewer, they're going to be looking for: what is your implementation outcome? How are you defining that outcome? How are you measuring that outcome? And most importantly, the fidelity and how you're going to monitor that fidelity. Team experience with the setting, that you can work with these people, the providers in the setting that you're working in; team experience with the evidence, the treatment that you're trying to implement; and also key team experience with the implementation process. It's simply not enough to have, you know, somebody on the implementation science team that's done qualitative work in the past. It really means including someone who has an understanding of this new and emerging field. Typical of trial designs, the feasibility of the proposed research, what design you're using, what method you're using, overcoming that bias in critical care for the use of, you know, those prospective randomized clinical trial designs, which are great, but not often necessary in this type of work. How these things are going to be measured and analyzed, and also evidence that, you know, there's momentum from the policy and funding environment, that the topic that you're proposing is, you know, interesting and there'll be leverage at a societal level for the work that you're doing. Now, I did have examples in here, but because we have covered so much in the last few slides, I think at this point it would probably be more beneficial to open it up for questions from the audience, because I've always learned more that way. So, I'm just going to flip through these and give it back to my SCCM leaders here. Awesome. Well, thank you guys for your presentation. One question that we have is, how do you view the doctor of nursing practice degree and PhD as both contributing to implementation science and implementation projects? Hi, this is Molly. I'm happy to take that one. A lot of the work that we're doing within our Fuld Institute was actually geared at exactly that. So that's a great question. We actually view it and we encourage work to occur in tandem between, you know, advanced practice nurses or nurses with advanced degrees at the doctoral level. So, you know, typically the PhD is identified as, you know, the traditional research doctorate. And so the thought is for nurse scientists who are interested in implementation science with a PhD to really be knowledgeable about these designs and these approaches that we've talked about in order to develop the skill set in how to design these implementation-based studies to generate evidence about the best approach to implementing these best practices in clinical settings. The DNP, which is, you know, traditionally a clinically-based doctorate, is really in a prime position to lead implementation efforts based on the science. And so a lot of the curriculum that we offer within our College of Nursing at Ohio State, and then certainly within our national Fuld Institute for Evidence-Based Practice, and the training we do with DNP students and DNP clinicians is really an application of the knowledge that we have from the science of implementation: how do we then take that information about the evidence behind those best strategies for implementation, behavior change theories, and how do we actually implement this based on the science into practice. 
So we view the PhD-prepared nurse as a generator of the evidence regarding the most effective approaches for implementation. And the DNP is the critical clinical partner who then uses application of that knowledge to actually institute change within those practice settings. So it's a great, you know, synchronous relationship that we really encourage. Thank you for that. Another question that we have is, clearly this is a very valuable skill set and knowledge base for multiple disciplines on a healthcare team to have at their disposal. So how would you envision, or do you think there's a role for, integrating implementation science into academic programs, whether it's a nursing program, a pharmacy program, or medicine, PA, what have you? Michelle, do you want to take that one? Sure, I will. So, I mean, obviously we're biased. You know, one of the most critical challenges, I think, particularly in critical care is it takes so long, right, to do these clinical trials. To not have the results integrated into practice is just disheartening, to say the least. Extending on what Dr. McNett said about the DNP and the PhD, I kind of see the same thing happening at different levels, right? So we could envision the need for MD trialists, right? I don't know if any of you guys follow this discussion on Twitter, great discussion. But those that are doing ICU research clinical trials, the need for them to engage with their practice clinicians. Yeah, I mean, a lot of people are doing both the clinical trials and still practicing, but there are some MDs that have no involvement in the clinical trial world, and there would be benefits to doing so, and the same for the PharmDs or pharmacists. So I think, yes, absolutely. I think there needs to be integration into curricula, I know we do that in nursing, but also integration into programming because of the emphasis that this field is getting in terms of funding and the need to demonstrate, you know, why develop a therapy if it's never going to be used. So I think, you know, for the professional societies across disciplines, PT, OT, pharmacy, respiratory, there has to be more emphasis placed on this at the professional society level. Not saying quality improvement has gone away, but we really do need to develop a stronger understanding of what works and what doesn't in terms of changing behavior. And I think we've seen too as well, you know, Dr. Ballas and I have been looking at the literature in terms of the progress of implementation science specifically within critical care. And what we've seen too with the interventions is that the complexity of the evidence-based interventions has grown substantially. So it used to be, you know, kind of a single provider-focused evidence-based practice that they were trying to integrate. And now we see these really complex evidence-based recommendations or approaches that need to be implemented that are not specific to a single discipline. So as Dr. Ballas mentioned, you know, they involve stakeholder involvement with respiratory therapy and pharmacy and administration and nursing. And so we certainly don't practice in silos at the bedside, and because of the complexity of some of the EBPs that need to be implemented, the approach to implementation is inherently, you know, multidisciplinary. Great. We have kind of a two-part question. 
The first part is basically a request to show the slide entitled, what should implementation science grant applications have? I don't know if there's the possibility of going back to that slide, but there's a question associated with that. Oh, sorry. Go ahead. No, there is. I just need to have the arrow to go back. Oh, great. While we're facilitating that, the question along those lines is: in 2016 through 2018, this particular attendee used CFIR to implement music around surgery, for which there's very good evidence. Is there a framework for implementation of complementary and integrative health interventions? So I think that is a great idea. Number one, I think the models and theories you would use for an implementation like that would be the same as the ones that we presented. I don't think there'd be anything unique to a non-pharmacologic intervention. I think the problem may be the evidence base surrounding any particular non-pharm intervention. So as long as you can convince the reviewers that there is enough evidence showing that that intervention works, I think that would be the key. The frameworks, the evaluation, all of this stuff that we talked about today would be the same for a pharm versus a non-pharm intervention. But I think the challenge will be for the non-pharm stuff to show the evidence and the evidence base behind that. And here is the paper, if anybody's interested in those reviewer criteria. So here, for this evidence-based practice to be implemented, it would be, you know, describing the care quality gap, what happens if people don't get music in the surgical setting, and how many trials have been done that showed that that intervention was effective. That would be key. Great. I think we have one more question in the chat box. It almost seems like QI curricula ironically distract from the more important goal of teaching providers implementation science. Do you think we need to refocus our educational efforts in this matter? You know, I don't think it's an either-or question. I think it's both. And so certainly within the last several years, we've seen such a focus on quality of care, and rightfully so. And we should never, you know, suggest that those efforts, you know, have been for naught, because they really have helped to, you know, streamline some of our inefficient processes, you know, decrease variation in terms of what is done. So certainly there is a case for clinicians certainly knowing about QI approaches. But I think equally important is also some information about implementation science. And so kind of what we touched upon today, how there are some characteristics of both approaches that are inherently unique, but there is an area of overlap in identifying, you know, what is the initiative that needs to be done within the clinical setting, and which approach is really most effective to give us the information that we need in order to improve the outcomes that we're interested in. And so I think that it shouldn't be, you know, detracting from QI, but I think that the curriculum needs to include QI and also some of the key components of implementation science. So even if you're not setting out to become an implementation scientist, at least you can apply some of that knowledge from the field of implementation science when you're in the clinical practice settings, in addition to being able to apply some of the concepts of QI and knowing which approach is appropriate and when. 
Yeah, I'd have to agree with that. And again, the thing that I always come back to is I think in critical care, specifically, we need to develop a better understanding of what implementation approaches work. In terms of, you know, we talked about throwing the kitchen sink in, and we sometimes as a field seem to kind of still be there. We need to determine what is the best way of getting these interventions adopted. And I think that's the unique and specific role, and where we need to progress in the field, is figuring out which of these work. I mean, we all want quality, and quality improvement as a field has developed over the years so dramatically, but getting those key questions answered as to what changes, what implementation strategies work will be key in the next decade or so, instead of just throwing everything out there. Thank you, Drs. Ballas and McNett, for your time. All right, thanks, everyone. Thank you, everyone. Again, thank you to our presenters and the audience for attending. Again, you will receive a follow-up email with a link to complete an evaluation. The link to that evaluation is also listed in the chat box for your convenience at the bottom of this page if you do not wish to wait for the follow-up email. You only need to complete it once. There is no CME associated with this educational program. However, your opinions and feedback are important to us as we plan and develop future educational offerings. Please take five to ten minutes to complete the evaluation. The recording for this webcast will be available on the MySCCM website within five business days. That concludes our presentation today. Thank you all for joining us.
Video Summary
Today's webcast focused on implementation science and the importance of evidence-based practice. The presenters discussed the role of implementation science in improving the quality and effectiveness of healthcare services. They emphasized the need for evidence-based implementation strategies to support the adoption of evidence-based practices. The presenters also highlighted the importance of theoretical frameworks and models in guiding implementation efforts. They discussed the Consolidated Framework for Implementation Research and the RE-AIM model as examples of frameworks used in implementation science. The presenters also discussed different types of evaluation methods used in implementation science, including process evaluations, formative evaluations, and summative evaluations. They stressed the importance of evaluating implementation outcomes and measuring the fidelity of the implementation process. Additionally, the presenters discussed the integration of implementation science into academic programs and the need for interdisciplinary collaboration in implementation efforts. They highlighted the role of healthcare professionals, such as nurses, pharmacists, and physicians, in implementing evidence-based practices. Overall, the webcast provided an overview of implementation science and its significance in bridging the gap between research and practice in healthcare.
Asset Subtitle
Research, Quality and Patient Safety, 2020
Asset Caption
As in other areas of medicine, critical care is plagued by a long delay between the generation of evidence and the incorporation of evidence into clinical practice. The emerging field of implementation science, the scientific study of methods to promote the systematic uptake of research findings into routine practice, has developed to facilitate the spread of evidence-based practice. During this webcast, expert faculty will introduce implementation science methods and illustrate how these methods can enhance the development of evidence-based critical care practice.
Meta Tags
Content Type: Webcast
Knowledge Area: Research; Quality and Patient Safety
Knowledge Level: Intermediate; Advanced
Membership Level: Select; Professional; Associate
Tag: Research; Evidence Based Medicine
Year: 2020
Keywords: implementation science, evidence-based practice, healthcare services, theoretical frameworks, evaluation methods, implementation outcomes, fidelity of implementation, interdisciplinary collaboration, bridging the gap