Deep Dive: Using Bundled Data in the EHR Online
EHR Data Utilization for Effective ICU Management (Part 1)
Video Transcription
Thank you for inviting me to be part of round three of three for today's pre-Congress event. My name is Valerie Dinesh, and my clinical background and training is in adult ICU nursing. I have programs of research in post-intensive care as well as escalations in care, so really just along the continuum. For this afternoon's talk, I'm going to take an application-based focus on early warning scores in the context of rapid response teams, and I think it dovetails and builds very nicely on our first two rounds of speakers. I don't have any conflicts of interest, which is actually a pretty unusual attribute on the speaker circuit when it comes to early warning scores; there are a lot of scores out there and a lot of developers. The objectives for this talk are aligned with the objectives for the session overall, so we'll touch on all of these components while running through this applied example of using EHR data in the context of physiologic deterioration for rapid response teams. I've framed everything in a classic Donabedian structure-process-outcome perspective to set the stage for the state of the science in rapid response and early warning scores. There are a lot of components that get pulled into what can be termed a classically wicked problem: physiologic deterioration and severe decline during hospitalization. Early warning scores are derived from the electronic health record, vital sign documentation, and the many other sources of data whose depth and breadth Robert Stevens has so nicely described. Still, the early warning score is just one component of many when it comes to moving the needle on behavior change, on both recognizing deterioration and acting on it.
To give us a common language around rapid response teams: these are the very traditional criteria that have been in play since the mid-2000s, and many of us probably have badge cards with our hospital IDs that say to call the RRT if you see any of these single-parameter or multi-parameter criteria. These were developed out of Franklin's work in 1994, which, digging through charts, observed that there seemed to be precursor signs before a code blue, and everything flowed from that 1994 work into the mid-2000s general activation criteria. We'll talk about some more advanced applications that pull in more than vital signs. To get quickly to the "so what": a lot of the rapid response team and deterioration literature is focused on mortality outcomes, but I'd like to bring to this group the relevance of non-mortality outcomes. For effective ICU management and effective hospital management, as well as patient-centered care, safety, and quality, non-mortality outcomes hold a very strong place in evaluating the effectiveness of rapid response team interventions and in leveraging early warning scores for the right treatment at the right time for the right patient. This is a lovely study from Gabriel Escobar out of Kaiser Permanente, who analyzed, I think, their whole northern division, I want to say 17 or 18,000 patients, and it really amplified the point that escalation in care, meaning going to the ICU from the ward or intermediate care, happens in only a fraction of hospitalized patients, 3.7 percent, but that 3.7 percent disproportionately accounts for quite large proportions of ICU admissions, hospital deaths, and hospital days.
Other studies have pointed to the ability to stabilize patients in intermediate care settings prior to the ICU, and also point toward really startling rates of what are termed preventable ICU transfers. A Belgian study across six hospitals by Marquet demonstrated that one-fourth of their patients over a very large period were designated as having preventable unplanned ICU transfers. I think it's important to see where we came from in order to see where we are now with early warning scores and the state of the science for deterioration. We've got the Franklin signs in 1994, which morphed into the traditional EWS, the early warning score, and just like any good scoring system there have been iterations and versions. Just as we have gone from APACHE I, II, III, IV and SAPS I, II, III, there's the modified early warning score, the pediatric version, the national early warning score that is largely adopted in the U.K., and then this new cohort of advanced analytics that source data beyond those traditional vital signs. For example, we've got some of the creators in the room now related to Vicencia and TrueScore, and the deterioration index in Epic is, I think, on a lot of our institutional radars. As all of this has progressed, so have the use cases, and also the lessons learned from use case failures.
Rapid response teams originated in Australia around 2004-2005 with the MERIT trial, and then jumped to wider adoption across the globe. In the U.S., many early-adopter hospitals started using rapid response around 2006-2008 and began manually computing early warning scores and modified early warning scores in the flow charts: if they've got a deranged blood pressure and the systolic is above 160, that's one point; if the respiratory rate is greater than 20, that's two points; now they have three points, so I should call an RRT. Even just explaining it to this room, we can all feel that that's a lot of time to integrate into something as routine as vital signs. So, as an example of an attempt at streamlining this: what if this simple summative addition of vital sign ranges were just automatically computed and displayed on the vital sign monitor? This is a visual of a version of one of those vital sign monitors; some of us may know them as Dynamaps, or just the rolling vital sign machines in the ward and intermediate care units. This was an intervention trial, and over here in the top left it's got the early warning score with a stoplight system: if it turns red, based off of those traditional vital signs and summing the different values, then you should call the RRT. I think it goes without saying, since many of us in this room have probably never seen this in practice, that there wasn't uptake. It didn't work at all.
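The summative scoring the speaker walks through can be sketched in a few lines. This is a minimal illustration of the additive logic (systolic above 160 is one point, respiratory rate above 20 is two points, a total of three triggers an RRT call); the exact thresholds and point values vary by institution and are assumptions here, not any hospital's official MEWS.

```python
# Illustrative summative early warning score, as described in the talk.
# Thresholds and point values are examples only, not a validated MEWS.

def mews_points(systolic_bp: float, resp_rate: float, heart_rate: float) -> int:
    """Sum points for out-of-range vital signs (illustrative thresholds)."""
    points = 0
    if systolic_bp > 160 or systolic_bp < 90:  # deranged blood pressure: 1 point
        points += 1
    if resp_rate > 20:                         # elevated respiratory rate: 2 points
        points += 2
    if heart_rate > 110:                       # tachycardia: 1 point
        points += 1
    return points

def should_call_rrt(points: int, threshold: int = 3) -> bool:
    """Activate the rapid response team once the summed score meets the threshold."""
    return points >= threshold

score = mews_points(systolic_bp=165, resp_rate=22, heart_rate=88)
print(score, should_call_rrt(score))  # 3 True
```

Doing this arithmetic by hand at every vital sign check is exactly the workflow burden the talk describes, which is why the trial tried pushing the computation onto the monitor itself.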
In fact, this is one of my favorite slides to show, because we pulled out all of the data with an old-school USB download from the rolling vital sign machines, and I know it's too small to see, but what it says is that the respiratory rate is 20, 20, 20, 20, and so on; I think we had 12,000 rows of data, all 20. Of course we had the running joke that if everyone in the hospital were really breathing in sync, we could probably blow the windows out of the place. From a quality and safety perspective, this required a little bit of digging, and on one of my forays into the environment I said, walk me through it like you're training me, and they said, oh, Ms. Valerie, it's easy: if their eyes are open, it's 20; if they're closed, it's 18. So that explained it, times 7,000, times however many people were involved, which is definitely more than that one person. And of course, as all good learning health systems do, we did massive re-education, built it into annual clinical competencies, all of those components, and tried to follow through. But I can say that this remains a pervasive problem across healthcare systems 10 years later.
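The kind of audit described here, spotting that thousands of documented respiratory rates are the same "magic number," is straightforward once the rows are exported. A minimal sketch, assuming the download yields a flat list of documented values (the counts below are invented for illustration):

```python
# Flag suspiciously constant documented vital signs ("magic number" charting).
from collections import Counter

def magic_number_report(resp_rates: list[int], top_n: int = 3) -> list[tuple[int, float]]:
    """Return the most common documented values and their share of all rows."""
    counts = Counter(resp_rates)
    total = len(resp_rates)
    return [(value, count / total) for value, count in counts.most_common(top_n)]

# Example export: 10,000 documented rates, overwhelmingly 20 (eyes open) or 18 (eyes closed).
rows = [20] * 9500 + [18] * 400 + [22] * 100
print(magic_number_report(rows))  # [(20, 0.95), (18, 0.04), (22, 0.01)]
```

A distribution this peaked is physiologically implausible, which is the "garbage in" signal the speaker returns to later when discussing documentation quality.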
If any of us polled our local settings, I'm pretty confident you'd find a magic number; talking with some co-presenters this morning, I heard one site's magic number was 17. So everyone has their magic number, and this is just one reason why summative early warning scores, where you add slightly, moderately, and highly out-of-range vital signs, just didn't work well for sensitivity and specificity. I think it also sets the stage for how we as a healthcare community, and specifically as a critical care community, are in a position to demand more; we need better. It's really about advancing the traditional progression from the descriptive (how are our patients doing?) to the risk-oriented (what are the risks for our patients?) and into the predictive kind of science. Franklin, in the 90s: code blue and trying to get ahead of it. The 2000s: rapid response teams and paying attention to deranged vital signs. And in the last 10 years it has really moved from reactive to proactive rounding; instead of waiting on triggers and activations and calls, what can we do to proactively look at all of the data in a synthesized way and identify which patients could benefit from a call, whether or not we receive one? If you haven't seen Sanchez-Pinto's 2018 paper in CHEST and you're interested in big data, I highly recommend it as a great primer. It's a great starting place for fellows as well, and I'll cite it a couple of times throughout this talk. It offers a nice landscape view, and I had to add in nursing, in big red letters, to append it to the existing info in the paper. I think previous presenters have amplified well that there is a whole trove of data within the electronic health record, and it's up to us to identify which pieces to use and how to use them.
Digging a bit deeper into this review paper, I think they do a beautiful job of distilling the three buckets of machine learning algorithms that you often see in early warning scores. The most predominant and prevalent in the current landscape of vendors is broadly the classification piece; it speaks to, I think it was the TBI study, where you have the validation cohort and the testing cohort and look at classifications, and that applies across all of these different machine learning approaches. Then there's cluster analysis broadly, and then, as the omics presentation started getting into, the image-based, really high-fidelity deep learning that looks at hidden layers to identify actionable outputs. As an example, using that first option, the classification approach, where you take all of these discrete characteristics and essentially phenotype, it enables basically a line graph of a patient's condition over time. A lot of the early warning score vendors are presenting something very similar. They all have their own claim to fame on different components, but broadly they offer an index of patient condition over time, as opposed to just that simple number, a red nine on a monitor, so that you can take everything into context and look for sharp declines, or slow deteriorations that may be happening over days or weeks and require attention. I'm offering just a sample screenshot here from one vendor, with a bunch more listed over here on the left, and a lot of them are quite similar. It offers the value of having some transparency to end users on what the underlying pieces are; there are some vendors that don't offer that.
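The idea of collapsing discrete EHR features into a single index of patient condition over time can be sketched with a toy logistic score. The weights, bias, and feature set below are invented for illustration; real vendor indices are trained and validated on large cohorts, and their actual features and coefficients are usually proprietary.

```python
# Toy "deterioration index": map a few vitals to a risk score in [0, 1],
# then track that score over time to produce the trend line the vendors plot.
# WEIGHTS and BIAS are illustrative inventions, not any trained model.
import math

WEIGHTS = {"resp_rate": 0.08, "heart_rate": 0.02, "systolic_bp": -0.015}
BIAS = -1.0

def deterioration_index(obs: dict) -> float:
    """Logistic score in [0, 1]; higher means higher estimated risk."""
    z = BIAS + sum(WEIGHTS[k] * v for k, v in obs.items())
    return 1 / (1 + math.exp(-z))

# A worsening patient: rising respiratory and heart rates, falling blood pressure.
trend = [
    {"resp_rate": 16, "heart_rate": 80, "systolic_bp": 120},
    {"resp_rate": 22, "heart_rate": 105, "systolic_bp": 105},
    {"resp_rate": 28, "heart_rate": 120, "systolic_bp": 92},
]
for obs in trend:
    print(round(deterioration_index(obs), 3))  # scores climb across the three checks
```

The point of the trend, as opposed to a single red number on a monitor, is that both a sharp jump and a slow week-long drift become visible in the same plot.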
For example, with the deterioration index in Epic, you can't click on it and see what the underlying components are, and the underlying components are somewhat limited as well. So we've talked about process pieces and some outcome pieces related to escalations in care, but I want to augment that, because it's only part of the picture. I know our session is focused on embedded and bundled data within the electronic health record, but as the questions from the first session pointed out: okay, we've got the data, but how do we actually act on it, and what are the pieces that actually move the needle? The previous slide was the CFIR framework, a classic implementation science framework; there's a nice review paper, senior author Ruth Kleinpell, in Critical Care Medicine that offers some good history. It's basically the science of implementation, what works and why, taking into account all of the context components, including things like organizational culture, settings, and infrastructure. My experience is largely drawn from two health systems, one an eight-hospital system in central Florida and the other a 52-hospital system in central and north Texas, and I can tell you that, as much as people like to criticize single-institution studies for lacking external validity, there's a lot of variation in care. We started digging in on some of the characteristics of the rapid response teams, and I know this is small, but my take-home message is the variation in colors: the rapid response team within a single integrated health system is not operationalized the same way. In some, the same team also responds to codes.
The team leader varies; whether they endorse structured debriefings varies; whether they use early warning scores, the additive version or other components, varies. It also sets the stage for acknowledging that a hospital with 46 beds is, of course, going to operate and have different infrastructure than one with 846 beds, and there are culture differences as well. A component I wanted to highlight from the implementation science angle is how to set up acting on early warning scores, and my bias and approach is that a dense, laser-focused approach is better than a broadly distributed one. In previous work, we used what we nicknamed an air traffic controller model. I know there's broad endorsement of inclusivity, that we want everyone to have access to the same data and everyone to use it, but I think the concept of diffusion of responsibility comes into play when it comes to taking action on items. Diffusion of responsibility is a social psychology concept from the 1960s; in experiments with simulated seizures, researchers found that basically the larger the group, the fewer the people that help. The more people that can see it, the more they assume someone else is going to do it. Diffusion of responsibility is the exact reason we're all taught in ACLS training to point and say, Rahul, go get the AED: around 31 percent of the time, if you're in a large group, people just aren't going to do anything. And this was all drawn from a brutal attack on a young woman in, I think, 1964, when there were 38 witnesses and not one person called the police. That's what spawned the whole concept and the subsequent experiments.
Similarly, to build on, I think it was, Jill's comments about prospectively testing these kinds of interventions instead of purely relying on retrospective modeling of how they would and should perform: when we implemented in a medium-sized community hospital, we asked the rapid response team nurses to document. So we addressed not only that diffusion of responsibility, making it one person's job during the shift to look at all of the patients that met whatever threshold, but we were also asking for accountability in documenting: did it work? What we found was that two in three times they didn't need to do anything, so it was basically a false alarm. One in three times they did, and this is a bar graph describing the kinds of things that were happening, with, we think, the largest benefit being nurse-to-nurse coaching. We were also able to address in real time things like repeated documented respiratory rates of 20 right before a rapid response team activation, when the patient is breathing at 55 yet the chart still reflects those 20s. In just the last minute, I want to comment on what I think are some future directions for using EHR data in the context of rapid response. This hidden-layers and deep-learning component is really interesting and getting a lot of traction outside of healthcare domains. This is a paper by Madrigal-Garcia, a UK paper on three dozen rapid response team activations. They did a cluster analysis, but they didn't use computers for the algorithm; they manually coded and then analyzed it, looking for what they termed a prototypic serious illness face: really looking at microexpressions as a form of paralanguage, a kind of reflex.
In the image here, I think image B is one of their study patients, image C is from a hospice photographer, and the painting is from the 1500s, depicting the mourning of the death of Christ. They all go into little bits of slight grimaces and different angles of mouth or eye expression, and the question was whether coding these could amplify the predictive power of early warning scores. Another component that I think is advancing and critical is getting back to basics and thinking about the GIGO principle: garbage in, garbage out. In the context of code and RRT documentation, we're seeing big interest and uptake in code documentation, largely driven by regulatory changes with the Joint Commission in the U.S., and a realization that many health systems don't have systematic, robust code and RRT documentation, and that the out-of-the-box solutions from major EHR vendors are pretty gosh-darn awful and need a lot of help. So we're doing very focused implementation science measures, before and after, with a lot of changes. Our intent is really to support workflow within our own institution, and then, from an altruistic perspective, we anticipate that the vendors will adopt it and share it with other healthcare systems as well. I just wanted to point out an early look. I'm going to analyze the before-after data next week, but in our baseline data we could already see differences in the acceptability of the documentation based on role. What I wanted to highlight is that surveying everyone who may use it is great, but make sure to stratify, because for us the charge nurse and the RRT nurse are the ones that really, really use it, and they're the ones that really, really don't like it. So you have different perceptions by role.
And so with that, I will pass the baton over to Matthias, because I think that this has covered a lot of ground on the patient-level work, and he'll be advancing discussion into the staffing side and kind of the response side. So thank you so much.
Video Summary
In this video, Valerie Dinesh, a clinical nurse with a background in adult ICU nursing, discusses the use of early warning scores in the context of rapid response teams. She explains that early warning scores, which are derived from electronic health record and vital sign data, are just one component of a larger system for recognizing and acting on deteriorations in patient condition. Dinesh highlights the importance of non-mortality outcomes in evaluating the effectiveness of interventions for rapid response teams. She also discusses the variations in how rapid response teams are structured and operationalized within health systems. Dinesh suggests that a focused, targeted approach to using early warning scores is more effective than a general and distributed approach. She concludes by discussing future directions for research, including the use of deep learning algorithms and improvements in code and rapid response team documentation.
Keywords
Valerie Dinesh
clinical nurse
early warning scores
rapid response teams
patient condition
non-mortality outcomes
Society of Critical Care Medicine