Physiologic Scoring Systems for Sepsis Prediction
Video Transcription
Oh, good. Starting out with applause, that's a plus. I'm just going to get it out there: this is the first time I'm speaking at SCCM, so I'm extremely nervous, and when I get nervous I tend to talk even faster than normal. So please feel free to give me the slow-down sign from anywhere in the audience if you feel that's appropriate.

All right. I'm going to speak on physiologic scoring systems for sepsis prediction. I've been in health care since about 1994. I started in the US Army as a flight paramedic, have seen a lot of septic patients, and gave a lot of Xigris for a while, so I've seen things come and go. I currently practice near the US-Mexico border.

We've all had a chance to digest the 2021 guidelines, which recommend against using qSOFA as a single screening tool compared with a couple of other screening approaches. That makes sense, because SOFA was never really intended as a tool for patient management; it was more a means of clinically characterizing patients with sepsis. So the question often comes down to this: are these scores predictors of mortality, or are they actually predictors of having sepsis?

The objective of this talk is to discuss the most recent data and articles from 2021 to 2022 and go over these scores and concepts. I'm purposefully excluding anything to do with predicting COVID-19 sepsis, because it's such a specialized subtopic.

The first study is by Sparks and colleagues, published in 2022 in the British Journal of Infectious Disease. It was a retrospective, age-matched cohort study comparing several scoring systems. The cohorts matched patients by age, comparing those with positive blood cultures containing a true pathogen to those without a positive blood culture.

I wanted to quickly show the definitions for these different criteria, one of which, the Shapiro score, I wasn't particularly familiar with, never having worked in the emergency department. qSOFA I think we're all familiar with from the 2016 sepsis definitions; we all started using it and looking at it routinely. The SIRS criteria, I also hadn't realized, came from way back in 1992 as a joint definition between the ACCP and SCCM, characterizing the early response of a host to a nonspecific insult, which could be infectious or non-infectious. Shapiro, the one I hadn't heard of before, came from 2008; it's a clinical prediction rule that mostly guides emergency departments in stratifying patients for blood cultures based on who is at high risk of bacteremia. And the SEPSIS KILLS pathway was an effort in the Australian state of New South Wales to provide a sepsis pathway not only for the ED but for inpatients as well. It focuses on objective data as well as nursing observations such as whether the patient is re-presenting to the hospital, has indwelling devices, or is immunocompromised. So I just wanted to show what those look like.
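To make the criteria just described a bit more concrete, here is a minimal, illustrative Python sketch of qSOFA and SIRS scoring using their commonly published thresholds (qSOFA: respiratory rate of at least 22/min, systolic blood pressure of 100 mmHg or less, and altered mentation, positive at a score of 2 or more; SIRS: temperature, heart rate, respiratory rate or PaCO2, and white cell count, positive at 2 or more criteria). The function names and the example patient are hypothetical, and this is not the screening logic used in any of the studies discussed.

```python
# Illustrative qSOFA and SIRS calculators using commonly published thresholds.
# This is a teaching sketch, not the screening logic from any cited study.
from typing import Optional


def qsofa_score(resp_rate: float, sbp: float, gcs: int) -> int:
    """qSOFA: 1 point each for RR >= 22/min, SBP <= 100 mmHg, GCS < 15."""
    score = 0
    score += resp_rate >= 22
    score += sbp <= 100
    score += gcs < 15          # altered mentation
    return score


def sirs_score(temp_c: float, hr: float, resp_rate: float,
               wbc_k: float, paco2: Optional[float] = None,
               bands_pct: float = 0.0) -> int:
    """SIRS (1992 ACCP/SCCM): 1 point per criterion, positive at >= 2."""
    score = 0
    score += temp_c > 38.0 or temp_c < 36.0
    score += hr > 90
    score += resp_rate > 20 or (paco2 is not None and paco2 < 32)
    score += wbc_k > 12 or wbc_k < 4 or bands_pct > 10
    return score


if __name__ == "__main__":
    # Hypothetical patient: tachypneic, hypotensive, confused, febrile.
    print("qSOFA:", qsofa_score(resp_rate=24, sbp=92, gcs=13))                  # -> 3
    print("SIRS:", sirs_score(temp_c=38.6, hr=112, resp_rate=24, wbc_k=14.2))   # -> 4
```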
Here is the outcome of that retrospective cohort. Sepsis-related mortality was higher in the bacteremic groups, which obviously tracks. The modified Shapiro score had the highest sensitivity, around 88 percent, with modest specificity at 38 percent. qSOFA had the highest specificity at 82 percent but was poorly sensitive, at about 19 percent. So, as you can see, the performance of all of these scoring systems and pathways was really pretty suboptimal for the septic patients with bacteremia, the ones we definitely don't want to miss.

Next, let's focus on qSOFA versus early warning scores. This systematic review, published in August 2021, set out to do a head-to-head comparison between early warning scores (EWS) and qSOFA, because they're the most widely used. It used the same cohorts at the recommended thresholds and included more than 400,000 patients across 13 studies. The papers generally reported higher ROC values for the EWS than for qSOFA, but there was a wide range in these estimates and a lot of heterogeneity, which really limited the conclusions that could be drawn. Allowing for that, the EWS ROC was consistently higher than qSOFA within these papers, and both allowed threshold setting, determined by the preferred compromise between sensitivity and specificity. So at the established thresholds, with all those caveats, the EWS tended toward higher sensitivity and qSOFA tended toward higher specificity.

Next, let's focus on qSOFA with the addition of a lactate level. Gill et al. really tried to do a meta-analysis on this question in the British Medical Journal this year, but insufficient data and that same heterogeneity of the studies allowed only for a systematic review. Not every paper they found included data on lactate, but among the ten that did, the ROC curve for qSOFA was improved by the addition of lactate in nine. Of the papers that detailed it, sensitivity increased in three of seven and specificity increased in four of seven. They were able to show that adding lactate to qSOFA increased the accuracy of detecting sepsis-related mortality retrospectively, but as they noted in their conclusions, we really need prospective studies to see whether this has any predictive value.
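Because several of these comparisons come down to where the alert threshold is set on a continuous score, here is a small, hypothetical Python sketch of how sensitivity and specificity trade off as the threshold moves. The scores, labels, and thresholds below are invented purely for illustration and do not reproduce any data from the reviews discussed above.

```python
# Hypothetical illustration of the sensitivity/specificity trade-off when
# choosing an alert threshold for a continuous screening score.
# The data below are invented for demonstration only.

def sens_spec(scores, labels, threshold):
    """Return (sensitivity, specificity) for alerting when score >= threshold."""
    tp = sum(1 for s, y in zip(scores, labels) if s >= threshold and y == 1)
    fn = sum(1 for s, y in zip(scores, labels) if s < threshold and y == 1)
    tn = sum(1 for s, y in zip(scores, labels) if s < threshold and y == 0)
    fp = sum(1 for s, y in zip(scores, labels) if s >= threshold and y == 0)
    sensitivity = tp / (tp + fn) if (tp + fn) else 0.0
    specificity = tn / (tn + fp) if (tn + fp) else 0.0
    return sensitivity, specificity


if __name__ == "__main__":
    # Invented early-warning-style scores and outcomes (1 = sepsis-related death).
    scores = [1, 2, 3, 5, 6, 7, 2, 4, 8, 9, 3, 6]
    labels = [0, 0, 0, 1, 0, 1, 0, 0, 1, 1, 0, 1]
    for threshold in (3, 5, 7):
        sens, spec = sens_spec(scores, labels, threshold)
        print(f"threshold >= {threshold}: sensitivity={sens:.2f}, specificity={spec:.2f}")
```

Raising the threshold trades sensitivity for specificity, which is exactly the compromise highlighted in the EWS versus qSOFA comparison.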
Finally, I want to talk about an electronic health record early warning system plus SIRS, described in a clinical nursing research journal article from 2021. It started as a QI project driven by alarm fatigue: they found they were missing quite a few early sepsis indicators. I don't know if anyone in the audience has ever practiced with an EHR that gives you the red and yellow boxes 87 times a day on the same patient, but I can see how that would be a problem. So they built a modified screening tool that brings in SIRS, MEWS, and NEWS criteria. The primary inclusion was any adult admission to a medical, telemetry, PCU, or step-down unit; they excluded the ICU and the ED because those are such specific populations and already have a lot of repetitive screening. They also excluded bone marrow transplant and OB-GYN admissions because those services were working on their own sepsis-related QI projects.

Here is what they did. The slide shows the current values and what they changed. For test one, they changed the vital sign definitions to make them more consistent with a MEWS or NEWS score of three to four. They took the respiratory rate threshold from 20 to greater than 21 (I'll come back to that in a minute) and the heart rate from greater than 90 to greater than 101 before it would fire. For blood pressure, they had been using a MAP of less than 70, but they changed that entirely: yellow became a systolic of less than 101, and red a systolic of less than 91. And they took out all of the oxygen-demand criteria because they were inconsistent with NEWS and MEWS theory.

Going back to the respiratory rate, I thought this was a very interesting component of the study. I like to refer to the respiratory rate as, you know, how short is too short? You know it when you see it. You know breathing when it's not good on either end of the spectrum: you know when someone isn't breathing enough, and you know when they're breathing too much, but that middle range can sometimes fool me. Having been a bedside nurse, and even now as a practitioner, I'm lucky because the monitor tells me the respiratory rate most of the time. But on the floors we're talking about, medical, surgical, telemetry, most patients need an actual physical, visual assessment. At one point they noticed that almost every patient on the unit had the exact same documented respiratory rate. So they went back and drilled down further, because the respiratory rate is so vital to the scoring on these tools; if that part isn't done right, the scores aren't valid. They found a lot of inter-rater variability, not only between the CNAs and the RNs, but even among the RNs and among the CNAs. So they went back and did focused education on how to properly obtain a respiratory rate and why it matters so much. I thought that was an interesting side piece of this. When they did round two, they adjusted the heart rate to greater than 110 and the respiratory rate to greater than 24.

Here are the demographics, and at the bottom you can see the total firings. In test one, they had 1,034 patients and the tool fired almost 4,000 times. In test two, they had 1,005 patients and it fired about 3,000 times. So you do see some reduction. I almost put a graphic of Dante's Inferno on this slide and was going to have everyone vote on which circle it would be to have a tool firing 7,000 times, because, thankfully, I no longer work in a system with this, but that's a lot of firing.

All right, here are the results. The updated yellow tool fired 10 percent less; I did try to put all the specific numbers on the slide for you, and I won't read them all, but overall it had an 18 percent improvement in correct identification. The updated red fired 46 percent less, with correct classification happening more than three times more frequently. So they were able to improve the yellow specificity to about 97 percent, though with low sensitivity at 7 percent. Based upon this work, here are their proposed changes to the sepsis screening tool.
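As a rough sketch of the threshold logic just described, here is one hypothetical way the test-one and round-two vital-sign rules could be expressed in Python. Only the numeric cut-offs (respiratory rate greater than 21, heart rate greater than 101, and systolic below 101 for yellow and below 91 for red in test one; heart rate greater than 110 and respiratory rate greater than 24 in round two) come from the presentation; the function name, alert structure, and example values are assumptions for illustration.

```python
# Hypothetical encoding of the vital-sign alert thresholds described in the talk.
# Only the numeric cut-offs come from the presentation; the structure is illustrative.

TEST_ONE = {"resp_rate": 21, "heart_rate": 101, "sbp_yellow": 101, "sbp_red": 91}
ROUND_TWO = {"resp_rate": 24, "heart_rate": 110, "sbp_yellow": 101, "sbp_red": 91}


def vital_sign_alert(resp_rate, heart_rate, sbp, thresholds=ROUND_TWO):
    """Return 'red', 'yellow', or None for a single set of vitals."""
    flags = 0
    flags += resp_rate > thresholds["resp_rate"]
    flags += heart_rate > thresholds["heart_rate"]
    if sbp < thresholds["sbp_red"]:
        return "red"                      # red fires on systolic below the red cut-off
    if sbp < thresholds["sbp_yellow"] or flags:
        return "yellow"                   # yellow on borderline BP or other abnormal vitals
    return None


if __name__ == "__main__":
    print(vital_sign_alert(resp_rate=26, heart_rate=115, sbp=105))  # -> 'yellow'
    print(vital_sign_alert(resp_rate=18, heart_rate=88, sbp=88))    # -> 'red'
    print(vital_sign_alert(resp_rate=16, heart_rate=80, sbp=120))   # -> None
```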
The challenge obviously remains, as always, in finding the right combination of criteria that has good positive predictive value for diagnosing sepsis but doesn't fire falsely too often; that's the ideal. This work really reinforced the idea that a compromise has to be reached when choosing a measuring tool, and they also felt that the two-tier alert could potentially be eliminated, bringing it down to a single alert modified in ways consistent with MEWS and NEWS while continuing to incorporate the lab values we find in SIRS. That would improve both the sensitivity and the specificity.

So, in summary, how are we going to try to predict sepsis? We're still looking for that easy, cheap, and fast way to do it quickly, but none of the tools they looked at really outperformed the others in head-to-head comparisons. Adding lactate to qSOFA showed good evidence, but it needs prospective studies before it can really be put into practice. And maybe we should consider doing the same kind of work to modify current EHR early warning systems to reduce alarm fatigue. Thank you very much. I was very pleased to be here. I appreciate it.
Video Summary
The speaker discusses physiologic scoring systems for sepsis prediction and reviews recent guidelines and studies on the topic. They compare various scoring systems, including qSOFA, SIRS, the Shapiro score, and the SEPSIS KILLS pathway, and analyze their effectiveness in predicting sepsis-related mortality. They also compare qSOFA with early warning scores and examine the impact of adding lactate levels to the qSOFA score. Additionally, they discuss a modified screening tool using the electronic health record to improve sepsis detection and reduce alarm fatigue. The speaker emphasizes the need for further research and prospective studies to enhance sepsis prediction methods.
Asset Subtitle
Sepsis, 2023
Asset Caption
Type: one-hour concurrent | Challenges in Sepsis Prediction and Prognosis (SessionID 1228529)
Meta Tag
Content Type
Presentation
Knowledge Area
Sepsis
Membership Level
Professional
Membership Level
Select
Tag
Scoring Systems
Year
2023
Keywords
physiologic scoring systems
sepsis prediction
qSOFA
SIRS
sepsis-related mortality