October Journal Club: Critical Care Medicine (2021)
Video Transcription
Hello, and welcome to today's Journal Club Critical Care Medicine webcast. The webcast, hosted and supported by the Society of Critical Care Medicine, is part of the Journal Club Critical Care Medicine series. In today's webcast, we feature two articles from the October issue of Critical Care Medicine. This webcast will be available to registrants on demand within five business days. Simply log into myfccm.org and navigate to the My Learning tab. Hello everyone. My name is Tony Gerlach, and I'm a clinical pharmacist at Ohio State University Wexner Medical Center here in Columbus, Ohio, and I will be moderating today's webcast. Thank you for joining us, and just a few housekeeping items before we get started. First, during the presentation, you'll have the opportunity to participate in several interactive polls. When you see a poll, simply click on the bubble next to your choice. Second, there'll be a question and answer session at the conclusion of the presentation. To submit a question throughout the presentation, just simply type it into the question box located on your control panel, and you can do that at any time. Third, if you have a comment to share during the presentation, you can also use the question box for that. And finally, for everyone joining us today, there will be a follow-up email that will include an evaluation of the webcast. Please take five minutes or so to complete the evaluation, as your feedback is greatly appreciated. Please note, this presentation is for educational purposes only. The material presented is intended to represent an approach, view, statement, or opinion of the presenter, which may be helpful to others. The views and opinions expressed herein are those of the presenters and do not necessarily reflect the opinion or views of SCCM. SCCM does not recommend or endorse any specific test, physician, product, procedure, opinion, or other information that may be mentioned. And now I'd like to introduce today's speakers. First is Dr. 
Wolfgang Bauer, who is an emergency physician who also trained in internal medicine at the Charité Campus Benjamin Franklin in Berlin, Germany; the Charité is a corporate member of the Freie Universität Berlin and the Humboldt-Universität zu Berlin. He completed his medical degree, as well as his doctorate, at the Carl Gustav Carus University in Dresden, Germany. Dr. Bauer did his internal medicine residency at the University Hospital Charité, Berlin, and a fellowship in hematology and cardiology. Afterwards, he completed a subspecialty in emergency medicine under Professor Soma Sondaram. Dr. Bauer is a sepsis and critical care expert. His research interests include infectious disease, sepsis, and its monitoring and resuscitation. He is also a member of the Berlin air ambulance helicopter team. Our second speaker is Dr. Ephraim Tsalik, who is an associate professor in the Department of Medicine and the Department of Molecular Genetics and Microbiology at Duke University. He practices clinical infectious disease and emergency medicine at the Durham VA Health System. Dr. Tsalik spent the past decade focused on diagnostics research, leading to his current role as associate director of the ARLG Laboratory Center. Throughout these various roles, Dr. Tsalik evaluates existing and emerging biomarkers, supports the development of diagnostic platforms for pathogen identification and characterization, and has led the development of a new paradigm for host response-based diagnostics. He serves as a technical advisor for the Diagnostics Core of the RADx-UP program and is the chief medical officer at Biomeme, a point-of-care precision medicine company. Thank you both for being here today. Now I will turn things over to our first presenter, Dr. Wolfgang Bauer. Hello, everyone. My name is Wolfgang Bauer, and I'm proud to be part of this Journal Club webcast.
Right now, I'm sitting in Lisbon, Portugal, at the annual congress of the European Society for Emergency Medicine, so I'm not at home in Berlin. But let's go on with the webcast and our prospective trial. Here are my conflicts of interest; basically, the trial I'm going to present was done in cooperation with, and with financial support from, Inflammatix. My presentation will be based on our recently published clinical trial in my emergency department at home in Berlin, where we performed a prospective validation of the InSep test by Inflammatix. To introduce you to the idea of this test, let's start from my point of view as an emergency physician. Acute infections, and with that, of course, sepsis, are one of the most frequent reasons why patients present to the ED, and they are also a leading cause of morbidity and mortality. It's challenging to diagnose an infection because the infectious source is not always immediately apparent. So, to diagnose a patient, I have to answer at least two questions: Is an infection causing the patient's symptoms? Which pathogen is causing the infection? And, depending on this, which therapy is needed? How are these questions currently answered? Well, we are chasing the bug. To establish the diagnosis of an acute infection, blood cultures are the standard of care, but they usually have a turnaround time of nearly two days. In addition to that, most patients with acute bacterial infections have non-systemic, local infections that have not yet invaded the bloodstream. If you look at routinely used biomarkers, relying on these protein biomarkers is highly unspecific, and especially for viral infections, there are no routinely used biomarkers at all. Given this unmet need, we were immediately interested when Inflammatix introduced us in 2018 to the concept of reading the host immune response to a pathogen with a bedside test, in particular, a host immune mRNA expression microarray used to build a classifier for acute infections.
Inflammatix used a multi-cohort analysis to identify a set of seven genes for robust discrimination of bacterial and viral infections. It was then validated in 30 independent cohorts and combined with a previously developed and published 11-gene Sepsis MetaScore to differentiate sepsis from SIRS. The algorithm also provides a prediction of disease severity by predicting 30-day mortality. Right now, we will focus on the results of the predicted presence of a bacterial or viral infection. The BVN (bacterial, viral, non-infected) algorithm was trained with more than 1,000 patients and then evolved, with 3,400 patients, into BVN-2, for which we now performed the first external validation in a real-life setting in our emergency department at home at the Charité in Berlin, Germany. The Charité is one of Europe's largest university hospitals, and our ED is one of the three university emergency departments of the Charité, located in the south of Berlin, in the former American sector. We see around 65,000 patients a year, and we are the emergency department of a so-called tertiary care hospital, which means we see a lot of patients from our specialized departments, like hematology, oncology, rheumatology, or cardiology. So we regularly see immunocompromised, neutropenic, and multimorbid emergency patients, and these are the patients in whom we have to deal a lot with acute infection and, of course, with sepsis. We decided to conduct a prospective, non-interventional trial to validate the InSep test, and we chose a rather simple study design with simple inclusion criteria. We had patients coming to our ED with a suspected acute infection and at least one change in vital signs. The patients received standard-of-care treatment, and we drew an extra blood sample in a PAXgene RNA tube, which was frozen and then sent to Inflammatix to be analyzed on a NanoString platform.
We enrolled 370 patients and could analyze 312 of them to decide whether a bacterial and/or a viral infection was present. We performed a chart review including all available clinical findings, like x-ray results, blood tests, and microbiological findings, and to assess the complete course of the disease, we performed a 30-day and a 90-day follow-up by phone calls. The result of this review was a clinical adjudication into one of four categories: a ruled-out infection or a proven infection, or, if there were some inconclusive findings, the adjudicator could also choose unlikely or probable. This was done separately for bacterial and for viral infection. To evaluate only the clear cases, we chose the so-called consensus adjudication, which means we only analyzed patients with a proven infection or with a ruled-out infection. An important point to mention is that the adjudicators were blinded to the InSep test results but unblinded to the PCT and CRP results for the patients. I will come back to this in a few minutes. So, out of 312 patients with an initially clinically suspected infection, we found 56 percent with a bacterial infection and 24 percent with a viral infection. Just a quick view of our patients' characteristics. We enrolled quite old patients, as we expected for our part of the city of Berlin. To measure the severity of illness, we used the quick SOFA score, and we found 25 percent of our patients to be critically ill or with a potentially poor prognosis, and around 20 percent of our patients were immunocompromised. To assess the InSep test results, we created receiver operating characteristic curves to show the ability of the InSep test to differentiate between infection and non-infection, significantly, for both bacterial and viral infection.
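To make the ROC analysis above concrete, here is a minimal sketch of how an AUROC can be computed for a continuous test score against adjudicated binary labels. All scores below are invented toy values, not data from the trial.

```python
# Toy sketch: AUROC (area under the ROC curve) for a continuous score.
# The scores are made up for illustration only.

def auroc(scores_positive, scores_negative):
    """AUROC equals the probability that a randomly chosen positive case
    scores higher than a randomly chosen negative case (ties count 0.5).
    This is the same quantity the Mann-Whitney U statistic is built on."""
    wins = 0.0
    for p in scores_positive:
        for n in scores_negative:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(scores_positive) * len(scores_negative))

# Hypothetical scores: adjudicated bacterial infections vs. non-infected
bacterial = [0.92, 0.81, 0.74, 0.40]
non_infected = [0.35, 0.51, 0.22, 0.10]
print(auroc(bacterial, non_infected))  # 0.9375 for these toy numbers
```

A perfect test would score every infected patient above every non-infected one (AUROC 1.0); an uninformative test hovers at 0.5, which is why the AUROCs near 0.90 quoted in the talk indicate strong discrimination.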
To compare the performance, we also analyzed PCT, C-reactive protein, and the white blood cell count. But once again, PCT and CRP are tough sparring partners, because the adjudicators used them to decide whether a bacterial infection was present. So, if you compare the numbers, with the InSep test at an AUROC of 0.90 versus procalcitonin at 0.89, it's even more interesting that you find a comparable, maybe even slightly better, result for the InSep score than for CRP or PCT. For viral infections, we also found good performance, and as you can see, all the other routinely used biomarkers are, of course, not useful for assessing a viral infection. To translate the BVN-2 scores into clinically actionable results, we transferred the test results into four interpretation bands. We used previously established and locked cut-offs to achieve a high specificity to rule in an infection and, for bacterial infection, an even higher sensitivity to rule out an infection. What this means is that with a high test result we won't miss an infection, and we can be pretty sure about avoiding antibiotics in patients with low test results. As you can see, we found nearly the same results for viral infections. And just a quick view of the likelihood ratios: for very likely bacterial and for very likely viral infection, we get likelihood ratios of around 10. Another point I would like to mention is that we performed a subgroup analysis to make sure the InSep test will also work in immunocompromised patients. We used the Mann-Whitney U test, and we could show that there is no impact on the test performance from the patient's immune status, with a p-value of 0.8. So, to summarize our trial: with our protocol, we tried to represent clinical use of the test as close to real life as possible. We prospectively enrolled patients and drew the blood sample at the very moment the patient presented to our team.
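The banding and likelihood-ratio ideas above can be sketched in a few lines. The cutoffs below are invented placeholders for illustration; the test's actual locked cutoffs are proprietary and not reproduced here.

```python
# Sketch: mapping a continuous score into four interpretation bands, plus
# the positive likelihood ratio mentioned in the talk. Cutoffs are
# hypothetical placeholders, not the test's real locked thresholds.

BAND_LABELS = ["very unlikely", "unlikely", "likely", "very likely"]

def band(score, cutoffs=(0.25, 0.50, 0.75)):
    """Return the interpretation band for a score in [0, 1]."""
    for cut, label in zip(cutoffs, BAND_LABELS):
        if score < cut:
            return label
    return BAND_LABELS[-1]

def positive_likelihood_ratio(sensitivity, specificity):
    """LR+ = sensitivity / (1 - specificity): how much a positive
    result multiplies the pre-test odds of disease."""
    return sensitivity / (1.0 - specificity)

# e.g. 90% sensitivity at 91% specificity gives an LR+ of about 10,
# in the range quoted for the "very likely" bands
print(positive_likelihood_ratio(0.90, 0.91))
```

An LR+ of 10 is conventionally regarded as a large shift in post-test probability, which is why those outer bands are clinically useful for ruling infection in.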
We found high accuracy of the InSep test in identifying bacterial infection, at least comparable to PCT, but with the bias that the adjudicators used the PCT. The viral score also performed well. The test performance is most likely not impacted by the patient's immune status. And this was a real validation trial: the signature was pre-locked and applied to our Berlin cohort, and we did not use so-called cross-validation, so we don't have to fear overfitting. What are our limitations? Well, the gold standard of clinical adjudication is not perfect; we might miss an infection, and the more complicated part for the adjudicator is proving the absence of an infection. We left out the unlikely- and probable-rated patients as indeterminate with our consensus adjudication, but you can check those data in our CCM publication. The trial was not a multi-center trial, and finally, high accuracy does not necessarily translate into clinical utility, so we need an interventional study, which in fact we are currently planning for next year. So what are the clinical implications? Translated into a point-of-care device, the test might improve patients' outcomes while upholding antibiotic stewardship, and that's, I think, the main point here. We will get a test to prove a viral infection, for which right now we have no biomarkers routinely available besides multiplex PCR, with which we can only prove an infection for a limited number of predefined viruses. So with this test, we get a new possibility to identify a viral infection, and in the future, we could find a benefit in using the test not only in the emergency department or in the ICU, but also at the GP's office and in nursing homes, with an expected turnaround time of 30 minutes. This could change not only triage in the emergency department, but also patient care in nursing homes and at the GP. Yes, thank you, and it's a pleasure now to turn over to Dr. Tsalik. Well, thank you very much, Dr. Bauer.
That was a great presentation, and I'll add that I really enjoyed reading the paper when it came out in the journal. Since I have the luxury of going after Dr. Bauer, I don't have to impress upon you the importance of and the need for developing the sorts of approaches that are described in the two papers being discussed today. So what I'll do is take a moment to give you a little bit of background on host response tests in general. Before I do that, I'll just mention my disclosures here, and I will highlight one commercial disclosure, which is a role that I play in Biomeme, a molecular diagnostics company. And so this is the first polling question, which I think is going to open up for you momentarily for you to respond to. There we go. So, a host gene expression test has not been commercially developed for which of the following conditions? Breast cancer recurrence, acute allograft rejection, rheumatoid arthritis, colon cancer prognosis, or coronary artery disease. Essentially, gene expression tests have been commercially developed for all of these but one. Which one is that? Okay, so the fact that it's split so nicely between a third and two-thirds makes me wonder how many people voted, but nevertheless, two-thirds said coronary artery disease has not had a test developed, and one-third said acute allograft rejection. It turns out the correct answer is actually rheumatoid arthritis.
So there are certainly host gene expression signatures that have been developed and described, but none have been commercially developed for rheumatoid arthritis, whereas AlloMap is a commercially available gene expression test for heart transplant rejection, and Corus CAD was on the market for a time, measuring the likelihood of obstructive coronary artery disease in patients with chronic angina symptoms. And so the highlight here, again, is that there are gene expression tests that have been developed for a variety of conditions (these are some of them, just as examples; I won't belabor each individual one), but why not for infectious disease? Well, some of the challenges are largely technical: with an infectious disease, you can't really afford to take a sample, send it to a referral laboratory, and get your result a few days later. Decisions generally need to be made at the point of need, where the patient is there and you're making a decision about whether to treat them with antibiotics, and if so, what class of antimicrobials, for example, an antibacterial, antiviral, or antifungal. And so results need to be available at the point of need. They need to be available rapidly. The test, ideally, in order to support its utilization, should be of low complexity. Ideally, to support a low-complexity test, it should be one that integrates sample processing. These are blood-based samples for the most part. The tests, as Dr. Bauer described for the InSep test, as well as the one I'll describe momentarily, are multiplexed: we're not just measuring one biomarker, we're measuring many mRNA biomarkers, so we need to be able to do that all in unison. You need to incorporate normalization and standardization processes as well. And then, of course, there are challenges for adoption in terms of the cost of the test and the workflow. Is it clinically valid? And ultimately, really, the major question is whether it's clinically useful.
I will add that there is one FDA-cleared host response test for infectious disease, which is SeptiCyte, but that test is not commercially available for purchase as of yet, at least to my knowledge. And so the signature I'll tell you about here is one that we've been developing for, I guess, nearly 15 years; I've been focusing on developing the field of host response-based diagnostics for quite a while. This particular signature was published in 2016 in Science Translational Medicine, although we have significantly reduced its size since then. And as with the signature that Dr. Bauer was describing, we validated this signature in silico in many thousands of individuals. But again, the challenge is: how do we actually generate that test? So we've done some work translating host response signatures to clinical platforms. One of them, in the top bullet there, is a description of a lab-based, high-complexity research assay similar to the NanoString that was in the earlier paper. And then we also described the development of a gene expression test for viral infection diagnosis using a sample-to-answer format in collaboration with a company called Qvella. And what I'm going to tell you about now is essentially our work to do something similar with another company, BioFire, that many of you will be familiar with; I'll describe a little bit of their technology momentarily. This study describes the development of a bacterial-versus-viral test using the BioFire system. BioFire, if you're not familiar, makes syndromic panels for use at the point of need, whose run times are anywhere from 45 minutes to an hour and which only require a couple of minutes of hands-on time. The commercially available panels are those that are listed there, and what the instruments look like is on the right. They come in either a single-run format or a stacked format, the tower, which is the image depicted on the bottom right. But the magic really happens in the pouch, and that's depicted on the left.
I won't go through the details, but basically all of the elements that I described in terms of sample preparation, lysis, cleaning, amplification, and detection all happen in this integrated pouch, which is then disposed of after a single use. And so, in the experimental flow for our study, we started off with 623 individuals in total who were analyzed, and they fell into two main groups. There were 422 subjects, about two-thirds of them, in the adjudicated group: individuals we call microbiologically confirmed, who had an identified pathogenic cause of their illness (135 bacterial, 183 viral), plus 104 individuals who were adjudicated as having a non-infectious illness. That group was then randomly split into a training cohort and a validation cohort; you can see those numbers demonstrated there. And then we had a number of individuals whom we determined to have an indeterminate phenotype, defined as people who had a clinical syndrome that was suspicious for a bacterial etiology. For example, they might have presented with a lobar pneumonia, purulent sputum, fevers, cough, and chest pain, and clinically looked for all the world like a bacterial pneumonia, but no bacterial pathogen was identified. Or perhaps they had typical cold symptoms, but no virus was found, and so we considered those suspected. We also had 36 cases of co-infection, where bacterial and viral pathogens were both identified. Again, I won't go into significant detail here, but generally speaking, the cohort had a mean age in the mid-40s and was fairly evenly split between male and female, with a slight predominance of males among the suspected bacterial cases but otherwise a slight predominance of females in the other categories. We had a racially diverse population that was fairly evenly split between white and Black, with a small representation of other groups. As far as the etiologies, again, those numbers were presented earlier.
You can see here that in the training cohort, roughly 39% (generally speaking, the numbers were in the high 30s) had fevers, and about half of the population was hospitalized. That number was much smaller in the group of suspected viral infections, where the rates of hospitalization were generally much lower. What I'll highlight here, and I'll just take a moment to explain what this is: every individual essentially has two results that come out of the test. One of them is the probability that they had a bacterial infection; the other is the probability that they had a viral infection. Those two are, for all practical purposes, independent of one another. Each individual patient is represented by a circle, and their placement on the graph is indicative of the combination of their bacterial probability and their viral probability. The ground truth was clinical adjudication, which we acknowledge, just as Dr. Bauer did, is an imperfect reference, but it's the best that we've got so far. The blue circles are individuals who were adjudicated as bacterial infection. This region of the plot here is going to be the people who had a high probability of bacterial infection, essentially anything above this threshold, and who had a low probability of viral infection, which is anyone below this horizontal dotted line. The viral infection cases should be clustering in the top left. Patients who had no infection, in other words, those who had a low bacterial and a low viral probability, should be in the bottom left. And then, in principle, people who are in the top right area could be considered as potentially having co-infection. And so the training cohort is on the left, the validation cohort is on the right. You see the patterns here pretty much hold true. The area where we saw the greatest error was in the non-infectious group, who had some spillover into the bacterial area and a few spillovers into the viral area.
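The two-probability readout just described can be sketched as a simple quadrant rule: two independent probabilities, two thresholds, four possible calls. The thresholds below are illustrative defaults, not the study's actual cutoffs.

```python
# Sketch of the two-probability readout: each patient gets an independent
# bacterial probability and viral probability, and the two thresholds
# divide the scatter plot into four quadrants. Thresholds are illustrative.

def classify(p_bacterial, p_viral, t_bacterial=0.5, t_viral=0.5):
    """Combine the two probabilities into a single qualitative call."""
    bacterial = p_bacterial >= t_bacterial
    viral = p_viral >= t_viral
    if bacterial and viral:
        return "possible co-infection"   # top right of the plot
    if bacterial:
        return "bacterial"               # bottom right
    if viral:
        return "viral"                   # top left
    return "non-infectious"              # bottom left

print(classify(0.85, 0.10))  # bacterial
print(classify(0.10, 0.90))  # viral
```

Keeping the two probabilities independent, rather than forcing one mutually exclusive label, is what allows the co-infection quadrant to exist at all.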
And we chose these thresholds to allow for a slightly higher false positive rate among non-infected patients in order to minimize the risk of having false negatives, meaning taking someone who has a bacterial infection and incorrectly claiming that they don't. That would be the greatest clinical error one could make, and so we tried to minimize that possibility with our thresholds. This sums up essentially what some of those key performance metrics are. In our training cohort, we have AUCs of 90% for bacterial and 92% for viral; procalcitonin was 84%. And the sensitivities were in the mid-80s for both bacterial and viral infection, but you can see that for procalcitonin, while the specificity was similar, the sensitivity was much lower; this was statistically significantly lower. In the validation cohort, we see the numbers drop off just a little bit: the AUC for viral infection actually is more or less the same, while for bacterial infection it drops off. And for procalcitonin, even though the same procalcitonin cutoff of 0.25 was used in both groups, for whatever reason, the accuracy is quite a bit lower here. But these trends hold true in the sense that the performance of the gene expression test remains superior to procalcitonin. Our population included, for the most part, patients with respiratory infections; that held true for almost all of the viral cases and for a majority of the bacterial cases, in terms of any particular anatomic site. But we also looked at non-respiratory infections, specifically regarding bacterial etiologies. And when we compare respiratory to all non-respiratory, we see similar sensitivity: PPA is essentially sensitivity, 88 versus 84 and, I'm sorry, this is in our validation cohort, a sensitivity of 80 versus 79. The one area, and I'll highlight it here, is the urinary tract, where we do see a little bit of a drop-off, and this is for reasons that were unclear to us when we looked at the specific details of the cases.
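The threshold-setting logic described above, accepting more false positives so that missed bacterial infections stay rare, can be sketched as picking the highest cutoff that still achieves a target sensitivity. All numbers are illustrative toy values.

```python
# Sketch: choose a decision threshold that guarantees a target sensitivity
# on the true-positive scores, trading specificity for fewer false
# negatives. Scores and targets below are invented for illustration.

def threshold_for_sensitivity(positive_scores, target_sensitivity):
    """Return the highest cutoff t such that the fraction of positives
    with score >= t is at least target_sensitivity."""
    for t in sorted(set(positive_scores), reverse=True):
        sensitivity = sum(s >= t for s in positive_scores) / len(positive_scores)
        if sensitivity >= target_sensitivity:
            return t
    return min(positive_scores)

bacterial_scores = [0.95, 0.88, 0.72, 0.31]
print(threshold_for_sensitivity(bacterial_scores, 0.75))  # 0.72
```

Lowering the cutoff this way raises sensitivity at the cost of calling more non-infected patients positive, exactly the trade-off the speakers say they made deliberately.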
But the accuracy for urinary tract infection was a little bit lower: about 10 percentage points lower in the training cohort and 15 in the validation cohort. Other sites of infection were either very well characterized and highly accurate, or perhaps had too small a number to really draw a significant conclusion from, such as the skin and soft tissue category. I'll also highlight both categories of the indeterminate cases I spoke about before. On the left are cases of superinfection, and this itself falls into two categories. The suspected ones were those where we did not have microbiological confirmation of there being both a bacterial and a viral etiology, but based on the clinical presentation it was suspected. Confirmed cases are the ones where we actually did have microbiological confirmation, and those are the ones in blue. I'll highlight that all of the confirmed superinfections, or bacterial-viral co-infections, were classified as bacterial, whereas the ones that were suspected, you see, have a little bit of a different pattern: many of them were bacterial, some were actually in this co-infection group, and many of them were classified very confidently as just being viral. And that raises the question as to whether some of the things that we're seeing as clinically suspected co-infections may in fact simply be viral infections, or perhaps a secondary viral infection. Unfortunately, there's no way to know the ground truth with a capital T, so to speak. Among the cases of suspected infection, we also see, I think, an interesting pattern. Again, it's impossible to know what the truth is in these patients, since these were just suspected as being bacterial or viral. But in the suspected bacterial group, the vast majority of them are in fact classified as bacterial; some are classified as non-bacterial and non-viral; and then some have very high predicted probabilities of having a viral infection. In our suspected viral group, most are viral.
A good number of them don't seem to have an infection at all based on their host response, and then a good number of them are bacterial as well. And again, this really raises the question of how accurate our clinical suspicions of bacterial and viral infection really are. And of course, it's impossible to differentiate whether these are test inaccuracies or adjudication inaccuracies. The next thing I want to tell you a little bit about is how we go about reporting the results. What I mean by that is that there really is no standard; again, for host gene expression tests for infectious disease, there aren't really very many of them, and there are a number of different ways in which one can report results. So far, all of the results I've described to you have been using the top scheme, which is a single threshold that defines whether or not someone has a bacterial infection, or whether they have a viral infection or not. Those thresholds don't need to be the same, but it's essentially one single cutoff. The equivocal zone is a strategy that some other host response approaches have taken. Here, what they do is recognize that results that fall in that middle zone are not really clearly diagnostic of a bacterial infection or the alternative. And so you can take those results and effectively throw them out, right? You get somebody that has a result in the equivocal zone, and we simply don't know what to do with that; it's as though you never tested that individual at all. Your post-test probability really hasn't budged. The other option is to use something like bands, and this is along the lines of what Dr. Bauer described. You could think of them in loose terms as quartiles, but they need not be quartiles; they can be defined in any one of a number of ways. This is also the way in which SeptiCyte reports its results, in bands.
And I bring this up because we took our results and asked what impact each scheme would have on the performance characteristics we get, and at what cost. I've highlighted two rows there, which focus on the validation cohorts. The bacterial model is, again, asking whether you have a bacterial infection or you don't; if you don't, it could either be viral or non-infectious. The viral model asks whether you have a viral infection or not; the alternative, if you don't, is that you either have a bacterial infection or some non-infectious illness. So if we take the bacterial model first: the sensitivity, the percent positive agreement, is 79% in the validation cohort, and specificity is 81% in the validation cohort using a single threshold, but we get results that are interpretable in 100% of the population. When we use an equivocal zone, you can see that our sensitivity goes up a few points and our specificity goes up a few percentage points, but now we lose the opportunity to have interpretable results in 12% of the population. If we focus on quartiles or bands, now we can really see the numbers start to look very good by focusing only on individuals who fall in the top and the bottom bands, but this comes at the expense of missing out on the results for half of the population. In the viral model, you see the same trends. The numbers are pretty good for the single threshold; they actually drop off in sensitivity but improve in specificity when you have your equivocal zone, and then they get even better when you look at your bands. So, again, we're not necessarily indicating which is the right approach, but just that there are many ways of doing this. Each comes with advantages and disadvantages: better performance characteristics usually trade off against a smaller percentage of the population to which those results are applicable.
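The trade-off between performance and interpretable fraction can be made concrete with a small sketch: a single threshold calls everyone, while an equivocal zone withholds calls in the middle and improves the metrics on the remainder. All data below are toy values.

```python
# Sketch of the reporting-scheme trade-off: tighter rules (an equivocal
# zone, or bands) improve sensitivity and specificity but leave a fraction
# of patients without an interpretable result. Data are invented.

def evaluate(scores, labels, low, high):
    """Scores below `low` are called negative, at or above `high` positive,
    and anything in between is uninterpretable. Setting low == high
    recovers the single-threshold scheme.
    Returns (sensitivity, specificity, fraction_interpretable)."""
    calls = [(s, y) for s, y in zip(scores, labels) if s < low or s >= high]
    tp = sum(1 for s, y in calls if s >= high and y == 1)
    fn = sum(1 for s, y in calls if s < low and y == 1)
    tn = sum(1 for s, y in calls if s < low and y == 0)
    fp = sum(1 for s, y in calls if s >= high and y == 0)
    sens = tp / (tp + fn) if tp + fn else None
    spec = tn / (tn + fp) if tn + fp else None
    return sens, spec, len(calls) / len(scores)

scores = [0.10, 0.20, 0.45, 0.55, 0.80, 0.90]
labels = [0, 0, 1, 0, 1, 1]  # 1 = adjudicated bacterial
print(evaluate(scores, labels, 0.5, 0.5))  # single threshold: all calls made
print(evaluate(scores, labels, 0.4, 0.6))  # equivocal zone: better, fewer calls
```

On these toy numbers, the equivocal zone lifts both sensitivity and specificity to 1.0 while leaving a third of patients uncalled, mirroring the pattern in the tables the speaker describes.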
And just to give you, again, a flavor of what you can see if we report performance by bands: the top two are the bacterial model for the training cohort and the validation cohort, and you can see that the sensitivity in the highest band is 100%. In the discovery group... in the validation group, it's 94%, and the specificity in the lowest band is 90 to 92%. For viral infection, you can see that the sensitivity is 97% and the specificity is 95 to 98%, depending on whether you're looking at the training or the validation cohort. So, the results in those highest and lowest bands hold up very well and give you really exceptional performance characteristics for the test, albeit with a smaller number of individuals to whom those results apply. But even if you look at the results in the next bands, the ones in the middle, which I don't have circled, you can see that they expectedly drop off, but they still remain pretty good, and notably very much better than what you see with procalcitonin and currently available testing. So, even with those middle bands, you're getting an improvement over current clinical diagnostic testing and strategies. And so, what I want to conclude with here is to highlight that this is really the first development of a host gene expression test for bacterial-versus-viral discrimination on a commercially available platform, in contrast to the results reported by Dr. Bauer, though there certainly seems to be a lot of overlap. I will highlight that this is a research-use-only assay; it is not available commercially. But this would be a test that gets you results in 45 minutes, as opposed to the NanoString, which is really more of a... certainly is a commercial assay, but is a high-complexity and much slower test. It is certainly an important stepping stone to getting something similar to what we have here. This test was superior to procalcitonin, and we demonstrated the ability to detect bacterial and viral co-infection.
I didn't show you the results, but there was no impact by age, race or ethnicity, or comorbidity. And again, we can deliver results. This demonstrates the ability to deliver results at the point of need in as little as 45 minutes and potentially even faster. As far as future directions of our particular work, we have ongoing prospective validation in larger and more heterogeneous populations. It will be critical to evaluate the clinical utility of a test like this on outcomes such as antibacterial utilization. And then it's also very much of interest, and again, work that we have in process, to evaluate the kinetics of the host response related to treatment. And so similar to what you might see with procalcitonin in terms of normalization with treatment, can you see the same thing? And I'll just preempt that by saying that our preliminary data do support its use in that context. And then, of course, to expand the pipeline of other host gene expression tests: there are very many gene expression signatures for infectious diseases and other diseases that we have developed. And by building out this pipeline to get from signature discovery to test development and ultimately verification and validation, we anticipate that this is really going to be at the vanguard for the development of diagnostic and prognostic tests going forward. So I will end there and open it up to any questions that might come up. Well, thank you very much, both Dr. Bauer and Dr. Tsalik. That was a very educational talk on, I think, some of the future of what we're going to see in medicine, which is great. And this is just a reminder for anyone in the audience: feel free at any time to use the questions tab to ask questions, and I'll get to them as soon as possible. So one of the first questions touches on something from my own practice. I practice in a surgical trauma burn ICU.
Procalcitonin really has a lot of limitations, especially in people who come in with surgery, major trauma, and burns. Were there any such patients in either of your studies? And would either of these rapid gene expression tests, or the IMX-BVN-2, be affected by patients coming in with major surgery, major trauma, or burns? Well, our cohort did not include any patients with burns or trauma, but the IMX-BVN-2 algorithm has been validated using data from these patients, not by us but by Inflammatix. Another manuscript was recently published in Critical Care Explorations, and in addition to that, Inflammatix is planning specific studies in burn patients. I'll echo that as well on our side. Our signature was specifically developed with those patient populations in mind, and our non-infectious cohort specifically included a variety of conditions, including trauma. There were some burns, but admittedly not many in our population, along with things like pancreatitis and other highly inflammatory conditions. And so the machine learning algorithms that were used to discover the signature, by incorporating those in the background, were able to select differentially expressed genes that specifically distinguish between those other inflammatory states and bacterial or viral infection. No, thank you. I think that's very cool, because I think that's the limitation of what we truthfully have commercially available today for most people: you have procalcitonin or something else that's really just a general marker of inflammation. To have these more dynamic tests that you talk about is actually going to help, because I do see, especially in patients who might have aspirated on the floor, that having a test to rule out whether you need some sort of antimicrobial therapy is actually going to be a game changer once we get it up and running in more and more ICUs.
So it's awesome to see that that's going to come in the near future. Now, do these tests have any utility with fungal infections? Yeah, I can jump in on that one, Tony. That's also a fantastic question, especially if you have someone coming in with sepsis. We know that fungal sepsis mortality is particularly high, and a large part of that is simply because antifungal therapy is not part of the empiric antibiotic regimen. And it's often not started until cultures come back positive, leading to a lot of delay in initiation of appropriate therapy. As it turns out, the host response, and I have to admit I was a little surprised by this myself, but the host response to fungal infection is quite distinct from the response to bacterial infection. And so we have some publications describing the host response to fungal infection and demonstrating that it's actually quite different. When we apply this particular signature, it does essentially identify patients with fungal infection as non-bacterial and non-viral, which may not be all that helpful in terms of positively identifying fungal infection. But as I said, we have another signature that can be used to distinguish fungal from bacterial infection in particular, and that is certainly in our pipeline for development. Yeah, and I may add, the IMX-BVN-2 test does not have a fungal readout, and outside of immunocompromised patients, fungal infections are quite rare. Well, thank you very much. It's interesting to see the whole host response and the genomics of how some people are more predisposed to fungal infection versus bacterial infection. And I think if we can have some sort of functional test like this, it's going to do our patients good. I do have a question that I think is very interesting: what is the impact of prior antimicrobial exposure on these assays?
For example, if a patient has been exposed to multiple courses of antibiotics and may have a lower yield of bacterial counts, but still appears extremely septic, are these tests valid in them? And if I might add, these patients often have differences in their microbiota; are the tests sensitive enough to assess some of those differences, especially in our patients with prolonged ICU stays? Yeah, maybe I will start here. So, we have these data in our cohort, and we found about 10 to 15 percent of patients were already on antibiotics when they were enrolled, when they came to our emergency department. And we didn't exclude them from the trial, so we included them, but we are not yet ready to give an answer; that's coming in the next manuscript about these data. And that's a very interesting question, because it will have some clinical impact: you would get an idea of whether you need to change the antibiotics. If you combine the IMX-BVN-2 algorithm with the severity algorithm, and you see a patient already on antibiotics who still has a high severity index and an index for bacterial infection, it might be time to change the antibiotics. But these are data that are not yet analyzed. You know, on our side, we've looked at that data, and we show that the test is still valid in people who've been treated with antibiotics. The assumption there, though, is that those are still individuals who, despite receiving antibiotics, are still coming in sick. Of course, if somebody has been treated and they're now feeling better, that may not necessarily be the population that you would enroll and analyze in this particular context, but if they're coming in and still quite sick in spite of prior antibiotics, we don't really pick up a difference in terms of how the host response BV test performs. And when you think about it, to me at least, that kind of makes sense, right?
Antibiotics have an antimicrobial effect, but not necessarily a particularly potent anti-host effect, and so if a patient has been receiving antibiotics, perhaps they're ineffective antibiotics, and so the infection continues to rage, or perhaps they're effective but just insufficient because you haven't achieved source control. But in those scenarios, your patient is still experiencing a very acute illness, and their immunological response is still very robust, and that's really what we're measuring. So if the patient is still quite ill related to their infection, that response will be there regardless of whether they've been receiving antibiotics or not. Thank you very much, and I guess one of my follow-up questions, then, is what about those that might have an abnormal immune response, either those specifically that might be febrile and neutropenic because they've been getting their chemotherapy for cancer, or maybe transplant patients that are on immunosuppressive drugs? I have shown the data for immunocompromised patients, and we have in this immunocompromised cohort also a small number of neutropenic patients, so my answer might be underpowered, but so far the test has performed well in neutropenic patients. Yeah, in our hands, we've previously published another paper in Clinical Infectious Diseases specifically asking how the signature performs in immunocompromised patients, and most of those were a mix of transplant, chemotherapy, biologics, steroids, a variety of different forms of immunocompromise. And the general conclusions are that the test was a little bit less accurate. The numbers were not huge. It was a couple of hundred patients across both groups, and in some cases, it was statistically lower.
I would say that those are populations where having your diagnostic test be right is just that much more important, and my view is that even with that decrease, a test like this can still be useful, but it really shouldn't be the only determinant of treatment for those sorts of patients, again, because the stakes are a bit higher. And I think it stands to reason that depending on the degree of immunocompromise, you can certainly expect that a host response approach, which is essentially a reading of the immune system, may not work quite as well. I think febrile neutropenia is a group that we in particular have not looked at, and I think there the challenge is simply having enough white cells to measure the response from, and so I fully expect that there are going to be certain populations where this is just simply not the right approach. But immunocompromise as a rule is not necessarily an exclusion for somebody to be considered for a host response-based test. I think it really just depends on the nature and the degree of immunocompromise. Yeah, it seems like with all of these tests, it's not one-size-fits-all, and we tend to think of things dichotomously: they're either immunosuppressed or they're not. But someone who just underwent a stem cell transplant and just got massive chemotherapy might be inherently different from someone who's on a biologic for their asthma, for example. Exactly. So I think, obviously, this is not going to be the silver bullet and be the end-all be-all, but it's really another piece in therapy and diagnosis. And I think another question that comes to mind is, would these tests be used once? Are they something that can be repeated serially? Or, more importantly, what about when you have a patient who comes in, who you didn't know if they had a trauma or aspirated, and they've been in the ICU because of their COPD, and then two weeks later is looking septic again?
Is it something that we can repeat, or are they really just a one-time test? I think there's certainly the opportunity to have this be a serial test to assess response to therapy, for example, or if you see that someone has normalized and then they become newly ill while in the hospital, could there be something new going on? I think we need to be careful about being overly optimistic without the data to actually support it, because, for example, we know that the immune system just doesn't work quite the same in somebody who's already had sepsis or a critical illness. They end up in this hypoimmune state, and I think we would need to show prospectively in future cohorts whether or not these host response tests work equally well in somebody with a secondary hospital-acquired infection as they do in somebody who's showing up for the first time with that infection. Yeah, and for the IMX test, it is designed to give an answer just at the point of admission to the hospital, but maybe in an ICU setting, rather than an emergency department setting, there might be a use for longitudinal analysis. Well, thank you very much. I think this was very informative, and it's nice to see what the future holds in store. So, I want to thank everyone for joining us, and thanks to both our presenters yet again, Dr. Bauer and Dr. Tsalik. And again, for everyone who joined us today for the webcast, you will receive a follow-up email that will include an evaluation. Please take five or so minutes to complete your evaluation, as your feedback is greatly appreciated. And on a final note, please join us for our next Journal Club on Thursday, November 18th. This concludes today's presentation. Have a great day.
Video Summary
The webcast featured two articles from the October issue of Critical Care Medicine. The first speaker, Dr. Wolfgang Bauer, presented on the prospective application of the INSEP test and the BVN-2 algorithm for the diagnosis of bacterial and viral infections in the emergency department. The INSEP test is a host immune mRNA expression microarray that can classify bacterial and viral infections based on a set of seven genes. The BVN-2 algorithm combines the INSEP test with an 11-gene sepsis metascore to differentiate sepsis from noninfectious inflammation. Dr. Bauer presented the results of a trial conducted at the Charité University Hospital in Berlin, which showed that the INSEP test had high accuracy in identifying bacterial infections, comparable to that of procalcitonin. The test also performed well in identifying viral infections. Dr. Bauer noted that the test has the potential to improve patient outcomes while reducing unnecessary antibiotic use. The second speaker, Dr. Ephraim Tsalik, discussed the development of a bacterial versus viral test using the BioFire system. The system is a point-of-care platform that delivers results in 45 minutes. Dr. Tsalik presented the results of validation studies, which showed that the test was superior to procalcitonin in distinguishing between bacterial and viral infections. The test also demonstrated the ability to detect bacterial and viral co-infections. Dr. Tsalik noted that the test could be a useful tool in the diagnosis of infectious diseases, particularly in critically ill patients. Overall, the webcast highlighted the potential of host response-based diagnostic tests in improving the diagnosis and management of bacterial and viral infections in the emergency department.
Asset Subtitle
Infection, Research, 2021
Asset Caption
"The Journal Club: Critical Care Medicine webcast series focuses on articles of interest from Critical Care Medicine.
This series is held on the fourth Thursday of each month and features in-depth presentations and lively discussion by the authors.
Follow the conversation at #CritCareMed."
Meta Tag
Content Type
Webcast
Knowledge Area
Infection
Knowledge Area
Research
Knowledge Level
Intermediate
Knowledge Level
Advanced
Tag
Infectious Diseases
Tag
Outcomes Research
Year
2021
Keywords
webcast
Critical Care Medicine
INSEP test
BVN-2 algorithm
bacterial infections
viral infections
procalcitonin
BioFire system
host response-based diagnostic tests
Society of Critical Care Medicine