April Journal Club: Critical Care Medicine (2023)
Video Transcription
Hello and welcome to today's Journal Club Critical Care Medicine webcast. This webcast, hosted and supported by the Society of Critical Care Medicine, is part of the Journal Club Critical Care Medicine series. This webcast features two articles that appear in the April 2023 issue of Critical Care Medicine. This webcast is being recorded. The recording will be available to registrants on demand within five business days. Log in to mysccm.org and navigate to the My Learning tab. My name is Talal Zagmani and I'm a professor of intensive care medicine at Cardiff University in the United Kingdom. I will be moderating today's webcast. Thanks for joining us. Just a few housekeeping items before we get started. There will be a Q&A session at the conclusion of both presentations. To submit questions throughout the presentation, type into the question box located on your control panel. If you have a comment to share during the presentations, you may use the question box for that as well. And finally, everyone joining us for today's webcast will receive a follow-up email that will include an evaluation. Please take five minutes to complete this. Your feedback is greatly appreciated. Please note the disclaimer that the content to follow is for educational purposes only. And now I would like to introduce today's presenters. Dr. Elmer graduated from Mount Sinai School of Medicine in 2008 and completed residency in emergency medicine at the Massachusetts General Hospital Brigham and Women's Hospital Combined Training Program. He completed critical care medicine and neurocritical care training at the University of Pittsburgh before joining as faculty. His research is focused on improving delivery of acute post-arrest care to improve patient outcomes. 
Specific domains of his work include elucidating the effect of neurocritical care and systems of care on patient outcomes, advancing the science of neurological prognostication and post-arrest risk stratification, and developing robust methods for analysis of continuous correlated physiological and EEG data. Julia is a clinical nurse specialist in the intensive care unit at the Royal North Shore Hospital, Sydney, Australia, and a clinical research fellow with the University of Sydney. She has extensive clinical experience working in the general ICU and is currently working with the Agency for Clinical Innovation to develop a method to identify pressure injuries in clinical progress notes using natural language processing techniques. Julia has a Master's of Intensive Care Nursing from the University of Sydney and is currently a PhD candidate with the Faculty of Health at the University of Technology, Sydney. Her PhD project is investigating the effects of pre-existing mental health disorders on patients admitted to adult intensive care units. Thank you both for joining us today. I will now turn the presentation over to Dr. Elmer. Thank you so much. Thank you all for being here. The article that I'm going to talk about today was titled Time to Awakening and Self-Fulfilling Prophecies After Cardiac Arrest. My disclosures are here. I do have grant funding that both supported this work and is related to prognostication and predictive analytics after cardiac arrest. All of these disclosures are either my employer or grants to the institution, nothing to me directly. Self-fulfilling prophecies are a broadly general problem, I think, in clinical medicine. They really affect the bedrock on how we think clinically as doctors and nurses and bedside providers, how we do research and how we generate knowledge in medicine. So before sort of getting into the weeds of the article and the methodology, I wanted to highlight the clinical relevance of this work first. 
So when I was a fellow, one of the faculty in our cardiothoracic ICU, who had decades at that point of experience, was explaining to me that patients with a lactate more than 20, in the context of a particular disease entity that was common in the unit, in his experience just don't recover. They don't do well. In a best-case scenario, they languish on ECMO for days or weeks. The care for these patients is futile, costly, undignified. And he had a lot of wisdom. He was a very seasoned practitioner. And, you know, I think he advocated what any of us would do in the context of perceived futility or medically ineffective care. He would advocate to families, as part of well-done shared decision making, that they transition towards comfort-oriented care, including a palliative extubation, when faced with really dismal prospects for meaningful recovery. Families typically follow our treatment recommendations, at least in our hospital. They tend to defer to the physicians and say, what would you do, doc? What should I do? And so when families follow our recommendations to transition towards comfort-oriented care, they make decisions that fairly consistently result in a comfortable and a dignified death of the patient. And we learn from these observations. We can't help but learn from our life experience in the ICU, and our prior belief that this condition, elevated lactate and the disease, would invariably lead to death gets reinforced. And it perpetuates a belief that we might have clinically that these patients are too sick to recover, because we've never seen them recover. With a little bit of simple mathematical formalism, using this example to build on it, what we think we're learning, without thinking too critically about it, is the probability that a patient will recover given that they have disease X and a lactate more than 20. And that's, I think, how most of us think clinically: given what I know about this patient, do I think that they'll recover?
What we want to learn, certainly from a research perspective or from a knowledge generation perspective, is the probability of recovery given the disease, the test, in this case, the lactate and continued aggressive care, sort of the best case scenario for recovery. But what we actually learn, at least in this example, is the probability of recovery given the disease, the test result and comfort care, which not surprisingly, is near zero. More generally, what we learn from our observations is the probability of recovery given the clinical stuff we know and also our treatment decisions, which is an important distinction from what we typically actually are studying when we look at data sets, even trials where treatment decisions may be targeted non-randomly after the initial randomization to one treatment arm or another. In prognostication research, which is what I do, most studies are of observational data sets. They include clinical characteristics for cardiac arrest. How old is the patient? What was their arrest rhythm? How long was their arrest? What did they look like after their cardiac arrest? And it includes their outcomes. But what it doesn't record, again, is these treatment decisions, which essentially obscure our ability to learn a ground truth outcome in situations where life-sustaining therapies are limited or withdrawn. We never have an opportunity to see what the counterfactual recovery potential might have been had we made a different treatment decision. And so, again, the association which we really want to learn, which is the probability of recovery given a set of clinical information, is not what we're actually learning. What we learn is the probability of recovery given the clinical information and the treatment decisions that we've made. The clinical folks in the audience know this already, but cardiac arrest is a leading cause of morbidity and mortality in high-income nations. And in the U.S. 
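The conditional-probability distinction drawn here can be written out explicitly (notation mine, summarizing the spoken argument rather than any equation from the paper):

```latex
% What we informally learn at the bedside:
P(\text{recovery} \mid \text{disease } X,\ \text{lactate} > 20)

% What we would like to learn (recovery under continued aggressive care):
P(\text{recovery} \mid X,\ \text{lactate} > 20,\ \text{aggressive care})

% What observational data actually estimate once care is limited:
P(\text{recovery} \mid X,\ \text{lactate} > 20,\ \text{comfort care}) \approx 0
```

The self-fulfilling prophecy arises when the third quantity is mistaken for the second.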
and in much of Europe, most hospital deaths, in the U.S. it's about 6 out of 10, occur after a treatment decision to withdraw life-sustaining therapies based on a perceived poor neurologic prognosis. So in these patients, and in neurologically critically ill patients more generally, there's this tremendous potential risk that the outcomes that we observe are an artifact of the treatment decisions we make rather than reflecting a ground truth recovery potential. We develop treatment guidelines, and we use those, but we develop those treatment guidelines based on the associations which were learned from these observational data sets. Which makes them suspect. And as a potential guard against this, we recommend acquisition of multiple independent prognostic modalities. But these prognostic modalities assess what are fundamentally the same underlying pathophysiology, the same underlying brain injury, the same underlying organ failure. And so just getting more independent measures of the same pathophysiology doesn't really guard against the risk of self-fulfilling prophecies. So our study concept is broadly that there is a path forward, that we can still learn from observational data, even with this risk of self-fulfilling prophecies, but that we need to take precautions. We don't advocate simply throwing up our hands and saying we can't trust any knowledge. We still need to learn, and we need to do the best we can, but we need to account for the fact that our observed outcomes may be influenced or contaminated by these self-fulfilling prophecies. And more specifically, our concept is that death observed after a decision to withdraw life-sustaining therapies does not reflect a credible ground truth from which knowledge can be acquired. Many of these patients may indeed have irrecoverable patterns of injury. We certainly hope so. We hope that we're not withdrawing life-sustaining therapies from folks who are likely to do well.
But we don't know, and we can't be confident in the observed outcomes in those cases. And so specifically, we hypothesize that models that would treat death after withdrawal of life-sustaining therapy as a ground truth would be systematically more pessimistic than those which treat those patient outcomes as unknown, particularly in groups of patients who are at high risk of withdrawal of life-sustaining therapies or patients for whom it is common. We use data to test this hypothesis from patients treated at two academic medical centers, one here in Pittsburgh and University of Alabama. There were about 1,250 patients, all of whom were resuscitated from cardiac arrest or comatose on arrival with some of the usual exclusions for trauma or neurologic etiologies of arrest. And all of the patients underwent continuous EEG monitoring as standard of care, in large part because EEG features were one of the main prognostic factors that we were interested in. We got a lot of features extracted from the electronic health record, both time-invariant stuff like their clinical and arrest characteristics, quantitative EEG features, SOFA scores, medications, many thousands of features. And our primary outcome of interest was to predict awakening from coma as a necessary first step in a functionally favorable recovery from cardiac arrest. Obviously, awakening from coma is not a gold standard outcome, but it was reliably assessed in all of these patients at least daily, if not more frequently. And you can't have a high-quality recovery without awakening. And in fact, at both of these centers, most patients who awakened also did well overall. There are three models that I want to highlight, and then we'll look at the results from them. One was the sort of traditional approach. Did I see the outcome or not? Did this patient awaken? Did this patient not awaken? For that, we used logistic regression that predicts a yes-no outcome. 
The main comparison model was a Cox regression, which predicted awakening or time to awakening, but censored the outcome after withdrawal of life-sustaining therapies. It said, I don't know what would have happened in the future. It treated the patient as though they were, for example, lost to follow-up. And then to identify patients at high risk of withdrawal of life-sustaining therapies, we developed the third model, which simply predicted, did the patient have withdrawal of life-sustaining therapies or not? And from that, for each individual patient, we could derive their predicted probability of withdrawal, regardless of whether or not it occurred. This is a high-level snapshot of the summary. We used some machine learning approaches for feature selection, a lasso both for the logistic and the Cox regressions, and divided the sample into training and test sets. For some of the models, based on available data, we evaluated performance only on one center, the Pittsburgh patients, which was about 1,000 of the patients. Where data were available from both EHRs, we tested the model on the Alabama patients as an external test set. And the model performance metrics, which I'll show you, are derived from pooled test sets, so they reflect the full cohort of patients, but are all out-of-sample predictions. Overall, the models performed pretty well. Both the binary, did the patient wake up, yes, no, and the time-to-event regression had a mean area under the curve in the test sets of 0.93. I think it's striking how similar the receiver operating characteristic curves are for the two models. This is sort of the graph that I think highlights the meat of the results of the analysis across the x-axis for each patient in the test set is their predicted probability of withdrawal of life-sustaining therapies. And you can see a lot of red dots among those at high probability of withdrawal. This model also performed very well. 
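To make the two outcome encodings concrete, here is a minimal sketch in Python. The toy data and column names are mine, not the study's; it simply contrasts the traditional binary "awakened yes/no" label with a survival-style encoding that censors follow-up at withdrawal of life-sustaining therapies (WLST):

```python
import pandas as pd

# Hypothetical toy cohort; times are days from hospital arrival.
# awaken_day is NaN if the patient never awakened.
df = pd.DataFrame({
    "patient": [1, 2, 3, 4],
    "awaken_day": [2.0, None, None, 5.0],
    "wlst_day": [None, 3.0, None, None],      # day of WLST, if it occurred
    "followup_day": [10.0, 3.0, 10.0, 10.0],  # end of observation
})

# Traditional binary label: death after WLST counts as "did not awaken"
# -- the ground-truth assumption the study questions.
df["awakened_binary"] = df["awaken_day"].notna().astype(int)

# Survival-style encoding: the event is awakening; a WLST patient
# contributes follow-up only until withdrawal and is censored there,
# as if lost to follow-up.
df["event"] = df["awaken_day"].notna().astype(int)
df["duration"] = df["awaken_day"].fillna(
    df["wlst_day"].fillna(df["followup_day"])
)
```

A Cox model fit on (duration, event) treats patient 2's outcome as unknown after day 3, whereas the logistic model scores that same patient as a confirmed non-awakening.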
Not surprisingly, patients who look really bad when they come into the hospital and have multiple unfavorable clinical and prognostic characteristics are at high risk of withdrawal based on their perceived poor neurologic prognosis. And those who look pretty good have a much lower predicted probability of withdrawal. Many of them awaken (the black dots here). On the y-axis here is the difference between the Cox and the logistic models, and I don't want to get too much into the weeds of how we compared them. The output from a Cox regression isn't really analogous to a probability like a logistic model prediction, but we did some math on the back end to allow a direct comparison of the two models. And the important point here is the fit line and the sort of overall distribution of the plots. What we see here, as we hypothesized, is that for patients at low probability of withdrawal overall, the logistic models were more optimistic. And the pessimism of those models increased relative to the survival analysis for those patients at high probability of withdrawal. What that means in plain language is that if you treat death after withdrawal of life-sustaining therapy as a ground truth outcome against which you develop prognostic models or guidelines, the result of those models or guidelines will be systematically more pessimistic for the patients at highest risk of withdrawal of life-sustaining therapies than would models or guidelines developed treating those observed outcomes as potentially flawed or biased by treatment decisions. Now, I should note that this approach takes sort of an extreme example in that it completely censors the outcome after withdrawal in the survival model. And so it assumes that there's nothing we can learn there, and that's probably overly pessimistic, because when we withdraw life-sustaining therapies, we do it based on a large amount of data and biomedical knowledge. And so it's probably not as bad as this approach looks.
But the difference between those patients at low and high probabilities across the models is non-trivial. If you were to extrapolate it out to the U.S. population of cardiac arrest patients, we're looking at thousands of lives lost potentially from overly pessimistic withdrawal of life-sustaining therapies based on overly pessimistic models trained on spurious observed outcomes. And so, you know, although the line is certainly not vertical or even steep, a few percent difference, when magnified out across the many tens of thousands of patients affected by cardiac arrest who are hospitalized each year in the U.S. and in other high-income nations, translates into a really dramatic effect on lives lost. So overall, the conclusions of our paper, the key ones at least, were, number one, that early clinical features (I didn't mention this earlier, but we really restricted our analysis to the first six hours or so after hospital arrival) are adequate to predict time to awakening after cardiac arrest, or to predict awakening. Not perfectly, but quite well. When we treat the outcomes of patients at high probability of withdrawal of life-sustaining therapies as ground truth, we get systematically more pessimistic outcome predictions from the logistic model that treats it as a ground truth compared to censoring those data and viewing them skeptically. And traditional analytical approaches, which is really looking at binary outcome prediction, did the patient get better, did the patient not get better, did they survive, did they not survive, did they recover, yes or no, are likely to perpetuate self-fulfilling prophecies, and when instantiated into our clinical practice and our guidelines may lead to avoidable poor outcomes. I'm going to stop there and turn the presentation over to Julia, our next speaker, for a presentation of her paper. Thank you, Jonathan, and thank you to the organizers for having me here.
I'm here to present my paper on the association between pre-existing mental health disorders and adverse outcomes in adult intensive care patients. So, again, no real disclosures to make. I did receive funding from the Australian Government RTP, which is the funding program for PhD programs here, and also a small amount of money from the Ramsey Research Scheme. So, I might start with how I actually came up with the topic for this study, which happened when I was working as a clinician in the ICU and became interested in how we manage, or don't manage, mental health disorders in our critically ill patients. There was one particular lady who'd come to ICU with severe pancreatitis. She also had a history of depression, which was usually well controlled in the community using an SSRI prescribed by a GP, but this therapy had been ceased on admission to the unit. Unfortunately, her medical condition deteriorated and she eventually required invasive ventilation via a tracheostomy for multiple weeks while her pancreatitis resolved. During this time, her mental health also significantly deteriorated, and every day we'd hoist her out of bed and she'd just sit in her chair crying and crying until it was time to go back to bed in the afternoon. A referral was placed with consultation liaison psychiatry. However, due to her inability to communicate verbally as a result of the tracheostomy, the psychiatry team reported that they were unable to perform an assessment and requested that she be re-referred when able to verbally participate in an interview. No other supports such as an ICU psychology service were available, and so I went to consult the literature for suggestions about how to best manage the mental health concerns of this patient. It quickly became apparent that there was a significant research gap relating not only to the management of patients with pre-existing mental health disorders in the ICU, but also regarding the prevalence and characteristics of these patients.
So it was clear that there was a need to better understand the characteristics and outcomes of patients with pre-existing mental health disorders in order to develop targeted strategies to support their mental and physical health during and after the ICU admission. So first we conducted a formal review of the literature regarding the prevalence of pre-existing mental health disorders in patients admitted to ICU. Nine studies were identified and we calculated a pooled prevalence rate of 19%. However, while this did suggest that these patients do form a significant subgroup within the ICU, all of the identified studies also had significant biases relating to their external validity, and most relied on clinical codes attached to medical records, a method known to be highly inaccurate for identifying pre-existing mental health disorders. This highlighted a significant research gap and demonstrated the need for a multi-centre prevalence study to be conducted on this topic, and to find a method that would provide high quality data about the mental health history of a large cohort of critically ill patients. So in order to address these gaps, we conducted a study to meet the following objectives: to determine the prevalence and relative frequency of pre-existing mental health disorders in adult ICU patients, and to identify any association between the presence of a pre-existing mental health disorder and short-term outcomes such as ICU mortality and length of stay, and ICU interventions like mechanical ventilation. We also aimed to develop and validate a novel method using natural language processing or NLP techniques to extract information about participants' mental health history from the clinical progress notes. To address these objectives, we performed a retrospective cohort study using linked administrative data sets from eight ICUs across the state of New South Wales, Australia.
Given that we wanted as representative a sample as possible, all patients admitted to the participating ICUs in the calendar year 2019 were included with the only exclusion criteria being readmissions within the same hospitalisation. So how did we obtain information about the sample's mental health history? As I previously mentioned, most other studies using administrative data utilise clinical codes attached to medical records. However, previous studies assessing the accuracy of these codes have shown that they miss up to 80% of pre-existing mental health conditions. Therefore, in order to obtain a more accurate estimate of the prevalence of pre-existing mental health disorders, we developed a novel approach to directly extract each patient's mental health history from their clinical progress notes, rather than relying on clinical codes. So to develop the natural language processing algorithm, we first identified a list of keywords that were likely to be indicative of the mental health diagnoses we intended to classify, based on the categories in the International Classification of Diseases, as well as the names of medications that are commonly used to treat mental health disorders which could be used as a surrogate. We then asked the data manager at each site to search for and extract only those notes containing one of the specified keywords. This is similar to when you perform a search in a journal database and uses similar techniques like wildcards to ensure all potentially relevant results are identified. Each note was also extracted alongside a unique patient ID to ensure all notes were allocated to the correct patient and to allow linkage with other datasets later in the study. So next we needed to create classification dictionaries to differentiate between true and false positive cases of mental health disorders. 
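The keyword screening step can be sketched with regular expressions, which is roughly what a database-style wildcard query compiles down to. The stems and medication name below are illustrative examples, not the study's actual keyword list:

```python
import re

# Wildcard-style patterns: "depress*" matches depressed, depression,
# depressive, etc. A medication name can serve as a surrogate keyword.
KEYWORD_PATTERNS = [r"\bdepress\w*", r"\banxi\w*", r"\bsertraline\b"]

def note_matches(note: str) -> bool:
    """Return True if the note contains any screening keyword."""
    text = note.lower()
    return any(re.search(p, text) for p in KEYWORD_PATTERNS)

# Only matching notes would be extracted, each alongside its unique
# patient ID, for the downstream dictionary-based classification step.
```

The broad net is deliberate: this stage is meant to be high-recall, with false positives like "ST depression" filtered out later by the classification dictionaries.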
This was done by searching for each keyword term, like depression, in the dataset and assessing the words and phrases that occurred before and after the keyword to determine whether they should be included in the dictionary. So for example, when creating the classification dictionary for depression, the word depression was included in the true positive dictionary, whereas phrases like ST depression and respiratory depression were included in the false positive dictionary. So next we count the number of times the various words and phrases in the classification dictionaries occur in each clinical progress note. So using the depression example again, you can see that the word from the true positive dictionary, i.e. the word depression, occurs once in each of the sample notes on the slide. Next, when we look at the frequency of terms from the false positive dictionary, or the not depression dictionary, we can see that the phrase ST depression occurs once in note B, and the phrase respiratory depression occurs once in note C. So finally, we use Boolean logic to state that if the count of phrases from the true positive dictionary is greater than the count of phrases from the false positive dictionary, then the note should be classified as depression. So when this logic is applied to the three example notes, we can see that note A would be classified as a positive case of depression, whereas notes B and C are not. So now that we have information about the mental health status of the sample, we needed to acquire data about their clinical and demographic characteristics. These data were obtained from the Australia and New Zealand Intensive Care Society Adult Patient Database, or ANZICS APD, and were linked to the mental health history data through a unique patient ID that was extracted alongside the progress note data.
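The counting and Boolean logic just described can be sketched as follows. The dictionary contents and example notes are abbreviated illustrations; the study's actual dictionaries were far larger:

```python
import re

TRUE_POSITIVE = ["depression"]
FALSE_POSITIVE = ["st depression", "respiratory depression"]

def count_terms(note: str, terms: list[str]) -> int:
    """Count total occurrences of any dictionary term in the note."""
    text = note.lower()
    return sum(len(re.findall(re.escape(t), text)) for t in terms)

def is_depression(note: str) -> bool:
    """Classify as depression only if true-positive hits outnumber
    false-positive hits."""
    return count_terms(note, TRUE_POSITIVE) > count_terms(note, FALSE_POSITIVE)

notes = {
    "A": "Known history of depression, usually well controlled.",
    "B": "ECG shows ST depression in the lateral leads.",
    "C": "Monitor for respiratory depression after opioid load.",
}
```

Note that in note B the single hit on "depression" sits inside the phrase "ST depression", so the true-positive and false-positive counts cancel, and the strict greater-than comparison correctly rejects it.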
The ANZICS APD is a clinical registry maintained primarily for quality assurance and benchmarking purposes and contains information such as the reason for ICU admission, interventions received during the admission, and severity of illness scores. Importantly, the ANZICS dataset contained information about all patients admitted to the participating ICUs, not just those with mental health disorders, and so also provided the denominator for the calculation of the prevalence rate. So in the end, approximately 16,000 ICU episodes of care were included in the study, and just over 160,000 progress note excerpts were used to determine the mental health history of participants. So looking first at the results of the NLP algorithm validation, using a manual chart review as a gold standard, the algorithm demonstrated 90% accuracy in classifying pre-existing mental health disorders. It was also noted that the main reason for misclassification was when a patient's mental health history was only documented in the ward, rather than the ICU clinical information system, or was recorded on a paper-based summary that was later scanned in and therefore not accessible to the algorithm. Similar to other studies, again, the clinical coding summary performed poorly, only demonstrating 75% accuracy compared to the manual chart review. So now looking at the overall characteristics of the sample, patients with a pre-existing mental health disorder were younger, more likely to be female, and had slightly higher levels of socioeconomic disadvantage. They were much more likely to be admitted to ICU for a medical rather than a surgical reason, and much less likely to have a cardiovascular admitting diagnosis than those patients with no pre-existing mental health disorder. Interestingly, despite these differences, the two groups had almost identical severity of illness scores as measured by the APACHE-3. 
So regarding our first objective, the prevalence of pre-existing mental health disorders, 31%, so almost a third of the cohort, were found to have a pre-existing mental health disorder. 10% had a drug or alcohol dependence disorder, and 6% were admitted to ICU following a suicide attempt. So looking next at the relative frequencies of mental health disorders, we found affective disorders, which included depression and bipolar disorder, to be the most common type of pre-existing mental health disorder, occurring in 16% of patients, followed by anxiety disorders, which occurred in about 10% of patients. So next we looked at the ICU interventions and outcomes of the sample, and you can see that patients with a pre-existing mental health disorder were more likely to require invasive ventilation, and once ventilated, they had a much longer duration of ventilation. They were more likely to experience delirium, although there was quite a lot of missing data for that variable, I should give a small caveat, and had a significantly longer ICU and hospital length of stay than those patients with no pre-existing mental health disorder. Despite this, the two groups had similar rates of ICU readmission, as well as similar hospital and ICU mortality rates. So in order to better understand the differences between the two groups, we performed several mixed effects regression models. So these are the results of the linear model for ICU length of stay, and you can see that despite controlling for other factors like admitting diagnosis or need for mechanical ventilation, the presence of a pre-existing mental health disorder was associated with a longer ICU length of stay. This model shows that patients with pre-existing mental health disorders stay in ICU about 13% longer than other patients, which on average equates to approximately eight hours.
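If the linear mixed model was fit on log-transformed length of stay, a common choice for skewed LOS data (the transcript does not specify the transformation), the 13% figure converts between coefficient scale and hours roughly as below. These are back-of-envelope checks of the quoted effect size, not values from the paper:

```python
import math

# A multiplicative effect of +13% corresponds to a log-scale coefficient:
beta = math.log(1.13)
pct_effect = math.exp(beta) - 1.0   # recovers the 0.13 relative increase

# If 13% extra equates to roughly 8 hours, the implied average baseline
# ICU stay is:
baseline_hours = 8.0 / pct_effect   # roughly 61.5 hours
baseline_days = baseline_hours / 24.0
```

The implied baseline of about two and a half days is plausible for a general ICU cohort, which is a useful sanity check on the reported numbers.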
These are the results of the mixed effects logistic regression model, looking at need for invasive ventilation during the ICU admission, and you can see again that even when adjusting for other factors like admitting diagnosis and severity of illness, patients with pre-existing mental health disorders are 1.42 times more likely to require invasive ventilation during their admission than other patients. We also ran a model looking at mortality, which confirmed that mental health status was not associated with mortality during the ICU admission. So our study found that the prevalence rate of pre-existing mental health disorders in ICU patients to be 31%, with affective disorders being the most common type of pre-existing mental health disorder, accounting for just over half of all disorders identified. This rate is somewhat higher than that identified in the previous studies that relied on hospital derived clinical codes to identify mental health disorders, but is fairly similar to studies that use codes derived from primary care databases as well as hospital systems. This again highlights the well-known issue of mental health disorders being omitted from clinical coding summaries and demonstrates the need to establish alternative methods for their identification in inpatient administrative datasets. Interestingly, despite similar severity of illness scores and similar mortality rates between the two groups, patients with pre-existing mental health disorders stay in ICU longer and are more likely to require invasive ventilation, suggesting they have a different clinical trajectory to other patients regardless of their admitting diagnosis. While the reasons for this apparent increase in ICU length of stay need to be explored further, one explanation could be that these patients are not participating in ICU rehabilitation activities as fully as other patients. 
Low mood and motivation are known barriers to engaging in beneficial activities such as early mobilisation, and the effect of implementing additional psychological supports for these patients to enable their participation needs to be investigated. In the interim, we as critical care clinicians need to be aware of these unique vulnerabilities in this group of patients and tailor our care to ensure we provide sufficient support to meet their needs, which may potentially reduce their length of stay and duration of ventilation. Our study also found that patients with pre-existing mental health disorders were underrepresented in the cardiovascular and elective admitting categories, which is particularly surprising given previous work showing higher rates of cardiovascular disease in people with mental health disorders compared to the general population. This may be due to higher rates of medical management for cardiovascular disease in patients with pre-existing mental health disorders, meaning they didn't require an ICU admission post-procedure, or potentially there's a large amount of untreated disease in these patients. Again, further research is needed to investigate how cardiovascular disease is managed in these patients, and to ensure equity of access to all types of treatment regardless of mental health history. We've also shown that natural language processing can be used to extract a high-quality dataset about mental health disorders from the clinical progress notes, and this technique could also be theoretically applied to other clinical concepts that are difficult to find complete data on. So one major strength of the study was its size and multi-centre nature. The use of administrative data allowed for what was essentially a census of the ICU population to be performed, thereby avoiding sampling bias. However, a significant limitation of the study was its short duration of follow-up.
Data were collected only during the index hospitalisation, and further research is needed to determine the longer-term outcomes of these patients and to investigate the potential need for additional support services that may be offered during or after the ICU admission. Similarly, we were limited by the variables available in the administrative data used in the study, and so weren't able to explore things like the indication for ventilation, which may better explain the mechanisms underlying the associations found. Furthermore, although the NLP algorithm demonstrated a high level of accuracy compared with manual chart review, we don't know how accurate the mental health diagnoses recorded in the clinical progress notes were, and further validation is necessary, for example by comparing the diagnoses in the notes with a formal psychiatric assessment. So in conclusion, patients with pre-existing mental health disorders make up a significant subgroup within the ICU. They clearly have unique clinical characteristics, such as increased length of stay, and further research into how to better support these patients in the ICU is needed. Finally, the long-term outcomes of these patients need to be determined with a view to establishing targeted support services in the post-ICU period. Thank you very much, and I'll hand back over to Tomas to moderate the questions. Thank you very much, both, for the really interesting presentations. I think both studies pose almost philosophical and sometimes ethical questions which we need to explore. First I would like to ask Jonathan whether the timing of the decision to withdraw life-sustaining therapies can counter the self-fulfilling prophecy, i.e. if we wait longer, would the self-fulfilling prophecy not be as strong? I understand that you might not have the data, but I'm interested in your views on that.
Yeah, so I think that certainly delaying a decision until more or better data accrue, or ideally, in this context, until the patient has an opportunity to simply experience the positive outcome so there's no longer a prediction task necessary, can help to some extent. This study, and my discussion, focused on a well-defined binary treatment decision: withdraw life-sustaining therapies or not. But concerning this broader and, as you described, philosophical question, much of what we do in the intensive care unit is not neatly binned into a single discrete, measurable decision. We know we have lots and lots of neutral clinical trials, but we also know that providing really excellent, high-quality ICU care moves the needle on patient outcomes. So delaying can help in the context of a well-defined treatment decision, but I still worry that patients who look really sick when they come into the intensive care unit may get less attentive or lower-quality care. That may be measurable things, but it can also be unmeasurable things. How long does the team spend rounding? How many times does someone go into the room to titrate the vasoactive medicines? All of that to say, I think delaying the prognostic decision can help to some extent, but the problem is likely to be really insidious, and delay will overcome only a portion of it. Thank you. Julia, the question to you is: is there any data to show whether continuing the normal therapies and medications of ICU patients with pre-existing mental health problems affects outcome? I'm genuinely interested in this because I don't know, and I haven't looked into it. So you might have some data which could help to elucidate why you found the increase in length of mechanical ventilation. Yeah, thank you. So I did have a bit of a look around this.
Unfortunately, we didn't collect that; again, a limitation of using administrative data is that you can't always collect exactly what you want. There have been a few studies that have given mixed results about the benefits of continuing SSRI therapy in particular. Some studies have shown it actually confers harm because of bleeding issues that are apparently associated with these medications, and that these patients can have a higher mortality rate if you do continue them. Other studies have shown that if you continue regular medications, there's less risk of withdrawal and a reduced amount of delirium in the cohort. So it's mostly observational work that's been done, and pretty much all the studies I looked at ended with a call for further high-quality work in this space. But I also wonder whether, when ceasing a pharmacotherapy is necessary for various physiological reasons, we could replace it with some other type of psychological supportive therapy. Like, when you're on a busy shift and you've got a million things to do and you go up to a patient and they say, oh, you know, I don't want to get out of bed, I don't want to do anything, just go away, your threshold for accepting that and going off to do something else is probably not the highest.
And if there was a dedicated ICU psychology service, or some sort of CBT-type therapy, that could be used to give these patients an extra push, a bit of extra motivation. Or take weaning from the ventilator: it's feasible that somebody with an underlying anxiety disorder, when you do the sedation hold and see how their breathing is, might be more likely to start hyperventilating and give signs that they're not weaning successfully, when it's due more to their underlying anxiety than to their respiratory physiology. So while the pharmacotherapy aspect of these patients' management is still quite murky, and the risks versus benefits of continuing the medications are yet to be figured out, I don't think that's a reason to say medication is the only answer and that we shouldn't start looking at other supportive therapies that might help attenuate these adverse effects. Thank you very much. The next question will go to both presenters, and I would like Jonathan to answer first: how does this data compare with international cohorts in developed countries? Has anybody looked at similar risk prediction scoring, or at ways of breaking the self-fulfilling prophecy in cardiac arrest? Are you aware of anything? Yes. So I think there have been a number of approaches that have been attempted. There are certainly regions and countries where withdrawal of life-sustaining therapies for any reason is either illegal or not consistent with local social norms, and so data derived from those countries, or cohorts from those countries, offer some protection against it. Again, if we focus on the binary treatment decision of withdrawal or no withdrawal, then prohibiting withdrawal is a very good protection.
But it doesn't necessarily protect against all of the other harder-to-measure aspects of the patient care that we provide, and so there may still be some insidious self-fulfilling prophecies that work their way in. There have also been trials, the TTM trial notably, which I think was the first to do this in a really well-constructed way, that have tried to protocolize neurologic prognostication and treatment recommendations based on perceived neurologic prognosis. That has some benefit of standardization, but again is derived from guidelines and literature, all of which are affected by these self-fulfilling prophecies. So these are early and important steps, I think, but they are probably not sufficient to more fundamentally change how we try to learn from observed outcomes in the face of critical illness and the numerous large and small treatment decisions that we make. Thank you. Julia, a similar question to you: are you aware of data from the international literature? Let's focus just on developed countries for a moment, because that's where you are more likely to have population-level data in this field. Has anybody looked at it? Yeah, for sure. So we did find quite a few studies that had looked at this, and interestingly the prevalence rate varied quite a lot depending on the data source. Studies that used purely billing data or clinical codes found a much lower prevalence, whereas studies that linked to primary care data or some sort of medication data found a prevalence rate more similar to what we found. There have been a couple of higher-quality studies done using Canadian administrative data, which came up with a similar prevalence rate to ours, and they also looked at the long-term outcomes of these patients.
The biggest studies done in the United States, as far as I found, used only Veterans Health Administration data, so that's obviously a slightly different population, probably not as comparable to the general population as other studies might have been. There was another big one done in Denmark, but they only included patients who were mechanically ventilated, so again not quite as representative of the broader ICU population. Most of the other studies didn't do any multivariable analysis for in-ICU outcomes either. They did perform some looking at longer-term outcomes, which unsurprisingly showed that pre-existing mental health disorders are associated with worse outcomes down the track as well as within the ICU, but not so much has been done within the ICU itself. It's difficult to find high-quality data because, particularly when patients are critically ill, as I'm sure you're aware, the coders will preferentially code the things they're likely to get funding for, which is the interventions that happen within the ICU and all the other complexities of the illness. That tends to be why pre-existing mental health disorders get left off the dataset, and so the data's not as accurate. Thank you very much. Jonathan, one more question for you: you looked at this phenomenon in the cardiac arrest population, which is a well-defined and relatively small aspect of critical care. Could we look in a similar, and very critical, way at our data on prognostication and risk modelling in non-neurocritical care cases? Do you think this self-fulfilling prophecy might exist in other conditions, outside of the neurocritical care arena? Yes, in short. So I tried to pick a non-neurocritical care clinical example to start the talk for exactly that reason.
I think that neurocritical care patient populations often have withdrawal of life-sustaining therapies measured, or recorded, as a proximate cause of death, and you have to have that if you're going to focus on this. For example, I would love to have the creators of the most recent or past APACHE scores redo their analysis accounting for the potential of withdrawal of life-sustaining therapies, as a particular example of what I think is a more general problem. As you all know, most deaths in the ICU, although not all, are planned and happen after a decision to de-escalate the intensity of care. So I don't in any way think that this is unique to neurologically critically ill patients. You could do this equally well in almost any patient population in the ICU, assuming that the necessary data were collected. Thank you. And one final question for you, Julia, as well. Could you give us an insight into the natural language processing method of gathering the diagnoses? Obviously, you have been working, I assume, on electronic databases. How time-consuming and how difficult is it to train a search program to look at progress notes and try to glean this information? I would be really interested to hear a few words on that. Yeah. Thanks, Tomas. There are lots of different types of natural language processing algorithms. The two broad categories are machine learning-based and rules-based, and the one we used is a rules-based algorithm. So you don't need a labelled dataset, and it's generated in an iterative manner as you examine the progress note data that you have. It has pros and cons. For the rules-based type of algorithm, which is why I chose it, you don't need much data science experience, but you do need quite good clinical knowledge to be able to figure out, you know, does this phrase mean depression, or does this phrase mean some other concept?
As for the time it took to make the algorithm: all it really involved was searching for the keywords, say depression or anxiety, then cutting out the text, the five words preceding the keyword and the five words following it, and then scrolling through all the notes looking for frequently occurring words and phrases. That's the other good thing about progress notes: clinicians all speak in a similar language, so you saw the same phrases and words appearing quite frequently, which is what made the algorithm possible in the first place. The time it took to create the algorithm was about the same as the time it took to collect the data on the 400 patients for the validation study. In other words, doing 400 manual chart reviews took about as long as using the algorithm to review 16,000 patients. So while it's not instant, the amount of data you get out of NLP gives you much more bang for your buck than scrolling through endless progress notes and collecting it manually. And like I said, I don't have any formal training in computer science, so it's not terribly difficult as long as you can understand progress notes and have a basic understanding of the program used to do it; it's quite accessible to clinicians. The other good thing about rules-based algorithms, I find, is that they're a lot more transparent than machine learning. You can say to someone: these are the words we used to classify depression, here's the dictionary I used. If you don't agree with it, fine, but at least you can see quite straightforwardly how we classified the condition, whereas with machine learning it can get trickier to dig down into exactly how the algorithm decided to classify something one way or the other.
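[Editor's note] The keyword-and-context approach described here can be sketched in a few lines of Python. This is purely illustrative: the keyword list, category labels, and negation handling below are assumptions for the sketch, not the actual dictionary or rules used in the study.

```python
import re

# Toy rules: map keywords to a disorder category, and skip mentions that
# appear alongside a negation word in the context window.
KEYWORDS = {"depression": "affective", "anxiety": "anxiety"}
NEGATIONS = {"no", "denies", "without", "not"}

def extract_windows(note, window=5):
    """Return (keyword, context-words) pairs: the five words before and
    after each keyword hit, mirroring the windowing described in the talk."""
    words = re.findall(r"[a-z']+", note.lower())
    hits = []
    for i, word in enumerate(words):
        if word in KEYWORDS:
            context = words[max(0, i - window):i] + words[i + 1:i + 1 + window]
            hits.append((word, context))
    return hits

def classify(note):
    """Label a progress note with disorder categories, ignoring negated hits."""
    labels = set()
    for keyword, context in extract_windows(note):
        if not NEGATIONS.intersection(context):
            labels.add(KEYWORDS[keyword])
    return labels
```

For example, `classify("Background of depression, on regular sertraline.")` returns `{"affective"}`, while `classify("Denies anxiety or low mood.")` returns an empty set because the mention is negated. The transparency Julia mentions is visible here: the entire decision logic is the two dictionaries at the top, which anyone can inspect and critique.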
So I hope that answered your question. It's a whole other topic in and of itself, but it's quite transferable: as long as you're reasonably confident as a clinician that the concept you want to classify is accurately documented in the notes, it can be applied to whatever you want. Thank you very much. That was very useful and very interesting. And this concludes our Q&A session. Thank you to our presenters and to the audience for attending. Again, everyone who joined us for today's webcast will receive a follow-up email that will include an evaluation. Please take five minutes to complete this. Your feedback is greatly appreciated. This concludes our presentation for today. Have a nice day. Thank you.
Video Summary
The Journal Club Critical Care Medicine webcast featured two articles: "Time to Awakening and Self-Fulfilling Prophecies After Cardiac Arrest" and "Association Between Pre-Existing Mental Health Disorders and Adverse Outcomes in Adult Intensive Care Patients." The first article focused on the problem of self-fulfilling prophecies in clinical medicine and how they can affect patient outcomes. The study looked at patients who had experienced cardiac arrest and found that predictions of poor neurologic prognosis often led to a treatment decision to withdraw life-sustaining therapies, which in turn influenced the observed outcomes. The study concluded that these self-fulfilling prophecies could lead to avoidable poor outcomes and called for a more cautious approach to prognostication. The second article examined the prevalence of pre-existing mental health disorders in adult ICU patients and their association with adverse outcomes. The study found that approximately one-third of ICU patients had a pre-existing mental health disorder, with affective disorders such as depression being the most common. Patients with pre-existing mental health disorders had longer ICU stays, were more likely to require invasive ventilation, and had higher rates of delirium. The study highlighted the need for targeted supports to improve the mental and physical health outcomes of these patients. Overall, the articles shed light on how treatment decisions and mental health can impact patient outcomes in critical care settings.
Asset Subtitle
Resuscitation, Behavioral Health and Well Being, 2023
Asset Caption
The Journal Club: Critical Care Medicine webcast series focuses on articles of interest from Critical Care Medicine.
This series is held on the fourth Thursday of each month and features in-depth presentations and lively discussion by the authors.
Follow the conversation at #CritCareMed.