Artificial Intelligence in Pediatric Patient Care
Video Transcription
Good morning. It's a real honor to be able to be here today and speak with all of you on this topic. I thank you for the introduction, Dr. Stephens. I'm Chris Horvat. I'm a pediatric intensivist, clinical informatician, and learning health systems researcher at Children's Hospital of Pittsburgh, where I have the privilege of working. I'm going to talk today about artificial intelligence in the pediatric critical care space, but I'll say up front that I took the liberty of modifying this title just a little bit. This may not be the only technical limitation we have today, given I have video clips embedded as well. James, do you just want to advance on your side? Perfect. Okay. And I'll just say advance for now until the clicker starts working.

I changed the title to Augmented Intelligence in Pediatric Critical Care, because I think that's really where we are as a field right now as we think about implementing things into patient care workflows that might modify patient outcomes. I'll also say up front that I've included a lot of artwork, an example here on the left, generated by OpenAI's DALL-E 2 model, just because I love playing with this tool. Any time that you see this artwork, it's usually a byproduct of just the natural language text that's sitting next to it. In this case, I put in "artificial intelligence in pediatric critical care," and that's the art it spit out, which just continues to amaze me. Next slide, please.

My only disclosures: there's some funding from the NICHD, NINDS, UPMC Enterprises, and the Beckwith Foundation. Next slide, please.

All right. So if you can go ahead and play this, James. In preparing for this talk, I decided to turn to AI for some inspiration on what I should actually speak about. So I went to ChatGPT, which has been mentioned already. It's an OpenAI model that uses deep learning to have realistic conversations on any number of topics.
And I said, I have to give a presentation at the Annual Congress of the Society of Critical Care Medicine on artificial intelligence in pediatric critical care. And ChatGPT replied: well, here are some key points that you might consider incorporating into your presentation on the use of AI in PCCM. AI can be used to assist in the diagnosis and treatment of critically ill children. It can be used to predict children at risk for developing certain complications, such as sepsis. It can be used to support decision-making in critical care, and it gave some examples. It can be used to improve patient outcomes by optimizing resource allocation and reducing medical errors. However, it's important to consider the ethical and legal implications of using AI in PCCM and issues related to data privacy and security. And also, it's important to note that AI should not be considered a replacement for human intelligence.

You can see how quickly it gave me the answer; I increased the playback speed by about 25% here, and the servers were running a little slow that night, I have to say. So this was impressive. I thought, yep, those are all the points that I'm going to want to hit on. And I was also a little bit frightened after I had done that. We can go to the next slide, please. And advance again. Great.

So this is the outline of what we'll talk about today. We'll do a glossary of terms. We'll talk about dealing with uncertainty. We'll talk about immediate opportunities for AI in PCCM, then some challenges related to AI in PCCM, and then briefly I'll touch on the future. And I think the clicker is working again, which is great.

So first, a glossary of terms. The first thing I want to level set on is that artificial intelligence does not equal machine learning. Okay. And this is going to be the next technical challenge here.
We'll see if we can get this video clip to play properly. This is an example of what I mean. [Video clip plays:] Sit down, please. Please look at the screen in front of you. Please place your body so that all points are obliterated. That is fine. We may proceed. State your name for the records, surname first. Mark Hall. Hallmark. Thank you for your cooperation. Yes, dear. Please answer the following questions, yes or no. Have you any allergies? Yes, to ragweed pollen. Okay, I'll repeat for your memory cells. Please repeat your response for our memory cells. Ragweed pollen. [End of clip.]

So I showed that clip to my wife. That's how I yell at Alexa a lot of times when she can't understand me. It's an example of good old-fashioned artificial intelligence. AI can be based on machine learning, but it can also be based on hard-coded rules, and you saw a good example there, where the individuals weren't saying things specifically based on the underlying logic and they were getting error messages.

Okay. A couple more quick definitions as we move on. AI is the theory and development of computer systems able to perform tasks that normally require human intelligence. Machine learning is the use and development of computer systems that are able to learn and adapt without following explicit instructions, using algorithms and statistical models to analyze and draw inferences from patterns in data. And what we're talking about today really is augmented intelligence: the use of AI to assist and enhance human intelligence rather than replace it. I think that's where we see the immediate promise in clinical care. This relates to the fundamental theorem of biomedical informatics, put forward by Dr. Chuck Friedman, chair of the Department of Learning Health Sciences at the University of Michigan, who said that a human working in combination with an information resource is greater than a human working alone on any given task.
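To make the rule-based versus machine-learned distinction concrete, here is a minimal sketch. Every number and variable here is invented for illustration, not clinical guidance: a hard-coded alert rule sits next to the same kind of decision learned from labeled examples by a simple threshold search.

```python
# "Good old-fashioned AI": an alert whose thresholds are fixed in code by experts.
# All values are illustrative only.
def rule_based_alert(heart_rate: int, sbp: int) -> bool:
    return heart_rate > 150 and sbp < 70

# Machine learning flavor: the alert boundary is *learned* from labeled data
# rather than written into the code. Toy dataset: (heart_rate, sbp) -> label.
data = [((160, 60), 1), ((120, 90), 0), ((170, 65), 1),
        ((110, 100), 0), ((155, 68), 1), ((125, 85), 0)]

# Pick the heart-rate cutoff that minimizes error on the labeled examples.
best_cutoff = min(
    range(100, 200),
    key=lambda t: sum((hr > t) != bool(label) for (hr, _), label in data),
)

def learned_alert(heart_rate: int) -> bool:
    return heart_rate > best_cutoff

print("learned cutoff:", best_cutoff)
```

The point mirrors the talk: both systems produce an alert, but only the second one adapts when it is retrained on new data, while the first fails silently whenever reality drifts away from its hand-written rules.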
This really relates to where we have been for a long time in medicine and how we deal with uncertainty in all of the decisions that we make. "Life is short, and the art long; the occasion fleeting; experience fallacious, and judgment difficult," said Hippocrates more than two millennia ago. And "medicine is a science of uncertainty and an art of probability," offered Osler more than a century ago.

The origin of evidence-based medicine was really forward-thinking. When David Eddy initially coined the term, it concerned how we as clinicians strive to think probabilistically and often come up short. In his early series of essays putting forward the concept of evidence-based medicine, Dr. Eddy said that the evidence presented shows that physicians do not manage uncertainty very well, that many physicians make errors in probabilistic reasoning, and that these errors threaten the quality of medical care. That quote actually comes from the book there on the left, Judgment Under Uncertainty: Heuristics and Biases, edited by Daniel Kahneman, Paul Slovic, and Amos Tversky, well-known decision science psychologists who organized frameworks for human decision making into two major systems: system one thinking and system two thinking. If you haven't read the book Thinking, Fast and Slow, I think it is one of the best books ever written on human decision making and should be mandatory reading for all clinicians.

System one thinking is heuristic, but also biased. This is the pattern recognition skill that you develop over time as you strive to become a master clinician. It's fast, automatic, emotional, and unconscious, but as Dr. Kahneman and Dr. Tversky showed, it's also biased at times and prone to key errors. In contrast, system two thinking is precise, but also resource intensive. This is collecting the objective data and calculating an objective probability about the likelihood of something at any given point in time.
It's slow, effortful, infrequent, logical, calculated, and conscious, but very computationally intensive.

Evidence-based medicine as a concept has morphed over time, and we often think of it now in terms of what clinical investigation shows with respect to frequentist biostatistical analyses. This is the traditional biostatistical approach that has crafted much of the clinical literature over time. It views an effect or a measurement as a fixed value in the context of a hypothesis, and you get a true-or-false conclusion from the results of a significance test. The example here, which certainly everybody's familiar with, is "no significant difference in effect observed between treatment A and treatment B, with a P value of 0.4." That's how much of our clinical literature has been crafted over time.

Getting back to the roots of evidence-based medicine, though, and we're starting to see an increasing volume of clinical literature using these methods, is Bayesian analysis, which considers the probability prior to observing the data (the prior, or pretest, probability) and calculates a post-test probability based on the observed data. This is computationally intense, but it provides output that better characterizes the uncertainty in all of the decisions that we face routinely in clinical care. In this case, the probability that the effect of treatment B is lower than treatment A is 0.67. So it's not definitive, but it gives you some sense of how things might play out in that situation.

There are some big barriers, though, to applying probabilistic decision-making routinely in clinical care. Generating accurate prior probability distributions typically requires a substantial amount of data. And probabilities may help to characterize the degree of uncertainty, but most decisions are binary or categorical.
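As a toy illustration of the Bayesian output described above (all counts here are invented; this is not the analysis behind the 0.67 figure), the posterior probability that treatment B's event rate is lower than treatment A's can be estimated by Monte Carlo sampling from each arm's Beta posterior:

```python
import random

random.seed(0)

# Hypothetical trial data: events / patients in each arm (illustrative only).
events_a, n_a = 12, 40
events_b, n_b = 9, 40

# With a uniform Beta(1, 1) prior, each arm's event-rate posterior is
# Beta(1 + events, 1 + non-events).
def posterior_draws(events, n, draws=100_000):
    return [random.betavariate(1 + events, 1 + n - events) for _ in range(draws)]

draws_a = posterior_draws(events_a, n_a)
draws_b = posterior_draws(events_b, n_b)

# Posterior probability that B's event rate is lower than A's.
p_b_lower = sum(b < a for a, b in zip(draws_a, draws_b)) / len(draws_a)
print(f"P(rate in B < rate in A) = {p_b_lower:.2f}")
```

Rather than a binary significant/not-significant verdict, the output is a direct probability statement that can be updated continuously as more data accrue.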
And then calculating probabilities based on multidimensional input is computationally intensive, and historically has even been intractable. This brings us back to the fundamental theorem of biomedical informatics: now that we have increasingly powerful information systems, and we're surrounded by them, we have a real opportunity to start applying probabilistic thinking at the bedside using very sophisticated mathematical models.

So, here are the immediate opportunities for AI in PCCM as I see them. First, I'll mention customized AI and give a couple of examples. I'll talk about improved learning systems to improve our care for patients, and I'll touch briefly on system-level predictions for preparedness.

So, first, customized AI. As has already been mentioned by my colleague and in some of the questions that we've seen today, there are known limitations to prediction models. They incorporate the biases that we demonstrate as humans in our everyday decision-making. They don't translate well from one scenario to another, such as trying to predict sepsis and then translating that to a more distinct entity like COVID-19. We have known for a long time that prediction algorithms trained in one environment need to be recalibrated at the very least, or even retrained, in a different environment, and that population-level predictions are a lot different from applying predictions at an individual's bedside. Yet we see a plethora of manuscripts like this, as Dr. Buckman alluded to a moment ago, and I'm guilty of this myself, I have to admit: "Development and validation of a machine learning model to predict [blank] in [blank] patients: a single-center study."
Some of this has even made it to the bedside already, and there is a wonderful example of this from investigators in Michigan, published in JAMA Internal Medicine, who took a look at the Epic sepsis prediction algorithm and showed that it performed with a sensitivity in the range of 30 to 40 percent and an AUROC of about 0.6. So, if you're using that to predict sepsis, you might also consider just using this tool instead.

I don't want to, by any means, disparage this line of work, though, because there have been some incredibly elegant and fantastic examples of how predictive analytics can make a tremendous difference in patient care. This is a group of researchers, some of them at Kaiser, some of whom you've heard already in the question session, who developed predictive algorithms to detect deterioration outside of the ICU, looking at a composite outcome for deterioration. They had wonderful performance characteristics in an offline environment, and then they translated that to the Kaiser health system, rolled it out in a clustered stepped-wedge fashion, and rigorously studied the impact of this deterioration system, which was closely coupled with a workflow that kept in mind the strengths and limitations of the model. They were able to significantly reduce mortality as a result of deploying this tool. We've taken a similar approach at Children's Hospital of Pittsburgh.
I show the diagram on the left because it gives you some sense of the locally developed architecture that we've had to construct in order to deploy models into our environment, because commercial tools aren't always capable of the things that we need them to be capable of in terms of improving care for our patients. We're really excited about some of the performance that we've seen in our environment already with a well-performing deterioration model, and you can see the performance down on the right from real patient data that's been accumulating in our system, with good discrimination. We've implemented this into a surveillance program, and there's more data to come on that in the coming weeks.

Another opportunity I see for AI in PCCM is improved learning systems. There's a traditional divide between research and practice: you have clinical care on the top there and research on the bottom, and they require parallel infrastructures and are separate, even though we're interacting with the same patients who are involved with both of them. There are particular challenges in PCCM related to clinical trials. We have tremendous heterogeneity of treatment effect in our patients, as they span a wide variety of ages and diseases. There are competing studies, which can make it difficult to enroll patients in any given study alone, and the enrollment act itself is also very resource-intensive and costly. And then, of course, we have slow time to translation, like many other fields.

One of the solutions is actually embedding trials into practice, leveraging modern information systems to accomplish that. And this is exciting because we really are entering a new era of clinical trials that has a lot to gain from AI solutions. We now see trial designs that can address the needs of a field rather than a specific study question at any moment in time.
Examples are umbrella trials, which simultaneously study treatment effects within subgroups of a single disease; basket trials, which study a therapy for multiple diseases within a single trial; and platform trials, which study multiple therapies for a disease in perpetuity. I had the privilege of working on REMAP-CAP over the course of the pandemic, which stands for the Randomized Embedded Multifactorial Adaptive Platform trial for Community-Acquired Pneumonia. This is a study that actually uses Bayesian inference in its execution as well as in its eventual analysis. It's embedded as a component of clinical care, and at UPMC we wove it into the fabric of our electronic workflows. We deployed Bayesian probabilistic analysis in the randomization process itself as well as in the eventual trial analyses. Patients were studied across multiple interventions using a unified statistical analysis plan, and it led to some tremendous findings related to steroids, IL-6 antagonists, antivirals, and anticoagulation; it looked at convalescent plasma and antiplatelets; and then, most recently, long-term outcomes. If you have a chance to go to Alisa Higgins' talk this afternoon, I strongly encourage you to.

These solutions, I think, are going to really propel us forward in the space of pediatric critical care. A lot of the infrastructure needed to incorporate AI at the bedside is going to facilitate multicenter enrollment. As has already been mentioned, we can start to use this to facilitate screening tasks, but also to hone in on precision phenotypes, which we know are really important, and start to mitigate some of the heterogeneity of treatment effect that's held us back over the years. This will ideally lead to a systematic, perpetual learning environment where we leverage technology to advance care.

The final opportunity I want to speak about is system-level predictions for preparedness.
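A hypothetical sketch of the response-adaptive randomization idea behind Bayesian platform trials like the one described above (the arm names and counts are invented, and the real trial's statistical model is far richer): estimate each arm's posterior probability of being the best, then tilt the next patient's allocation toward the better-performing arms.

```python
import random

random.seed(1)

# Hypothetical interim results: successes / patients per arm (invented values).
arms = {"arm_A": (18, 30), "arm_B": (24, 30), "arm_C": (15, 30)}

def prob_each_arm_best(arms, draws=50_000):
    """Monte Carlo estimate of P(arm is best) under independent Beta(1, 1) priors."""
    wins = dict.fromkeys(arms, 0)
    for _ in range(draws):
        sample = {name: random.betavariate(1 + s, 1 + n - s)
                  for name, (s, n) in arms.items()}
        wins[max(sample, key=sample.get)] += 1
    return {name: w / draws for name, w in wins.items()}

# Response-adaptive randomization: allocation probabilities follow P(best),
# so arms that look better at the interim get more of the next patients.
allocation = prob_each_arm_best(arms)
next_assignment = random.choices(list(allocation), weights=list(allocation.values()))[0]
print(allocation, "-> next patient:", next_assignment)
```

The same posterior machinery drives the final analysis, which is one reason embedding the trial in the clinical workflow pays off twice.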
When I think about the data environment that we live in in modern health care, I really divide it in my mind into two categories. The first is tracking the essentials, or air traffic control: looking at things like system capacity, staffing, personnel, and supply chain management at any given moment in time, simply by curating the data. The other is identifying patterns in the data, or weather forecasting. This is work that was done by one of my incredibly talented colleagues and mentees, who is now at Akron Children's and in the audience today, Dr. Pelletier, who constructed ensemble machine learning time series forecasting models to look at disease patterns in pediatrics over the last decade using a multicenter data set. And he showed how vastly we departed from those patterns in the COVID-19 pandemic. You can only plan so much, but you can imagine that these tools are going to be increasingly important to mitigate shortages of essential supplies, improve staff scheduling, anticipate regional supply and demand, and allocate resources in the multi-hospital health system environment.

Now, a little bit about some challenges related to AI in pediatric critical care. Interpreting AI performance is one, lack of harmonization and interoperability is another, and lack of formal education is a third. We talked a little bit about the known limitations of prediction models already, and I strongly believe clinicians need to understand four things, at a minimum, before they even think about using a model in practice. The first is: what was the outcome the model was trained on, and how does the outcome that the model has been deployed to predict compare with that? Next, every clinician using a model should know what the AUROC of that model is in the environment in which they're practicing. They should know how that model is actually calibrated if they're using it in clinical care. And they need to know the AUPRC, which is a measure of the positive predictive value across a range of sensitivities.
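As an illustration of the performance measures just listed (the numbers below are toy values, not from any real model), AUROC and AUPRC can each be computed from predicted risks and observed outcomes in a few lines, alongside a crude calibration check:

```python
# Toy predicted risks and observed outcomes (illustrative only).
y_true = [0, 0, 1, 0, 1, 1, 0, 1, 0, 0, 1, 0]
y_prob = [0.1, 0.3, 0.8, 0.2, 0.6, 0.9, 0.65, 0.7, 0.2, 0.1, 0.5, 0.3]

pos = [p for p, y in zip(y_prob, y_true) if y == 1]
neg = [p for p, y in zip(y_prob, y_true) if y == 0]

# AUROC: the chance a randomly chosen true positive is scored above a
# randomly chosen true negative (ties count half).
auroc = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg) / (len(pos) * len(neg))

# AUPRC (average precision): precision recorded each time a positive is recalled,
# walking down the ranking from highest to lowest predicted risk.
tp = fp = 0
precisions = []
for p, y in sorted(zip(y_prob, y_true), reverse=True):
    tp += y
    fp += 1 - y
    if y:
        precisions.append(tp / (tp + fp))
auprc = sum(precisions) / len(pos)

# Crude one-bin calibration check: mean predicted risk vs. observed event rate.
mean_pred = sum(y_prob) / len(y_prob)
event_rate = sum(y_true) / len(y_true)
print(f"AUROC={auroc:.3f} AUPRC={auprc:.3f} "
      f"mean predicted={mean_pred:.2f} observed rate={event_rate:.2f}")
```

In practice these come from standard libraries (for example, scikit-learn's `roc_auc_score` and `average_precision_score`), and calibration is assessed across bins of predicted risk rather than one. The point is that none of these numbers is exotic, and clinicians can reasonably demand all of them for any deployed model.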
These are requisites. No model should be implemented in clinical care without the clinicians knowing these data, or having ready access to them if they would like.

The next major challenge is the lack of harmonization and interoperability that's been mentioned already. If we look at how electronic data are conveyed across different systems, it's amazing what standards underlie things like sending an email. It's also amazing how all of that breaks down at the application level when we talk about electronic health records. Communication between two disparate EHRs is tremendously difficult. There's a lot of noise, there's a lack of currently deployed data standards, and governance is a tremendous challenge. This is just an example, and it's a little small to read here, but we dealt with this all the time when we were scraping the data for REMAP-CAP and had innumerable logic checkpoints built in so we could capture noise like this patient, who is simultaneously on high-flow nasal cannula and a ventilator at the same moment in time. Humans can sort through that very easily: you go to the notes, you maybe talk to a person at the bedside, and you know, based on how the workflow is documented and common sense as a clinician, how to figure out what support that patient was receiving at that point in time. AI does not have common sense yet, although people are working on that actively. With respect to data harmonization and interoperability, there are some very exciting mandates coming into place at the federal level that are going to spur the vendors into supporting more seamless exchange of healthcare data.

Finally, the lack of formal education in this space. There's a nice paper that came out, with Dr. Celi as the senior author and Liam McCoy as first author, looking at what medical students actually need to know about artificial intelligence. And I love this analogy, because they compared AI to interpreting a T2 MRI.
They said MDs need to know when to use AI, just as you need to know when to order an MRI even if you're not a radiologist. They need to know how to interpret AI, just as you need to know how to interpret an MRI even if you're not a radiologist. And they need to know how to explain AI, both to patients and to their colleagues, in the context of the care that they're providing.

Looking at the inverse of formal education, and thinking about what the formal education has been so far for some of the most advanced AI models in practice now, I want to look back at ChatGPT and this paper that came out just last month, in December, showing that it can pass the USMLE. It scraped by. It wasn't exactly at the top of its class, just barely passing in these studies, but it did pass.

Quickly, the future of AI in PCCM. There's a whole bunch of things that I thought I could talk about here, but I wanted to focus on just one, which is the folks at the Allen Institute for AI, associated with the University of Washington, who are using the same types of natural language models that ChatGPT is built around to provide ethical insights on certain questions. You can see on the right there, I asked, should antibiotics be prescribed to patients with a greater than 1% probability of having sepsis? This model says they should. I also asked, is it okay to place a tracheostomy in a patient who has been declared brain dead if the parents request the placement? This model said it's okay. So, some more work to be done.

In summary, AI is poised to advance the field of PCCM. Challenges exist, but I strongly believe they're going to be overcome. AI may eventually help PCCM clinicians work through some of our field's most difficult issues. And I want to thank you for the opportunity to be here today. Thank you.
Video Summary
In this presentation, Chris Horvat, a pediatric intensivist and clinical informatician, discusses the use of artificial intelligence (AI) in pediatric critical care. He begins by clarifying the distinction between AI and machine learning, explaining that AI can be based on machine learning or hard-coded rules. Horvat highlights the potential applications of AI in pediatric critical care, including assisting with diagnosis and treatment, predicting complications, supporting decision-making, and optimizing resource allocation. He emphasizes the importance of considering the ethical and legal implications of using AI in the field, as well as the need to view AI as an augmentation of human intelligence rather than a replacement. Horvat also discusses challenges such as interpreting AI performance, lack of harmonization and interoperability, and the need for formal education in AI for healthcare professionals. He concludes by noting the future potential of AI in providing ethical insights to complex medical questions.
Asset Subtitle
Professional Development and Education, Pediatrics, 2023
Asset Caption
Type: one-hour concurrent | Artificial Intelligence: Challenges and Opportunities for Critical Care (SessionID 1228616)
Keywords
artificial intelligence
pediatric critical care
machine learning
diagnosis and treatment
complication prediction
resource allocation optimization