Deep Dive: An Introduction to AI in Critical Care ...
Q&A/Panel Discussion
Video Transcription
I have lots of questions, but I want to hear yours. I have a question that might not be directly related, but I'm really concerned about what happens when the technology becomes so big in terms of collecting and analyzing all this data. I'm worried about people who will use it for dangerous purposes, like hackers who might get into patients' data and tamper with it, or feed us the wrong answers. Is that something that's being considered, or am I overthinking this? To be fair, I've heard this before: as technology gets more sophisticated, we're going to have more and more sophisticated hackers and bad actors. So it's a great question and observation. One of the things we forget as clinicians, because we get so excited about an algorithm, say at a trade show where somebody shows us a great predictive algorithm we want to try out, is that you need to involve everybody on your team, and that includes your security team, your cybersecurity group. They're the ones who will help you protect the integrity of whatever tools you're using and ensure that hackers can't get in. So it's really important to involve them with any technology you're thinking about using; they should be one of your first phone calls, and they should be part of the group that evaluates that technology before you purchase it or get involved with it. Your electronic health record goes through a pretty rigorous cybersecurity evaluation, but that's exactly why we have cybersecurity experts: they're looking for the holes and the ways hackers can worm into your systems and create havoc.
So I think cybersecurity is an ongoing problem, not just because of AI, and people are really thinking about the disruption and how to anticipate it better. I feel like the human-targeted attacks are still the most common: if I send you an email and you download some weird attachment, you now have a worm traveling through your system. That seems to be one of the most common vectors, even before any AI-enabled attack. Definitely with AI there are more realistic ways of phishing, because you can clone people's voices, you can alter images, you can be quite disruptive. But as of today, I'm pretty sure most of us are on email systems where the IT department keeps sending test phishing emails to teach you not to click on something like that. That remains the most vulnerable point. And corruption of data has actually been used intentionally. If you think about the ongoing discourse around foundation models, people say, and I agree, that we're stealing from artists and the general public the data we contribute to our collective society. Some artists who have had their voices cloned, with no idea it was happening and no payment, are deliberately corrupting their images so that they cannot be used to train large language models. So it goes both ways, but it's very, very possible; technically, it's very possible. Any other questions? Yes. So we think about ChatGPT and all these LLMs as something that has analyzed published data so far, but I think the important thing is also being able to look at our current EMR in real time and come up with diagnostic answers and alerts, which has already started; for example, in Epic it will give you BPAs, best practice advisories: hey, this is what is going on, and you need to do this.
I think that when the whole thing combines, and medical decision making going forward is coupled with what to do about the prediction, that will be the next step. I don't know how long it will take, or whether any EMR is way ahead on that; Epic is the largest EMR here, I guess. That's a very good question, and that is exactly what we were discussing in our focus group: even if a model makes a very good prediction, what do we do with that prediction? I totally agree with you; where the value is, and where we need to go, is not just making predictions but having actionable intelligence, knowing what we need to do for this patient at this point. Especially in an ICU setting, where things evolve very fast, why wait for the patient to decompensate? Why can't we know what to do for this patient right this second, so that the patient stays stable and doesn't get worse, or, if the patient is worse, gets better? I'm not aware of any EMR that has that capability right now, but I know that many labs across the country, including ours, are working on algorithms and technologies specifically to answer these questions: what should we do for the patient in front of us right now? And as a follow-up: there are many reports of dermatology and diagnostic radiology computer systems already doing better, based on machine learning and AI. I wonder if that is going to be an answer to our perennial shortage of physicians and clinicians. Perhaps we will not need as many because of this utility. Well, I'll tell you, I was a radiology resident in 2016 when Geoffrey Hinton, who won the Nobel Prize last year, said we should stop training radiologists because we were going to be replaced in six years. And we always joke that he quit his Google job before we were replaced.
It's a great time to be a radiologist, I'll tell you that right now. When we look at real-world deployment, not simulations, AI for radiology is not better than radiologists. It saves you maybe ten seconds, which is not even enough time to get coffee. As for the gap, if you look maybe two or three weeks ago, Eric Topol and Pranav wrote an editorial saying that maybe what we need to do is separate the two: have AI work autonomously and have radiologists work autonomously, because when we bring them together, they don't do well. In fact, you can make a good radiologist worse, because of automation bias. What people in those scenarios suggest is: why don't you have the AI read the normal studies and have the radiologists read the abnormal studies? It's very easy to create a normal-versus-abnormal algorithm today without much technological advance, but disease-specific algorithms are very difficult. So the threat I see to our profession comes from the changing nature of work. Today most radiologists are reading from home. They read more because they're not driving all over the place. But then there are the long wait queues, so you're finding more triage algorithms coming up. They may not be as good, but if they can be roughly equal and people really need answers immediately, you'll see people rely on algorithms. There's also a lot of advocacy in this country. It's not because the technology is going to be better: no radiologist, except maybe in breast imaging, reads just one modality or one disease. You're reading fifty different kinds of studies in an hour, and it's very difficult today to develop AI that way. Although the foundation models are here, so we don't know. You have a follow-up question? Okay.
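The normal-versus-abnormal triage idea mentioned above can be sketched in a few lines. Everything here is hypothetical for illustration: the threshold, the study identifiers, and the scoring function stand in for a real FDA-cleared model and its operating point.

```python
# Hypothetical sketch of the normal/abnormal triage workflow described above:
# confident-normal studies go to an auto-report queue, everything else to a radiologist.

NORMAL_THRESHOLD = 0.05  # assumed operating point: max abnormality probability to auto-report

def triage(studies, abnormality_score):
    """Split studies into an auto-report queue and a radiologist worklist."""
    auto_report, worklist = [], []
    for study in studies:
        if abnormality_score(study) < NORMAL_THRESHOLD:
            auto_report.append(study)   # AI handles the (likely) normal studies
        else:
            worklist.append(study)      # radiologist reads everything else
    return auto_report, worklist

# Toy scoring function standing in for a real model's output.
scores = {"cxr-001": 0.01, "cxr-002": 0.62, "cxr-003": 0.03}
auto, todo = triage(list(scores), scores.get)
print(auto)  # ['cxr-001', 'cxr-003']
print(todo)  # ['cxr-002']
```

The whole design question raised in the discussion lives in that one threshold: set it too high and abnormal studies are auto-reported as normal, which is exactly the failure mode that keeps a radiologist in the loop.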
In my hospital, a small children's hospital in Omaha, Nebraska, we have already been discussing a commercial program that reads long bones, and apparently it has been FDA-approved for children. So we are looking at implementing that system for nighttime, because apparently radiologists don't like to work at night; this is one way to cover it, and if there are questions, another radiologist can be woken up to give us a read. So it is moving in that direction. Yeah. So I think the nature of work for radiologists is changing a lot. I'll tell you that most people prefer to read from home. Radiologists are okay working at night because of the different geographies where they live: they can be in daytime in a different part of the world and still work, and that's the flexibility of home workstations. When it comes to some studies, especially pediatric ones like bone age and long bones, they're actually quite boring, so there's also the question of cognitive interest; sometimes only two or three people read those types of studies. When it's one task, it's easy. But when you're looking at a whole myriad of tasks, which is what I was explaining to my group, in this year of agents you want fifty agents, each looking for pneumonia or something else, and maybe that reality is going to come into place. But where people have seen the most gains in radiology has been dictation: I should just be able to say, I think this is a normal study, and the AI should pick that up, fashion an acceptable report, and bring it to me. With that alone, people have read almost a hundred more chest x-rays in a shift. I am pro-radiologist.
Yeah, so one thing we didn't talk about, for the people here who are also in teaching institutions, is what the disruption from these AI systems is going to be. I was reflecting with my group that we are always trying to be faster and faster, and maybe that's the problem: if you can read your study in three minutes, I just don't know how much more juice you can keep squeezing out to read it in one minute, or whatever turnaround you want, and I think this constant chasing of efficiency is eventually going to flip in a negative way. Where AI could help is that the histories radiologists get are terrible; they're like one sentence, which is nothing, and some people are trying to build word clouds because radiologists cannot go through the whole EMR to figure out what's going on. People are asking whether an AI could provide a more accurate history for that section, so the radiologist can answer your question with more specificity, or whether you could create a word cloud that makes it easy to see at a glance, oh, I see you have cancer here, so they're not giving you fifty differentials that have no utility. And in terms of AI-assisted report generation, people are also looking at priors, so that when you dictate, if there's a big contradiction with a prior study, it gets flagged for you. It's a better system, not because it replaces the eye, but because of the workflow element. I just wanted to announce again that the SCCM Datathon takes place on July 18 to 19.
You can register on the SCCM website. Typically it's the same team that you will be interacting with during the Datathon, and I would say that during the last Datathon all the teams were able to submit an abstract that got accepted, so the work they put in in one day was enough to produce the beginnings of a good project. To me this is really the way we should be educating, which brings me to a question I wanted to pose to the audience of participants as well as the panelists: how do we train the next generation of intensivists and clinicians in the ICU? I always say that if the question you ask can be answered by ChatGPT, it's not a good question, because the student is just going to ask ChatGPT without thinking about it. So how do you redesign your curriculum so that you can leverage the power of AI but at the same time still exercise students to develop critical thinking? That's an open question; I don't know the answer. I'm actually advising the AMA on what medical education should look like, but I can't rely on just my own input; I'm trying to gather input from different parts of the country, different parts of the world, and different specialties. So if you have any answers, I want to hear them. Yes, do you want to go to the microphone? You're young, you can run to the microphone.

While he's doing that, I'm going to add another plug for the Datathon. One of the things I'm working on, it's not official so it's not on the website yet, but I'm feeling pretty confident we'll be able to add a new dataset there, is simulating combat casualty care. We run high-fidelity simulations with medics wearing helmet cameras and microphones, and they go through realistic scenarios with mannequins and actors doing combat casualty care: tourniquets, stabilization, establishing airways, things like that. We build computer vision models on those recordings to determine when procedures are done, the status of injury patterns, and so on. So I think there's a good chance this will be the first Datathon that brings in a computer vision modeling capability, which we haven't done before.

Hello. Okay, so in our group earlier, one of the studies we read was on whether LLMs were better than physicians. The way they set up the study was to separate into two arms: one was just the LLM with some kind of prompt engineering, and the other was the physician using the LLM. Whenever you use an LLM, there are certain ways to do it, and that field is called prompt engineering; there are multiple methods within prompt engineering that I can use to get a certain response from an LLM. If you give a poor question to an LLM, just as with a person, you'll get a bad response, and that might affect the outcome. What they did in the study was use a technique called few-shot prompting, where you give examples to the LLM. So if I say something like, a blood pressure below 5 is bad and a blood pressure above 5 is good, and then I give it a task like, Sally has a blood pressure below 5 and Tom has a blood pressure above 5, it will say Sally is bad and Tom is good, following whatever I defined earlier. One issue with the study was that they only used one prompt engineering technique, and it never mentioned whether the physicians were trained in these techniques. What the study concluded was that the LLM in conjunction with the physician actually performed worse than the LLM alone. So you could conclude that the LLMs are better than the physicians in this case, but if the physicians actually learned prompt engineering techniques, they could probably perform better than the LLM itself, because their domain knowledge can be used in conjunction with the prompt engineering. If you teach physicians how to prompt-engineer an LLM, it's very likely they'll get a much better result, and that way they can actually use the AI for what it's intended to be used for, which is saving time, money, and energy, rather than clouding the AI with poor prompting.

Okay, so I guess prompt engineering should be part of the curriculum for medical students and clinical trainees. We have five more minutes left, and I'm just going to go through some wrap-up slides; you've heard a lot of this in the last four hours. We wanted to hear back from you on how we can improve this course, because we were given four hours, so we were very constrained in what we could include, but we really wanted to have more hands-on workshops, and you will get that at the SCCM Datathon. We are now also organizing all sorts of events. For example, we have an event where we're going to perform a real-world evaluation of language models for counseling patients with mental health issues: we're inviting patients with depression, and we're inviting social workers, psychologists, and psychiatrists to stress-test certain language models. Are they giving you correct answers? What is the rate of hallucinations for this particular version of the LLM? We also have exercises to develop critical thinking. Those are part of what we refer to as the health datathon events that we're increasingly organizing throughout the country across different specialties, and we need your help; we are very limited in terms of bandwidth, so we need to train more of us so that you can help us truly scale this education.

Some of the most important lessons learned from today: it's very important that you understand the backstory of the data, and the backstory of the data truly refers to the social patterning of the data generation process. Again, we will share the slide decks; watch out for announcements on how you can get copies of the slides we presented today. This is the summary of the sources of bias. The bias could be due to problems with the practice of medicine itself: one assumption we make when modeling is that routine care is being administered to all the patients. One of the papers we have just published as a preprint looks at the frequency of mouth care and the frequency of turning, and finds, not surprisingly, that some patients are turned more often and some patients' mouths are cleaned more often in the intensive care unit. Those are the labels we would like to use when we do fairness evaluation: let's look at the patients whose mouths were cleaned least often in the ICU and make sure our algorithms are not going to give them the wrong diagnosis or the wrong treatment recommendation. Then the bias could come from measurement bias in medical devices and equipment; we published a review two years ago of eight devices in use in the intensive care unit that we know are biased across different patient demographics. And of course Judy went over shortcut features, which I think is one of the biggest challenges in the way we build foundation vision models. I also mentioned, or maybe I did not, that we should focus on the last mile of model development: when you have a model that is 95% accurate, of course we're going to be happy, but what happens to the 5%? These are the people who will be given the wrong diagnosis and the wrong treatment recommendations, and we need to worry about them, because I think everyone is convinced that if we roll out AI the way we are doing now, it's really going to augment health disparities at scale. So what we're doing now, rather than trying to explain the model, is trying to explain the errors: we compare the true negatives with the false negatives, and the true positives with the false positives.

And then, you've heard me say this earlier: we need to stop building models and start designing human-AI systems, because we cannot predict how the humans, the clinicians, the patients will behave differently when given a prognostication or a recommendation by AI. I'm going to close with this slide, based on a paper about how we achieved auto safety from 1923 to the present. It wasn't just about designing safer cars; it incorporated regulatory frameworks and public health advocacy, and I think that's what we need: not just developing safer models, but really coming up with better safeguards to make sure they are deployed equitably. With that, I'm going to end the course. I would like to thank the panel of faculty who joined us today, and I would like to thank you for coming. We would like to train you to become facilitators of future AI courses for SCCM, so please contact us if you have any questions or feedback. Thank you very much.
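The few-shot prompting technique described in the audience discussion above can be sketched as follows. The blood-pressure rule and the prompt layout come from the speaker's toy example; `call_llm` is a hypothetical stand-in for whatever chat API you actually use.

```python
# Hypothetical sketch of few-shot prompting: prepend worked input/output
# demonstrations so the model answers a new case in the same format.

EXAMPLES = [
    ("Sally has a blood pressure below 5.", "Sally is bad."),
    ("Tom has a blood pressure above 5.", "Tom is good."),
]

def build_few_shot_prompt(examples, new_case):
    """Assemble a prompt: rule, then demonstrations, then the new query."""
    lines = ["Rule: blood pressure below 5 is bad; above 5 is good.", ""]
    for case, answer in examples:
        lines.append(f"Input: {case}")
        lines.append(f"Output: {answer}")
    lines.append(f"Input: {new_case}")
    lines.append("Output:")
    return "\n".join(lines)

prompt = build_few_shot_prompt(EXAMPLES, "Ana has a blood pressure above 5.")
print(prompt)
# The assembled text would then be sent to a model, e.g.
#   response = call_llm(prompt)   # call_llm is hypothetical
```

The speaker's point maps directly onto this sketch: the study fixed one set of demonstrations, whereas a clinician with domain knowledge could choose more representative examples and a better rule statement.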
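The error-focused fairness evaluation described in the closing remarks, comparing false negatives across patient subgroups such as those who received routine care least often, can be sketched as a per-subgroup false-negative-rate comparison. The records below are invented purely for illustration.

```python
# Hypothetical sketch: compare false-negative rates across patient subgroups
# instead of reporting one overall accuracy number.

def false_negative_rate(records):
    """FNR = false negatives / all actual positives, within one subgroup."""
    positives = [r for r in records if r["label"] == 1]
    if not positives:
        return 0.0
    fn = sum(1 for r in positives if r["pred"] == 0)
    return fn / len(positives)

# Invented records: label = true condition, pred = model output,
# group = frequency of routine care received (a proxy for clinical attention).
records = [
    {"group": "frequent_care",   "label": 1, "pred": 1},
    {"group": "frequent_care",   "label": 1, "pred": 1},
    {"group": "frequent_care",   "label": 0, "pred": 0},
    {"group": "infrequent_care", "label": 1, "pred": 0},
    {"group": "infrequent_care", "label": 1, "pred": 1},
    {"group": "infrequent_care", "label": 0, "pred": 0},
]

for group in ("frequent_care", "infrequent_care"):
    subset = [r for r in records if r["group"] == group]
    print(group, false_negative_rate(subset))
# frequent_care 0.0
# infrequent_care 0.5
```

A gap like the one in this toy data (0.0 vs 0.5) is what the speaker means by "explaining the errors": the model's overall accuracy can look fine while its missed diagnoses concentrate in one group.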
Video Summary
In this discussion, concerns were raised about the potential misuse of technology, particularly in healthcare, where hackers could tamper with patient data. The importance of involving cybersecurity teams during the implementation of new technologies was emphasized. There was a conversation about the role of AI in medicine, especially in real-time medical decision-making, and the capabilities of AI to process data and provide actionable insights in intensive care settings. The potential of AI to alleviate physician shortages and improve efficiency in areas like radiology was considered, though concerns about AI's current limitations and its impact on radiologists' roles were noted.

The importance of prompt engineering to optimize AI's utility was discussed, suggesting it should be integrated into medical education. The discourse also included AI's disruption in teaching institutions and emphasized designing human-AI systems to avoid exacerbating health disparities. Overall, the session underlined a multidisciplinary approach to AI integration in healthcare, emphasizing technological safety, regulatory frameworks, and public health advocacy.
Keywords
cybersecurity
AI in healthcare
medical decision-making
radiology
prompt engineering
health disparities
Society of Critical Care Medicine
500 Midway Drive, Mount Prospect, IL 60056 USA
Phone: +1 847 827-6888
Fax: +1 847 439-7226
Email: support@sccm.org
© Society of Critical Care Medicine. All rights reserved.
The Society of Critical Care Medicine, SCCM, and Critical Care Congress are registered trademarks of the Society of Critical Care Medicine.