AI to Improve Productivity at Work
Video Transcription
My name is Teresa Boron. I'm the LearnICU Program Development Manager at the Society of Critical Care Medicine in Mount Prospect, Illinois, and I will be moderating today's webcast. A few housekeeping items before we get started. There will be a question-and-answer session at the end of this presentation. To submit questions throughout the presentation, type into the question box located on your control panel. Please note the disclaimer stating that the content to follow is for educational purposes only. And now I'd like to introduce your speakers for today. Ankit Sakhuja is Director of Clinical Informatics Research in the Division of Data-Driven and Digital Medicine and Principal Investigator of the Augmented Intelligence in Medicine and Science Lab at the Icahn School of Medicine at Mount Sinai in New York City. Kaitlin Alexander is Clinical Associate Professor at the University of Florida College of Pharmacy in Gainesville, Florida. And now I'll turn things over to our first presenter. Thank you, TJ. Thank you, everybody, for joining in here today. I think AI is an exciting topic, and it's very relevant in today's world, especially in the ICU. So the goal of my talk today is to discuss the basic principles of AI tools and algorithms and to explore current applications of AI tools in critical care. So the first question is, why do we care about AI? We care about AI because AI is all around us. Many appliances in our houses these days are powered by AI. When you use your face to unlock your phone, you're using AI technology. We use AI for navigational apps, text editors, virtual assistants; you name it, AI is part of our daily lives. The next question then becomes, what is AI? Very simply, AI is the science of making intelligent machines. Now, this gets us into a bit of a rabbit hole, because then what is intelligence? Simply put, intelligence is the ability to learn from experience and apply this learned behavior to deal with new situations.
It has many elements to it, as enumerated on the slide here. So, very simply speaking, we can say that AI is the ability of a computer to think, learn, and simulate human mental processes, and the ability to independently perform complex tasks that once required continued human input. Now let's delve a little deeper into AI. There are many different ways to classify artificial intelligence. There are three big basic types of AI that we'll talk about today: the first one is machine learning, the second one deep learning, and then natural language processing. What is machine learning? Machine learning is the technology that allows computers to learn by themselves without the need for explicit programming. Now, it doesn't mean we don't have to program at all. We still have to program some, but we do not have to give explicit instructions. This will become clearer with some examples. Broadly speaking, there are three types of machine learning algorithms out there: there is supervised learning, there is unsupervised learning, and then there is reinforcement learning. Let's look at an example of supervised learning. Say we have a basket of fruits with some number of apples, oranges, and bananas, and we want the computer to sort these out for us. One way to do that is to tell the computer: this is what an apple looks like, this is what an orange looks like, this is what a banana looks like. Now, mind you, we have not given any explicit instructions to the computer that the apple is red, or about the shape of the apple, or the texture of the apple, or the same for the orange or the banana; we have just given these broad labels as to what each fruit looks like. And based on this, using supervised machine learning, the computer can learn to sort out this basket of fruits. Now, the magic really happens when this trained computer is presented with a different basket of fruits, and it can still sort them out nicely.
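The fruit-sorting idea can be written down as a tiny supervised classifier: train on labeled examples, then label a new, unseen basket. A minimal nearest-neighbor sketch in Python; all the features (weight in grams, a made-up color score) are invented for illustration:

```python
# Minimal supervised learning sketch: labeled training examples, then
# classification of new fruit the model never saw. Data is invented.
TRAINING_DATA = [
    ((150, 0.9), "apple"),
    ((160, 0.8), "apple"),
    ((130, 0.6), "orange"),
    ((140, 0.5), "orange"),
    ((120, 0.2), "banana"),
    ((115, 0.1), "banana"),
]

def classify(fruit):
    """Label a new fruit by its nearest labeled training example (1-NN)."""
    def distance(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    _, label = min(TRAINING_DATA, key=lambda ex: distance(ex[0], fruit))
    return label

# A "different basket" -- none of these points appear in the training data:
new_basket = [(155, 0.85), (135, 0.55), (118, 0.15)]
print([classify(f) for f in new_basket])  # ['apple', 'orange', 'banana']
```

Note that we never told the program that apples are red or heavy; we only supplied labeled examples, and the labels for new fruit follow from similarity to those examples.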
Now, a very simple example of supervised learning is regression algorithms. We all know regression from statistics. As you know, the goal of regression is to find the best-fitting relationship, which is nothing but a mathematical model that describes how independent variables influence the dependent variable; for example, how the degree of disease acuity affects the length of stay in the hospital. So all of this is an example of supervised machine learning. Another example of supervised machine learning is decision trees. Decision trees create splits along variables that allow for the best discrimination between classes. So, for example, let's say I want to go play golf where I live. The first thing I'm going to look at is how the weather looks. If the weather is cloudy, maybe where I live that's good enough for me to go out and play golf. But if it's sunny, then I need to see how humid it is. If it's too humid, I can't play; if the humidity is not that bad, I can go out and play. If it's rainy, then I need to see how windy it is. If it's too windy, I can't play; but if it's not that windy, I may still be able to play. So, in a way, this is creating a decision tree for me to decide when is a good time to go out and play golf according to the weather. Now, what if I go to a different city and still have to decide whether to play golf or not? This decision tree is very specific to the place I live. It might be that if I go to a different city, say Seattle, where when it rains, it pours, it may not be a good time to play. So, in a way, this model would be very overfit to my current conditions. How do we counteract that? We can develop these decision trees using different splits of the data, create multiple of these decision trees, and take an aggregate of the result. This is a technique called random forests.
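The golf decision tree described above is just nested conditionals, which makes it easy to sketch directly in code. The exact thresholds (70% humidity, 20 mph wind) are invented, since the talk only describes the splits qualitatively:

```python
def should_play_golf(outlook, humidity=None, wind_mph=None):
    """Decision tree from the talk: split on outlook first, then on
    humidity (if sunny) or wind (if rainy). Thresholds are invented."""
    if outlook == "cloudy":
        return True                 # good enough where the speaker lives
    if outlook == "sunny":
        return humidity < 70        # too humid -> stay home
    if outlook == "rainy":
        return wind_mph < 20        # too windy -> stay home
    raise ValueError(f"unknown outlook: {outlook}")

print(should_play_golf("cloudy"))               # True
print(should_play_golf("sunny", humidity=85))   # False
print(should_play_golf("rainy", wind_mph=10))   # True
```

A random forest, in this picture, would fit many such trees on different subsets of the data (each ending up with slightly different thresholds) and take a majority vote, which is what counteracts the overfitting the speaker describes.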
Another way to do it is by sequentially training these decision trees, where each next tree learns from the mistakes of the previous tree and so continues to get better. This is something called gradient boosting. An example of these tree-ensemble approaches is the MUST-Plus classifier, which was developed by investigators at Mount Sinai a few years back to identify inpatients with malnutrition. What they did was take all the inpatients in the hospital around that time and use a random forest classifier to see if they could identify patients who will have malnutrition better than the prevailing metric at the time, MUST (the Malnutrition Universal Screening Tool). And what they found was that MUST-Plus did, in fact, perform much better than the MUST metric. Now, another type of machine learning is unsupervised learning. So let's go back to our basket of fruit. We again want the machine, the computer, to subgroup these fruits, but this time we are going to get a little sneaky and we are not going to tell the computer what's an apple, what's an orange, what's a banana. The machine will still figure it out based on the shape, color, texture, and other features it might think are relevant, and will still be able to come up with this classification. But this time it won't be able to tell us which group is apples, which is oranges, and which is bananas, because we never told it that. It did it in an unsupervised way, where we didn't tell it what to look for or what the outcome labels are. An example of this is using machine learning to identify subgroups or subphenotypes of various disease entities: to find patients that may respond differently to different medications, or to understand different pathophysiologies and heterogeneities in disease. For example, acute kidney injury is, as we all know, very common in patients with sepsis.
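The unsupervised version of the fruit basket can be sketched with a minimal k-means clustering loop: the algorithm only sees unlabeled feature points and discovers three groups on its own, with no names attached. The features and starting centroids below are invented for illustration:

```python
# Unsupervised sketch: group unlabeled fruit by (weight, color) alone.
# No labels are ever given -- the algorithm just finds the groups.
def kmeans(points, centroids, rounds=10):
    for _ in range(rounds):
        # Assignment step: attach each point to its nearest centroid.
        clusters = [[] for _ in centroids]
        for p in points:
            nearest = min(
                range(len(centroids)),
                key=lambda i: sum((a - b) ** 2 for a, b in zip(p, centroids[i])),
            )
            clusters[nearest].append(p)
        # Update step: move each centroid to the mean of its cluster.
        centroids = [
            tuple(sum(v) / len(c) for v in zip(*c)) if c else centroids[i]
            for i, c in enumerate(clusters)
        ]
    return clusters

fruit = [(150, 0.9), (160, 0.8), (130, 0.6), (140, 0.5), (120, 0.2), (115, 0.1)]
groups = kmeans(fruit, centroids=[(150, 0.9), (130, 0.6), (120, 0.2)])
print([len(g) for g in groups])  # [2, 2, 2] -- three groups, no names
```

This mirrors how subphenotyping studies work: the clusters come out of the data, and it is up to the investigators to then characterize what each subgroup means clinically.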
So in our lab a few years back, our group got interested in trying to understand whether all these patients with AKI after sepsis are very similar or different. And using unsupervised machine learning, what our group found was that there were, in fact, three different subphenotypes, which differed not only in clinical characteristics but also in outcomes. And this is how you'll see unsupervised learning being used across multiple streams and multiple diseases. Now, moving on to reinforcement learning. This is a more, I would say, up-and-coming machine learning technique where the algorithm learns by trial and error. So what does that mean? For example, this little toddler's parents want the toddler to take steps and walk from the table to the chair. So what's going to happen? The toddler is going to try to take a step. The parents will clap, which gives the toddler positive reinforcement. So the toddler will take another step. And if the toddler falls, it gets a boo-boo, which is negative reinforcement. The next time he or she tries to take a step, they'll be more careful so they don't fall. And when they don't fall, the parents give accolades again, which is positive reinforcement. And this is how we all learn to walk, right? This is very similar to what reinforcement learning does: if the machine suggests or takes an action that is going to lead to an outcome we desire, we program positive rewards into the machine. And if it is going to lead to outcomes we do not desire, then we give negative rewards to the machine. And by this trial-and-error method, the machine learns to take much better actions. Real-world examples of this are computer chess engines that can beat humans, and a lot of autonomous vehicles use a very similar technology. Now, moving on to the next aspect of artificial intelligence: deep learning.
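The toddler story maps directly onto tabular Q-learning, one standard reinforcement learning algorithm. In this toy sketch a "toddler" at positions 0 through 3 must reach the chair at position 4; reaching it earns +1 (applause), stepping off the left edge earns -1 (a fall), and everything else earns 0. For simplicity the training loop tries every action from every state on each sweep instead of exploring randomly; all the numbers (rewards, learning rate, discount) are invented:

```python
# Reinforcement learning sketch: learn, by trial and error, that walking
# right toward the chair is the best action from every position.
GOAL = 4
ACTIONS = {"left": -1, "right": +1}

def step(state, action):
    """Environment: returns (next_state, reward, episode_done)."""
    nxt = state + ACTIONS[action]
    if nxt >= GOAL:
        return nxt, 1.0, True    # positive reinforcement: applause
    if nxt < 0:
        return nxt, -1.0, True   # negative reinforcement: a fall
    return nxt, 0.0, False

def train(sweeps=200, alpha=0.5, gamma=0.9):
    Q = {(s, a): 0.0 for s in range(GOAL) for a in ACTIONS}
    for _ in range(sweeps):
        for s in range(GOAL):
            for a in ACTIONS:
                nxt, reward, done = step(s, a)
                future = 0.0 if done else max(Q[(nxt, b)] for b in ACTIONS)
                # Nudge the action's value toward reward + discounted future.
                Q[(s, a)] += alpha * (reward + gamma * future - Q[(s, a)])
    return Q

Q = train()
policy = {s: max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(GOAL)}
print(policy)  # {0: 'right', 1: 'right', 2: 'right', 3: 'right'}
```

Game-playing agents and autonomous-vehicle controllers use the same idea at vastly larger scale, with neural networks standing in for this small Q table.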
Deep learning is built on the concept of the artificial neuron. Just like a biological neuron takes inputs and processes them, an artificial neuron, or perceptron as it is called, takes inputs, processes them, and gives us outputs. Now, if we look at this artificial neuron more closely, it is nothing more than a mathematical function. The neuron is going to take these inputs, and each input is then multiplied by a weight. This weight is just the perceived importance or contribution of each input to the neuron's decision. Initially this is all done randomly, so multiple iterations of training need to be done for this neuron to get better at making predictions about the outcome we want. So here, in this case, the second input becomes the most important one, and the third one the least important, at least for this iteration of training. Now, these weighted inputs are added together to form a weighted sum. And this weighted sum is then compared against a pre-decided threshold, something called an activation function, to see whether the neuron should even bother firing or not. If the weighted sum is higher than this pre-decided activation threshold, the neuron will fire. And the activation function not only decides that, but also has information encoded about how the neuron should fire. Should the output be between 0 and 1? Should it be between minus 1 and 1? Should it be the actual value? Should it be a probability distribution? And so on and so forth. These all depend on the task the neuron is supposed to perform. Now, the next step up from this artificial neuron, or perceptron, is the multilayered perceptron, which is just multiple layers of these perceptrons. It usually has an input layer, a hidden layer, and an output layer. The hidden layer consists of additional neurons that help process and analyze the information.
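The weighted-sum-plus-threshold mechanics of a single perceptron fit in a few lines. The weights below are invented to mirror the slide's example, where the second input matters most and the third least, with a simple step activation:

```python
# A single artificial neuron (perceptron): weighted sum of inputs,
# then a step activation decides whether it "fires". Weights invented.
def neuron(inputs, weights, threshold=1.0):
    weighted_sum = sum(x * w for x, w in zip(inputs, weights))
    return 1 if weighted_sum >= threshold else 0   # step activation

weights = [0.4, 0.9, 0.1]   # second input most important, third least
print(neuron([1.0, 1.0, 1.0], weights))  # 1: 0.4 + 0.9 + 0.1 = 1.4 >= 1.0
print(neuron([1.0, 0.0, 1.0], weights))  # 0: 0.4 + 0.1 = 0.5 < 1.0
```

Swapping the step function for a sigmoid, tanh, or softmax is exactly the choice of activation function the talk describes: output between 0 and 1, between -1 and 1, or a probability distribution.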
Each neuron in the hidden layer typically has a slightly different perspective and can focus on different aspects of the problem. Now, when we have two or more of these hidden layers, we call it a deep neural network. In medicine, there is another layer of complexity in the data: most of the data is time series data. In this temporal data, we have multiple time steps, and the outcome at each time step depends not only on the input at that time step, but also on what happened at previous time steps. Plain artificial neural networks, as it turns out, aren't very good at capturing this temporal relationship. And this is where we use special models in which a neuron can get input not only from the input layer, but also from itself from the previous time step, at every time step. This is something called a recurrent neural network. So at each time step, it incorporates input coming from the input layer and input from its own self at the previous time step. It has, in a way, a built-in memory. Now, there are models called LSTMs, which sometimes do a little better job of capturing these temporal relationships and have a longer-term memory built into them, and which you will see in certain studies out there. Transformer networks are, again, an extension of this, but beyond the scope of this talk. Now, another type of data that we frequently use in medicine is imaging data. And if you think about it, the way the computer sees an image is just as a matrix of numbers, broken down into tiny squares, or pixels. We use something called convolutional neural networks, or CNNs, to read them. Now, what does a CNN do? It takes these magnifying glasses, or filters, and scans small groups of pixels at a time, looking for specific patterns. And it will go through the entire image to do that.
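The recurrent neuron's "built-in memory" can be shown with a one-neuron sketch: at each time step the neuron combines the current input with its own previous output. The two weights are invented, and a real RNN would learn them, but the sketch demonstrates the key property that identical inputs produce different outputs depending on history:

```python
import math

# Recurrent neuron sketch: output at each step depends on the current
# input AND the neuron's own previous output (its memory). Weights invented.
def rnn_step(x_t, h_prev, w_x=0.8, w_h=0.5):
    return math.tanh(w_x * x_t + w_h * h_prev)

def run(sequence):
    h = 0.0                     # memory starts empty
    for x in sequence:
        h = rnn_step(x, h)      # new state folds in the old state
    return h

# Both sequences END with the same input (1.0), but the outputs differ
# because the neuron remembers what came before:
print(run([0.0, 1.0]))
print(run([1.0, 1.0]))
```

This is exactly why such models suit ICU time series: the prediction at hour 10 can depend on what the vitals were doing at hours 1 through 9, not just on the hour-10 snapshot.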
Now, after finding these patterns, the CNN shrinks down the information further to focus on just the most important features, by a process called pooling. There are multiple layers of these filters and pooling steps, and each layer builds upon the patterns discovered by the previous one, gradually creating a more complex understanding of the image. And then the final layer of the CNN connects all these learned features to ultimately make a final decision and say, for instance, whether that chest x-ray was normal or whether there was an infiltrate or an effusion on it. Now, moving on to natural language processing. Broadly speaking, the goal of natural language processing is to train algorithms to understand, interpret, and generate human language. That's a heavy lift, right? Now, if we look at how NLP has evolved: not too long ago, given these three sentences, "My dog's name is Tiger," "He has golden fur," and "He's a golden retriever," all that NLP would be able to do is count the words in each of these three sentences and group them together, something called a bag of words. Then it slowly evolved to capture semantic relationships, where it could understand that "golden" and "Tiger" are closer in meaning in these sentences. And then it evolved to understand that Tiger is actually the name and golden is the color of the fur, not the name of the dog. So you can see how NLP has evolved to capture these intricate details over time. Now, NLP finds a lot of applications. One of the most important is text classification, where it helps categorize text into predefined categories; your email spam filter is a perfect example of this. It's also used for natural language generation, and it's used for information retrieval. And, as it turns out, roughly two-thirds of data is unstructured.
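The bag-of-words representation mentioned above really is nothing more than word counting, which a few lines of Python make concrete using the talk's own dog sentences:

```python
from collections import Counter

# Bag-of-words sketch: the earliest NLP representation just counts
# words, with no sense of meaning, grammar, or word order.
sentences = [
    "my dog's name is tiger",
    "he has golden fur",
    "he is a golden retriever",
]

def bag_of_words(text):
    return Counter(text.lower().split())

for s in sentences:
    print(bag_of_words(s))
```

The limitation is visible in the output: the counts cannot tell that "Tiger" is a name and "golden" a fur color. Capturing that required the later semantic and contextual models the talk goes on to describe.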
As you can imagine, this has opened up a lot of possibilities. And then sentiment analysis, to understand the sentiment of a written sentence or paragraph, also falls under NLP. Very briefly, I want to talk about large language models, because they are prevalent everywhere; ChatGPT has totally changed the landscape. Large language models offer a very comprehensive approach to language tasks. They are the result of a seamless amalgamation of deep learning and the transformer architecture, which can handle sequential data very well while understanding the significance of each previous word and the word that comes next. They're trained on vast amounts of data, and not just a large amount but a variety of data. Through this very extensive training, they learn to understand, interpret, and make predictions based on the data. And they have the ability to adapt to a variety of different tasks, from information extraction to object recognition to answering questions and whatnot. And they are finding many applications in the realm of healthcare. In one of the papers published a few months back, authors from Stanford were able to use large language models to summarize parts of the patient chart and patient questions very accurately, even better than the clinicians. So LLMs certainly have huge potential going forward. Thank you. I'll go ahead and switch gears now to talk a little more about AI tools and their uses and potential applications with our learners, in our other role as educators, as faculty on the team and within critical care. What initially started me down this path of implementing AI into my experiential education curriculum with my learners was what Ankit just introduced to us: that AI is really all around us.
And especially with the onset of ChatGPT and our large language models, which are so readily available, I saw a lot of opportunity where AI could be very impactful and important to implement with our learners, and also be an important tool to help enhance their learning and the learning experience. Additionally, I think it's incredibly important that we recognize that AI literacy is a new competency that we need to integrate into our curriculums and with our learners, because they need the abilities to learn and work in this digital world. Ankit shared with us all of these clinical examples, right? Where AI is being integrated seamlessly into clinical care. And so we want to be able to prepare our learners for that workplace, to know and understand how AI and AI tools work. With that, they also need the abilities to interpret AI outputs, understand some of the inaccuracies or limitations of AI models, and, very importantly for us, know and understand the ethical and privacy concerns, particularly when it comes to patient care. Specifically for our healthcare professionals, I would say we also want to ensure that our learners understand workflow analysis of AI-based tools, some of those basics that we just went through; AI-enhanced clinical encounters and how AI can be utilized to improve clinical encounters with our patients; and, importantly, the critical evaluation piece, the evidence-based evaluation of AI outputs, along with the basic knowledge and understanding. And so there are a lot of benefits to incorporating AI into your experiential education and with your learners: to enhance efficiency of tasks, and I've also found it's a great tool to adapt learning experiences for your students, augment and enhance learning with them, and implement a variety of different pedagogical methods with your learners.
There are some risks, though, of course, to be aware of, such as hallucinations or inaccurate responses when you're utilizing chatbots or certain LLMs, which again emphasizes the importance of critically evaluating the output before implementation. And there's some dissent among faculty, I would say, about implementing AI for teaching and learning with students, or with assignments and activities, because of the potential for reduced critical thinking among learners. So I think we, as educators, really need to be intentional about how we are asking students or learners to utilize AI, in order to augment or enhance critical thinking but not replace it. And so as educators, we play an active role in teaching students how and when to use AI as we instill these best practices for AI-assisted learning. This is a quote from Mollick and Mollick, who, if you haven't read any of their papers or information on AI for teaching and learning, give a lot of great tools and examples of how we can implement AI and provide teaching on this AI literacy competency for our students. These are a few of the recommendations from Mollick and Mollick on how AI can be applied in education, and in my time with you today I want to talk through some examples of how I've implemented it within my teaching or on my rotation, or some examples from our college. So AI can be a great tool to help students get more feedback. Time is always a struggle, I think, for a lot of us, especially when we're in clinical roles and wearing multiple different hats: maybe teaching at the college, rounding on patients in clinical care, research. So we're not always able to give a ton of formative feedback, or at least I'm not, to my students immediately.
And so by building in utilization of AI, maybe they can start to get some of that feedback earlier in the process, by using AI as a mentor, a tutor, or a coach before bringing you an assignment, an activity, or a deliverable that they've been working on and getting more feedback from you. AI can also be utilized as a teammate, to help provide alternative viewpoints, maybe to simulate a debate-type situation, to have a student consider another side of a situation with a patient and work through that, simulate it, before coming to a final conclusion. And we can also teach our learners how to utilize AI to help them better understand material. Maybe it's a new topic or disease state that they are seeing for the first time in practice, and so they can go to the AI, get a more foundational understanding of that topic before they delve into deeper research, so that they understand the basic knowledge they need on that topic initially, using it as a starting point. And then also, I'll share some examples of how we've implemented AI to provide patient simulation, so that they can practice those communication skills with patients. And then also just utilizing AI as a tool to help them more efficiently complete tasks with you. Maybe that's something as simple as reformatting a document, or taking a document that they've developed and reformatting it for a PowerPoint. And so AI can be a great resource and tool to help students with those tasks and get more done while they're rounding with you or on your rotation. So AI can be utilized as a mentor, as I mentioned before, to provide formative feedback to learners. It can also help the student or learner pinpoint gaps or errors and how they want to improve.
What I think is great about this is that the learners then have the autonomy to either accept the feedback or not. So maybe it's a patient education document that they are working on and want to implement with you; they can get some initial feedback, choose to accept those suggestions or not from the AI, and then bring that into the discussion. And they can tell you, as their preceptor or mentor in that interaction, what they chose to accept, what they didn't, and why or why not. It's a great starting point for that discussion, rather than them just handing you a document and asking for feedback: you have some specific points to talk through with them. They can also learn just from that interaction with AI what worked well and what didn't, and having the learner reflect on that, I think, is an important piece of helping them build that AI literacy. Another aspect that I've done a lot of work with, especially with my resident learners, is asking them to utilize AI in a way that helps promote their own self-reflection. AI can be implemented as a guide or a coach to help students or learners delve deeper and think about an experience, generate specific examples, or build self-awareness. A lot of times our students or my residents will fill out a self-evaluation, and they'll check all the boxes on how they think they're doing, but all the comment sections are blank. They haven't really spent a lot of time thinking about examples of where they performed well or needed to improve. And so AI can be utilized to help ask the questions that get them to those examples.
That has been really powerful for those midpoint or final evaluation discussions that I have with students and learners. But, as I tell them, it's also very helpful for them to start documenting those examples, either through their student rotation year or through their residency year, so that at the end they know those examples and can think back: oh, on that rotation six months ago, these are some of the things I did really well that maybe I could highlight in an interview, or a letter of intent to a program, or a job application they are progressing to next. So this is an example of a prompt that I give to the students when I ask them to interact with AI to improve a self-evaluation. You can see I set the stage by saying who the audience is: that this is for a pharmacy school direct patient care experiential rotation. Then I added in the areas on our evaluation that they're asked to self-assess, so that the AI can specifically ask questions and dig deeper into those with the learner. I ask the AI to elicit a lot of memorable and concrete examples; that's the goal of this interaction, right, and it's going to interview the student or learner to help them come up with those examples and verbalize them. And then I add some parameters about asking questions one at a time, allowing the learner to respond, and only asking a handful of follow-up questions, because I don't want to belabor a point too much with the learner. I want this to be helpful and productive and not feel like a waste of their time. So this is an example conversation from a learner with AI. They used the prompt that I just shared with you, and this was done in Microsoft Copilot. The initial question asked them to think about a time when their disease state knowledge was put to the test.
In this case, there was a scenario where the learner didn't have much background information on a question from the team, but they didn't offer up much more than that in the initial response. So the AI was able to ask follow-up questions to have them describe the scenario more, what the outcome was, and how they responded. What I think is great about this example, though, is that Microsoft Copilot responded with, "That's a great example of humility and initiative," when the learner said to the team, "I don't know about that, but I need to go read more." And they shared in this interaction that they took the initiative to go read more on their own to learn about the topic. So, reading between the lines of these conversations, the AI can really help students put into words some of those strengths or areas for improvement that they otherwise may be challenged to articulate on their own. Another example is using AI simulations or chatbots to help students prepare for patient interactions. This is an example from a colleague of mine in interprofessional learning, who worked with some of our computer science colleagues to create a chatbot patient scenario that would address multiple health professions. This course combines students from all of our health science colleges. In this scenario, the dental students were able to interview the patient, our pharmacy students practiced their part, as did students from the College of Medicine, and then they came together as a team to combine that information. So it was a great activity for them to simulate working in an interprofessional team and to see the benefits of having all of those multiple, different perspectives to improve patient care. Here's another example from a colleague who developed a simulation, or chatbot, to help students practice their motivational interviewing skills.
Again, I think giving them this simulated, low-stakes practice before they go into a patient interaction can be really helpful, so that they can learn how to respond. We've also built some around more difficult conversations, maybe where they're receiving pushback on a recommendation or resistance from the patient. And so they're able to work through, with the AI chatbot, applying those motivational interviewing skills, for instance, before they're put in front of a patient in a patient care situation. Another recent publication shared some examples of using AI tools to help develop patient education materials. These were very simple prompts: write patient education materials on X disease state or X drug. Or you could add some parameters as well, such as writing it at a sixth-grade reading level or lower. This was a pharmacy article, and the pharmacists who evaluated these materials found them to be a really great starting point for generating patient education materials. I think this could easily be worked into an assignment with our learners as well. This was done in one of our interprofessional education courses, where the students used an LLM to help them draft a patient education tool, and they were also asked to submit, along with the deliverable itself, a reflection on the process: What prompts did they use? What prompts did they have to go back to the LLM with to get more desired output? What information that they received back was helpful? But also, what information needed to be corrected? Having them submit that, I think, serves a dual purpose: helping them get to a deliverable of high quality, but also getting them thinking through the AI literacy competency of what the strengths are of using this as a starting point.
What are some of the inaccuracies, or things that could be improved upon, so that students are also learning how they can use and apply AI? As I mentioned before, we don't want artificial intelligence to supplant our students' critical thinking skills; it should be used as a supportive tool. Some of the ways I've implemented this with my experiential learners are asking them to utilize AI to help with a drug information assignment, or to help facilitate a topic discussion where I come up with multiple different patient case scenarios. They input those into an LLM, or maybe different ones depending on the group, get a response back, and evaluate that output in real time, so that they're not just copying and pasting a response somewhere but really delving deeper to provide a deeper explanation and assess the output for accuracy, reliability, bias, and fairness. Is it clear? Is it concise? Is this a complete response that actually answers the question they were posing? This has been an excellent tool especially for my topic discussions, which are usually done in smaller groups of student learners and residents, to help facilitate discussion where they can compare and contrast the answers, the outputs they're receiving from the different LLMs, and then come to a consensus about what they agree would be the best choice given the patient scenario. And as a preceptor, I've been really impressed with the discussions, and it's allowed me to evaluate their knowledge on the fly: what do they know about our guidelines, how are they applying them to these different patient case scenarios, what primary literature are they adding to this discussion, without me having to specifically probe or facilitate all of those questions.
There are also other critical evaluation tools that have been developed to help us assess AI-generated outputs. One is here on this slide; it's called the FLUFF Test, and it's linked below. It gives indicators of where there may be potential inaccuracies in AI-generated outputs, and so it gives the learner, especially a novice learner who maybe hasn't evaluated this type of information before, a place to start. If you go to the link, there's a lot more information on what could potentially be included in these different areas. But essentially, you're looking for an output with zero flags: you don't want to see any of these inaccuracies in the length, the tone, or the phrasing; you don't want it to be repetitive; and you want to make sure it's valid, credible information with limited jargon. I took this idea into the critical care elective I teach with our pharmacy students, where they have historically always done a drug information assignment, but I redid it this year with an AI twist. They took the drug information question or prompt, asked an LLM for an output, and then used that FLUFF Test framework to critically evaluate it. I had them record those responses so that I could see where they were finding inaccuracies when asking for clinical information, and I asked them to provide examples or comments on all the inaccuracies they identified. They then performed their own individual drug information search to add to that, to make sure they weren't missing important pieces of literature or guidelines, and developed their own response. After they completed the individual components, they came together as a team to provide their final drug information submission.
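The flag-recording workflow described above, where students log each inaccuracy they find and aim for an output with none, could be captured in a very small structure like the sketch below. The class and category names are placeholders of my own; the actual test's criteria are at the linked resource:

```python
# Minimal sketch of recording evaluation flags against an LLM output.
# The categories used here ("citation", "tone", ...) are illustrative
# placeholders, not the framework's actual criteria.
from dataclasses import dataclass, field

@dataclass
class OutputEvaluation:
    # Each flag is a (category, comment) pair recorded by the student.
    flags: list = field(default_factory=list)

    def flag(self, category: str, comment: str) -> None:
        self.flags.append((category, comment))

    def passes(self) -> bool:
        # The goal described in the talk: an output with zero flags.
        return len(self.flags) == 0

ev = OutputEvaluation()
ev.flag("citation", "Cited guideline could not be verified")
print(ev.passes())  # False: one inaccuracy was recorded
```

Submitting the recorded flags alongside the final answer is what lets the instructor see where the model went wrong, as described in the assignment above.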
So this worked out really well, because they were able to go through that process individually, use AI more as a starting point, and then compare and contrast what they were seeing with their own experience and their own drug information searches, and combine that as a team into a really strong final submission. I also asked the students to reflect on the use of AI for drug information, what it potentially means for them and their career as pharmacists and for the future of drug information, and what they thought about the experience. These are some of the overarching themes I saw in those reflections. Most of them really acknowledged that AI was a starting point, and they saw the need for critical evaluation of the output, which I think is good: it teaches our students and learners to fact-check rather than just lift information, especially when it comes to patient care decisions, while still using AI to delve deeper or to get started on a topic. They did identify some limitations and misinformation once they did their own search and compared against current primary literature or guidelines, but overall they found that utilizing AI enhanced their efficiency and confidence in completing the task, which I thought was interesting. These drug information topics, like a lot of what we receive as pharmacists, are topics that don't have direct answers or that you don't always have a strong foundational knowledge in. And so they found it was a really good starting point for learning and discovery, getting them started when the task might otherwise have seemed very overwhelming because they didn't have a lot of background on those topics or questions.
A few tips and tricks I want to leave you with before we move into our Q&A: if you do choose to implement any of these types of activities with your learners, you definitely want to troubleshoot and test your prompt before you hand it over in an assignment or activity. If it's not successful in one AI tool, you may want to try another LLM or adjust your prompt to get better results, so it may take a little trial and error initially when you're developing these things. I also really emphasize with my learners that they are ultimately responsible for their own work and submissions. Emphasizing the importance of fact-checking final work, responses, and submissions, and having the student attest that it's their own work and acknowledge the use of AI if they utilized it as part of that final submission, is, I think, incredibly important. I have learners with a lot of AI experience, whether from their technology backgrounds or from using it in their personal lives, and they're very comfortable with it. But I also have a lot of learners who maybe have not used much artificial intelligence, ChatGPT, or any other kind of tool. They're very novice, and they need a lot of instruction and help in learning how to apply these models. So I've found that giving very clear, detailed instructions is important, as is explaining the purpose to the student. Yes, in the example I gave, I'm asking them to utilize AI to assist with a drug information question, but the dual purpose is also helping them learn how to utilize and apply these tools, because they will see them in their future careers and in practice.
Also describe the task very clearly for them. Giving them the prompt to start with is usually helpful, or at least parameters for how to write a good prompt, because a lot of students and learners, I have found, can be a little lost with that, and they don't give a lot of information upfront in the prompt if they're left to their own devices. It helps if you can give them a little more detail about what to ask, so that they start with a better output initially. And of course, share upfront any criteria for how they'll be evaluated. I have also learned to provide some troubleshooting instructions for my learners. For example, with the interview example I shared before, even though I set parameters to only ask a certain number of follow-up questions, some students have gotten stuck in loops where the AI keeps asking them more and more questions. So I provide them with feedback they can give, such as: "Let's move on to the next topic, I'm done with that one," or "Don't ask me any more follow-up questions on that domain; let's talk about this one." That way they understand that they're in control of the conversation, because I don't want it to become overly burdensome; I want them to have a productive and efficient conversation. And always have a backup plan in case of tech failures. We teach in a large classroom, for instance, and if I'm implementing something in didactic teaching, you never know if the Wi-Fi might be down that day, or sometimes when I go to log in to use ChatGPT, it doesn't want to work with me. So I've learned to always post a backup, maybe a paper template on Canvas for the students, or to have that available with the experiential learners, so they can still move forward with the activity even in the case of a technology failure.
These are a few helpful resources if you're interested in learning more about some of the prompt examples I shared today or ideas on how you can implement and utilize AI for teaching and learning. There are some great approaches and prompts from Malik and Malik here, and an AI prompt cookbook from our CITT group at UF. Both are freely available through these links, and they're excellent resources that have stimulated a lot of ideas for me on how to use and apply AI in the classroom and on experiential rotations. So now I'll hand it back over to TJ for any questions. Thank you so much, Kaitlin and Ankit. We do have a few questions rolling in. This one, Kaitlin, appears to be directed to you: if you allow learners to use AI, do you require them to provide or keep a record of the prompts used? So it depends on the assignment. There are some where instructors have built in a request for the prompts, or you can have the student copy and paste or screenshot their interaction and bring that into the discussion, which can be helpful because then you can see the whole conversation they had, or even have them submit the assignment that way so that you have it in Canvas if it's a didactic course. So it depends on the situation. What we do ask, though, in all of our course policies, is that if students use AI, they reference or cite the use. We give a very basic template of how they could do that and ask the student or learner to share how they utilized AI, whether that was for grammatical edits, to improve the flow of a document, or to change the format. They're able to cite that use with their submissions, and that's part of all of our course syllabi and policies now. Excellent. Curious, have you encountered any resistance to students using AI? And if so, how do you address it? Yeah, so it's interesting.
My experience has been that our learners have actually been very excited about the possibility of utilizing AI in their learning. So I haven't encountered a lot of resistance. I have encountered learners who maybe are not as comfortable using the tools or who feel a little lost, and I think that's where those very detailed instructions become especially helpful. So I think it's just about ensuring that all students have an appropriate foundational or basic knowledge. I will say, too, that I feel it's been most well received when I'm in smaller groups with the students. For instance, on my experiential rotations, I usually have one or two learners or residents with me, and we combine into smaller groups where I can really work with them through that process of utilizing AI. With the self-reflection example specifically, though, I think you sometimes need to be a little careful. Students may be resistant to putting in any personal information. I've dealt with that by sharing with them that they don't need to add any personal information, and they shouldn't, but that this is just helping them form the examples they might share as part of their self-evaluation or in an interview, so that they can think back in the future and know what they accomplished or did with me on rotation. Wonderful. Ankit, I do see a question for you: is there a specific use case related to AI and its clinical applications that you are particularly excited to see come to life, or that you are working on? Thank you, TJ. I think that's a very good question. We are currently utilizing AI in a lot of different ways in clinical applications. One of those ways is in reading head CTs: CT scans are being read by AI, and then those reads are verified by radiologists.
Another aspect we are already using AI for at our shop is the MUST-PLUS score that I showed you. We use it routinely to identify patients with malnutrition who would benefit from a consult with our nutritionists while they are in the hospital. In our lab, we are very focused on developing the automation aspect of AI and bringing it to healthcare. So we're working on algorithms where we could potentially automate certain aspects of clinical care to improve workflow processes: for example, giving certain medications at certain doses, which could include vasopressors or things like insulin drips, and working toward automating mechanical ventilation settings for patients with various disease profiles. Those are some of the things we are actively working toward, and I'm very excited about the future prospects of some of those applications. Excellent. And then, Ankit, I'll ask you this question, and maybe Kaitlin, I'd be curious to hear your perspective as well: is there anything related to AI, currently or perhaps as it evolves in the clinical setting, that worries you? So I think one of the things that would be worrisome is over-reliance on AI, right? I see AI as a tool, just like, say, statistics. It's a tool that helps us, one, parse things out and, two, improve our workflows. But whenever we use new shiny objects, we tend to get too attached to them; that's just human nature. And I think we'll have to be careful to still have safeguards in place, recognizing that what AI says may not always be 100% correct, because nobody is always 100% correct, and to still use our clinical acumen to make sure we are making the best decision for the patient.
I will say that when I speak to a lot of clinicians, there is a worry in general that AI is going to take everybody's jobs. I don't think that will happen, because, again, AI is a tool to help improve our workflows. Will our jobs evolve with the incorporation of this tool? Sure. And I think that is where it comes down to, as I say: AI is not going to take anybody's job, but people who learn to adapt to and utilize AI effectively are going to do much better in the future than people who don't. Yeah, so I would agree with what Ankit shared. As educators, we worry about over-reliance on these tools from our students as well, and about a loss of critical thinking or problem solving if they utilize AI. So for me specifically, I've taken the approach of being very intentional with the assignments or activities I ask them to do utilizing AI, so that they don't lose sight of the fact that they still need to fact-check and provide their own critical evaluation to figure out how to utilize AI best and most appropriately. Wonderful. And I think that's actually a really good segue to this next question, addressed to Kaitlin: how do you evaluate the credibility of the AI tool that you are using? Do you double-check or cross-check whether the information is correct? And do you teach your learners to evaluate a tool before they use it? Yeah, so we have some university-sanctioned tools, if you will, that we're encouraged to utilize if we are using AI in the classroom or for teaching purposes. With that, we have access to a lot of different LLMs within the tools that the University of Florida has built, which is great. I'll be honest, my pharmacy learners don't know a lot about the differences between the tools or the different LLMs.
So I take ownership of giving them specific instructions on what I want them to use and when, unless we're in more of a small-group discussion where maybe they're comparing and contrasting answers from different tools. That can be an interesting way to demonstrate to the students or learners that there are strengths and weaknesses even among the different tools, something that at least my pharmacy learners don't have a lot of background or knowledge on. So that's one approach, but I tend to be more prescriptive in what I ask them to use, because they don't necessarily evaluate the tool itself for the task. But I do emphasize to them that they are responsible for the output generated through that task regardless, and that they need to critically evaluate that output and make it their own. Great. And with that, I think this concludes our question and answer session. Thank you again, Kaitlin and Ankit, and thank you to the audience for attending today. Again, this webcast is being recorded. The recording will be available to registered attendees within five to seven business days. Log in to mysccm.org, navigate to the My Learning tab, and click on the AI to Improve Productivity at Work course. Click the access button to access the recording. That concludes our presentation for today. Thank you again. Thank you.
Video Summary
This webcast, moderated by Teresa Boron from the Society of Critical Care Medicine, featured speakers Ankit Sekuja and Kaitlin Alexander who explored the principles and applications of AI in critical care and education. Sekuja, a Director of Clinical Informatics Research, began by explaining the fundamentals of AI, including machine learning, deep learning, and natural language processing (NLP). He detailed how AI is integrated into everyday life and its potential in healthcare settings, including supervised and unsupervised machine learning, and reinforcement learning for automating clinical processes. Sekuja also introduced deep learning concepts like recurrent neural networks and convolutional neural networks for medical imaging, as well as NLP and large language models like ChatGPT for interpreting human language. Following this, Kaitlin Alexander discussed integrating AI in education, emphasizing its role in enhancing learning efficiency, critical thinking, and preparing health learners for AI-driven environments. She introduced practical applications, including AI as a mentor for feedback, simulating patient interactions, and assisting in tasks like developing patient education materials. Both speakers agreed on the importance of teaching AI literacy to students and maintaining a balance to prevent over-reliance on AI while fostering critical evaluation skills.
Keywords
AI in critical care
machine learning
deep learning
natural language processing
healthcare automation
AI in education
AI literacy
medical imaging
ChatGPT
Society of Critical Care Medicine