Deep Dive: An Introduction to AI in Critical Care ...
Welcome and Opening Remarks
Video Transcription
So on behalf of my esteemed panel of faculty, I would like to welcome you to the first ever pre-Congress AI course. It is a short one, only four hours, so we are very limited in what we can teach you in that time. These are the learning objectives for this course. First, you will not learn how to build AI models in this four-hour course. What we are hoping to provide is an overview of the current state of AI development and deployment for healthcare, including the ICU. We will go over the opportunities and, more importantly, the challenges that we are facing in the use of artificial intelligence, not just in the intensive care unit but in healthcare in general. And we are hoping to lay out a roadmap for the short term and the long term. The other thing we would like to highlight is that we are living in a different world now. Machine learning and artificial intelligence require that we understand the bias in the data, and that might be more difficult in this political climate, so we have to navigate this very thoughtfully and very deliberately. And lastly, our advice is that we have to embrace the change; otherwise we won't be able to truly leverage the power of this technology. So this is what we're going to do in the next four hours. We will start with a primer on machine learning, and the faculty will introduce themselves during their presentations. The primer on machine learning will be presented by Ankit Sakuja from Mount Sinai in New York City. Then we will jump into the meat of machine learning. And the meat of machine learning is not model development; the meat of machine learning is understanding the data. Because if you don't understand the data, if you just upload gobs and gobs of data into an encoder or a transformer and then see what sticks in terms of accuracy against real-world data,
then, if you come to think of it, we are dooming ourselves to legitimizing and scaling the inequalities that we see in healthcare delivery. So again, the meat of machine learning is understanding the data before you can even do any modeling. The modeling is easy once you understand the data. This part of the workshop will be given by Judy Gichoya from Emory University and Omar Abadoui from the Department of Defense. Then we move on to the challenges of deployment downstream of machine learning. Understanding the data is what we refer to as upstream of machine learning, while deployment is what we refer to as downstream of machine learning. I will be providing a tutorial on deployment challenges with Teresa Rincon from the University of Massachusetts. Then we will have something more interactive. We will divide you into five groups, and you will exercise critical thinking. What we want from you is to come up with safeguards to ensure that the AI models we deploy will not harm a particular group of people and will perform as they are designed to perform. Each group will dive deep into one or two papers, and then we will talk about the implications of the algorithm, or the findings about an algorithm, from that paper. And then you yourselves will provide policy recommendations. They could be at the federal government level, at the state government level, or at the health system level, or they could be recommendations for universities, who will be partners in this journey of artificial intelligence. Finally, we end with a panel discussion of what we think are foundational issues: issues that we need to address before we can continue our journey in this post-AI world. I'm going to start off by asking: why AI? How is this different from previous innovations? AI stands apart among technological advancements for many reasons.
The first is that it is so ubiquitous. It is really threatening to upend every sector of society: not just healthcare, but also education and law enforcement. And because of its ubiquity, because the applications are so broad, the implication is that this will be a beast to oversee. The FDA has even admitted that it alone will not be able to regulate artificial intelligence; it will not have the expertise, and it will not have the perspectives. The last FDA director really pleaded that everyone has to step up. And "everyone" means the health systems, non-government organizations, and universities. But this cannot happen unless everyone has some basic understanding of artificial intelligence. And that is, of course, going to be very difficult, given how fast this technology is moving. Every day, I get notifications of about 10 to 15 relevant papers, which means that traditional ways of educating are not going to cut it. Taking a course or doing a degree is not going to be agile enough to truly reflect all the advances that are happening in the world of artificial intelligence. So we have to be more creative in how we keep up in educating our students, and educating each other, about this technology. As a clinician, I don't really understand MRI technology, but I have no problem ordering one in the intensive care unit. I, myself, and probably most of us, are not familiar with GPS technology, and yet GPS has become part of our day-to-day life. The difference between AI and these technologies is that these technologies are relatively narrow in scope, so we could sleep at night knowing that some organization did its job to make sure the technology is safe. That will be very difficult with AI. But the biggest difference between AI and previous technologies is that AI is inequitable by design. It is inequitable by design because it is trained on historical data, which we know reflects inequalities and hierarchies of knowledge.
And for that reason, if all we are doing, as I mentioned, is training on the data as they are now and then using accuracy to gauge whether an AI tool is useful, then there is definitely a risk of perpetuating and encoding all the problems we see in healthcare delivery. So the biggest threat to AI delivering on its promise is really the risk of cementing the structural inequities that permeate every facet of society, including those we see in healthcare. I also wanted to use this opportunity to highlight one of the elephants in the room, and that is the effect of AI on the planet. Despite the fact that AI is one of the world's most resource-intensive digital technologies, in terms of both material use and electricity consumption, the environmental impact of AI has not been sufficiently addressed in conferences, policies, or regulation. For example, in 2022, Microsoft reported consuming 6 million cubic meters of water, a 34% increase over the prior year, presumably because of the water consumption of data centers. Researchers estimate that global demand for AI could result in about 5 billion cubic meters of water being consumed by new servers in 2027. The same is true for energy. In one study, the training of a single language model was found to generate emissions approximately equivalent to 300,000 kilograms of carbon dioxide, the equivalent of taking 125 round-trip flights between New York City and Beijing. That's one language model. In a study assessing the growing energy footprint of AI, the middle-range approximation suggests that the electricity demand of new AI servers in the year 2027 could match the entire annual consumption of countries like Argentina, the Netherlands, or Sweden. And really fresh in the news, we've seen policies, executive orders, or rules being proposed whereby AI could be licensed to practice medicine.
We don't know if this is going to happen or not, especially in this political climate, and we think it is too early for us to use AI without any oversight from a clinician. But this is being proposed now under the new administration. Together with that, we also know that the institutions we have erected to make sure that medical devices are safe and drugs are safe are currently being reshuffled, maybe dismantled. This is what we mean when we say that we are in a new world now. So I'm hoping that gave you a jolt of caffeine, and we hope to engage you over the next four hours.
Video Summary
The pre-Congress AI course aims to provide an overview of AI's current state in healthcare, especially in ICUs. The course will focus on machine learning, emphasizing understanding data over model creation to avoid scaling healthcare inequalities. Panelists from various institutions, such as Mount Sinai and Emory University, will guide sessions on machine learning principles, deployment challenges, and policy recommendations to ensure AI's safe use. Key issues include AI's environmental impact and regulatory challenges, with calls for broad collaboration to effectively integrate AI into healthcare systems.
Keywords
AI in healthcare
machine learning
ICUs
regulatory challenges
collaboration
© Society of Critical Care Medicine. All rights reserved.