How to Utilize Unit-Based Metrics and Financial Data
Video Transcription
Okay. What a great lecture to give right after lunch. I'm going to try to be energizing. I don't have any financial relationships to disclose. Here is what we're going to cover: why metrics can be useful; data integrity, why it's so critically important, and why it's the most common misstep; and then a paradigm for metric classification that I find very useful. You may not like it; there are others, and I will name a couple of different paradigms. I'll also point out that there is one spectacular article in the literature on this topic, and I'll give you that reference when we get to that part.

The common teaching, which is correct, is that you can't improve what you can't measure. Something may improve and you just got lucky, and you may not even know it improved if you didn't measure anything to determine whether it improved or not. At the same time, you have to avoid making measurement an end in itself, and the phrase that captures that best is that we should be data-informed, not data-driven. We should take data, interpret it, and use it. We don't simply do what the data seem to say, or what the first pass through the data seems to say.

Let's make sure we're on the same page about the meaning of metrics. A metric is a set of numbers that gives information about an organization's performance, either globally or in a particular process or activity. So it can sit at the process level or at the outcome or activity level. It's a system or standard of measurement: a measurement you're going to use as a standard approach. We'll come back to that, somewhat indirectly, but we will talk about it. You need to understand the validity of a metric and its limitations. There are no perfect metrics, and the more you try to get data out of those wonderful electronic health records, the more you can run into problems with data validity. I will stress that point, so hopefully I save you from making a big mistake that I've seen made very frequently.

The metrics that are collected within the system of care, in the process of care, are the best, because it doesn't take something extra to collect them; the data is already there for your use. And if collection is already part of the workflow, it can be much easier to automate the data collection or data analysis side of the question. Then, when you have metrics you're going to use to drive change, you also have to think about how you're going to show the data. Is it just going to be data points? Are you going to use graphs? Pictures? You've got to think about the visualization of data. I won't cover that in depth; I'll just point it out a couple of times.

The steps in this process of improvement are fairly straightforward. You're going to focus on the data, but stay connected to how you're going to use it and why, and that means engaging a multidisciplinary team. Ask yourself the common questions: do I have all the stakeholders? Think about who has ownership, and about team building. Let me also point out that if you take the daily goal sheet from Hopkins, just as an example, and you try to deploy it in your institution, you may find that the same sheet that worked well for us, and that was published on, doesn't work for you.
That's because you missed an important step. Taking that data collection tool, sitting down with your team, and customizing it slightly for your environment (hopefully not changing it so profoundly that you alter its validity, but making it yours) means the team now recognizes that you value the team. You got all the stakeholders together, and they now see that they own the tool. It's no longer a question of who's collecting the data and who's responsible. It's our tool; we want to collect the data; we want to use the tool. Having an appropriate multidisciplinary team can be unbelievably beneficial.

You need to identify your target areas. Are you going to look at fiscal issues, process issues, or outcome issues? You need to select the appropriate metric, understanding both the limitations of the system and what it can give you, as well as what you're trying to get, because there can be a disconnect there. You may be trying to get data on a point that you simply can't get out of the system.

Let me give you an example. Suppose you want to know how long a patient was on a ventilator according to the electronic medical record. What's the starting point? Is it when they're intubated, and only when they're intubated? Is it when somebody, whether nurse, physician, or respiratory therapist, first enters data in a ventilator settings field? And when is the patient extubated? Is there a checkbox for intubated or extubated, and does that checkbox keep getting checked automatically even though the patient's status has changed? Or is the signal simply that there's no more data in the ventilator fields? Then what happens when the patient goes to the OR for 12 hours and there's no data in the ventilator fields for 12 hours? What happens when they go to interventional radiology to be embolized? Knowing what data is there and how it's structured is that important. Hopefully you see the connection to data validity, which I said we would stress multiple times. (I'll show a small sketch of this episode-merging problem at the end of this passage.)

Then you have to think about the correct analysis and how you're going to disseminate the findings. Is that just local, in the ICU? Local within a division or a department? Across the institution? Submitted to national organizations, et cetera?

So an early question to ask yourself: the metric might be important to you, but to whom else is it important? Is it an individual or a group of individuals? Docs, nurses? Is it a metric that patients and families would find important, or institutional leadership? Would institutional leadership value it? Would it be useful to them in some creative way? One of the hats I wear is a CME hat, and I once presented that the CME office at Hopkins brings a certain number of people into the state of Maryland for CME activities, and the state board of visitors has a dollar value for how much each visitor pumps into the local economy. So we did simple math: the number of people who came for CME times the amount of money the government says each visitor pumps into the local economy. And the dean said, we're contributing over $10 million to the state economy in this one area, and the governor is always asking me, what have you done for me lately? So once you have the metric and the data, think about who might use it, and be creative about how that data might be useful.
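To make the ventilator example concrete, here is a minimal sketch of the episode-merging problem in Python. The table, column names, and 24-hour gap tolerance are all illustrative assumptions, not a real EMR schema. A naive query that treats any silence in the ventilator fields as extubation would split the 12-hour OR trip into two separate courses of ventilation; bridging short gaps keeps it as one episode.

```python
from datetime import timedelta
import pandas as pd

# Hypothetical export: one row per charted ventilator observation.
# Column names and timestamps are illustrative assumptions, not a real schema.
obs = pd.DataFrame({
    "patient_id": [1, 1, 1, 1, 2, 2],
    "charted_at": pd.to_datetime([
        "2024-01-01 08:00", "2024-01-01 20:00",  # day 1 charting
        "2024-01-02 14:00",                      # after a 12-hour OR trip
        "2024-01-03 09:00",
        "2024-01-05 10:00", "2024-01-05 11:00",
    ]),
}).sort_values(["patient_id", "charted_at"])

GAP_TOLERANCE = timedelta(hours=24)  # silence shorter than this = same episode

# A new episode starts at each patient's first observation, or wherever
# the charting silence exceeds the tolerance.
gap = obs.groupby("patient_id")["charted_at"].diff()
obs["episode"] = (gap.isna() | (gap > GAP_TOLERANCE)).cumsum()

episodes = obs.groupby(["patient_id", "episode"])["charted_at"].agg(
    start="min", stop="max")
episodes["hours"] = (episodes["stop"] - episodes["start"]).dt.total_seconds() / 3600
print(episodes)  # patient 1 is one 49-hour episode, OR trip included
```

The right tolerance, and whether an OR trip should count as ventilated time at all, is exactly the kind of definitional question the multidisciplinary team has to settle before anyone queries the system.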
You also have to think about where the data exists. Is it quantitative data or qualitative data? Does it live in some database? And there are hidden systems. When you go to do this and start asking around, you might find that some people already collect data and put it somewhere. A not uncommon finding as I've traveled around is that a group like the transplant surgeons pulls data out of the patient record and keeps certain components in their own system, outside the standard electronic medical record, for doing their own analysis. That may solve a lot of your problems, because the data you need may already exist. Think about the Society of Thoracic Surgeons and their system for collecting data. So think about where there might be systems that are hidden to you but not hidden to the health environment; they may help you, and you may not have to reinvent the wheel.

Is it already being collected? At some hospitals I've visited, people were surprised to find that a nursing organization, or quality and safety, or infection control and hospital epidemiology (it differs by institution) was already collecting data on CAUTI, catheter-related bloodstream infections, and so on. They didn't need to collect it again. They just needed to figure out where the resource was, who owned it, and how to collaborate with those individuals rather than collect the data anew.

I told you I wanted to stress data validity, so let me give you some examples that will hopefully make it clear. You go home and decide you want to know something simple: the length of stay of patients in your ICU. You ask somebody who can run such a query against your electronic medical record, and they come back and tell you the length of stay is 1.2 days. And you think, it might be low, but I know it's not that low. You're doing your own validity check, right? Instead of just asking them to check and having them reply, yes, that's what I gave you, you have to get them to print out the preliminary data sets as a first step, before you conclude that the question you're asking of the system is valid and is being asked in a valid way.

Why? Here's the example. The first time, going back about 20 years to a previous electronic medical record system we had, I asked that question of one of our surgical ICUs, and 1.2 is the exact answer I got. If they had told me it was just under two, I probably would have accepted it for a rapid-turnover surgical unit. So I could believe less than two, but 1.2 just seemed infinitely too small. I said, can you show me the data set? We sat down with it, and guess what: there were over 500 patients marked as admitted to the unit who spent 15 minutes or less in the ICU. The data they gave me was not valid. It was valid for answering a different question: if anybody ever opened a chart under a patient's name in the system, what was that patient's length of stay? But that's not a clinically relevant question.

Similarly, if I went home today and asked, what's the mortality of patients in the main surgical ICU, they could tell me the mortality of patients who were on the floor called an ICU. But they couldn't tell me whether a patient's status was IMC or step-down, or whether there was no floor bed and the patient spent an extra night in the ICU waiting for one. All they can really say is that Nine East had a mortality of some number. And if you say, but there's a field called level of care, then who's filling that field out, where is that data coming from, and who has gone in to make sure they're filling it out correctly?
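Here is a minimal sketch of that first-pass validity check, assuming a hypothetical extract (the file name and the los_hours column are made up for illustration) with one row per record the system calls an ICU admission. Before quoting a mean, count the implausibly short "admissions" that are really just opened charts, and look at the whole distribution.

```python
import pandas as pd

# Hypothetical extract: one row per record the EMR calls an "ICU admission".
# The file and the "los_hours" column are illustrative assumptions.
stays = pd.read_csv("icu_admissions.csv")

MIN_PLAUSIBLE_HOURS = 0.25  # 15 minutes

# How many "admissions" are really just charts opened in error?
suspect = stays[stays["los_hours"] < MIN_PLAUSIBLE_HOURS]
print(f"{len(suspect)} records shorter than 15 minutes")

# Naive mean vs. mean after excluding implausible records.
plausible = stays[stays["los_hours"] >= MIN_PLAUSIBLE_HOURS]
print("mean LOS, all rows (days):      ", stays["los_hours"].mean() / 24)
print("mean LOS, plausible only (days):", plausible["los_hours"].mean() / 24)

# Always look at the distribution before quoting a single number.
print(stays["los_hours"].describe(percentiles=[0.01, 0.05, 0.5, 0.95]))
```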
So this is what I mean by data validity. It's a dirty step that takes a lot of time and extra effort, and many people don't go through it. Then somebody calls them on the carpet when they present at a meeting or try to publish the data, and they get into trouble. This is something you want to plan on doing. It slows everything down, but once you determine that the question, and the data it generates, is valid, you can move forward and run it repetitively. I'm not going to go into the details, but internal validity, external validity, consistency, and reproducibility are the concepts you want to think about here.

Actionability is also important, whether that's intervention-related or a matter of ownership, and the culture around accountability can be very different in different institutions. What's at the bottom of this slide is critically important, in my opinion: if you don't have a culture that will accept the results, it may not be worth collecting the data. If you can't manage people, you can't manage data. If you're going to find out that Dr. So-and-so has a bad record for doing X, or his patients have a higher death rate, or her patients have a longer length of stay, and you can't then manage that person into retraining, getting additional education, looking at the data, and participating in its use, then what was the use of collecting the data in the first place, just so you could say we know we have a problem that we're not going to do anything about?

You also have to think about what's called publishing. In this case I don't mean in a journal; I mean in the environment where you work. This is a tool that was first used in general industry and found to be quite useful. If you work in a factory that makes windows, you want a chart of how much glass is broken that shouldn't be, because that slows down making windows and forces you to produce replacements for the ones you broke. You would think that obviously makes sense, but in medicine we don't think about posting. Here you see one form of visualization, where a team created banners and is getting together for team photos that they'll use locally and potentially on social media, with some pros and cons to that. And here you see people who have thought about a run chart: how to get the data, and where to put it up.

At our institution, the first time we wanted to post data in the ICU about our catheter-related bloodstream infection rate, the first question was about risk management. So we went to risk management and to ethics, and they said, we understand. And guess what? What do patients' families mostly think? Are they worried that you have a rate of catheter-related bloodstream infections at all? Most families are impressed that you're following it, tracking it, paying attention, and are going to intervene and learn something new every time there's a blip. That's the environment they want for their family member, not somebody walking around with hands over their eyes, essentially blind.
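As one example of posting, here is a minimal run chart sketch using matplotlib; the monthly CLABSI rates and the goal line are made-up numbers for illustration, not benchmarks.

```python
import matplotlib.pyplot as plt

# Hypothetical monthly CLABSI rates (infections per 1,000 catheter-days);
# the numbers are invented for illustration only.
months = ["Jan", "Feb", "Mar", "Apr", "May", "Jun", "Jul", "Aug"]
rate   = [3.1, 2.8, 3.4, 2.2, 1.9, 2.5, 1.4, 1.1]
GOAL   = 1.0  # the unit's stated goal, not a published benchmark

fig, ax = plt.subplots()
ax.plot(months, rate, marker="o", label="Observed rate")
ax.axhline(GOAL, linestyle="--", color="red", label="Goal")
ax.set_ylabel("CLABSI per 1,000 catheter-days")
ax.set_title("ICU CLABSI run chart")
ax.legend()
fig.savefig("clabsi_run_chart.png")  # post it where the team can see it
```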
What are the steps? Set a goal, set benchmarks, and aim higher. The reason I say aim higher is that benchmarks tend to get you to average. If your performance is bad, getting to benchmark is a good step. But if you're targeting the Six Sigma range, benchmarks can falsely drag you down, because they may represent average performance while you're aiming much higher. So I would encourage you to aim higher. Track your performance and compare it to a goal. This is no different from what we now do with sedation and pain management, right? We used to say the patient has pain, or we're treating their pain with narcotics. We now say the patient has pain with a score of seven, our goal was three, and we're using narcotics in the following way to try to achieve it. We use a metric, we have a goal associated with it, and we report both. Same with delirium: the patient is delirious by the CAM, and our goal, whether a sedation score of minus two or one or whatever it might be, is documented alongside it. Then you tie all of this together and use these metrics in a Plan-Do-Study-Act cycle, iteratively, to get improvement and change.

The must-read article is called "ICU Director Data: Using Data to Assess Value, Inform Local Change, and Relate to the External World," by Murphy, Ogbu, and Craig Coopersmith from Emory, published in CHEST in 2015. They go through a classification I would call schema one (my words, not theirs), which starts with clinical metrics. Is it unadjusted or adjusted mortality? Again, your team should discuss which one will produce the data that helps you; you may want both or only one, and the same goes for unadjusted versus adjusted length of stay. Are you going to look at medication errors, ventilator-free days, or percent ventilator-free days? And if it's percent ventilator-free days, does a patient count as ventilator-free while on noninvasive ventilation? Each of these is a question you have to clean up and clarify before you even ask the computer people to search the system for data. (A sketch of how differently ventilator-free days can be operationalized follows at the end of this passage.)

Then there's the quality and safety classification: pain, agitation, and delirium, the percent of patients who have them documented in the chart with both the score and the goal; catheter-related bloodstream infections; ventilator complications; et cetera. In this schema you could also have a billing section, which could be professional-fee billing or institutional billing. Institutional billing could be DRGs or APR-DRGs, or it could be the cost buckets the institution uses: a radiology cost bucket, a pharmacy cost bucket, et cetera.

You might also think about data around mentoring, since that's an important part of our focus today. This would be qualitative data that you would most likely collect through surveys of both mentors and mentees, trying to understand whether you're achieving what you hoped. You could count the number of meetings the mentor-mentee dyad holds. You could ask whether they have a formal curriculum between them, whether they hold feedback sessions, or simply, do you have a mentor? Again, that metric could be much more sharply defined, so you have to decide from the beginning what you're trying to collect and what you're trying to do.
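Here is that promised sketch of how differently "ventilator-free days" can be operationalized. The 28-day window with death scored as zero is a common convention in the literature, but the function names here are hypothetical, and the handling of noninvasive ventilation is deliberately left as the open question it is.

```python
def ventilator_free_days(vent_days: float, died_in_window: bool,
                         window_days: int = 28) -> float:
    """One common convention: days alive and free of the ventilator in a
    fixed window (often 28 days), with death in the window scored as zero.
    Whether noninvasive ventilation counts as ventilated time is a team
    decision that must be settled before any query is written."""
    if died_in_window:
        return 0.0
    return max(window_days - vent_days, 0.0)


def percent_vent_free(vent_days: float, icu_days: float) -> float:
    """A different metric entirely: the fraction of the ICU stay spent
    off the ventilator."""
    return 100.0 * (1.0 - vent_days / icu_days) if icu_days else 0.0


# The same patient looks very different under the two definitions.
print(ventilator_free_days(vent_days=4, died_in_window=False))  # 24.0
print(percent_vent_free(vent_days=4, icu_days=6))               # ~33.3
```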
This schema is loosely drawn from the paper; the paper isn't about a schema, I'm the one calling it that. The paper goes through all of these variables and the pros, cons, and arguments for each type you might choose. You could also think of these in an organ-system-based way. Pain and agitation would sit there not under a general clinical topic but under a neurologic one. You could track what percentage of your patients are troponin-positive, or what percentage you think are type 2 troponin leaks; mechanical ventilator-free days; rates of CABG, CLABSI, or CAUTI; GI bleeding. Some of these may be of interest in your ICU with your patients, and some may not. Some may attract somebody on your team with a passion for developing a niche, and this becomes their career, whether on the performance improvement side or the academic discovery side.

The paper also divides some topics into process topics to think about: administrative data such as RN-to-patient ratios; neurologic measures, which again serve multiple purposes; the same with pulmonary and ID, such as hand hygiene and the line placement protocol. Is the protocol followed, and is there documentation of it being followed? These would be administrative and process metrics that are very useful to think about.

Schema two, a completely different way to think about all the data you might collect, and the one I like the most, uses the six aims for improvement. Are we providing safe care? That's things like CLABSI and CAUTI. Timely care? Time in the ED before transfer to the ICU, antibiotics in the first hour, these time-to-intervention stories. Effective care? What's our stress ulcer prophylaxis practice, and what's our rate of gastritis? Efficient care? Much of this gets into time and flow. Equitable care? Notice that until now, equity hadn't come into the conversation in any significant way. Now, when I get that data on length of stay and mortality, can I break it down by gender? By race? By ethnicity? And finally, an example of patient-centered care would be goals of care. Are there goals of care, are we documenting them, are we calling them goals of care, and are they easy to find? Not just DNR or DNI, but actual, well-elucidated goals of care. I like this schema; I think it breaks the data down in a very useful way, especially within a quality improvement framework.
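On the equity aim specifically, here is a minimal sketch of what stratifying the same outcome metrics looks like, assuming a hypothetical patient-level extract; every column name and value is invented for illustration.

```python
import pandas as pd

# Hypothetical patient-level extract; all column names and values are
# made up for illustration.
pts = pd.DataFrame({
    "los_days": [2.1, 5.4, 3.3, 7.0, 1.8, 4.2],
    "died":     [0, 1, 0, 0, 0, 1],
    "sex":      ["F", "M", "F", "M", "F", "M"],
    "race":     ["Black", "White", "White", "Black", "Asian", "White"],
})

# The same outcome metrics, stratified by each equity dimension.
for dim in ["sex", "race"]:
    summary = pts.groupby(dim).agg(
        n=("died", "size"),
        mortality=("died", "mean"),
        median_los=("los_days", "median"),
    )
    print(f"\nOutcomes by {dim}:\n{summary}")
```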
The third schema I would present is the who, what, which, where, why, when, and how. It leads you back to many of the same variables, but you may break it down like this, and I think it's a useful double check. After you've decided what you're going to build your program around and what metrics you're going to collect, ask: have we thought about who we're collecting data on and who might use it? Have we asked what data and which data, and are we satisfied we have the right elements, adjusted versus unadjusted, for example? Why are we collecting this data? In other words, do we really believe we have a problem, or are we just collecting data to collect data? Are we going to use the data, and do we have the ability to create change if the data says change is indicated? How will we collect the data? Is it in a hidden system? Is it already being collected, or do we now need to do it ourselves? Will the docs collect five elements, the nurses five, respiratory a couple, and pharmacy five, and we merge all of that, or is it already in the electronic medical record so we can leverage that? And when are we going to do this? Again, I think this is a good last-minute double check of what you've done, much more than a primary data system.

So this is the summary slide as I wrap up. What we talked about here is that data and metrics are important, but you don't want the data to lead you; you're much better off in a data-informed environment than a data-driven one. You want to work together as a multidisciplinary team. One of the early steps, once you've identified what data you want to go after, is to make sure you have data validity. You want to think about how you're going to present, post, and celebrate that data and the benefit it has brought you, and you want to do it, hopefully, within a controlled Plan-Do-Study-Act cycle.

Let me go back to this slide of people celebrating. The very first quality and safety project we did, over 25 years ago, we did the way I told you: physicians collected a few data points, pharmacy collected a few, and nursing did too. No money was given to us to do it; we did it internally and divided the work. And we found a humongous benefit in catheter-related bloodstream infection rates in our institution, from which we could calculate the impact on mortality, but also on morbidity, cost of care, and length of stay. So we went knocking to say we would now like money to build the next data system, to tackle the next topic. The institution was very ready to give us money, because they could easily see that we understood what was of value to the patients and what was of value to the institution, and that it was worthwhile for the institution to invest in what we were doing and building.
Video Summary
In this video, the speaker discusses the importance of metrics in healthcare. They explain that metrics are a set of numbers that provide information about a company or organization's performance. Metrics can be used at the process level or at the complete outcome or activity level. The speaker emphasizes the need for data integrity and understanding the validity and limitations of metrics. They also suggest that organizations should be data-informed rather than data-driven and stress the importance of a multidisciplinary team in using metrics to drive change. The speaker provides examples of different ways to classify metrics and highlights the need for data validation and visualization. They also discuss the steps involved in the process of improvement using metrics and recommend a must-read article on the topic. In conclusion, the speaker emphasizes the importance of using metrics effectively to assess value, inform local change, and relate to the external world.
Keywords
metrics
healthcare
data integrity
data validation
multidisciplinary team
process improvement