Not Your Grandparents' Evidence: Advancing Critical Care Research and Practice Through Real-World Evidence
Video Transcription
So, just as a bit of disclosure, I do have funding from the NHLBI, the CDC, and AHRQ, and I serve on DSMBs for pragmatic trials, which I'll talk a little bit about, as well as for non-pragmatic trials, and I also serve on scientific advisory panels for Regeneron and Philips Healthcare. Over the course of the next 15 or 16 minutes, we aim to understand what real-world data and real-world evidence are, to be able to describe the different types of real-world evidence, and to understand the strengths and limitations of this evidence. So first, let's start by understanding what we're talking about when we talk about real-world evidence. The FDA defines real-world evidence as clinical evidence about the usage and potential risks or benefits of a medical product, derived from the analysis of real-world data. So what's real-world data? It's basically data that is generated in the course of day-to-day life. For most of us practicing medicine, we already know that a lot of the clinical data in our EHR systems and in our insurance claims, Medicare and Medicaid, are part of this real-world data that can be examined. But this data is now going through a transition, in the sense that more and more of it can be linked. An example of that is the merger of CVS Health and Aetna, which means that data on not just your prescription medications but also your over-the-counter medications can now be part of this real-world data for analysis. And we used to think that omics and detailed family history were things you give your doctor only for research. Well, that's no longer true, because almost all of you know somebody who has participated in 23andMe or an online family-tree genealogy service. All of that is real-world data, and those people are actively participating in research now.
Many of you are like me and have either a Fitbit or an Apple Watch. These mobile devices, as well as all of the data coming through the ICU from your monitors and your vents, are real-world data that can now be used and analyzed. And it's not just healthcare data. Here is an example of using particulate measurements of the atmosphere and pollution, linking the Canadian wildfire smoke to increased asthma-symptom presentations to emergency departments in New York City. So, environmental data. But most of the data out there is actually data we generate when we interact with the Internet and social media, what we look up in terms of reading and PubMed, and everything else we do online. And here is an example of using real-world data on search terms: seeing how frequently people in an area search for symptoms related to an upper respiratory infection, and how that might predict surges of influenza presentations to the emergency departments in that area. So how is this real-world data most commonly used in medicine and critical care? One example is pragmatic clinical trials. Pragmatic clinical trials aim to fill the gap between traditional efficacy trials, which, as Jonathan just mentioned, take a long time to put together and a long time for their results to translate into practice, and what happens in the real world. These trials aim to inform policy and decisions by testing new interventions in real-world settings. As opposed to traditional efficacy trials, they tend to have very broad inclusion criteria and very few exclusion criteria, so that you have large, heterogeneous populations similar to what we see in clinical practice. There is no randomization to a placebo and no blinding, because that doesn't happen in the real world; usually your control arm is standard of care.
And interventions tend to be simple and implementable in the clinical setting, and we often use EHR data to collect outcomes that are also clinically relevant and simple. In the last five to ten years there has been an explosion of pragmatic clinical trials dealing with critical care, in both the adult and pediatric settings, and these are just a few of the ones you have seen. All of these have used EHR data as part of their ability to collect data and outcomes. But it's not just EHR data. This is the hospital airway resuscitation trial, or HART trial, led by Dr. Ari Moskowitz at Montefiore Medical Center. It's a trial looking at in-hospital cardiac arrest and whether an airway strategy of endotracheal intubation versus laryngeal mask airway can improve the outcomes of return of circulation and mortality. So yes, we're using EHR data here, but in addition we're able to pull data from the Zoll monitors, which are connected to every single patient who has a cardiac arrest. That gives us real-time, high-granularity data on chest compressions: how deep, how frequent. Connecting it to the end-tidal CO2 monitor also lets us understand how well we're doing in terms of perfusion and when we get return of circulation. This gives us much better data than an observer or a research coordinator could even collect, and it will help us understand the mechanisms by which airway strategies can help with resuscitation. But the larger use of real-world evidence is in post-marketing surveillance studies, a form of phase four study.
So whenever a drug or device is approved and put on the market, the FDA and many other regulatory agencies around the world require that the company then collect data on its use in the clinical setting to determine safety, tolerability, and effectiveness, meaning how well the newly approved intervention works in real life. An example of that is Xigris, which was originally put on the market after the publication of the PROWESS trial. The company then collected data both from observational cohorts and from subsequent randomized controlled trials. There are a couple of things to note here. One is that when they looked at real-world data and observational studies, they found that the odds ratio for mortality with Xigris was not as striking as it had been in the PROWESS study; it was not quite as effective. But more interesting is that the risk of severe bleeding, outside the tight confines of the first randomized controlled trial, was nearly double. And subsequent to this, the company itself, not the FDA, pulled it from the market. But what you're going to hear about today, and what you're going to hear more about as the next hot topic, is target trial emulation. Target trial emulation is the use of real-world data while applying the trial design and analysis principles that you would normally apply to a randomized controlled trial to that observational real-world data. That means, for example, that just as a randomized controlled trial has strict inclusion and exclusion criteria that must be met within a particular time frame in the course of the patient's presentation, you look at your observational data and select patients to match the same inclusion and exclusion criteria, as well as the timing of when they would enter the trial, when they would get the intervention, and what outcomes you are looking at.
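The selection step described above can be sketched in a few lines. This is a minimal illustration, not any specific study's protocol: every field name, criterion, and the 48-hour grace period below are invented for the example.

```python
from datetime import datetime, timedelta

# Hypothetical EHR extract; all fields and values are illustrative only.
patients = [
    {"id": 1, "age": 67, "admit": datetime(2023, 1, 2),
     "drug_start": datetime(2023, 1, 3), "on_dialysis": False},
    {"id": 2, "age": 45, "admit": datetime(2023, 2, 1),
     "drug_start": None, "on_dialysis": False},                   # never treated
    {"id": 3, "age": 82, "admit": datetime(2023, 3, 5),
     "drug_start": datetime(2023, 3, 20), "on_dialysis": False},  # treated late
    {"id": 4, "age": 59, "admit": datetime(2023, 4, 1),
     "drug_start": datetime(2023, 4, 2), "on_dialysis": True},    # excluded
]

GRACE_PERIOD = timedelta(days=2)  # treatment must begin within 48 h of time zero

def eligible(p):
    """Apply the emulated trial's inclusion/exclusion criteria at time zero."""
    return p["age"] >= 18 and not p["on_dialysis"]

def assign_arm(p):
    """Classify by treatment actually received within the grace period;
    late initiators count as controls at time zero, as a trial protocol would."""
    started = p["drug_start"]
    if started is not None and started - p["admit"] <= GRACE_PERIOD:
        return "treated"
    return "control"

cohort = [(p["id"], assign_arm(p)) for p in patients if eligible(p)]
print(cohort)  # [(1, 'treated'), (2, 'control'), (3, 'control')]
```

The key design choice is that eligibility and arm assignment are both anchored at the same "time zero," mirroring randomization in the trial being emulated.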
Now there are different applications of target trial emulation. One is to inform the design of randomized controlled trials. Here is an example from the STOP-COVID investigators, looking at tocilizumab and mortality in patients hospitalized with COVID-19. Subsequent to this observational study, REMAP-CAP published its randomized controlled trial of IL-6 receptor antagonists in critically ill patients with COVID-19. They found a benefit to tocilizumab, and if you look at the odds ratio for mortality, which was a secondary outcome in the REMAP-CAP trial, you'll see it is very similar to what the STOP-COVID investigators saw using real-world data. This kind of similarity, being able to anticipate, plan for, and mimic what you might see in a clinical trial, has become more important because real-world data is now used to inform regulatory decisions around new drug approvals, label revisions, and supplemental approvals. And this is an example of all the drugs that have been given approval on real-world observational data alone. These are usually drugs that have been on the market and are being used off-label in a different patient population or for a different indication, so that you have real-world data from some patients who got the drug and some who didn't. This is of enough interest to the FDA that it funded a project known as RCT DUPLICATE, to see how well we can use real-world data to replicate well-known published randomized controlled trials. We'll talk a little more about that in a second. The other place you'll see real-world evidence is in rare conditions, where you're just not going to have enough patients to randomize for a new drug or intervention. In that case you might have a single-arm trial that uses historical or synthetic controls to approximate a randomized controlled trial.
But what you'll see more often in critical care is comparative effectiveness studies, as well as studies of heterogeneity of treatment effect, where you compare drugs that are already in use but also look to see whether there are subgroups of patients who may get more or less benefit or harm. An example is a study of warfarin versus DOACs in preventing strokes, and in causing bleeding, in patients with atrial fibrillation. It compared the effectiveness of warfarin against the different DOACs, but it was also able to look at a subgroup of patients with chronic liver disease, who were often excluded from the original trials because of concerns about bleeding, and it found the treatment was still beneficial even in this population. Now, I know what you're going to say. You're going to say this is the opposite of what we've always been taught: that in the hierarchy, the pyramid of evidence, these observational studies are lower. They're not as good as randomized controlled trials, right? Because of all the biases that are there. I know that when I've published, the editor keeps coming back and saying: don't say causation, say association. That is true. However, there have been advances on several fronts that now make us much better able to account for these biases and therefore strengthen the level of evidence we are finding. Some of this comes from the analytical tools. Propensity score analysis and inverse probability weighting have been around for a while, but there has also been an exponential increase in the use of machine learning and artificial intelligence in analysis.
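As a rough illustration of the inverse probability weighting mentioned above: the sketch below assumes propensity scores have already been estimated by some model, and all numbers are invented for the example.

```python
# Minimal sketch of inverse probability of treatment weighting (IPTW).
# The propensity scores here are assumed outputs of a previously fitted
# model; the data are made up purely for illustration.
cohort = [
    # (treated?, propensity score P(treated | covariates), outcome: 1 = died)
    (1, 0.8, 0),
    (1, 0.6, 1),
    (0, 0.4, 1),
    (0, 0.2, 0),
]

def iptw_weight(treated, ps):
    # Treated patients are weighted by 1/ps, controls by 1/(1 - ps),
    # creating a pseudo-population in which treatment is independent
    # of the measured covariates.
    return 1.0 / ps if treated else 1.0 / (1.0 - ps)

def weighted_risk(rows):
    total = sum(iptw_weight(t, ps) for t, ps, _ in rows)
    return sum(iptw_weight(t, ps) * y for t, ps, y in rows) / total

treated = [r for r in cohort if r[0] == 1]
control = [r for r in cohort if r[0] == 0]
risk_diff = weighted_risk(treated) - weighted_risk(control)
print(round(risk_diff, 3))  # → 0.0 (no weighted risk difference in this toy data)
```

As with any propensity-based method, this only balances the covariates that went into the propensity model; unmeasured confounding is untouched, which is why the talk stresses data quality.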
That's in part because machine learning is really good at handling large amounts of data, but also because certain methods, such as convolutional neural networks and large language models, let us look at unstructured data we couldn't easily measure before, or capture the images we see on the monitors and ventilators, the waveforms and sounds. But the biggest reason is not necessarily the tools, because you can take the right tool and use it on the wrong material, or use it wrongly, and still end up with garbage. I think the biggest advance in this area is really the exponential growth in the amount and quality of the data we're getting. This gives us not just a large number of patients and higher-quality data, providing the power to do some of these studies, but also a larger pool of patients from which we can be more selective, to better match or emulate what we would do in a randomized controlled trial, and enough additional data to adjust for confounders. In critical care especially, higher-granularity data, particularly with device integration, really helps with establishing causation, because you know exactly when the intervention happened and you can see exactly what happens afterwards, giving you the pre- and post-intervention data to relate to the outcome. But that's not to say that real-world evidence is going to replace randomized controlled trials, because not everything is applicable. Drugs that are not currently in use cannot be studied this way, and interventions that require expertise are not things you can always capture well in real-world data. And just as it has always been, garbage in, garbage out: however much more data we have, and however much the quality has improved, missing patients, missing data, and unreliable data are still going to affect your results.
And as we'll hear more about, there are still challenges in linking data from different sources: avoiding duplication, making sure you're linking not only the right patient but the right time frame in that patient's health, as well as privacy. I think you'll hear more later about what we have been able to do there. But just as with everything else, poor study design will limit the usefulness of real-world evidence: inappropriate selection of patients, inadequate adjustment for confounders. And we know this is true. When we looked at what happened during the pandemic, the methodological quality of COVID observational studies was actually poorer than in the period before. This is important because, remember what I said about RCT DUPLICATE? When they looked at how well they could use real-world data to emulate randomized controlled trials, they found some trials they could emulate well: the data were there to match the patients, the study design, and the outcome. But there were other trials they couldn't match so well, and it turns out the agreement falls off rapidly between those two groups. Among the trials they could emulate closely, the real-world result agreed with the trial in 88% of cases. Among those they couldn't, only 50% agreed. So in conclusion: we will have more data, real-world data will grow, and there will be more tools for us to analyze it. And it's going to be very useful, because it provides evidence of effectiveness in actual practice, as opposed to efficacy. But like all clinical research, real-world evidence will be limited by incomplete or poor-quality data and by poor design. With that, I want to thank you for your attention.
Video Summary
The speaker, with disclosures of funding and advisory roles, aims to elucidate real-world data (RWD) and real-world evidence (RWE) in clinical settings. RWE is defined by the FDA as clinical evidence drawn from RWD, which comprises data from everyday medical interactions, insurance claims, and even wearable devices. This data is increasingly linked and used for various medical trials, notably pragmatic clinical trials that offer broader, more inclusive studies than traditional ones. Examples are given, including environmental data usage and innovative trial designs such as target trial emulation, which mimics randomized controlled trial (RCT) conditions. Advances in machine learning and data quality are enhancing RWE, making it pivotal in post-marketing surveillance and rare-condition studies. However, it cannot replace RCTs entirely due to potential data gaps and integration challenges, highlighting the need for high-quality data and careful study design.
Asset Caption
Two-Hour Concurrent Session | Curating and Analyzing Real-World Data for Critical Care Research in COVID-19 and Beyond
Meta Tag
Content Type
Presentation
Membership Level
Professional
Year
2024
Keywords
real-world evidence
pragmatic clinical trials
machine learning
data quality
target trial emulation
Society of Critical Care Medicine