Intensivist Physicians' Knowledge and Attitudes About Bayesian Adaptive Clinical Trials
Video Transcription
All right. Thank you very much, Dr. Kaiser. My name is Brian, and I'm going to be talking, as you said, about Bayesian adaptive clinical trials. First, I want to thank my co-authors, Joel Levin, Andrew Althaus, Sabina Sloman, Jeremy Kahn, and David Huang, my mentor. I'm a clinical instructor in critical care medicine and a T32 research fellow in the CRISMA Center at the University of Pittsburgh. My only financial disclosure is my T32 funding.

As an introduction: traditional randomized trials have been a mainstay of clinical research since the first modern trial, in 1948, on tuberculosis, and they remain our mainstay for providing causal inference. However, traditional trials do have drawbacks, including time, cost, and the propensity to be indeterminate or underpowered. Traditional randomized trials use what are called frequentist statistics. This is the P value we're all very familiar with, and forgive me if this is a little in the weeds, but it's important for this project. Frequentist statistics estimate the probability that the data could have occurred by chance alone. You then compare that probability, the P value, to a pre-specified bar, the alpha level, and call the result either statistically significant or not, and from that you accept or reject the pre-specified null hypothesis. The classic example: the null hypothesis is that there is no difference; we look at the probability that these data would have shown up by chance; that probability is 2%, so we reject the null and call the result statistically significant. This is certainly rigorous, but it's an inherently inflexible statistical design.

So adaptive trials are being used more frequently because they may mitigate some of the drawbacks of traditional randomized controlled trials. Adaptive trials include innovations such as a master protocol that studies multiple interventions for the same disease process and can even add and remove intervention arms as the trial goes on. An example is the COVID trials: one of them, REMAP-CAP, was initially set up to study community-acquired pneumonia. When COVID came along, it transitioned to a pandemic mode, and as treatments were discovered, studied, and then became standard of care or were found not to be useful, those arms came in and out: hydroxychloroquine, then steroids, then remdesivir, then tocilizumab. Adaptive designs can also vary the sample size and the allocation ratio, even varying them based on data accruing in the trial, in a pre-specified, valid manner.

These types of trials are facilitated by the still rigorous but far more flexible Bayesian statistical designs. Bayesian statistics provide a direct quantitative probability estimate of an event occurring: they take all the data from the trial, plus any prior information that's provided, and return the probability of an event, say, that drug A reduces mortality compared to drug B. Certainly, for those of us who went through medical school rather than a biostatistics PhD, Bayesian statistics are much less familiar than the ubiquitous P value. But the Bayesian approach is actually something we already use clinically: Bayesian logic is essentially the same as a pre-test and post-test probability.
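[Editor's note: to make the contrast described above concrete, here is a minimal Python sketch that summarizes the same hypothetical mortality data both ways, as a frequentist P value and as a Bayesian posterior probability that drug A reduces mortality compared to drug B. The counts, the flat Beta(1, 1) priors, and the choice of Fisher's exact test are illustrative assumptions, not anything taken from the talk or the underlying study.]

```python
# Illustrative contrast between a frequentist P value and a Bayesian
# posterior probability, using made-up mortality counts (not study data).
import numpy as np
from scipy.stats import fisher_exact

# Hypothetical trial: drug A vs. drug B, 200 patients per arm (assumed numbers).
deaths_a, n_a = 52, 200   # 26% mortality with drug A
deaths_b, n_b = 68, 200   # 34% mortality with drug B

# Frequentist view: how probable are data at least this extreme if there is
# truly no difference?  Compare the P value to a pre-specified alpha.
table = [[deaths_a, n_a - deaths_a],
         [deaths_b, n_b - deaths_b]]
_, p_value = fisher_exact(table)
print(f"Frequentist P value: {p_value:.3f}")

# Bayesian view: given the data and a prior, what is the probability that
# drug A reduces mortality?  With flat Beta(1, 1) priors, each arm's
# mortality rate has a Beta posterior; we sample to compare the two arms.
rng = np.random.default_rng(0)
post_a = rng.beta(1 + deaths_a, 1 + n_a - deaths_a, size=100_000)
post_b = rng.beta(1 + deaths_b, 1 + n_b - deaths_b, size=100_000)
prob_a_better = np.mean(post_a < post_b)
print(f"Posterior probability drug A reduces mortality: {prob_a_better:.3f}")
```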
To make that clinical analogy concrete: when you do a Wells score for pulmonary embolism, that's your pre-test probability; then you do a test like a D-dimer, and you get a post-test probability. That's the same as having a prior in a Bayesian trial, getting some data, and then forming a new probability based on your trial data. So these Bayesian trials are being undertaken with increasing frequency, because they may be able to mitigate some of the issues with traditional RCTs, but physicians' level of understanding of these trials is unknown. And since we know that physician acceptance is a key mediator of evidence uptake, it matters that physician acceptance of Bayesian adaptive trials is also unknown.

So our objective was to determine physician understanding and acceptance of Bayesian adaptive trials compared to traditional frequentist trials. To do this, we performed a randomized study of U.S. intensivists. We administered it in conjunction with a longitudinal study by Joel Levin and Jeremy Kahn, two of the co-authors, on how physicians form and update their opinions in response to new evidence. For our portion of the study, each participant reviewed one hypothetical trial abstract, either a Bayesian design or a frequentist design, and was then asked questions on acceptance, self-perceived understanding, and measured understanding.

Each hypothetical abstract described a trial of placebo versus a new immunomodulator for vasopressor-dependent septic shock. The scenario was kept intentionally generic, a broad, common, heterogeneous condition, and we avoided any specific drug names, journal names, researchers, or institutions to avoid prior bias. To ensure the two abstracts described equivalent trials, we actually simulated conducting both: we created a simulated trial data set, gave it a moderately positive true effect, ran separate Bayesian and frequentist analyses, and put the results in the respective abstracts (a rough sketch of that kind of paired analysis appears below). The abstracts were otherwise similar in form and content.

Now to our results. The survey was sent to 592 U.S. intensivists who, after being contacted via email, had previously agreed to participate in the longitudinal study. We received 273 responses, a 46.1% response rate. Because these participants had already responded to earlier portions of the longitudinal study, we know their demographics, and those were similar between respondents and non-respondents. Overall, the cohort was majority male, mostly worked in academia, and mostly trained in pulmonary critical care, and demographics were fairly well balanced between arms. I won't go through all of them, but our typical participant was a roughly 45-year-old Caucasian man, working anywhere from 50 to 100% clinical time, mostly in academia.

Getting into the results of our study, we asked two questions on physician acceptance. For each of these graphs, across all the questions, the top line is the frequentist arm and the bottom line is the Bayesian arm; the bars show the proportion of Likert-scale responses, with blue always representing greater understanding or acceptance and red representing lesser. Here we see that for acceptance, there was no statistically significant difference between groups for either question.
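[Editor's note: for readers curious what the "simulate one trial, analyze it both ways" step described in the methods might look like in practice, here is a rough Python sketch. The sample size, event rates, priors, and choice of tests are assumptions made purely for illustration; the talk does not specify the investigators' actual simulation parameters or analysis code.]

```python
# Rough sketch of simulating one trial dataset with a moderately positive
# true effect, then running a frequentist and a Bayesian analysis of the
# same data.  All parameters below are illustrative assumptions.
import numpy as np
from scipy.stats import chi2_contingency

rng = np.random.default_rng(2023)
n_per_arm = 250
p_placebo = 0.40   # assumed control-arm mortality
p_drug = 0.32      # assumed "moderately positive" true effect

# One simulated dataset: mortality counts in each arm.
deaths_placebo = rng.binomial(n_per_arm, p_placebo)
deaths_drug = rng.binomial(n_per_arm, p_drug)
table = [[deaths_drug, n_per_arm - deaths_drug],
         [deaths_placebo, n_per_arm - deaths_placebo]]

# Frequentist analysis -> the P value that would appear in the
# frequentist version of the abstract.
chi2, p_value, dof, expected = chi2_contingency(table)

# Bayesian analysis of the same data -> the posterior probability of benefit
# (and a credible interval for the risk difference) for the Bayesian abstract.
post_drug = rng.beta(1 + deaths_drug, 1 + n_per_arm - deaths_drug, 200_000)
post_placebo = rng.beta(1 + deaths_placebo, 1 + n_per_arm - deaths_placebo, 200_000)
prob_benefit = np.mean(post_drug < post_placebo)
arr_ci = np.percentile(post_placebo - post_drug, [2.5, 97.5])

print(f"Frequentist abstract: P = {p_value:.3f}")
print(f"Bayesian abstract: P(drug reduces mortality) = {prob_benefit:.3f}, "
      f"95% CrI for absolute risk reduction = [{arr_ci[0]:.3f}, {arr_ci[1]:.3f}]")
```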
Moving on to physicians' self-reported, or self-perceived, understanding: we asked about both the methods and the results of the abstract, and in both cases the frequentist arm had significantly higher self-perceived understanding, with P values, as you can see up there, each less than .001.

We then asked four questions about the abstracts to actually measure understanding. For each, we provided a true-false statement and asked the participant to rate their level of agreement. The first two questions concerned false positives and whether the investigators of this hypothetical trial could influence patient assignments under adaptive randomization. On the first, the frequentist arm showed statistically significantly greater understanding; on the second, understanding was numerically but not statistically higher. On the final two questions, about stopping the trial and about the correct definition of a P value or Bayesian posterior probability, the frequentist arm again had significantly better understanding. So overall: no difference in acceptance, and higher frequentist understanding on two of two self-perceived questions and three of four measured questions.

Our finding, then, was that there was no difference in acceptance of Bayesian versus frequentist trials, despite lower physician understanding on the Bayesian side. We believe this suggests, first, that physician acceptance is not, at this time, a barrier to uptake of Bayesian adaptive trials where they could improve the efficiency or quality of evidence. However, the lower understanding does highlight the importance of ongoing education on Bayesian adaptive trials, and it underscores the critical role of peer review and expert trial oversight: physicians are willing to accept the results of these trials, but the trials are more complex and physicians understand them less well. Thank you very much for your time, and I'd appreciate any questions anyone has.
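[Editor's note: the talk reports P values for comparing Likert responses between the frequentist and Bayesian arms but does not state which test was used. One common choice for ordinal Likert data is the Mann-Whitney U test; the sketch below uses fabricated responses purely to show the mechanics, not the study's actual data or analysis.]

```python
# Comparing Likert responses between two arms with a Mann-Whitney U test.
# Scores: 1 = strongly disagree ... 5 = strongly agree with
# "I understood the methods of this trial."  Responses are fabricated.
import numpy as np
from scipy.stats import mannwhitneyu

frequentist_arm = np.array([5, 4, 4, 5, 3, 4, 5, 4, 2, 5, 4, 4])
bayesian_arm = np.array([3, 2, 4, 3, 3, 2, 4, 3, 2, 3, 4, 2])

stat, p = mannwhitneyu(frequentist_arm, bayesian_arm, alternative="two-sided")
print(f"Mann-Whitney U = {stat:.1f}, P = {p:.4f}")
```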
Video Summary
In this video, the speaker discusses Bayesian adaptive clinical trials as an alternative to traditional randomized trials. Traditional trials have limitations such as time, cost, and the potential for indeterminate or underpowered results. Bayesian adaptive trials offer more flexibility by using a master protocol that can study multiple interventions and adapt to add or remove arms as needed. These trials use Bayesian statistics, which provide a direct probability estimate of an event occurring. The speaker presents the results of a study that compared physicians' understanding and acceptance of Bayesian adaptive trials to traditional frequentist trials. The study found that there was no difference in acceptance between the two types of trials, but physicians had a higher understanding of frequentist trials. The findings suggest the need for ongoing education on Bayesian adaptive trials and highlight the importance of peer review and expert oversight.
Asset Subtitle: Professional Development and Education, Research, 2023
Asset Caption: Type: star research | Star Research Presentations: Research Enrichment, Adult and Pediatric (SessionID 30002)
Content Type: Presentation
Knowledge Areas: Professional Development and Education; Research
Membership Levels: Professional; Select
Tags: Medical Education; Research
Year: 2023
Keywords: Bayesian adaptive clinical trials; traditional randomized trials; limitations; flexibility; Bayesian statistics