Year in Review: Research Section
Video Transcription
Hello, my name is Jon Sevransky. I'm an intensivist at Emory University Hospital in Atlanta, and I'm delighted to talk a little bit about clinical trial design during the COVID-19 pandemic: whether traditional (frequentist) designs or adaptive designs are better for patients and for clinicians. You can see here the financial grant support to my institution and some of my intellectual disclosures. But perhaps the most important disclosure is that I'm not an expert in adaptive trial design, nor am I a statistician, though I have participated in the design of both frequentist and adaptive clinical trials.

During a pandemic, it is extraordinarily difficult to improve care for patients with a novel pathogen, and this is for a number of reasons. First, almost everybody has an increased clinical workload. People often wonder whether the usual treatments they know, the treatments they use for other diseases, are appropriate for this novel pathogen. There is substantial noise from your colleagues, from the lay press, and, importantly, from the medical press about what people think is best, and that can sometimes distract from improving patient care. You will often talk to colleagues who have experience, and they will tell you about that experience, but this is clearly inferior to having a clinical trial that tells you how best to care for a patient. Even high-impact journals, in their rush to deliver information to clinicians, have erred, and some of the things that showed up in the literature turned out not to be true. This happened both in journals that we read and in journals that general internists read, and there were enough retractions during the pandemic that there was actually a study of how those retractions were published. So when you're trying to improve clinical care during a pandemic, you may feel a little bit like Sisyphus: you roll a boulder up the hill, and it keeps rolling back down on you. That is hardly the kind of work we want; we would like to be efficient at delivering better care to our patients with a novel pathogen.

So again, I was asked to talk about the differences between frequentist and adaptive designs, and also about which one might be better for patients and for clinicians. I think it's useful to spend a few minutes reminding everybody, and myself, what these two types of trials are. A frequentist trial is a traditional randomized controlled trial in which patients are randomized, usually by a metaphorical flip of a coin, to receive either an intervention or usual care or a placebo. This is different from an adaptive design, which allows information from patients already enrolled in the trial to modify important characteristics of the trial, including the arms to which patients are allocated, the dose of the drug, the number of treatment arms, the study endpoints, and even the sample size. This is often done in conjunction with a platform trial, which, rather than testing a single intervention, tests a group of interventions within a disease process or patient population. Perhaps one of the better examples is the REMAP-CAP study, which was designed to look at patients admitted to the hospital with pneumonia.
As of a couple of days ago (it is the beginning of February right now), they had enrolled almost 20,000 patients into this platform trial. This particular trial was designed by forward-thinking investigators to be up and running, to have a so-called warm base, in the event that a pandemic occurred during the course of the trial. So you have a base platform for a type of disease, namely pneumonia, and different arms can be added on in the case of a particular type of viral pneumonia.

So what are the advantages of frequentist designs? Well, most of us are familiar with them. In fact, the first clinical trial, back in the days of scurvy, tested vitamin C versus no vitamin C, comparing one treatment with a control. Most clinicians are fairly comfortable with a traditional frequentist trial, and in the past the FDA mostly approved treatments based on frequentist designs. For an investigator, there is also a lower barrier to entry: all you have to decide is what you are going to test, and most statisticians can help you design the trial. But there are a number of disadvantages. Generally, you can only test a single hypothesis: does this drug or this treatment pathway work? If it doesn't, you're back at the beginning. Because of that, especially in critical care, where many of our trials have not shown useful treatment effects, frequentist trials can be extraordinarily inefficient. When you do sequential trials of things that don't work, in aggregate you may actually spend more money than you would on an adaptive trial, even though the startup costs for an adaptive trial may be higher. And finally, if you're randomizing patients to receive either placebo or a treatment, half of your patients will not get a treatment, and some people think that may be less patient-centric than a design that allows more people to get a potentially beneficial treatment.

So what are the advantages of adaptive designs? Well, you can test multiple agents, and because of that they may be more efficient. You can also design the trial with a response-adaptive component, so that if a drug is working, you randomize more patients toward that drug, and if one is not working, you randomize fewer patients toward it. You can stop an intervention earlier if it looks unlikely to have benefit and, again, randomize more patients toward an arm that does look beneficial. As for the disadvantages: many people are less familiar with adaptive designs; the FDA, while it has some experience now, certainly has less than with frequentist designs; they take longer to design and may cost more money to do so; and the initial design can be more complicated.

I just wanted to briefly show what you could do in this three-arm trial: if treatment two starts to show a signal that it might be better than treatment one or placebo, then rather than randomizing equally to all three arms, you can randomize more of the patients to treatment two. You don't stop randomizing patients to the other treatments, but you can reweight the randomization, so the computer that allocates patients puts more of them toward the treatment that appears more effective.
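To make that reweighting concrete, here is a minimal sketch (in Python) of one common way response-adaptive randomization can be implemented: Thompson sampling on a binary outcome, where each arm's allocation weight is its current posterior probability of being the best arm. This is an illustrative assumption on my part, not the algorithm of REMAP-CAP or any specific trial mentioned in the talk; the arm names, Beta(1, 1) priors, "true" response rates, and update-after-every-patient schedule are all hypothetical.

import random

random.seed(42)

# Three hypothetical arms with Beta(1, 1) priors on a binary outcome.
arms = {name: {"successes": 0, "failures": 0}
        for name in ("placebo", "treatment_1", "treatment_2")}

def allocation_probabilities(arms, draws=2000):
    """Estimate each arm's posterior probability of being best by
    sampling from Beta(1 + successes, 1 + failures), and use those
    probabilities as randomization weights (Thompson sampling)."""
    wins = {name: 0 for name in arms}
    for _ in range(draws):
        samples = {name: random.betavariate(1 + a["successes"], 1 + a["failures"])
                   for name, a in arms.items()}
        wins[max(samples, key=samples.get)] += 1
    return {name: w / draws for name, w in wins.items()}

def randomize_patient(arms):
    """Allocate one patient according to the current adaptive weights."""
    probs = allocation_probabilities(arms)
    names, weights = zip(*probs.items())
    return random.choices(names, weights=weights)[0]

# Hypothetical true response rates: treatment_2 is genuinely better.
true_rates = {"placebo": 0.30, "treatment_1": 0.32, "treatment_2": 0.50}

for _ in range(300):
    arm = randomize_patient(arms)
    responded = random.random() < true_rates[arm]
    arms[arm]["successes" if responded else "failures"] += 1

# Enrollment drifts toward treatment_2 as its signal accumulates --
# the "reweighting" of randomization described above.
print({name: a["successes"] + a["failures"] for name, a in arms.items()})

In a real platform trial the reweighting would typically happen at prespecified interim analyses rather than after every patient, and arms can also be dropped for futility when their probability of being best falls below a prespecified threshold.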
Potentially, you can do other things with an adaptive design. This slide comes from the design of the VICTAS trial. We weren't ultimately funded to get to this point, but we designed the trial, with the help of statisticians who are adept at adaptive designs, so that if we showed our primary outcome measure, which was vasopressor- and ventilator-free days, a less compelling outcome than mortality, we could continue the trial as long as the predictive probability of reaching the mortality endpoint was neither very low nor very high. In other words, it allowed us the flexibility to potentially continue the trial as long as the predictive probability stayed within two sets of bumpers, for either success or futility.

So how do you decide which of these is better? Well, one way is simply to tally up the number of trials that have been published in high-impact journals, and in the next two slides I'm going to do this. You will see studies that have informed the care of patients admitted to the ICU or to the hospital with COVID, including the dose of dexamethasone, the use of an antibody to interleukin-1 beta, dexamethasone again, convalescent plasma, helmet ventilation, hydrocortisone, high-flow oxygen, and anticoagulation. On the adaptive side, you have therapeutic anticoagulation, convalescent plasma, interleukin-6 receptor antagonists, and dexamethasone. If you tally them all up, you will find that more studies were done with frequentist designs than with adaptive designs. So strictly by the number of trials done, frequentist designs win. Having said that, I think it's pretty clear that each design has potential advantages, and that while it is perhaps easier to get a frequentist trial up and running, there are a number of potential advantages to the adaptive design. Similar to the group that designed REMAP-CAP, if you already have a continuous trial up and running, then even though an adaptive trial may take longer to start, it may be more efficient once started, and it may turn out to be more patient-centric, because you can randomize more people to a treatment that might be helpful. I think you will see adaptive trials become more common over the next few years. As for the real winner, it's probably too early to tell, but I suspect that many of us will be both doing and seeing more adaptive trials in the next few years. With that, I'd like to thank the organizers again for giving me the opportunity to talk about this, and if there is a live portion, I will do my best to answer any questions people might have. Thank you very much.

Today, I'll briefly be speaking about what preprints mean for critical care science and what we can expect from them in the future. I currently have funding from the NHLBI, and I serve as editor-in-chief of the Annals of the American Thoracic Society. Here, I've provided a schematic overview of scientific publishing in its current form. As we all know, manuscripts are typically submitted to a journal and subsequently undergo administrative processing, followed by peer review.
Manuscripts selected as high priority are then revised, potentially sent back to the journal, and then either rejected by the editors or ultimately accepted for publication. Upon acceptance, there is a publication lag between when a manuscript is accepted and when it is finally published, in print or even online, before the public can consume that information. This process takes a median of about 100 days at the majority of scientific journals. So, on average, by the time a manuscript has been submitted, it won't see acceptance or publication for at least three months. There is general consensus that this three-month lag between submission and ultimate acceptance is far too slow to create meaningful knowledge in areas where timely information is necessary.

This was particularly relevant during the 2003 SARS epidemic, when only approximately 8% of papers relevant to the care of those patients were published during the outbreak itself. An additional 7% of papers were accepted, but this left the vast majority of scientific information inaccessible to the practicing clinicians who needed it. In this figure, you can see that this publication lag spanned all research domains, from the search for causative agents to treatments and risk factors; the SARS epidemic is shown by the gray distribution.

In response to the lack of readily available information around the SARS epidemic, as well as growing discontent within the scientific community around publication lag, the community advanced a new mode for publishing papers early, and thus the preprint was born. The preprint sought to bypass the lengthy peer review and revision process that typically occupies many weeks to months of a journal's evaluation cycle, and to immediately put research in the hands of the public, who could consume it and potentially use it to benefit patient care. Preprints, by definition, are publicly accessible scholarly content that has not yet been certified by a journal's peer review process.

There are a number of preprint servers whose sole purpose is to catalog and maintain scientific information that has not yet been peer reviewed. These began in 1991 with arXiv (pronounced "archive"), which cataloged discoveries in the physical sciences. In 2013, bioRxiv was developed, cataloging information from the biological sciences. And in 2019, medRxiv was launched, covering the clinical sciences. Since the creation of bioRxiv in 2013, there has been steadily increasing use of preprints as a way to disseminate scientific information. Shown in these figures are the numbers of papers published versus posted as preprints during the Zika and Ebola epidemics of 2014 and 2016. You can see that only about 10% of papers during both of these outbreaks appeared as preprints relative to the total number published in the scientific literature. In contrast, at the beginning of the COVID pandemic in 2020, nearly 20 to 25% of all science was represented in preprint form, compared with only 10% during the most recent Zika outbreak in 2016.
While there is general acceptance that preprints are an important way to rapidly disseminate new knowledge, they have brought several additional benefits to the scientific community, along with several ongoing challenges. I've broken these benefits and challenges into several areas that preprints have impacted: authors, editors, funders, and readers.

There are a number of benefits to preprints from the perspective of an author. First, and perhaps most importantly, is the ability to rapidly disseminate new knowledge to the scientific community, something that is delayed when publishing in a traditional journal. This allows for earlier community engagement, where scientists can interact with other clinicians or scientists to discuss the findings at an earlier stage. Including manuscripts on a preprint server in one's CV may also allow authors to address promotion in a more timely fashion. Some authors use preprints to protect the provenance of intellectual content, specifically staking a claim to a scientific finding in an area that is hotly contested or otherwise competitive. Authors often benefit because there is no paywall to preprint publication, allowing broad reach of the science for free; this has become increasingly important to authors concerned about equity at the global level.

Preprints also allow the findings in a manuscript to gain greater visibility and recognition. This was nicely demonstrated in a paper published in JAMA in 2018, in which the authors examined over 8,000 preprints posted on bioRxiv and matched each to up to five published articles on PubMed that did not have a preprint, matched for journal and publication time frame. They then compared Altmetric attention scores between published articles that did have a preprint and those that did not. The Altmetric attention score, as shown here, is a crude measure of engagement by the scientific community and the public with a specific manuscript. Publications with a preprint had significantly higher Altmetric attention scores than publications without one, suggesting that preprints significantly increase engagement with a scientific finding.

In contrast to the significant benefits of preprints to authors, there are relatively few potential harms. The first and most often cited has to do with an interpretation of the Ingelfinger rule, which states that most medical journals will not consider for publication a manuscript that has been previously published elsewhere. However, the vast majority of journals consider preprints an exception to this rule; in fact, over 86% of the top 100 journals by impact factor will consider manuscripts that have been uploaded to a preprint server. An additional potential harm is the added work necessary to engage with the preprint community, but the majority of authors who upload preprints find this engagement beneficial.

There are also a number of potential benefits to preprints from the perspective of an editor.
Because preprints typically receive feedback from a broader and more diverse group of readers than the typical peer review process affords, editors may view submissions with a preprint as more thoroughly peer reviewed and thus a superior product. Moreover, as discussed on prior slides, journal publications that had a preprint received greater attention, perhaps driving greater traffic and citations, two key metrics that journals rely on to measure success. Many journals have also streamlined preprints into their workflow, specifically allowing the transfer of a manuscript from a preprint server directly into the journal's submission process, which may reduce administrative burden when processing manuscripts.

However, there are also some potential harms. Editors view themselves and their journals as credible sources of information and take some responsibility to combat misinformation. Misleading, non-peer-reviewed science in preprint form that garners significant attention may demand that editors spend real estate in their journals combating it. In addition, authors will on occasion continue to cite preprints instead of the final peer-reviewed journal publication, which diverts traffic away from the journal, potentially in a harmful way. Peer reviewers or other editors may also feel responsible for reviewing a preprint's history and feedback when commenting on a manuscript's suitability for publication, which ultimately creates more work for what is already a volunteer effort for the vast majority of peer reviewers and editors.

From the perspective of funders, preprints may be beneficial in that they allow writers and reviewers access to the most up-to-date science. They may also allow junior investigators to better demonstrate productivity in an area in which they are seeking funding, or allow more seasoned investigators to demonstrate productivity in an area that is rapidly evolving or relatively novel. However, most funding agencies have yet to determine the right role for preprints, or how they should be weighted when assessing a proposal's strengths in peer review.

From the perspective of the consumer, preprints have the greatest opportunity to benefit the community when there is an immediate need for knowledge, such as we experienced with the COVID-19 pandemic. Real-time dissemination of emerging epidemiology, prognosis, and case series about novel diseases can potentially be life-saving in some circumstances. However, there are potential harms here as well. Correct interpretation of a preprint's findings may require attention to the comments that emerge from the scientific community, and in some cases poorly conducted science may raise safety concerns about suboptimally vetted treatments. In other words, shoddy science has the potential to be disseminated via this means.

Ultimately, the central argument surrounding the benefits versus harms of preprints comes down to timeliness versus credibility. The word preprint neither implies shoddy science nor absolves investigators of the need for critical review; yet it provides opportunities for both. So let's take a look at what the literature says about this competing tension. In the modern era, preprints still dramatically reduce the time to data access, by at least two to three months. This was nicely demonstrated by Fraser and colleagues in a recent paper in PLOS Biology.
In their analysis, they looked at thousands of preprint manuscripts that were ultimately published in the medical literature by most of the major publishing houses across the world. On the left panel, you'll see the time it took preprint servers to take the uploaded manuscript and release it to the public: for both COVID and non-COVID preprints, this averaged about one to two days of screening before a manuscript was released for public consumption. In contrast, on the right panel, you'll see the time between when the paper was uploaded to the preprint server and when it was ultimately published by the medical journals. There were significant delays across both COVID and non-COVID manuscripts. While COVID-19 preprints were published in the medical literature much faster than non-COVID-19 preprints, there were considerable delays across all groups, shown by the red bars: on average, anywhere between 50 and 100 days elapsed after preprint posting before publication in the medical literature. These data demonstrate that while publishers responded to the need for rapid COVID-19 information by publishing COVID-19 papers faster than non-COVID papers, peer review still caused considerable delays in data dissemination, of at least 50 to 100 days.

But what about preprint credibility? Is the information being released earlier quality information? Some of the most important discoveries of the COVID pandemic were published first in preprint form. The manuscript first describing the COVID-19 outbreak in China was uploaded to medRxiv on February 9, 2020. The New England Journal ultimately peer reviewed and accepted this manuscript, posted it online in March 2020, and finally published it in paper form in April 2020. On the one hand, this demonstrates how journals can rise to the occasion and rapidly publish incredibly important findings; on the other hand, that one-to-two-month lag may have had incredible consequences for public health. Similarly, the initial RECOVERY trial, which demonstrated a mortality benefit for individuals with COVID-19 treated with dexamethasone, was posted in preprint form in June 2020 and accepted by the New England Journal and posted online one month later. The RECOVERY investigators also published their study examining early tocilizumab in COVID-19 in preprint form in February 2021, which was ultimately accepted and published by The Lancet in May 2021. Advocates for preprints may highlight these last three examples as evidence of their credibility, importance, and potential to save lives that might have been lost had such findings not been disseminated in preprint form.

However, during the pandemic there were also examples of preprints spreading misinformation, to the detriment of the scientific community and ultimately of patients. Many of you are aware of the initial trial suggesting that hydroxychloroquine in combination with azithromycin was beneficial as a treatment for patients with COVID-19. It was posted to medRxiv in March 2020 and published in July 2020. The study was widely and rapidly criticized: among other concerns, it did not meet its a priori sample size requirements, it incorporated a trial arm that was not prespecified, and it was rapidly contradicted by numerous other studies.
Despite these issues, the preprint was widely picked up by the media and ultimately called a game-changing treatment by the sitting president of the United States. At the time of this talk, the published yet refuted study had been cited almost 3,000 times. And while it's not clear to what extent the preprint contributed to the widespread hysteria surrounding this potential treatment, it certainly allowed greater opportunity for the dissemination of misinformation surrounding these therapies.

So if we're convinced that preprints are capable of providing credible information in a more timely way than peer review, the question then is: what is the value of the peer review process, and does it add to the quality of the manuscripts being considered? In a fascinating analysis, Carneiro and colleagues compared manuscript quality between preprints and manuscripts published in the medical literature. They did this in two ways. First, they compared the quality of a random sample of preprints uploaded to bioRxiv with manuscripts published in the medical literature and indexed in PubMed, shown in the left panel. Second, they conducted a pre-post analysis of manuscripts uploaded to a preprint server and ultimately published in the medical literature, comparing quality at the preprint stage versus the publication stage, shown in the right panel. To measure quality, they calculated each manuscript's percent adherence to its respective reporting standard; for example, a clinical trial manuscript would be compared with the CONSORT reporting standard. In both analyses, the quality of the published manuscripts was higher than that of the preprints, with a 4.2% greater adherence to reporting standards resulting from the peer review process. So while peer review clearly improved the quality of manuscripts, it is not yet clear whether that improvement outweighs the delays in data dissemination that peer review requires.

There is some evidence that the preprint community can be self-policing and that bad preprints can be removed through community engagement. As an example, one article posted in January 2020 suggested that the COVID-19 virus was man-made. The scientific community posted 135 comments critiquing the work, and the authors eventually withdrew the preprint from the server. We also know that peer review is not a panacea for bad science, as evidenced by a couple of trials shown here that were published in high-impact journals after peer review and ultimately withdrawn once investigation identified significant scientific integrity issues related to their conduct.

With a more nuanced understanding of preprints, instead of labeling them as universally good or bad, we can focus our efforts on strategies to maximize the benefits and minimize the harms. A number of strategies to minimize the harms of preprints are already in place. These include preprint servers' automatic screening for pseudoscience and conspiracy theories prior to posting; active engagement with the community to solicit input and feedback about specific manuscripts; and disclaimers labeling preprints as not peer-reviewed or preliminary.
Preprints also need to link to the articles ultimately published in the medical literature after peer review, as the most credible source of information on that specific topic. And finally, there needs to be some acceptance of consumer responsibility: there are pros and cons to preprints in their native form. There are also some emerging and novel ways to reduce the harms of preprints. These include providing a mechanism to recognize individuals who actively engage as commenters or peer reviewers of preprints; currently, we have no standard by which that academic work receives credit. We should also develop community standards for how to incorporate preprints into our discourse; publishers and funders need to figure out ways to incorporate preprints into the discussion while acknowledging their pros and cons. And finally, there is some discussion about whether preprints should have an expiration date if they are never published in the medical literature. This idea has received some pushback, because it may create publication bias by suppressing data that journals were simply not interested enough in to publish, but it is nonetheless an emerging way in which people are thinking about minimizing the harms of preprints.

So what does the future of preprints in critical care look like? It is clear that they are increasingly used and here to stay. Overall, they are largely beneficial, allowing earlier access to new knowledge and providing the community with greater exposure to primary data. Consumer standards, however, are necessary to minimize the harms, which are potentially great when preprints are used to disseminate misinformation. Medical journals, meanwhile, are still in good shape: peer review is still clearly additive and confers a powerful credibility that preprints have yet to achieve. Thank you so much for your attention. I'd be more than happy to discuss any topics presented in my talk offline; feel free to email me at your leisure if you have questions or issues you'd like to discuss further.

My name is Craig Coopersmith, and I'm delighted to be speaking to you today about 10 lessons on how to talk to the media about scientific findings, either yours or others'. I have my disclosures listed here. For those of you who don't know, there are actually many Hawkeyes. On the left is Hawkeye from The Last of the Mohicans, played by Daniel Day-Lewis, which scored 93% on Rotten Tomatoes. In the middle is Hawkeye Pierce from MASH; the finale had a 77 share and was viewed by 125 million people, the largest audience ever for a scripted TV show. On the right is the most recent Hawkeye, who is kind of a superhero, kind of a sidekick in the Marvel movies, and who now has his own show on Disney Plus. But people of a certain age are going to know who this is: Hawkeye Pierce, played by somebody named Alan Alda. Alan Alda is not a doctor, although for many years people believed he was. He is, however, a master communicator who was interested in science education, so interested that there is actually something called the Alan Alda Center for Communicating Science. And if you look at the bottom: after 11 years on the show, Alan wondered why scientists struggled to switch from lecturing about their work to having real conversations about it.
So we're gonna try to go through 10 different lessons. These aren't from the Alda Center, but they are things I think might be helpful.

Lesson number one. We're gonna start with a description of an ICU patient who receives an experimental drug. This is what you might say: the patient is a 64-year-old female with a history of unresectable pancreatic neoplasia with a CBD stricture status post ERCP, who presented with lethargy and poor PO intake times two days, with emesis and rigors; was found to be tachypneic and hypoxic in the ED; was intubated for airway protection; became hypotensive and was started on pressors; developed AKI and was started on CRRT; went to IR for tube exchange; cultures grew out gram-negatives; had persistent leukopenia and was presumed functionally immunocompromised; and was started on an experimental checkpoint inhibitor. You understand everything that I just said. But what do you think was heard?

When I say the patient is a 64-year-old female, that's actually exactly what's heard. When I say a history of unresectable pancreatic neoplasia with a CBD stricture status post ERCP, what's heard is something related to the pancreas, and CBD is something similar to marijuana, so there must be something like medical marijuana here, and ERCP must be a brand of CBD. Presented with, as you see, these things here: probably what's heard is presented with lethargy and some other things that don't make sense, one of which might have been "rigorous." Found to be tachypneic and hypoxic in the ED: found to be something in the ED, not sure what that is. What is the ED? Well, it could be the emergency department. It could be edu-related, so perhaps the university. It could perhaps be related to Ed Sheeran. Multiple things were done, all related to the ED, or Ed. Went to IR for tube exchange: went to IR, what is IR? I looked it up on the internet, and there are 142 definitions of IR, including the last one, Ivan Reitman, the director of Ghostbusters, who sadly passed away recently. Then you have these things above, and what's heard is persistent something with presumed something, and the patient was started on an experimental drug.

So let's put the whole story together, what's heard versus what's said: the patient is a 64-year-old female with a history of rigorously and continuously using CBD, who got sick at an Ed Sheeran concert after seeing Ghostbusters, was persistently sick, and so was started on an experimental drug. What's my take-home point? Everybody in the audience has a native language, and your native language is not critical care research; it's English or something else. However, you're now at least bilingual, as you can converse just as easily in either of these languages. The story I told you was probably not really in English, but you understood what I said. A reporter also has a native language, potentially English, potentially something else, but is unlikely to be fluent in the language of critical care research. You need to communicate in their native language, not your secondary language, if you want to be understood.

Lessons two through four. I love science. I love research. I love words. And I get very excited about each of them. These words roll off my tongue: animals that overexpress the anti-apoptotic protein Bcl-2 in the gut epithelium are made septic via Pseudomonas pneumonia; 24 hours later, animals were sacrificed; intestinal tissue was stained for active caspase-3 via immunohistochemical techniques; et cetera, et cetera. Those words actually roll off my tongue, and they're from a paper that I wrote.
So how difficult is it to understand me if critical care research is not your native language, and I speak really, really fast, and I include both essential and fairly irrelevant detail, and I don't take the time to articulate what the key message is? So I'm going to try this again, restating what I put on that previous slide: we looked at animals engineered to have a protein that prevents cell death only in the intestine. Mice were subjected to a model of pneumonia that kills almost all the animals. By changing cell death in the gut, we improved survival tenfold in pneumonia. Preventing cell death in the gut might be a future treatment for a disease that affects 49 million people worldwide. Take-home points: no matter how excited you are, pace is important, so speak slowly and clearly. There are critically important points that should be included. There are also unimportant points, and they should not be included unless you're asked for more detail. There's a reason every journal you publish in has an abstract, and a reason a journal will often ask you to summarize your most important points in a few sentences. Your research really does have one to three most important points, and it's your responsibility to highlight them.

Lessons five and six. You may be interviewed by the medical press or by the lay press, and they're quite different. In the medical press, you're generally, not always, but generally, talking to peer medical professionals. Even if they're not ICU specialists, they understand the basics of what we do. So let's take the medical press. I understand that I can't see you on Zoom, but who listening to me is a true expert in oncology? Who's a true expert in psychiatry? In radiology? And who took a rotation in these in school or postgraduate training and has had some exposure to them? What you're trying to do is explain your findings in a way that will make sense to someone with a basic understanding of critical care, the same understanding you have of oncology or psychiatry or radiology: more than nothing, but certainly not that of an expert. In the lay press, who are you talking to? These are my parents, and my parents are amazing. They're smart and well-informed, caring and curious. They have skill sets in multiple domains that I do not. They're very proud of me for what I do, and neither of them is a medical professional. They're not nurses or doctors or APPs or pharmacists or RTs or dietitians or any other type of healthcare professional, and they don't really understand what I do, much less what my research is about. And they're still likely not reflective of the general population you're ultimately trying to communicate with, because they're incredibly well-educated. What's my take-home point? Know your audience, and communicate to who your audience is.

Lesson number seven. Your employer very likely has a media relations expert. Often, not always, but often, you can actually get extra media training. They'll often help you put together a press release in advance, if one is indicated. They often know the reporters and outlets you're going to be talking to, and they'll often want to be at your interview. Importantly, they want you to come across in the best possible light. Their job is to make you look good, but their job is also to make the university and hospital look good.
And equally important, if not more important, their job is to protect the university or hospital from looking bad and from controversy. They're generally going to want to know about your interviews in advance, especially if you haven't done this a thousand times, and to discuss what they believe can and cannot be said, and why. Why? Because they want to prevent unintended consequences. You have no idea whether what you say will gain traction in the media and then in social media, and you have no idea what will be stated once it's in the public domain. It might be exactly what you want to communicate; on the other hand, it might not.

So what might media relations help you do? They might tell you not to work with a reporter or a show known for gotcha reporting, one that comes with a clear agenda and wants you to talk about what they want to talk about, as opposed to what you want to talk about. They're going to prevent you from commenting on things that have the potential to get you or your employer in trouble: commenting on a politician's health, or on policies such as mask mandates. Even if you say something scientifically accurate, criticizing somebody who controls the dollars that come to your university can be problematic. If you say something very, very critical about your governor or your senator, even if it's accurate, remember that your governor or senator has a lot to do with the money that goes to your institution, and your institution is not going to be thrilled with that. There are other ways of getting the message across. Take-home message: work with your media specialist. They can help you with your presentation and with messaging that isn't inherently in your wheelhouse; you're the research expert, but that doesn't mean you're the expert on messaging to the media. And they can keep you out of unintended trouble that you never considered when you agreed to talk to the press to publicize your great work.

Lesson number eight. I've been interviewed, especially during COVID, by a zillion different outlets: CNN and Fox and NPR and the New York Times and the Washington Post and People and Reader's Digest, a whole host of local media outlets, internet sites I'd never heard of, and local TV stations in different countries. And they are not all the same. So it's important to understand how much of your interview is actually used in the final story. If you're live, all of it. This is nerve-wracking in its own way, and it's also incredibly unusual; I think I've done two entire start-to-finish live interviews. But if you're taped, or if somebody is taking notes while listening to you, it depends on the type of media. Let's talk about a newspaper, a magazine, or an internet site. How long they talk to you will probably depend on how long the story is going to be: a story in the New York Times is going to be longer than one in USA Today, and a story in The Atlantic is going to be longer than one in People. You can realistically expect to talk for somewhere between 15 minutes and an hour, and that they will use one to three quotes. I've been on NPR multiple times, and I actually find that Richard Harris, the science reporter, gets the story right 100% of the time when I've been involved. The shortest interview I've ever done was 30 minutes, and the longest was over an hour. And this goes into a radio story, where a short piece is one to two minutes and a long one is five minutes.
I've heard my voice used for as little as a single sentence and for as long as 45 seconds to a minute, which is actually a lot in radio time. Television segments tend to be very short, and as such, television interviews tend to be very short. They don't spend a lot of time on a topic and are generally much more superficial, so you have to be very focused to make your key points. Your time on television might be a higher percentage of the time you were interviewed, but it is still very little time. Take-home points: the vast majority of what you say will not be used, regardless of what type of media you're interviewing with. However, the reader will only know what they see and hear. So you need to be very conscious throughout the interview that you're making your key points, and equally conscious that you don't know what will be used. Make sure that literally every single sentence you speak is one you're comfortable having in the public domain.

Lesson number nine. Can you see or hear your quotes before they're published or on the air? Maybe; it depends on the interviewer. Major newspapers, radio stations, and TV stations will not allow you to hear or see your quotes in advance; you're going to see and hear them at the exact same time as the rest of the world. Many smaller publications will explicitly tell you in advance that they'll send your quotes to you, because, especially for non-controversial stories, they want to get the story right. And since many don't tape you and are simply taking notes during print interviews, they might have misunderstood something you said, again going back to the fact that your communication might not be in their native language. Misquotes are troublesome for both the interviewer and the interviewee, as no one is served by missing the point of what you're trying to say. So if someone sends you the story in advance, read it carefully. Make corrections if needed for content, but not for writing style; correcting the writing style is incredibly offensive to the writer. The take-home point: your quotes may or may not be accurate, depending on whether you're taped and how good the reporter is at taking notes, and you don't get to decide that. You may or may not get to review your quotes in advance; you don't get to decide that either. If you can review your quotes, review the story for accurate content, but not for writing.

My final lesson. Your agenda and the media's agenda may be entirely aligned: either your great work, or being asked to comment on great work. They're interested in great work, and they're interested in asking you to put the great work of others into context. Or they may be aligned only partially, or not at all. At times the interviewer comes in with a preconceived slant on a story, and they want what you say to align with it. How can this manifest itself? Very selective quote use. Asking questions you're not comfortable with or that seem irrelevant. Asking you to draw conclusions you can't draw. Looking for you to use, or at least agree with, hyperbolic phrases. So how do you manage this? For very selective quote use, make sure you're on your game throughout the entire interview: say precisely what you want, emphasize the most important points, and don't get trapped into the seemingly innocuous aside 40 minutes into the interview that turns out to be the only quote used. For questions you're not comfortable with, state that you're not able to answer the question or not comfortable talking about it.
An example would be when I was asked by a major media outlet whether I thought the regimen President Trump was prescribed when he was diagnosed with COVID was appropriate. I didn't know; I wasn't his doctor, and I didn't know his history. It wasn't something I could speak to, and I didn't talk about it. Asked to draw conclusions you're not able to? Don't, or be specific about what conclusions you can draw and what you can't. Being looked to for agreement with, or actual use of, hyperbolic phrases? Again, don't. Remember who's going to listen to and read this. My parents, whom I've shown you before, on the left, get their news from the New York Times, nightly TV shows (those still exist), and 60 Minutes. My college-age students, on the right, get their news exclusively from the internet; I'm not sure they know what a newspaper is. But the only similarity in their habits is that they both read and watch reputable outlets just about all the time. However, both of them tend to hear phrases like "game changer" and are more likely to believe them: some hyperbolic conclusions are easier to understand and internalize than multiple levels of nuance.

So, take-home points. No matter what you say, understand that very little of it will be used. Be alert throughout the interview so you don't "drop your guard" and briefly say something you wouldn't want to be literally the only quote used from you. Only answer questions you're comfortable answering, and draw only conclusions you're comfortable making. Be aware of the hyperbolic phrase: the ratio of what's called a game changer in the press to what's actually a true game changer is enormous.

So in conclusion, publicizing your research or commenting on the research of others is an important way for us to communicate with a broader audience. Most of us are experts in critical care, but not experts in communication. Remember that perilously little of what you say ends up in the final story, but every single word you utter, even if it's just a single sentence in an hour, might literally be the only thing that's used. Understanding how to communicate your key message clearly and concisely is an art. Thank you very much. I wish we could be in person; hopefully next year.
Video Summary
Dr. Jon Sevransky, an intensivist at Emory University Hospital, discusses the challenges of improving care for patients with a novel pathogen during a pandemic. He highlights the difficulty of determining the most effective treatments amid the surge of information and research, emphasizes the importance of clinical trials in guiding treatment decisions, and notes the drawbacks of relying on anecdotal experience or on publications that may later be retracted. Dr. Sevransky also examines the differences between frequentist and adaptive trial designs. Frequentist trials are traditional randomized controlled trials, while adaptive designs allow important trial characteristics to be modified based on information gathered during the trial. He outlines the advantages and disadvantages of each approach: frequentist designs are widely understood and easier to implement, while adaptive designs can be more efficient and patient-centric, allowing more patients to receive a potentially effective treatment. Dr. Sevransky concludes that both trial designs have their merits, and that adaptive trials may become more common in the future. Overall, the goal is to communicate effectively and clearly with both medical professionals and the general public, to ensure the best possible outcomes for patients.
Asset Subtitle
Research, Quality and Patient Safety, 2022
Asset Caption
This year in review on behalf of SCCM's Research Section will focus on three timely subtopics: research design strategies during a pandemic, the role of preprint servers, and strategies for interacting with the media.
Learning Objectives:
-Review arguments in favor of Bayesian platform strategies versus traditional frequentist trials and the successes and failures of each during the pandemic
-Discuss the benefits and drawbacks of preprints (full research manuscripts that have not yet undergone peer review)
-Address a strategic approach to engaging the media and how to message effectively without being misquoted
Meta Tags
Content Type: Presentation
Knowledge Area: Research; Quality and Patient Safety
Knowledge Level: Foundational; Intermediate; Advanced
Membership Level: Select
Tags: Clinical Research Design; Evidence Based Medicine; Communication
Year: 2022
Keywords: Dr. Jon Sevransky; intensivist; clinical trials; treatments; adaptive designs; randomized controlled trials; patient-centric; efficiency; best possible outcomes