Thought Leader: Novel Medical Education and Misinformation: A Double-Edged Sword
Video Transcription
Thank you so much for that, that's a really kind introduction. Well, it's really great to be here with all of you this morning and have the opportunity to talk with you about a topic that I'm really passionate about. How can we use social media to enhance medical education, and what can we do about this pervasive problem of medical misinformation? I have no conflicts related to this talk. So first off, let me just say that my goal is not to lecture you for an hour, but to facilitate a slightly structured conversation about this. In the first half, the first side of the sword, if you will, I'll take a few minutes to introduce this topic. I'll tell you some ways that I think we can all use social media to enhance medical education, and then we can discuss whether those efforts are in fact scholarship. We'll use the second half to look at the other side of the sword, the enormous harms that misinformation and disinformation have wrought, and what we as a profession can do about it. Like I said, my goal is for this to be a dialogue and not a monologue, so I want to hear from you. There will be opportunities for questions, but I actually think if you tweet at us, so Artie and myself, we'll try to respond to those too, and that way other people can respond to those. So we can really try to make it more of a conversation. And use the hashtag SCCM2023 so everybody can see what you're doing.

So every year around New Year's, I always kind of find myself looking back and asking myself, how have things changed in the last year? What's changed in my own life? What's changed in the world? And this year, I was thinking back a little bit further, and I was thinking, how has medical education really changed in the last century? So imagine a time traveler from a century ago brought forward into the present, right? They're looking around at the world, and how do things look to them? I mean, things would look pretty different. The way we communicate is different. The way we travel is different. The way we access information is different. But there's one place where this time traveler, this Rip Van Winkle, might find things very familiar, and that's medical education. Superficially, at least, medical education looks very similar then and now. In some cases, we actually teach the same topics in the same rooms in very much the same way.

That way, of course, is the lecture. We use this format often in medicine. It's named after the Latin lectura, meaning a reading. This was developed in the medieval university, when books were scarce and literacy rates were very low, so one person reading to a group of people was actually a very pragmatic solution to a problem. This is a good example of that format adapted to medicine. This is Thomas Eakins's famous painting, and it shows the format of the lecture, and its limitations, perfectly in this surgery by Dr. David Agnew. And Dr. Agnew is definitely one of the greats of surgery, but as Eakins's painting shows us, he might be just a little bit soporific, or at least the format may make him a little bit soporific. If we zoom in on the audience, for example, we can see this guy is checking his watch. Another guy is kind of resting his head. These guys here are like cuddling slash napping together. And this guy is like full horizontal. So clearly this is not super engaging to his audience. How can we do better than this?
Well, I think if we look back to an earlier example, the so-called Socratic method, or an argumentative dialogue between individuals, we can address some of these limitations. As we see in this painting, instead of passively absorbing content, the speaker's audience is actively engaged. One guy is literally leaning forward, another is amplifying his gestures, and they're clearly unafraid of expressing their opinions, both positive and strongly negative. Now if there's one place where a public discourse exists, where people are unafraid to share strong opinions, it's social media, right? That's exactly what this forum sounds like. So the question is, how can we use social media to enhance our delivery of medical education? How can we deliver the same great content to a larger, more diverse, and more engaged audience? And I have a couple ideas for how we can do this. So just imagine delivering the same content as a lecture, but not to a few dozen people physically in a room; rather, to an audience of thousands or tens of thousands via a podcast or a video. Alternatively, instead of a paywalled review article, what if you shared that same content in a tutorial or an infographic that thousands or tens of thousands of people could learn from? Instead of answering people's questions one at a time in a journal club or office hours format, how many could learn from all of your expertise if you shared that in an Ask Me Anything, or AMA, format online?

Now in order to understand how we can reinvent this, we have to understand what already exists. So for the next couple of minutes, I want to think about the strengths and weaknesses of each of these existing modes of delivering content. First we need to consider the size of the audience. Next, how interactive is the format? What is the latency from creating content to sharing it, that is, how stale is the information? Then, whether that content is available at all times or only at specific times: synchronous delivery, where you have to be in the same place at the same time, versus asynchronous delivery. And then finally, cost. So looking at it through these five lenses, the formats of lectures, journal clubs, and office hours tend to have very low latency, right? I could edit these slides right before I got up here, meaning the content is hopefully up to date. And this can be interactive, but the audience size (not in this room, this is a terrifyingly large room) is in general relatively small, which limits how many people get to enjoy that content. Review articles and textbooks often do have a larger readership, though not always. But there's often a long latency from when the content is created to when it's actually shared. In some cases, in fast-changing fields like immunotherapy for cancer, there literally are textbooks that are out of date by the time they are first printed. Finally, we should also consider cost and how that can be a barrier to people learning from educational content. Textbooks and paywalled reviews are often expensive, which can be prohibitive to many learners around the world. Lectures and journal clubs are nominally free, but of course that's only to people within that institution, and so the vast majority of people out there in the world don't get to benefit from that content. I think these are all areas where we can use social media to enhance our delivery and address all of these limitations. And let me show you that.
So now let's think about how this changes when we use social media. First, we need to emphasize just this vast difference in scale. The most popular medical podcasts, such as the Curbsiders, have hundreds of thousands of listens per episode and a million listens per month. I could give a lecture to a full room this big every day for a year, for several years, and not reach as many people as one podcast episode. That's just an absolutely staggering difference in scale. And a large part of this is because med ed delivered over social media can be asynchronous. Users can listen anytime, anywhere, not just at a particular place at a particular time. The second big strength of using social media to enhance med ed is the potential for interactivity. Not only can you reach a massive audience, but they can reach you. Your audience can engage you with comments and questions. They can reach you as easily as you can reach them, in fact. And third, the latency in general is shorter because there aren't those publication delays. Your tutorial doesn't have to sit, you know, for months waiting to be sent out. It's out there as soon as you click the button, which makes it an ideal medium for discourse about fast-evolving topics like COVID. And then finally, with a few notable exceptions, such as paid podcasts and UpToDate, most of the content which is delivered online and over social media is free. And this means that nobody around the world is excluded from accessing it.

So this brings me to my first take-home point. We have this amazing ability to reach more people, to reach a larger audience with shorter latency, greater interactivity, and at lower cost. So how do we actually do this? Well, I have a couple ideas for this. First, you know, I'm sure many of you, if not all of you, have an amazing lecture that you give every year. That one that they ask you to come back and do for the interns over and over again. That one where you've got your content perfect, your jokes land every time, right? Take that lecture, record it, or go on a podcast and share it that way. I guarantee you, if people at your institution like it, people around the world will, too. If there's a paper that you are excited to discuss at your journal club, there are other people who are excited about the same thing. And they might not have the same thoughts that you do. They might reach different conclusions. So having an online journal club is a great way to reach more people and to have a larger, more diverse discussion. The nephrologists have got this figured out. There's this thing called the Nephro Journal Club, where they do just an amazing job of having really structured monthly journal clubs that involve people from, you know, across institutions, across the world. If you just wrote a review article and you are super knowledgeable about a specific topic, try summarizing it as succinctly as you can. Take those thousands of words and put them into a few hundred words in a tweetorial, or better yet, no words. You know, personally, I love the visual display of information. If you can summarize something graphically, go for it. That's a challenge, but we should challenge ourselves to do more of that. Consider declining that invitation to write a chapter that will be three years in press and accept an invitation to do an open-access text instead. Make sure that the content that you deliver is going to be out there soon, because you want it to be out there and up to date as much as possible.
And then finally, if you're an expert on a topic, put yourself out there. Let people know that they can ask you questions, and maybe even do that in a structured format. Say, hey, I'm going to have an AMA on neurosarcoid on this date if anyone wants to join. I guarantee you there are people around the world who want to hear all of your expertise on these topics. So my hope is that using social media, we can move from that lecture format to more of the engaged Socratic format and have more animated discussions.

This is really a key point, which is that I think social media facilitates discussion, among other things. This is from Nature a couple of years ago, back in 2014. They polled their readers and asked them why they use different social media apps. And the way people used Twitter was actually quite different from the others. The five reasons that most people gave were: they use it to discover their peers, they use it to discover papers, they use it to share their own content, but most of all they used it to comment on and discuss research and to follow those discussions. And I think there is a real opportunity here. One of the biggest strengths of a public forum is that other people get to watch discussions unfold in front of them. By engaging in professional discussions with our peers, we can role model how to have those discussions. People can watch us react when a new study changes our practice. Like yesterday, when a bunch of new studies changed our practice. Reacting to those publications as they come out is a great way to role model how we do that as medical professionals. I felt like as a trainee, I really rarely got to see my attendings do this. Maybe once a month, at an especially good conference or journal club, you see somebody convince somebody else; somebody changes their mind. Once a month, if you're lucky. I literally feel like I get to see this every day on social media. Somebody shares something and somebody else's mind is changed, and I think it's incredibly valuable to have public professional conversations about how our practice changes. I think watching those has enormous value to learners.

Next I want to talk about how sharing content greatly amplifies its impact by traditional metrics. In fact, I've got an RCT to prove it. This is from the Thoracic Surgery Social Media Network, which is a group of thoracic journals. Together, across multiple journals, they randomized 114 articles to either be tweeted or not to roughly 50,000 followers. A year later, they looked back and compared the articles that had been randomized to be tweeted or not, and they found that the articles that were tweeted had higher Altmetric scores, more reads, and more citations. Not just more citations: they had four times more citations, for articles that were otherwise matched. This is a striking effect size, and I would argue that with an effect size this large, it actually probably matters more whether your article is tweeted or not than whether it's published in this journal or the journal one down. That difference in impact pales in comparison to whether it's amplified for people around the world. The final question that I think we need to consider is whether or not med ed on social media is scholarship. This is a hard question to answer, but Sherbino et al., I think, did a really excellent job of building a framework that we can use.
They developed this consensus definition of scholarship by speaking with dozens of health professions educators in different organizations around the world, and essentially they came up with a four-part test for whether something is scholarly. First, is the work original? Second, does it advance health professions education? Third, is it disseminated and can it be archived? And fourth, I think most importantly, can other health professionals comment on it? They conclude that digital venues that, quote, promote learning through the social construction of knowledge, and that, quote, engage the audience in a shared dialogue, do meet the definition of scholarship. But this last part is absolutely critical. It has to engage the audience in a dialogue in order to be scholarly. Basically, this means that they have to be able to leave comments and react to the work. That means that if you have a blog, you need to have comments enabled, people need to be able to leave them, and you have to respond to them. It can't just be something you put out there. It has to be a back-and-forth. Similarly, you know, many podcasts will have, like, a mailbag where people can write in. You have to do that if you want it to be scholarly. There has to be that back-and-forth. The debate and discussion, which is the thing that social media is so good at, is actually essential for the scholarliness of media distributed on social media.

One thing that I think is fascinating, though, is this sort of paradox of digital scholarship. You could put this another way: does digital scholarship count for promotions? And I want to highlight two papers here, published just one year apart. The first one is from 2022, which reviewed the promotion and tenure guidelines from 139 U.S. medical schools. And they found that the vast majority, 87%, contained terms related to social media and digital scholarship. So they're putting the words in the guidelines. The second paper looked a little closer. They actually read the documents more closely. And they found that only 12 schools, or 8%, actually explicitly endorsed including digital scholarship in advancement. Furthermore, a third of schools had the words in there but explicitly denied that social media scholarship was a criterion for promotion. For the remainder, so the vast majority, it was ambiguous. And when you see two papers with such dissimilar conclusions, it's really hard to reconcile what the truth is, except to say that this is probably a fast-changing field and this is probably a confusing subject. And on that note, I think we ought to pause there and have a little bit of a discussion about this. I want to hear what all of you think. Let's start with the question: does social media-enhanced medical education count as scholarship at your institution? And should it? And I'm going to tweet that, too, and people can respond to that one. So why don't we do this show of hands first, right? So at your institution, does social media scholarship count? One of these. Thumbs down. I see a hand. I see a thumbs up. Okay. I see a couple. I see a couple. What about does not count? Explicitly does not count? Okay. Several hands going up. All right. And ambiguous? That's the rest of it? Okay. Great. Great. Can we have mic three, please?

I'm pretty loud. Sorry. I can't answer as to whether it counts because I'm waiting to go up for promotion. We'll see. But I definitely think it should count, as long as it's done in these ways.
I personally invest an enormous amount of time in the CHEST journal podcast, the APCCMPD, moderating Twitter journal clubs, et cetera, et cetera. And for that not to count, but for something else, just publishing a case report or something like that, to count, seems counterintuitive in terms of the educational impact of those things and the time spent doing them. So as long as it's done according to that framework and it does count as scholarship, I see no reason why it shouldn't count. But unfortunately, I have yet to see whether it will in my own professional world.

How many of you have efforts going on to incorporate social media metrics in your promotion policy? Just one. Another question? Yeah.

Hi. I'm Traci Wolbrink from Boston Children's, and I run an online learning company, so I totally love this approach, I guess. One of the questions that I have, that comes up a lot within promotion related to this question, is: there's a lot of work that's being done, but how do we accurately benchmark, assess impact, and quality? And I think that that's the missing link here. There are some things, like the social media index, that have been developed that are similar. We've got metrics for papers: citation indices, impact factor, all of those elements, altmetrics. We don't have the same thing that's easily generated that a promotion committee can look at and say, this podcast with 10,000 views is very different from this person's podcast that has three views. And so that benchmarking and ability to look at academic reach and impact, I think, is really a challenge, and I'm curious, you know, your thoughts, both of your thoughts, related to that topic.

First off, let me say, I think that's a great question, and I think it's a great point, which is that we do a really bad job of quantifying things that are not easy to count, right? So if you want quantifiable criteria for promotion, we need some way to count, and we need some way to compare the apples to oranges inherent in a podcast versus a video versus an article versus, you know, speaking at a session. And I don't think that comprehensively exists. I do think people have tried to make it. I know there's a group of EM docs who have worked on a framework for this, and I didn't include that in this talk, but I can tweet it afterwards. I think it's actually a great start. I think we probably need to adapt it for different fields, and I think the other piece of it is that just creating a framework or a score is not enough, right? People need to know how to interpret it, and they have to know what an impressive score is, right? And so I think it has to be sort of a normalized, normal part of those promotion packets. Otherwise, people are going to say, oh, a score of 62, great, what's that out of? You know, there needs to be use of the tool for the tool to be valuable. Thank you. And Dr. Casey Albin from Emory actually just got a grant on that topic, so if any of you have any perspectives on metrics to benchmark from the perspective of academic promotion, please tweet to her, Dr. Casey Albin. We'll take one more question, and then we'll go on with the second part of the talk and take more questions at the end.

Hi, Rich Savel. I've thanked you on Twitter and publicly, and it's great to meet you, for helping me through COVID with all the misinformation, which is something I hope you're going to talk more about, because you're bringing people together quickly. And I helped invent the original SCCM podcast in 2005.
My comment on that is that, even back then, when I was working at Einstein, they understood the concept of that kind of work contributing to academic promotion. I think it gets more complex the further removed it is from clear-cut academic productivity, like working with a national medical society. And the question I had, really, was: you said something at the bottom about it being scholarship if there was a chance for interactivity. But if you could comment: for classic publication of papers, the only interactivity is during the peer review process. And I think there's a lot to discuss about that, medRxiv, things like that, pre-publication. And I guess you're going to talk about it.

No, that's a great point. That's a great point. I completely agree that it does seem a little bit weird that one of the criteria for digital scholarship is that there has to be a conversation, there has to be a dialogue, but it's not clear that that's always part of the requirements for traditional scholarship: that three thumbs up, or maybe two thumbs up, from peer reviewers is sufficient in one venue, but more is presumably needed in another. I think that's a strange inconsistency. I think that's probably something we need to think more about. And I'm not going to talk as much about this point, but I think one thing that we have seen with misinformation is that there's enormous value in having a large pool of post-reviewers, not pre-reviewers, right? When a paper gets put out there and thousands of people can go over it and find errors, we tend to discover fraud much more quickly and we tend to discover flaws much more quickly. So if you compare the results of that traditional model to the results of that massive online dialogue, I think there are some big advantages to the latter.

Thank you, Nick. Respecting the outreach of social media, there's a related question on Twitter from Eduardo Mireles-Cabodevila. He says, in relation to your presentation, traditional academic venues use peer review before release, a sort of safeguard. What is your opinion on how to emulate this for social media to improve med ed quality?

That's a great question. So yeah, the question from someone on Twitter is: should there be, like, a peer review before sharing stuff on social media? And I think that's a really interesting idea. I think certainly people who have enormous impact have to be really careful how they use that impact. And so running things by people, even informally asking, is this right, or does this make sense, is a good idea. But I wonder if a more formalized process is ultimately going to be necessary. I do know that some of the most impactful podcasts actually do this already. I've done sort of this peer review, actually, for Dr. Evans: for one of the podcasts that you did on Core IM, they had me read over the exact transcript afterwards and answer questions about it. And I was like, yeah, yeah, it's all right. She got it right. But I think that's a format that we probably need to have more of, especially for the platforms that have a lot of impact. Let's hear about your misinformation strategies. Yeah, let's talk about misinformation. All right, and then I'll just summarize the poll results online. So I asked the question of, does social media-enhanced med ed count as scholarship at your institution? And so far, the answers are: thumbs up, 17%; thumbs down, 41%; no idea, 26%. So kind of similar to what we saw in those papers.
So far, we've talked about social media as a force for good, right? In sharing information and educating people. But I think we need to acknowledge the elephant in the room, which is that this is a two-edged sword. Social media can enhance medical education, but it's also a force that's done enormous harm, especially in the recent context of COVID. Specifically, social media has been this massive amplifier that's enabled the sharing of misinformation and disinformation. And we really need to define those terms. So misinformation is false information, whereas disinformation is knowingly false or deliberately misleading information. Now, to some degree, this may seem like a distinction without a difference, right? Because you'd have to be inside somebody's head to know if they know that something isn't true. And there is a literature on this showing that most people who share misinformation don't know that it is untrue. Most people are just sharing it. But I think we really need to remember that there are bad actors out there. There are people who are deliberately sharing false content so they can profit by selling snake oil cures, among other things. And I think it's really critical for us as a profession to use both of these words, and to use them carefully, so that when we see examples of disinformation and people who disseminate disinformation, we can call that out. We've all been wrong before, and we'll all be wrong again. There are always going to be issues where the pendulum swings back and forth, right? The CLOVERS trial yesterday is a great example of that. But I think there's a big difference between believing something based on the best evidence you have at the time versus deliberately sharing something that you know, or really ought to know, is untrue. I think it's also important to emphasize that the distinction between misinformation and disinformation isn't static. Early on, we all didn't know if hydroxychloroquine or ivermectin could be beneficial in COVID. There was true equipoise at one point. But after many clinical trials, we now have very strong evidence that those therapies are ineffective or perhaps even harmful. Thus, I would argue that the people who persist in sharing those claims have gone from spreading misinformation to disinformation, because at this point the evidence is pretty clear and people really ought to know better.

I want to highlight the difference between how information and misinformation spread online. This is an article from the New York Times, from May 2020, and it illustrates a really important difference between how the two types of information spread. This is what's called a network graph. It shows basically how many people saw a tweet: the size of the green circle is how many people saw the original tweet, and the size of the gray halo around it is how many people saw all the retweets. And just to put some context to that, those are like millions or tens of millions of people in the gray. On the left, under misinformation, we have a pre-print shared by a prominent Fox News commentator arguing that the mortality from COVID was much less than what Dr. Fauci was saying. And as you can see, it's just four interpretations that are massively amplified by a large army of people who are retweeting. By the way, I should add that this paper, this pre-print actually, was later withdrawn due to errors.
So this never made it through the hurdle of peer review. On the right, we see a very different pattern. This is the pattern when a group of clinicians and researchers discuss a pre-print about hydroxychloroquine. Here you can see many smaller green circles with smaller gray halos, and they're all interconnected. And there's actually back-and-forth between them. In the words of the authors, quote, this pattern illustrates the shape of collegial debate adhering to our idealistic notion of peer review, but writ large across dozens or even hundreds of commentators. Now, there are three things I want to illustrate about this graph. First, selfishly, it's the first and only time that my name's appeared in the New York Times, in six-point font, so my mom was really excited about that. Second, as somebody who's passionate about data viz, I think this is a really valuable way to share information. I think this is a way that we can conceptualize what the spread of information looks like. And third, I think there's a lot of potential in that. If we can look at this picture and see a pattern that looks like information versus misinformation, an algorithm presumably could do that too. So you could imagine a future world, maybe an idealistic future world, where a social media algorithm actually uses the metadata and the pattern of information spread to decide whether to promote something or not. The idea being to amplify information and collegial debate among experts, but not necessarily to amplify misinformation. So I think there's promise here.

The next point I want to make is just how potent misinformation can be. This is a study from September 2020. This was before there was a vaccine, by the way. 8,000 participants were asked at baseline, if a COVID vaccine became available, would you accept it? Then they were randomized to see high-engagement social media posts, either ones characterized as disinformation, statements like "the vaccines will alter your DNA," or control statements, like "vaccines produce good immune responses." After they were exposed to this for just a few minutes, their vaccine hesitancy was assessed again, so a pre-post design. And what they found was fairly shocking, even terrifying. This single exposure to disinformation was associated with an absolute six to 10% reduction in vaccine acceptance, even among people who had previously been interested in getting vaccinated. This is fairly terrifying because it means that even a single exposure to disinformation can substantially alter people's stated health preferences. To put that another way, that's a number needed to harm of 10, which is a very, very harmful intervention indeed. As a quick aside, I should point out that this study did pass ethics review. They debriefed the participants afterwards and told them it was disinformation, so they didn't have 6,000 people walking away whom they'd, like, radicalized.

This leads me to my next point, though, which is that the net harms from disinformation and misinformation are truly enormous. And this is one of these scale questions that can be really hard to get your head around. This is a figure from a 2022 paper by Peter Hotez, which he titled The Great Texas COVID Tragedy. Three axes here: on the bottom we have time; on the left, in blue, the daily number of deaths from COVID; on the right, in orange, the cumulative deaths from COVID. Right in the middle, that green arrow is the widespread availability of vaccines.
That was May of 2021. And as you can see, almost half the deaths occurred to the right of that line, meaning that almost half the deaths occurred after there was a vaccine available. Now, Dr. Hotez does the math here based on what we know about vaccine efficacy at preventing death, both for Delta and Omicron, and calculates that 40,000 people died because of vaccine hesitancy just in the state of Texas. That is just a truly horrifying number. I mean, in context, that's similar to the number of gun deaths nationwide in the entire United States, which I think we recognize is a huge problem. And this is just one state. The other thing which is frankly terrifying is if we apply the same methodology to the entire United States, we come up with a number of about 200,000 people who died as a result of vaccine hesitancy. And that's really kind of unbelievable when you think that 1.1 million people have died, that 200,000 of them could have been prevented. To put that another way, this is the New York Times once again, this is May of 2020. Remember when they ran this front page? This was to commemorate the incalculable loss of people who had died from COVID at that point. I mean, double this number have now died as a result of vaccine hesitancy leading to deaths from COVID. And this is a really poignant tragedy that's made worse because those deaths were preventable. And much of that damage was done on social media, and much of the harm was amplified through social media.

But as a final point, I want to share a framework for how we can use social media to try to protect people, or immunize them, against the harms of viral misinformation and disinformation. Oh, that wasn't supposed to appear yet. Ignore this. So this is a paper called When Science Becomes Embroiled in Conflict, and it's just full of really useful nuggets about opposing misinformation. To start off, they argue, and unfortunately you can't see this because this tweet is on top of it, but they point out that there is a difference between reasoned debate and misinformation. So reasoned debate is debate that's supported by evidence and logically consistent. A viewpoint like "vaccines make you magnetic" is not. Oh. Somebody's trying to help. It's okay, you don't have to change the slides. It's also worth saying that reasoned debate can include people's individual lived experience. If somebody shares, "I work in an ICU and I haven't seen any COVID cases for months," that may not be representative, but as long as it's true, that is a piece of the debate. That is part of a reasoned debate. It may not be super representative and it may not be super constructive, but I think there's still value there.

But I think it's really important to call out what misinformation looks like, and the mnemonic or framework FLICC is really valuable for this. So the first characteristic is fake experts, or fake expertise. For example, someone claiming to have invented mRNA, or somebody who doesn't practice clinical medicine tweeting about ventilator management. That's what this was supposed to pop up and point out: that somebody who is not a clinician talking about ventilator strategies is a good example of fake expertise. FLICC also includes logical fallacies. So for example, somebody praising the benefits of natural immunity without acknowledging that you have to undergo the harms of having COVID in order to acquire it. We also frequently see impossible expectations.
So people will say you need to prove something which is statistically impossible to prove, or they'll demand perfect efficacy of a therapy, the so-called nirvana fallacy. Another thing that we often see is cherry picking. So anyone who's watched football, or watched social media recently, saw the recent tragedy of a sudden cardiac arrest on the football field. Many people have taken to attributing that tragic event to COVID vaccination, which is obviously without any evidence, but cherry-picked examples of people dying being used as evidence of something else is a really good example of a tactic frequently used to spread misinformation. And then finally, conspiracy theories. Attributing motives, and usually massive conspiracies, as the reason why things are happening the way they are. So for example, the idea that governments are colluding with big pharma to encourage vaccination. There's no evidence that that's being done for any malicious motive; that's just a well-intentioned thing. But often you'll see people who are spreading misinformation weaving together these crazy narratives of the Gates Foundation and the WHO, et cetera.

So using FLICC can be a very good way for us to spot misinformation, but it's also a very good way to teach others to spot it. And I want to highlight an example of how that's been done. This was a study that used a technique called pre-bunking, the idea being to sort of show people how misinformation works, kind of raise the curtain and show them what's being done, and offer them protection. So in this study, they used a game, a five-minute online game that runs in a browser, where they took 1,700 participants and ran them through a simulation where players are, quote, slowly lured into an echo chamber where misinformation and outrage-evoking content are common. Then they pause and show them the tactics in that echo chamber that they were exposed to. So they show them: here was a fake expert, here was fear-mongering, here were conspiracy theories, and they show them exactly how those tools were used to manipulate them. Before the game, they asked participants to rate the manipulativeness of different articles, both disinformation and legitimate news. And then after the game, they repeated that. And what they found was that after this five-minute pre-bunking intervention, participants were much more likely to detect disinformation, but their perception of actual news was completely unchanged, suggesting that it is possible to protect people from misinformation by showing them FLICC and teaching them how it works.

This brings me to what I think is a framework that we can use to help protect people from disinformation. First, we can inoculate them by warning that there is misinformation out there. And I think we've done a good job of that thus far. We can educate people about how to recognize disinformation by teaching mnemonics like FLICC or referring people to structured interventions like this Go Viral game. And then once people have achieved some baseline immunity against misinformation, we can bolster that. We can give them additional protection by debunking specific false claims. So whenever some new false narrative begins spreading, addressing it and showing why it's wrong. And you can think of this as kind of like an antigen-specific booster to protect against new virulent forms of disinformation. Now, there are a lot of questions about this. All of the studies that I've shown you have looked at a very short time window.
We don't know whether this effect is persistent. And we also don't know whether teaching people about FLICC offers broader protection. A lot of the researchers in this space have been doing this for many years, looking at political misinformation and climate change misinformation. And I think one big unanswered question is, if you teach people to recognize misinformation, will they be able to do it across domains, or is it going to be domain-specific?

I want to conclude by asking, what can all of us do? Not everyone wants to argue with disinformation superspreaders online. And fortunately, most of us don't have to wade into the mosh pit of Twitter misinformation. There are a range of options for how each of us can use our authority as healthcare professionals to oppose misinformation. I like to think of this conceptually as a pyramid. At its base, you can engage passively. You can amplify the voices of others, colleagues who are sharing legitimate, factual information or who are opposing misinformation. You don't have to put yourself out there, but you can put your voice behind others who are. Building on that base of support, a smaller number of people can use their authority as healthcare professionals to share trustworthy, factual content, educate others, and have that reasoned debate that we talked about. And then building on top of that, a still smaller group of us can take a more active role in opposing misinformation, pointing out and debunking specific disinformation stories. Now, nobody should feel obligated to engage in this conversation, but every one of us should know that if we choose to, our voices really do matter. As healthcare professionals, we add something to this dialogue just by the credibility of who we are and what we do and the fact that we know something about this. So whether you choose to do this by passively amplifying others, educating others with factual content, or actively opposing harmful disinformation, I think all of us can do enormous good.
Video Summary
The speaker discusses the use of social media to enhance medical education and to address the problem of medical misinformation. They highlight the potential of social media platforms to reach a larger and more engaged audience and suggest various ways in which medical professionals can use social media to share educational content. They also emphasize the need to engage in active discussions and debates on social media platforms to enhance learning and knowledge exchange. However, they also acknowledge the detrimental effects of misinformation and disinformation spread through social media. They define misinformation as false information and disinformation as knowingly false or deliberately misleading information. The speaker emphasizes the importance of distinguishing between the two and calls for careful use of these terms when addressing the spread of false or misleading information. They present a framework called FLICC, which helps in recognizing and addressing misinformation through its characteristic tactics: fake experts, logical fallacies, impossible expectations, cherry picking, and conspiracy theories. The speaker also discusses strategies to immunize the public against the harms of viral misinformation, including warning about misinformation, educating people about how to recognize disinformation, and debunking false claims. They conclude by encouraging healthcare professionals to engage in opposing misinformation and emphasize that their voices can make a difference in addressing this pervasive problem.
Asset Subtitle
Professional Development and Education, 2023
Asset Caption
Type: thought leader | Thought Leader: Novel Medical Education and Misinformation: A Double-Edged Sword (SessionID 9990003)
Meta Tag
Content Type
Presentation
Knowledge Area
Professional Development and Education
Membership Level
Professional
Membership Level
Select
Tag
Medical Education
Year
2023
Keywords
social media
medical education
misinformation
disinformation
FLICC framework
fake experts
logical fallacies
healthcare professionals
Society of Critical Care Medicine