COVID-19 Myths, Misinformation, and Misunderstandings
This article is part of an ongoing collaboration between the Colorado School of Public Health, the Denver Museum of Nature & Science, and the Institute for Science & Policy. A full recording of this session, along with all of our previous COVID-19 webinars and recaps, is available online.
Over the past several months, all of us, from medical caregivers to research scientists to policymakers to citizens, have had to get up to speed quickly on this rapidly emerging virus. We immerse ourselves in information, but as we've seen, not all of what we read or hear is reliable. We've seen high-profile examples of scientific literature and official guidance that turned out to be inaccurate or incomplete. So how do falsehoods take hold, and how can the media and the public be discerning about what we're seeing? To discuss, the Institute for Science & Policy's Senior Policy Advisor Kristan Uhlenbrock chatted with Steven Goodman, MD, MHS, PhD, Associate Dean of Clinical and Translational Research and Professor of Epidemiology and Population Health and of Medicine at Stanford University, and Elizabeth Skewes, PhD, Chair of the Department of Journalism and Associate Professor of Journalism and Media Studies in the College of Media, Communication, and Information at the University of Colorado Boulder. This transcript has been lightly edited for clarity.
KRISTAN UHLENBROCK: Welcome to you both. Could you give us a broad sense of what the dynamic between information and misinformation looks like in the age of COVID-19?
DR. STEVEN GOODMAN: We should start by talking about the fundamentals of good science. Now, everybody knows that science is about getting to the truth. That's easy. It's what's in the books, it's facts. But that's really not the active part of science. The active part of science is to get the uncertainty right, either what we don't know right now or how sure we are. That is really hard, and it underlies all of scientific research. A primary crime is not just getting the facts wrong, but getting how sure you are wrong. It’s the difference between saying that hydroxychloroquine might work and declaring that hydroxychloroquine is a game changer and you should take it today. And we now have 66 million pills in a stockpile somewhere because of the second statement.
Uncertainty underlies all scientific research and it's happening in real time right in front of our eyes with COVID. This is making it very difficult to sort out what's true and what we can rely on. This thing we call uncertainty has two pieces. One is the play of chance. It's what we call the margin of error in a political poll when they say, for example, one candidate is up or down 5% plus or minus 3%. That's the play of chance. But there's something else, which is the reliability of the methods themselves, and it's hard to put that in a formula. If we were measuring national presidential preference, would we go to a Trump rally or a Biden fundraiser to do it? No. You have to know where that poll was done and if it was based on a random sample. Sometimes, you need a lot of expertise to know whether the methods are good or bad. The website FiveThirtyEight assigns grades to political polls for how good their methods are. We're not going to go into the science of that today, but it's a way of indicating reliability.
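[Editor's note: For a poll drawn as a simple random sample, the margin of error Dr. Goodman describes comes from a standard statistical formula:

\[ \mathrm{MOE}_{95\%} \approx 1.96 \sqrt{\frac{p(1-p)}{n}} \]

For a poll of n = 1,000 respondents with an even split (p = 0.5), this works out to about ±3 percentage points, matching the example above. Crucially, the formula captures only the play of chance; it says nothing about the second kind of uncertainty, whether the sample itself was representative.]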
As of today, over 10,000 COVID scientific articles have appeared. Over 5,000 of those are in the form of preprints. By comparison, during the SARS epidemic, there were 29 papers published. So we are in a tsunami of not just information, but also misinformation. Now what are these things called preprints? A preprint is a paper that a scientist puts on the internet. Just straight from their computer. There are no reviews, no comments, no nothing. It just goes straight onto the internet. There can be comments after they put it on, but not before. So there's no assurance that any scientist has ever laid eyes on that paper before. As an example, consider the medRxiv server's section just for COVID. Remember, COVID didn't exist before December, and there are 5,500 preprint articles already on this server. A notice at the top of that page says that these are preliminary reports that have not been peer reviewed, should not be regarded as conclusive guides for clinical practice, and should not be reported in the news media as established information.
One preprint that recently got a lot of attention was from some of my colleagues here at Stanford. It was a seroprevalence study trying to figure out how many people in Santa Clara County had antibodies to the disease. It got a huge amount of play because it claimed that the infection rate was higher than we expected, and therefore the death rate was way, way lower, and it was part of an argument that COVID is no worse than the flu. Within a couple of days, this started being questioned and challenged in terms of its methodology and the reliability of its claims.
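[Editor's note: The arithmetic behind that argument is simple division. The infection fatality rate is deaths divided by total infections:

\[ \mathrm{IFR} = \frac{\text{deaths}}{\text{total infections}} \]

Using purely illustrative figures, not the study's own: 100 deaths against 5,000 estimated infections gives an IFR of 2%, while the same 100 deaths against 50,000 estimated infections gives 0.2%. The same death count looks ten times less deadly, which is why the study's antibody-based estimate of total infections, and the uncertainty around it, mattered so much.]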
What's really notable is how quickly it went around the internet. The study was posted on Thursday afternoon, April 16. By Friday, the tweeting started. It was retweeted over 100,000 times within just 24 hours, including by several high-profile pundits with wide followings. We saw just a narrow range of interpretations tweeted out to many, many people. This is in contrast to the academic model, in which many different interpretations are sent to fewer people. In an ideal world, the spread would look more like the academic model, with the true heterogeneity of the scientific community represented among the critics.
In fact, the conclusions of the study weren't necessarily all that wrong, but the scientists overstated their conclusions. They were underplaying or misstating the uncertainties in their methods. They might ultimately be right. But no matter what, if you claim more confidence in something than the methods justify, that's just as bad as getting the facts wrong.
Another paper appeared recently in the Proceedings of the National Academy of Sciences. This one was actually published after peer review. One of my postdocs saw it and was incensed. The study had a conclusion that we all agreed with, which is that masks are a very important part of protection from COVID, but it dramatically downplayed every other countermeasure. We felt that the authors tremendously overemphasized the role of masks. One of the authors was a Nobel Prize winner, but in atmospheric chemistry, not epidemiology. We sent a letter to the journal asking for a retraction.
You might have also heard recently that the NIH halted trials of hydroxychloroquine. You probably heard mixed information about a low-cost dexamethasone treatment reducing the fatality rate among those on ventilators. There was a high-profile retraction in The Lancet, where the study went through peer review even though the data was basically made up, and everybody's trying to figure out what happened.
There have been some high-profile calls to just throw out the entire process and get rid of journals. One epidemiologist posited that if that were to happen, though, we would have a tsunami of junk with no curation. And then some enterprising young people would get together, start sifting through it, and put up a website somewhere saying this is the reliable information. A version of this is happening already at Johns Hopkins, where about 70 researchers are combing through the COVID preprints and putting out their own reviews. This would just recreate the journal system. And it's one of the problems of high-speed science. We can't lower our standards just because of the urgency of the pandemic. Every time we do, it's another myth that we have to stamp out.
Some questions we should always ask: Is there a publication behind the claim? Have other scientists seen it and have they commented? Is there enough detail for effective comment? Has it been changed in response to comments or is this a first draft? Is the journal reputable? Are the trial numbers large? None of these are guarantees, as we’ve seen, and all conclusions come with caveats and warnings.
DR. ELIZABETH SKEWES: Just as Dr. Goodman grounded us in proper research practices, I'm going to start by talking about best practices in journalism. Two longtime journalists, Bill Kovach and Tom Rosenstiel, wrote a book called The Elements of Journalism, and what they wanted to emphasize is that journalism has an obligation to the truth, and that duty is more important than publishers or corporate structures. In fact, it has an obligation to publish the truth as best it can, given deadlines. And it has a loyalty to citizens. It requires a discipline of verification. All of those are critical, and they're at the top of the list for a reason. All of that gets challenged in a social media world. Just as speed is antithetical to good science, it's also antithetical to good journalism.
We also need to talk about polarization. Gallup research shows that Democrats place more trust in the media than Independents or Republicans do, and that gap has widened. In 2019, there was a gap of more than 50 percentage points between Democrats and Republicans in their trust in media. This also shows up in how people interpret COVID information. People who lean left tend to trust the media more than people who lean right. People who lean right tend to trust the government a whole lot more.
In addition to the pandemic, we're in the midst of an "infodemic." We have an overabundance of information, which makes it much harder for the public to tell what's real and what's not. A recent Pew study found that two-thirds of Americans say they're seeing news that seems made up. Part of the problem is that newsrooms have had to trim their staffs because of market considerations and budget constraints. That means there are fewer science reporters with a real background in health reporting. Many of the journalists covering the pandemic are general assignment reporters without a lot of expertise. And so they fall back on the traditional journalism practice of trying to give balance to all sides of the story. But not all sides are necessarily equivalent. So there's less critical judgment about what might be real and what might not be real. This overabundance of information is causing real problems.
It's also a problem that information about the pandemic has become politicized. Some people in politics will say one thing, some people in politics will say another thing. And because of that, when there is contested information coming out, it leads to a lot of confusion. If you wear a mask, it's a political statement. If you don't wear a mask, it's a political statement. It’s no longer a public health statement.
Social media puts all of this on steroids. Social media is where most of the misinformation about COVID-19 has been spread, and it continues to be spread there. Some 25% of the most-viewed videos on YouTube contained significant misinformation or misleading information about COVID-19. Information on social media is usually more dramatic and more sensational, so it spreads farther, faster, and deeper. We have these "oh my God, I can't believe it" moments, and so we repost and retweet.
When we see information on social media, it comes from our friends. Even if we've never met these people, our Facebook friends are our friends. Our guard is down, and we tend to believe the information more than if it came from a media source.
About 68% of the public believes that social media websites and apps contain misinformation, but again, they often can't tell what's true and what's not. Some 54% say misinformation is coming out of the Trump administration, and 45% say it is coming out of mainstream news. In Great Britain, there were a lot of stories about the virus being caused by 5G cell towers, which resulted in cell towers actually getting pulled down. And of course, there are all of the unverified treatments.
On May 4, a 26-minute video called Plandemic went online. It's a trailer for a full-length film that is supposed to come out sometime this summer but hasn't been released yet. One of the lead people in the film is Judy Mikovits, a well-known figure in the anti-vaxxer movement. She's made quite a few claims that have been discredited by science, including the idea that the virus was actually created so that billionaires could profit off a vaccine that people would flock to take. She claimed that if you get a flu vaccine, your chances of getting coronavirus are increased by 36%; there is no scientific evidence of that. She said that wearing a face mask actually increases your chances of getting ill from coronavirus; again, no science supports that. But the video was posted to QAnon accounts and got shared out by another 1,600 people, including an anti-vaccination doctor who had appeared on Oprah.
Plandemic garnered nearly two and a half million interactions on social media, far more engagement than many major news stories were getting at the time. It was shared by Republican politicians and circulated mostly in conservative media circles. It was viewed 8 million times before it was finally removed by Facebook and YouTube for violating their policies on misinformation.
So how do you know what's false? Well, there are a lot of places trying to give us accurate information and tell us what is real and what is not: the World Health Organization, the Federal Emergency Management Agency, the Centers for Disease Control and Prevention. Even organizations like PolitiFact, which normally deals with political claims, are weighing in, because the coronavirus has become a political issue as well. PolitiFact has a section where you can see ratings and the details on why each rating was given. The suggestion of drinking a bleach solution gets its highest false rating, for example.
What can we do to stop the spread of false information? Fact-check the information that you see on social media. If you see something that's incorrect, correct it, even if it's from one of your friends. Provide links when you do that, so people who see your post or your retweet know where you're coming from. Avoid repeating or reposting false information. This is a hard one, because even if you're trying to correct it, you can actually help spread it. News organizations have had to learn that repeating an error only makes people more likely to believe it; familiarity with a claim, from hearing it repeatedly, breeds belief. So, these are some things that you can do from your end to tamp down the amount of misinformation that's out there.
KU: Could you elaborate on the push-pull between people needing and wanting information quickly and the deliberate pace of science, particularly peer-reviewed science?
SG: Journals are now in a race against preprints. The journals, which are looking at literally hundreds of papers, suddenly feel like they have to complete their vetting process very fast, because maybe that paper is already on a preprint server and already being reported on. That's putting pressure on the review process, and it's why we've seen some retractions from very reputable journals. All parts of the information ecosystem are suffering under this strain, and journals are trying to keep up in a way that the public can trust.
It's hard for any individual to know exactly what to do in the face of this. But as I said, I do think that looking for information that has been vetted by other scientists is best. A lot of reporters are now going straight to the preprint servers, whereas they used to look only to the journals. That's a big problem. The people trying to do the best curation for you are science journalists who are helping vet information from different sources.
ES: One of the things that people need to think about as they're assessing the quality of information is how fast it came out. If it's a tweet that somebody pumped out in about 30 seconds in 280 characters or less, that may be less accurate, reliable, or valid than information that took a month or two to really dig up. The less time you have to vet information and make sure it's accurate, the fewer eyes you have on it before it gets published. That's where you want to be a little more skeptical, whether that's journals and preprints, journalism and social media, or things that are being reported 15 minutes after they happen. I think we all have to be a little more media savvy about the quality of the information sources that we're going to.
SG: It feels like we can't find things out fast enough because of the urgency of this epidemic. But just like other things in life, if you rush off on a long trip and take a wrong turn, you waste a lot of time. So it really does pay to wait a little while for information to be confirmed. We don't make anybody better by reacting, as Dr. Skewes said, within seconds, minutes, or even a few days. Sometimes it might take a week, but that's a week worth taking, because 90% of the information falls away while the 10% that we can rely on comes through.
KU: What is the responsibility of the media to cover preprints or not?
ES: For years, health journalists have reported on published information out of scientific journals. I think the urgency of this moment has journalists searching anywhere for information, and journalists aren't always the best at asking questions about how the science was produced and what methodologies were involved. They need to be asking those questions, as well as understanding that preprints are essentially the scientific equivalent of Twitter.
KU: What is the role of scientists here? How can they help the public understand the information?
SG: I think we have to be very careful about the use of preprints. Preprints have a very valuable purpose, which is communication among scientists. But now, because of this epidemic, the public and the media are viewing them as scientific sources. So scientists themselves need to think carefully about whether they should be putting information on a preprint server, weighing the possibility that it might be misinterpreted and spread very fast without vetting. Scientists may not be the best judges of that, because they're trying to get noticed for their work as well. So I would slow down the flood of information. Unless a paper meets the highest methodological standards and addresses an absolutely critical, urgent public health need, I don't think it should go on a preprint server. There are many journals that are publishing papers within a week. Within a week! That process is no longer the months-long wait it once was. Scientists can also play a role as sources for the media, because, as Dr. Skewes said, many reporters are covering carjackings in the morning and COVID news in the afternoon; there aren't dedicated science reporters in as many newsrooms anymore.
KU: If a friend or a family member came to you and presented you with a piece of false information or something that seems like an obvious falsehood, how would you respond?
ES: I try to start from a place of compassion and understanding and try to find out why that person thinks that information is accurate. I've actually done that with a couple of people, less around COVID than around other issues, and when they say something that doesn't seem to be supported by the information that I know, one of my first questions is: Oh, where did you hear that? Tell me a little bit more about it. What was the evidence that was presented in the story? Could you send me a link? And then it's a conversation where I can say that that's not my understanding. I'm not going to tell them they're wrong, but I am going to say that's not what I understand, and then we can talk about information quality. I think it's important to recognize that people have their trusted sources of information, and if I go in and simply try to say you're wrong and your sources are stupid, that doesn't open up a conversation and doesn't help correct misinformation.
SG: I go back to the old adage that if you give a person a fish, they eat for a day, but if you teach a person to fish, they eat for a lifetime. It's unlikely you will convince someone of anything just on the basis of your authority. You have to help them understand where to look for reliable information in the future, because otherwise you'll just keep having the same discussions. It's about pointing people to sources of what we would consider more trustworthy information and the criteria that we want to use: for example, why it matters that it was a really big study, or why it matters that it was randomized, or these other terms that people hear but don't know how to interpret. Teaching people these skills is the best thing we can do.
Steven Goodman, MD, MHS, PhD, is Associate Dean of Clinical and Translational Research at Stanford University.
Elizabeth Skewes, PhD, is Chair of the Department of Journalism at the University of Colorado Boulder.
Disclosure statement:
The Institute for Science & Policy is committed to publishing diverse perspectives in order to advance civil discourse and productive dialogue. Views expressed by contributors do not necessarily reflect those of the Institute, the Denver Museum of Nature & Science, or its affiliates.