How Do You Solve a Problem Like Misinformation?

Vaccines, oil spills, genetic engineering, and stem cells: anywhere there's science, there's also misinformation. It muddies our ability to make good decisions, ranging from far-reaching ones like creating policy to simple ones like what to buy at the grocery store. Misinformation also undermines trust in scientific institutions and across society. The National Academies of Sciences, Engineering, and Medicine tasked an expert committee with addressing misinformation. Their report, Understanding and Addressing Misinformation About Science, is out now.


On this episode, host Monya Baker is joined by Asheley Landrum, one of the authors of the report and an associate professor at Arizona State University's Walter Cronkite School of Journalism & Mass Communication. Landrum's research focuses on science, communication, and media psychology. She discusses what exactly science misinformation is, how to tackle it, and the unexpected places it can arise.



Transcript

Monya Baker: Welcome to The Ongoing Transformation, a podcast from Issues in Science and Technology. Issues is a quarterly journal published by the National Academy of Sciences and by Arizona State University.

Vaccines, oil spills, genetic engineering and stem cells. Anywhere there's science, there's also misinformation. It muddies everyone's ability to make good decisions, whether that's setting policy, shopping for groceries, making health decisions, or just living. Misinformation also undermines trust in society. To address this, the National Academies created an expert committee. Their report, called Understanding and Addressing Misinformation About Science, is out now.

My name is Monya Baker. I'm joined by Asheley Landrum, one of the authors of the report and an associate professor at Arizona State University's Walter Cronkite School of Journalism. In this episode, we'll talk about what exactly science misinformation is, how to tackle it, and the unexpected places it can arise, such as reality TV shows like Naked Attraction.

Asheley, welcome!

Asheley Landrum: Thank you.

This question about how trust and doubt interact and how they shape people's engagement with information led me to study conspiracy theories.

Baker: What type of misinformation do you study, and how did you become fascinated by that?

Landrum: Yeah. I've tended to study science-relevant conspiracy theories, such as people who believe that the Earth is flat. And my interest in this area really grew out of my fascination with trust. So, as a PhD student, I worked in a lab that focused on children's trust in others as sources of information. And my advisor was really interested in understanding when kids doubt others and how we can help them develop critical thinking skills so that they're less credulous. But as I worked on these questions, I started to ask something slightly different. Where's the boundary between being critical versus being cynical? And when does skepticism stop being helpful and become excessive? And so, this question about how trust and doubt interact and how they shape people's engagement with information led me to study conspiracy theories. And I think they're a perfect example of what happens when trust breaks down and skepticism becomes extreme.

Baker: Oh, wow. That's really interesting. You also took this expertise, and you worked on a National Academies report on science misinformation. And one of its conclusions was that there's no simple way to define misinformation about science. I actually thought the report got almost gnarly in its many definitions of science misinformation. Why are there so many definitions? And is this something that's particular to misinformation in science?

Landrum: So, that's a great question. Defining science misinformation is not straightforward, and there are a few reasons why. But first, people use the word misinformation very broadly in a colloquial sense. It includes everything from outright false claims to misleading claims or incomplete information that could potentially be skewed. So our report uses a very specific working definition, which is, and I quote, "Misinformation about science is information that asserts or implies claims that are inconsistent with the weight of accepted evidence at that time, reflecting both the quality and quantity of the evidence, and which claims are determined to be misinformation about science can evolve over time as new evidence accumulates and scientific knowledge regarding those claims advances."

Misinformation about science is information that goes against the weight of accepted evidence at the time, and what we think we know today might change as new evidence comes in.

So basically, misinformation about science is information that goes against the weight of accepted evidence at the time, and what we think we know today might change as new evidence comes in. So what counts as misinformation can shift. And that complexity isn't totally unique to science, but it's definitely more pronounced here. Science is a process. It's always testing and refining ideas. So new findings might seem like contradictions to people who aren't familiar with the existing evidence base or familiar with the process. And a great example is how public health guidance kept updating during the COVID-19 pandemic because the evidence was evolving.

Baker: All right, so when you're thinking about science misinformation, you have to also consider the fact that science is a way of accumulating knowledge and you have to accommodate the fact that you're, in some sense, defining a moving target.

Landrum: Science is the process of determining what is true in a sense, right? Science is a way of knowing. And when we're looking at information about science, it's information for which there is an evidence base. Maybe it's a small evidence base so far, maybe it's a larger evidence base, and thus the information is more consistent.

And that's very different from things like political information or even just basic facts like, did something happen yesterday? All of those have very different bases for determining whether or not something was true, you know, if somebody did something versus if someone holds an opinion versus what might be the best policy or the most accepted policy to enact. Those are all separate. But when it comes to information about science, which is what we tackled in this specific consensus report, it was really looking at information for which there's an evidence base, based in the process of science.

Baker: And the report says that we're all sources of misinformation. So it's not just corporations and political actors and people with particular agendas that are sources of misinformation, but it's also news media. It's also scientists. All these groups can be sources of misinformation, and they can be sources of misinformation whether it's intentional or inadvertent. And I wanted to ask you how important it is to understand the intent to misinform, and if so, why?

A key takeaway here is that misinformation can come from anywhere. It's not just bad actors with specific agendas, but also well-meaning individuals like scientists and journalists.

Landrum: Yeah. So a key takeaway here is that misinformation can come from anywhere. It's not just bad actors with specific agendas, but also well-meaning individuals like scientists and journalists. And sometimes misinformation spreads because people genuinely misunderstand the science or they oversimplify it, or they take it out of context even when they do have good intentions. Now, when it comes to understanding intent, the report makes an important distinction between misinformation and disinformation. The committee's definition of misinformation does not take intent into consideration. One can spread misinformation without necessarily knowing it's wrong, or maybe even if they do. But we describe disinformation as a subset of misinformation, a type of misinformation that is intentional, when somebody knows that information is false but is spreading it anyway. Now, intent is not a characteristic of information, it's a characteristic of the people who might be sharing it. So it can be very difficult to determine people's intentions.

And different people who share the same instance of misinformation across networks or in a network could have very different awareness of the veracity of that information and have very different purposes for sharing it. All that said, while understanding intent could help us in some ways tailor interventions, the report stresses that misinformation can still cause harm regardless of whether it's deliberate or accidental. A journalist misinterpreting a scientific study, or a scientist failing to provide context for their findings, can spread misinformation just as effectively as a targeted disinformation campaign. So while intent is important, the impact of misinformation and how to address it needs to remain front and center.

Baker: What you had just said about how we're all sources of misinformation really led me to think about how many components have to be considered, not just what's the content of a particular post. And there were two terms in this report that were new to me that I found really useful for understanding that. And those were information voids and context collapse. And I was wondering if you could just tell me what those are.

Landrum: Sure. So let's start with information voids. The idea here is that sometimes there just is not enough high quality, reliable information available on a topic, whether that's because an issue is new or niche, or perhaps it's not well studied yet. And then into that void comes misinformation, filling the gap and often becoming what people encounter first. And again, a good example of this is early in the COVID pandemic, where there was so much uncertainty and very limited reliable information available. So that vacuum was filled quickly with rumors and conspiracy theories and falsehoods. And the report suggests that filling those voids proactively with accurate information is crucial. Otherwise misinformation can take root and spread.

Misinformation isn't just about bad actors or gullible audiences, it is also about the conditions under which information is accessed, shared, and understood.

Then there's context collapse. So context collapse happens when information designed for one audience is seen by another, which can often lead to misinterpretation. So, for example, scientists talking to one another might use technical analogies like describing mRNA as the "software of life," assuming shared knowledge and understanding between peers. But when that analogy is shared on social media and your audience is broader than just the scientists that you had intended, it reaches those public audiences without explanation, and that can fuel misinformation. So like conspiracies about mRNA vaccines changing DNA because it's the software of life, right? But that's not the case.

So what's so powerful about these concepts is that they highlight how misinformation isn't just about bad actors or gullible audiences, it is also about the conditions under which information is accessed, shared, and understood. And tackling misinformation means addressing those conditions and making sure that, you know, for example, there's high quality information to fill voids and improving how information is communicated to help preserve that context.

Baker: Yeah, I was thinking about this and it seemed like two ways that misinformation is propelled. They actually seem to work in opposite ways, and they're related to these terms. So decontextualization means you strip away where the information comes from, and then personalization means you package this information so that it is going to reach particular individuals or particular demographics. So in one instance, the misinformation is lacking its setting, and in the other, it's enriched for a particular recipient. And I was wondering, is that a fair way to put it? And how can understanding that help mitigate misinformation?

Landrum: Yeah, that's a really great observation. So you're right, on one hand, decontextualization strips information of its original nuance, and without that context, the information is more vulnerable to misinterpretation. But on the other hand, personalization does the opposite. It adds layers of targeting, tailoring information specifically to individuals or groups based on their preferences, beliefs, or demographics. And online platforms use personalization through their algorithms to deliver content that resonates more with people, which can make that information feel more relevant and trustworthy. But it can also amplify misinformation by reinforcing biases or creating echo chambers where people only see information that aligns with their existing beliefs. So as you pointed out, yes, these two forces can drive the spread of misinformation: one stripping away context, perhaps incidentally, so that something that maybe didn't start off as misinformation becomes misinterpreted once the context is gone, and the other enriching certain types of information so that it becomes more sticky with certain types of audiences. And it's just a reminder, a really good reminder, of how complex our information ecosystem has become.

Baker: Yeah, I am just thinking of ecosystem, and you talk about ecosystem factors and temperature and different species and predator-prey and commensalism. I feel like ecology has all these words, and misinformation ecosystems are still developing those concepts.

Landrum: Definitely.

Baker: What are some of the things that can make misinformation more or less influential?

Landrum: So, there's how influential it is for the individual, and then there's how easily, perhaps, it spreads through society. So you have these two levels, you have the individual level and the societal level, and one impacts the other. And in the report, we talk about these reciprocal effects. So how misinformation exposure can lead to misbeliefs if it's particularly designed in a way to make it, like we said, sticky. But also, sometimes holding misbeliefs that may or may not originally stem from misinformation but could stem from misunderstandings or value systems or personal experiences can increase the likelihood that somebody is exposed to misinformation in the first place, perhaps from some of this personalization or algorithms or the sources that they use or the things that they're more interested in engaging with.

A lot less is understood about that bigger picture: how misbeliefs stemming from misinformation translate into actions or how they might cause societal-level harms.

And so our report notes that we've really studied a lot of these individual-level impacts, you know, what the characteristics are of individuals that make them more receptive to misinformation. And in some cases, we've studied what characteristics of misinformation make those pieces of information more sticky. And that is often sort of the interaction of the two, the characteristics of the information as they interact with the characteristics of an individual. If the information is phrased in a certain way that resonates with somebody based on their prior views or their values, they're more likely to take that up. But a lot less is understood about that bigger picture: how misbeliefs stemming from misinformation translate into actions or how they might cause societal-level harms. Those things are murkier and those impacts are harder to measure because they build up over time and they're influenced by a mix of factors and not just a one-time exposure to a piece of misinformation.

And we ask, well, why is that? Why haven't we done that? And part of it is just the challenge of studying these complex systems, you know, linking individual actions to larger societal consequences. Like declining trust in science or vaccine hesitancy, it requires long-term data and analysis that we don't always have, whether it's because it doesn't exist or it's complicated to collect, or because our access to it is blocked. It's also tricky because misinformation rarely works in isolation. It interacts with things like social norms, economic pressures, or even structural inequalities. Things like a global pandemic or a time of war can increase the likelihood that misinformation might spread. And so, while we've made progress in understanding some parts of this, there's still a lot to learn.

Baker: Yeah. Yeah. I guess it's a lot easier to know if somebody saw a piece of information than it is to know if they cared about that piece of information or incorporated it into their mindset. And it's even harder to know if they did something differently in their day-to-day lives. One of the ways that the report talks about grappling with misinformation is thinking in terms of looking at supply, looking at demand, looking at distribution, looking at uptake. I wonder if you could tell me what's the difference between demand and uptake?

Landrum: Sure. So in the report's framework, demand refers to how people seek out or consume information. So what they're looking for, what sources they trust, what motivates them to engage with certain types of content. So demand-based interventions might aim to reduce the consumption of misinformation by, for example, filling information voids so people aren't left searching for answers in unreliable places. But uptake, on the other hand, refers to what happens after somebody is exposed to misinformation, how that information might influence their beliefs or their attitudes or their behaviors. Interventions here might be designed to prevent misinformation from taking root or to correct false beliefs after exposure. So demand is about how and why people encounter misinformation, whereas uptake is about how they process and act on it.

Baker: Okay. So this is probably oversimplified, but basically demand is what you want for your eyes and your ears and uptake is what happens in your head.

Landrum: Yeah.

Baker: Too simple?

Landrum: I think, I mean, it's simplified, but I think that's about right. Yeah. So demand is that sort of exposure and engagement with the content itself and uptake is how you've sort of applied that.

Baker: The report describes several points for potential interventions. How do these offer ways to stall misinformation?

Landrum: We tried to categorize the interventions based on the point in that chain where they might be most effective. So I mentioned before that pre-bunking and debunking are uptake-based interventions, whereas others might be demand-based interventions. So you can kind of see interventions at different points in that kind of causal chain. And so the report touches on a number of these, but it's important to emphasize that no single approach is going to fix everything. The most effective strategies are likely going to combine efforts across the different areas of supply, demand, distribution, and uptake.

And like I said, currently it's the individual-level interventions that are best understood. Pre-bunking and debunking have received the most research attention. Pre-bunking, just to define it, involves teaching people ahead of time to recognize common tactics of misinformation, common deceptive tactics, or even common characteristics for recognizing misinformation before people actually encounter it. You might think of that as critical thinking or media literacy training. Debunking, on the other hand, provides corrective information after exposure. So assuming people have seen a claim, an explanation of why that claim is false, and when possible addressing the motivations behind the spread of that claim. So that might be something more like fact checking. And while these approaches are effective in controlled settings, much more research is needed on how these might work at scale or in, you know, more real-world contexts.

We call for more systems level research to better understand how these different interventions can interact and be scaled for broader impact.

But what's less understood are potential systems level interventions, like improving trust in credible sources or, you know, addressing these information voids. And overall, we call for more systems level research to better understand how these different interventions can interact and be scaled for broader impact. But we have to sort of attack this on multiple levels, have multiple different types of interventions to try to address these problems.

Baker: Yeah, I mean, and in terms of supply, I know that one thing that was discussed was just trying to make… to create, or make more prominent, credible information sources.

Landrum: Yeah. So, you know, low-hanging fruit in this case, right? Includes supporting local journalism, bolstering credible sources of information. You know, these interventions are relatively practical. They're feasible with the right funding and partnerships. I know here at Arizona State University, we have a couple of organizations that are working to help support local journalism or increase investments in local journalism. These sorts of investments can help address news deserts and improve access to high quality science reporting, especially in underserved communities. Similarly, professional science organizations could start curating reliable, accessible information sources with relatively modest coordination and funding.

You know, these are the types of interventions that could start making a difference fairly quickly. Of course, we also bring up some potential interventions that are, you know, possible, but much more complex. So creating, for example, an independent monitoring commission, or getting social media companies to share data with researchers to kind of better understand the complex flow of information. These are much more complex. And an independent commission would require significant resources, buy-in from multiple stakeholders, sustained cooperation across sectors. And while social media companies sharing data would be critical for understanding how misinformation spreads, it's just complicated. It's complicated by privacy concerns, legal issues, the companies' competing business priorities or philosophies. And so, the report notes that while these interventions could be important, they do face deep implementation barriers and would need long-term multi-stakeholder efforts in order to succeed.

Baker: All right. This is not something that was part of your work on the committee, but I know you have studied how reality game shows incorporate or try to rely on…

Landrum: Reality TV! (laughs)

Baker: Yeah! Yeah. Can you just tell me a little bit about that, and if you could see any interventions…?

Landrum: Yes. So this was interesting. We do point out in the report that while misinformation spreading on social media is very studied (there's a lot of attention on it), less is studied about how misinformation can be shared during entertainment television. And there's been a little bit of work on this. If you think about, for example, the CSI effect, right? So the CSI effect, and that's, you know, from Crime Scene Investigation, the famous TV show. So, there's this theory that people who watch a lot of CSI have different expectations of what evidence should be available. Like assumptions about the veracity of signature identification or lie detector tests or blood spatter analysis. Watching any of these shows kind of gives them the impression that these pieces of evidence are weighted much more heavily than they really are in actual investigations.

Itโ€™s kind of hard to draw that line between where things are obviously fiction in an entertainment narrative and where things are obviously not.

And so we can draw a line from that and think about other places where this might be the case. Think about when you watch fictional television shows and there are scientific narratives. We recognize, for example, when we're watching Star Trek, that we're not teleporting people. But even if we know we're not capable of teleporting people, we might expect that some of the other background information about science is true. Right? So it's kind of hard to draw that line between where things are obviously fiction in an entertainment narrative and where things are obviously not. And so we, a colleague of mine who studies romantic relationship communication and I, were, and I'm embarrassed to say this, watching very shady reality TV shows. Dating reality TV shows, and one of them was Naked Attraction. And what we noticed about Naked Attraction, and this is…

Baker: So just explain the premise.

Landrum: So this is an amazing premise. So Naked Attraction is a show that aired in the UK. And the premise is that a dater, or a picker, they called them, so this is the person who gets to select a date, has a set of people that they can choose from, and those people are naked in tubes. And so they slowly lift these tubes up at every section, and they focus in on… first it's the feet. So they look at the feet: oh, what do you like, do you like long toenails or short toenails? And then they move up from the feet to the legs, to the genitals, to the mid-bodies, and the head is the last thing. And they have to take somebody out at each level.

But I think one of the things that this particular show does is, at each section, it kind of cuts away from the naked people in tubes to give a sort of scientific-sounding bit about the science behind why people might think this way. And it's interesting because the show itself, while it will say something like, "scientists say this" or "science shows us," is not actually providing you with the context. They're not showing you, well, what article said this? Or how long ago was this study? And most people may not even think to ask. And so, my colleague and I wrote a piece as part of a special issue on the psychology of misinformation. We wrote about this and we kind of defined sort of two types of misinformation about relationships in these contexts.

Like one is the actual, asserted kind, in the case of the show that we used as an example. I should pull the quote up because it's kind of amazing. Here's the quote from the show that we sort of fact checked, right? It says, "Turns out, you really can judge a man by the size of his balls. Anthropologists in America have discovered that smaller testicles have lower testosterone levels, which suppresses a man's urge to mate and re-channels his brain activity towards nurturing and childcare. So, small balls might mean a bigger, better daddy."

Baker: Okay, and I should just emphasize that this is a quote that is being studied in the context of misinformation.

Landrum: This is a quote being studied in the context of misinformation. And again, was there a study? Yes. It was done by American scientists. We went back, we traced the claim back, we were able to find the original paper that they likely cited, and we showed the claim that was made to the first author and had her sort of annotate that claim and say what they did say, what they didn't say.

So that's one example of misinformation, this blatant type of misinformation. And then the other we talked about is this incidental information. It's sort of the misleading things that you might reason out or come away with, maybe, in our uptake model, right? You take up these beliefs or these views from watching content that's actually inaccurate, but your perception was sort of informed by what the show was explaining. We really wanted to call attention to it as well, given that there is a lack of studies on how misinformation can incidentally be spread in entertainment media.

Baker: Reality TV is a particularly modern example of misinformation. The word misinformation has really only become common in the last ten years or so. But the idea that misinformation is a problem, a problem that needs to be mitigated, that's been around for a long time. I'm a member of the National Association of Science Writers, and that started in 1934. One of its explicit missions from the get-go was to make sure accurate information was available. Also back in the 1930s, the FDA gained the power to go after deceptive marketing. So I'm wondering what lessons, what hopes, we can take away from the days before social media?

Misinformation might feel like a modern phenomenon because we're talking about it so much, but it really has been a challenge for much longer.

Landrum: Right, yeah. You're right. Misinformation might feel like a modern phenomenon because we're talking about it so much, but it really has been a challenge for much longer. I guess one key lesson from the past is that, you know, sustained, organized efforts can make a real difference. For example, the systematic campaigns to counter misinformation about smoking helped shift public perception and policy around tobacco use. Now, these efforts often combined multiple strategies, scientific evidence, public awareness campaigns, and regulatory action, which demonstrates the power of coordinated, multi-level interventions when they're informed by research and done effectively. I think we can also think of some of these organized campaigns that have been less effective, perhaps 'cause they didn't understand their audiences quite as well. But another lesson is the importance of adapting to new challenges and technologies. So the tools and platforms for spreading misinformation have changed dramatically, especially with the rise of social media. But what remains constant is that need for trusted institutions, clear communication, and proactive efforts to address misinformation before it really takes hold.

As for hopes, the past has shown us that progress is possible even in the face of widespread misinformation. You know, by building on what's worked before, like leveraging trusted messengers, creating clear standards for accuracy, engaging diverse communities, hopefully we can create more resilient systems for combating misinformation. You know, the report emphasizes the problem is complex, but history gives us reasons to believe we can rise to the challenge.

Baker: To learn more about science misinformation, how it spreads, and how we can stop it, visit our show notes! There, you'll find links to the National Academies' report, Understanding and Addressing Misinformation About Science. You'll also find more of Asheley's work and some articles from Issues.

Please subscribe to The Ongoing Transformation wherever you get your podcasts. Thanks to our podcast producer, Kimberly Quach, and our audio engineer Shannon Lynch. My name is Monya Baker. Thank you for listening.