Misunderstanding Misinformation

An obsession with gauging the accuracy of individual posts is misguided. To strengthen information ecosystems, focus on narratives and why people share what they do.

In the fall of 2017, Collins Dictionary named “fake news” word of the year. It was hard to argue with the decision. Journalists were using the phrase to raise awareness of false and misleading information online. Academics had started publishing copiously on the subject and even named conferences after it. And of course, US president Donald Trump regularly used the epithet from the podium to discredit nearly anything he disliked.

By spring of that year, I had already become exasperated by how this term was being used to attack the news media. Worse, it had never captured the problem: most content wasn’t actually fake, but genuine content used out of context—and only rarely did it look like news. I issued a rallying cry to stop using “fake news” and instead use misinformation, disinformation, and malinformation under the umbrella term information disorder. These terms, especially the first two, have caught on, but they represent an overly simple, tidy framework I no longer find useful.

Both disinformation and misinformation describe false or misleading claims, but disinformation is distributed with the intent to cause harm, whereas misinformation is the mistaken sharing of the same content. Analyses of both generally focus on whether a post is accurate and whether it is intended to mislead. The result? We researchers become so obsessed with labeling the dots that we can’t see the larger pattern they show.

By focusing narrowly on problematic content, researchers are failing to understand the increasingly sizable number of people who create and share this content, and also overlooking the larger context of what information people actually need. Academics are not going to effectively strengthen the information ecosystem until we shift our perspective from classifying every post to understanding the social contexts of this information, how it fits into narratives and identities, and its short-term impacts and long-term harms.

What’s getting left out

To understand what these terms leave out, consider “Lynda,” a fictional person based on many I track online. Lynda fervently believes vaccines are dangerous. She scours databases for newly published scientific research, watches regulatory hearings for vaccine approvals, and reads vaccine inserts to analyze ingredients and warnings. Then she shares what she learns with her community online.

Is she a misinformer? No. She’s not mistakenly sharing information that she didn’t bother to verify. She takes the time to seek out information. 

Nor is she a disinformation agent as commonly defined. She isn’t trying to cause harm or get rich. My sense is that Lynda is driven to post because she feels an overwhelming need to warn people about a health system she sincerely believes has harmed her or a loved one. She is strategically choosing information to connect with people and promote a worldview. Her criteria for choosing what to post depend less on whether it makes sense rationally and more on her social identities and affinities.

Dismissing Lynda for her selective interpretation and lack of research credentials risks failing to see what she’s accomplishing overall: taking snippets or clips that support her belief systems from information published by authoritative institutions (maybe an admission by a scientist that more research is needed, or a disclaimer about known side effects) and sharing them without any wider context or explanation. This “accurate” information that she has uncovered via her own research is used to support inaccurate narratives—perhaps that governments are rolling out vaccines for population control, or that doctors are dupes or pharmaceutical company shills.

To understand the contemporary information ecosystem, researchers need to move away from our fixation on accuracy and zoom out to understand the characteristics of some of these online spaces that are powered by people’s need for connection, community, and affirmation. As communications scholar Alice Marwick has written, “Within social environments, people are not necessarily looking to inform others: they share stories (and pictures, and videos) to express themselves and broadcast their identity, affiliations, values, and norms.” This motivation can apply to Beatles fans as well as to cat lovers, activists for social justice, or promoters of various conspiracy theories.

Siloed research

Lynda’s online world points to something else that the labels misinformation and disinformation cannot capture: connections. While Lynda might post primarily in anti-vaccine Facebook groups, if I follow her activities, it’s very likely I’ll also find her posting in #stopthesteal or similar groups and sharing climate denial memes or conspiracy theories about the latest mass shooting on Instagram. But that’s a big if; no one expects me as a researcher to ask questions so broadly.

One of the challenges of studying this arena is that the field’s narrow focus means that the role of the world’s Lyndas is barely understood. A growing body of research points to the volume of problematic content online that can be traced back to a surprisingly small number of so-called superspreaders, but so far even that work studies those who amplify content within a particular topic rather than create it—leaving the impacts of devoted true believers like Lynda still understudied.

This reflects a larger issue. Those of us who are funded to track harmful information online too often work in silos. I’m based in a school of public health, so people assume I should just study health misinformation. My colleagues in political science departments are funded to investigate speech that might erode democracy. I suspect that people like Lynda drive an outsize amount of wide-ranging problematic content, but they do not operate the way we academics are set up to think about our broken information systems.

Every month there are academic and policy conferences focused on health misinformation, political disinformation, climate communication, or Russian disinformation in Ukraine. Often each has very different experts talking about identical problems with little awareness of other disciplines’ scholarship. Funding agencies and policymakers inadvertently create even more silos by concentrating on nation-states or distinct regions such as the European Union.

Events and incidents also become silos. Funders fixate on high-profile, scheduled events like an election, the rollout of a new vaccine, or the next United Nations climate change conference. But those trying to manipulate, monetize, recruit, or inspire people excel at exploiting moments of tension or outrage, whether it’s the latest British royals documentary, a celebrity divorce trial, or the World Cup. No one funds investigations into the online activity those moments generate, although doing so could yield crucial insights.

Authorities’ responses are siloed as well. In November 2020, my team published a report on 20 million posts we had gathered from Instagram, Twitter, and Facebook that included conversations about COVID-19 vaccines. (Note that we didn’t set out to collect posts containing misinformation; we simply wanted to know how people were talking about the vaccines.) From this large data set, the team identified several key narratives, including the safety, efficacy, and necessity of getting vaccinated and the political and economic motives for producing the vaccine. But the most frequent conversation about vaccines on all three platforms was a narrative we labeled liberty and freedom. People were less likely to discuss the safety of the vaccines than whether they would be forced to get vaccinated or carry vaccine verification. Yet agencies like the Centers for Disease Control and Prevention are only equipped to engage the single narrative about safety, efficacy, and necessity.

Not “atoms,” but narratives and networks

Unfortunately, most scholars who study and respond to polluted information still think in terms of what I call atoms of content, rather than in terms of narratives. Social media platforms have teams making decisions about whether an individual post should be fact-checked, labeled, down-ranked, or removed. The platforms have become increasingly deft at playing whack-a-mole with posts that may not even violate their guidelines. But by focusing on individual posts, researchers are failing to see the larger picture: people aren’t influenced by one post so much as they’re influenced by the narratives that these posts fit into. 

In this sense, individual posts are not atoms, but something like drops of water. One drop of water is unlikely to persuade or do harm, but over time, the repetition starts to fit into overarching narratives—often, narratives that are already aligned with people’s thinking. What happens to public trust when people repeatedly see, over months and months, posts that are “just asking questions” about government institutions or public health organizations? Like drops of water on stone, one drop will do no harm, but over time, grooves are cut deep.

What is to be done?

Over the past few years, it’s been much easier to blame Russian trolls on Facebook or teenage boys on 4chan than to recognize how those tasked with providing clear, actionable information to meet communities’ needs have regularly failed to do so. Bad actors who are trying to manipulate, divide, and sow chaos have taken advantage of these vacuums. In this confusing space, trusted institutions have not kept up.

To really move forward, proponents of healthy information ecosystems need a broader, integrated view of how and why information circulates.

Organize and fund cross-cutting research. Those hoping to foster healthy information ecosystems must learn to assess multilingual, networked flows of content that span conventional boundaries of disciplines and regions. I chaired a taskforce that proposed a permanent, global institution to monitor and study information, one that would be centrally funded and thus independent of both nations and tech companies. Right now, efforts to monitor disinformation often do overlapping work but fail to share data and classification mechanisms and have limited ability to respond in a crisis.

Learn to participate. The polluted information ecosystem is participatory—a site of constant experimentation as participants drive engagement and better connect with their audiences’ concerns. Although news outlets and government agencies appear to embrace social media, they rarely engage the two-way, interactive features that characterize web 2.0. Traditional science communication is still top down, based on the paternalistic deficit model, which assumes that experts know what information to supply and that audiences will passively consume information and respond as intended. These systems have much to learn from people like Lynda about how to connect with, rather than present to, audiences. An essential first step is to train government communications staff, community organizations, librarians, and journalists to seek out and listen to the public’s questions and concerns.

Support community-led resilience. Today, global and national funders have an outsized focus on platforms, filters, and regulation—that is, how to expunge the “bad stuff” rather than how to expand the “good stuff.” Instead of pursuing such whack-a-mole efforts, major funders should find a way to support specific place-based responses for what communities need. For example, health researcher Stephen Thomas created the Health Advocates In-Reach and Research (HAIR) campaign that trains local barber shop and beauty salon owners to listen to their customers about health concerns and then to provide advice and direct people to appropriate resources for follow-up care. And after assessing information needs of the local Spanish-speaking community in Oakland, California, and finding it woefully underserved, journalist Madeleine Bair founded the participatory online news site El Tímpano in 2018.

Targeted “cradle to grave” educational campaigns can also help people learn to navigate polluted information systems. Teaching people techniques such as the SIFT method (which outlines steps to assess sources and trace claims to their original context) and lateral reading (which teaches how to verify information while consuming it) has proven effective, as have programs that equip people to recognize how their emotions are targeted and other techniques used by manipulators.

For each of these tasks, people and entities hoping to foster healthy information ecosystems must commit to the long game. Real improvement will be a decades-long process, and much of it will be spent playing catch-up in a technological landscape that evolves every few months, with disruptions such as ChatGPT emerging seemingly overnight. The only way to make inroads is to look beyond the neat diagrams and tidy typologies of misinformation to see what is really going on, and to craft a response not for the information system itself but for the humans operating within it.

