Mental Models for Scientists Communicating With the Public
To make research more valuable and enhance trust in science, scientists and institutions must build greater capacity for risk communication.
Over the past few decades, scientists have delivered stunning achievements—from improving crop yields to exploring Mars to identifying the COVID-19 virus. Despite such successes, many scientists feel unappreciated and even dismissed by policymakers and the public. A February 2022 study by Pew Research Center found that trust in scientists had declined significantly over the preceding year in the United States. Scientists fret about misinformation and disinformation and bemoan public innumeracy and scientific illiteracy. Some scientists try to set the record straight through social media, blogs, conferences, and op-eds, often with disappointing results. In frustration, scientists may become eager consumers of the “deficit model,” which emphasizes the public’s lack of knowledge and biased judgments, a narrative that allows scientists to relish being experts—smarter and more knowledgeable, yet underappreciated.
But blaming the public for such a disconnect absolves scientists of their responsibility to provide people with the information they need. Over the past few decades, social science has shifted away from the deficit model to embrace a “dialogue model,” but this shift has not carried over into the natural sciences. We believe that scientists and their institutions (e.g., universities and government agencies) should embrace evidence-based models of communication with the public. One well-tested model is risk communication, which is the exchange of information for the purpose of making a good decision about a potential harm.
Risk communication can make scientific information useful for a specific decision by building on the general understanding that people acquire in formal science education in schools and informal science education in documentaries, popular nonfiction books, science museums, nature centers, and elsewhere. After identifying the scientific information that is relevant for decisionmakers, risk communication employs evidence-based methods for translating that science into a comprehensible, relatable message, conveying estimates of the expected costs, risks, and benefits of specific choices.
Unlike traditional science communication, risk communication requires dialogue, because there is no way of knowing what information people need without talking to them. Nor is there any way of knowing how well messages work without assessing how people interpret them. And by focusing on what scientists say, rather than how they say it, risk communication has substantially different goals from training in communication and media skills. The right content, not just the right delivery, is essential for building and maintaining public trust in science. Although many scientific organizations provide skills training about message delivery, few support evidence-based content development and testing. We believe that just as legal, financial, and data management capacity are essential for science-based organizations, so is risk communication capacity.
Why so many scientists struggle with communication
Many scientists want and need to get better at communicating their research to the public, but the nature of science, beginning with training, stacks the deck against them. We see three structural barriers that hamper effective communication.
The first barrier is that scientists are trained to communicate with other scientists, not with non-scientists or even with scientists outside their own specialty. Scientists begin their papers by bounding a problem, and then write for peers who understand and accept these bounds. Scientists identify limits to their research in ways that make sense to peers who know the strength of its theories and methods. As a result, scientists may leave out information of relevance to other people, which creates opportunities for misunderstanding and distrust. For example, when public health and medical scientists discuss the benefits of vaccines and medicines, they understand there are also risks, but that these are outweighed by the benefits to the population. However, members of the public, who must make decisions for themselves based on their own personal circumstances, likely do not have the same understanding and may feel betrayed when they learn about potential risks.
The second barrier to effective risk communication is that teaching, a vital part of many scientists’ responsibilities, is not necessarily good preparation for communicating what the public needs to know about scientific findings. Classrooms provide clear, prompt feedback through blank faces in lectures and wrong answers on tests. There is rarely such direct feedback from the general public, leaving scientists with no way of knowing if they are communicating effectively, and no idea about how to do better. Moreover, in the classroom, teachers decide what topics matter. In risk communication, it is the audience whose decisionmaking needs determine the topics.
The third structural barrier is that science tends to be disciplinary, whereas risk communication is necessarily multidisciplinary, integrating knowledge from the multiple sciences needed to inform decisions. When individual scientists know only some of the science that decisionmakers need, they can wind up on shaky ground if they offer opinions outside their specialty. However, scientists who acknowledge their ignorance may leave a vacuum that can be filled by confusion, distrust, or (at worst) people peddling misinformation or disinformation. Risk communication brings scientists together with the knowledge necessary to provide a comprehensive perspective.
A robust psychological finding is that people overestimate how well they understand one another. That gap grows with the difference in people’s backgrounds, experiences, and decisions. Scientists, though vaunted as experts, are not immune to this bias. The mental models approach to risk communication provides a scientifically grounded, practical way to overcome these three structural barriers.
The mental models approach
The mental models approach addresses the gap between scientists’ mental models of a domain and the mental models of the public that scientists aim to inform. To communicate effectively, scientists must understand the mental models that people use to understand the world around them. Unless these intuitive perspectives align with scientists’ claims, the public cannot fully absorb the information that scientists offer and may distrust it.
The mental models approach to risk communication integrates psychological studies, which describe how people think about how things work, with risk analysis, which develops information on the severity and likelihood of harm that is relevant for a specific decision. Thus, the mental models approach helps identify the most meaningful ways to communicate the science that people need. It structures the two-way dialogue with the public that scientists need to identify the information needs, assemble the relevant science, and convey it clearly. Perhaps most importantly, it respects the public’s right to understand the science relevant to their decisions, and does so in terms that align with their current thinking. In this manner, it seeks to build trust by helping science serve the public.
Since the mental models approach was developed in Carnegie Mellon University’s Department of Engineering and Public Policy about 30 years ago, it has been applied to dozens of wildly varied problems, including soil management, ionizing radiation, illegal drug management, vaccines, pandemic disease, breast cancer, breast implants, industrial accidents, Plan B contraceptives, trauma triage, riverine flooding, storm surges, and others. For example, the Dartmouth Toxic Metals Superfund Research Program used mental models to understand why the public showed little concern regarding the risks of naturally occurring arsenic. In British Columbia, authorities used mental models to facilitate collaborative planning between foresters and residents over prescribed burns. A study in Kenya used the approach to understand why farmers rejected a chemical disinfection dip for poultry carcasses that could reduce foodborne illnesses.
The mental models approach to developing effective risk communication follows five steps. The first step identifies the science most relevant to the decisionmakers’ needs. That involves an iterative process: consulting with members of the intended audience about their goals and options, then with scientists about their relevant knowledge. That science is then organized in the second step, typically in the form of an influence diagram. These graphic models show the factors affecting decision outcomes, represented as nodes linked by arrows indicating when knowing one factor (e.g., age) should influence predictions of another (e.g., severe COVID-19). The influence diagram, which pools and synthesizes relevant science, is the “expert mental model.”
The third step involves open-ended interviews with members of the intended audience, structured around the expert mental model. These interviews seek to capture what people know and how they think about issues in the expert mental model in their own natural terms. Knowing how people frame and talk about the issue is essential to communicating in meaningful terms. These conversations almost always reveal surprises regarding what people believe, what matters to them, and how they express themselves—all critical inputs to effective risk communication. For example, our study of domestic radon found that people assumed that radon in their homes was a long-lasting contaminant, like the toxic materials in Superfund sites, rather than a problem that was readily remediated. The Kenyan food safety study found that the experts were not aware that consumers did not want to buy disinfected carcasses because they feared exposure to the disinfection chemicals.
The fourth step in the mental models approach compares the mental models revealed in the interviews with the expert mental model. Gaps in the expert model are addressed by adding factors mentioned in the interviews and, if needed, adding scientists with the missing expertise to join the communication team. Gaps in audience members’ thinking are addressed by developing materials that connect scientific concepts to their existing mental models.
The final step is developing communications that strengthen people’s mental models by reinforcing what they already know, filling gaps in their knowledge, and addressing misperceptions. Before being deployed, these communications must be tested. The simplest test of a draft communication is the think-aloud protocol: asking people from the target audience to say whatever comes into their minds as they read. That test may reveal content that readers find unclear, interpret differently than the experts intended, or seek but don’t find, as well as passages that strike the wrong tone. This step is iterative: each round of testing improves communications.
Though any scientist could use the mental models approach to improve their communication related to decisions they aim to inform, applying the method well is outside most scientists’ skill set. For this reason, scientists need deliberate risk communication capacity building and support from their institutions.
Building risk communication capacity
Just as scientific agencies and organizations have legal, financial, and IT departments, they need departments supporting risk communication. In addition to conventional communication training, such departments should work with scientists to create meaningful content and to test risk communications, drawing on decades of peer-reviewed social science research on framing messages and conveying potentially difficult constructs (e.g., uncertainty, exponential processes).
Some institutions have developed such high-level teams, suggesting models and revealing pitfalls. The US Food and Drug Administration (FDA) was an early adopter of risk communication, creating initial infrastructure in 2003, followed by a strategic plan for risk communication, a Risk Communication Advisory Committee with a rotating membership of researchers and practitioners, and a practical guide for risk communication. However, the social science support staff was disbanded and the advisory committee has not met since 2018, leaving no coordinating mechanism for the social scientists scattered throughout the agency. FDA’s communications during the COVID-19 pandemic might have been more effective had it strengthened, rather than depleted, its risk communication capabilities. Behavioral research units have also been established, to one degree or another, in agencies such as the Consumer Financial Protection Bureau, General Services Administration, Securities and Exchange Commission, and the Federal Reserve.
Such units, where they exist, would be logical homes for adding risk communication capacity, as well as bases for coordinating, sharing, and leveraging expertise across agencies. Other agencies have yet to add risk communication expertise to their resources. The National Science and Technology Council in the White House Office of Science and Technology Policy recently re-chartered its Subcommittee on Social and Behavioral Sciences (SBS), an interagency working group. Given the importance of public trust in government risk management, risk communication should be a focus of SBS’s mission, supporting agencies in creating and sustaining risk communication capacity. Ideally, agencies would establish their own chief risk communication officers to guide two-way dialogue between agencies and their stakeholders.
Scientifically sound risk communication as part of agency functions would benefit both the public and the agencies. Underlying the process of risk communication is a recognition that even world-class science will have limited value unless it is translated into trusted, useful terms. Additionally, risk communication facilitates proactive engagement with the public—understanding, respecting, and addressing its concerns. And because risk communication is a disciplined, transparent, and evidence-based process, it also provides a benefit by enabling scientists to overcome their sometimes mistaken intuitions about the public. A final benefit of embedding risk communication within agencies is enhancing trust. Risk communication seeks to inform decisions, not manipulate them. Thus, it protects agencies and the scientists who work within them from the charge that they are spinning the facts to achieve policy goals, or that they are acting as advocates rather than resources. Instead, it helps scientists and agencies be seen as providers of clear, unbiased, and relevant information, making them more trustworthy.
Risk communication empowers scientists by training them to engage the public in respectful dialogue, with the goal of enabling good decisionmaking. The mental models approach aligns naturally with the scientific process: investigate the issue, assess the science candidly, update messages as the evidence changes, evaluate their success, and repeat as necessary.
There clearly are disconnects between scientists and the public. However, their source is often not the public’s failure to understand science, as the deficit model supposes. Rather, it is science that has failed to understand the public, in terms of what and how to communicate. Risk communication research and practice can help to fill those gaps if the scientific enterprise creates the capacity and resources for using them. Science depends on the public’s goodwill. The public depends on scientists’ knowledge. Risk communication can structure the dialogue needed for science and the public to work together more effectively.