Trust in Science Is Not the Problem

Many members of the science and science policy communities have grown increasingly concerned about what they see as a decline in the general public’s trust in science. Frequently cited examples include the widespread failure to follow COVID public health guidelines and the failure to take climate change seriously enough to adopt significant changes in individual behavior or to support changes in public policy. Discussion then often shifts to debates about how to restore that trust.

The basic premise, however, is wrong. There is no real evidence that the public has lost trust in science per se. On the contrary, most surveys show that most of the public trusts, has confidence in, and respects science and scientists. Therefore, problems involving the public’s response to expert advice are best considered one societal issue at a time, viewed in terms of how scientific advances intersect with such variables as individuals’ values, economic and other interests, and politics. The most effective remedial strategies typically start from that perspective.

Actually, people respect and trust scientists

Public trust in science and scientists is unquestionably critical to scientists’ ability to meet their obligations to serve society, but trust itself does not seem to be a major problem. Virtually every survey suggests that the public holds scientists and science in high regard and believes that science benefits humanity. People also generally believe that scientists are motivated to help society. For example, the National Science Board’s Science and Engineering Indicators, published every two years since the 1970s, has consistently reported that over 70% of the US public believes that the benefits of scientific research outweigh its harmful effects, and that another 10% believes the benefits and harms are roughly equal. Since the early 1970s, the National Opinion Research Center at the University of Chicago has repeatedly measured the public’s confidence in various professions, finding that the scientific community consistently ranks second, behind only the military.

Of course, the word “confidence” is neither a synonym nor a perfect proxy for the concept of “trust,” although people occasionally use it that way. For example, having confidence in scientists’ ability to make effective vaccines is not the same as trusting science more generally. And people can have substantial trust in science without strictly following science-based recommendations.

Science still scores high even in surveys that ask the general public directly about its trustworthiness. For example, a 2017 survey by the nonprofit group ScienceCounts reported that more than 70% of the public trusted scientists to “tell the truth” and to “report their findings accurately.” Moreover, the group has reported more recently that trust levels have increased since the COVID pandemic began. A separate survey conducted by the Pew Research Center in 2019 found that 86% of the public was at least moderately confident that scientists act in the public interest—a number actually higher than in 2016. Altogether, these surveys demonstrate that trust and confidence in science have not been declining, but rather are holding steady or even increasing in the wake of COVID.

When good people don’t listen to good science

It is clear, then, that the public continues to place a high degree of trust in science and scientists. But this does not mean people will follow science-based recommendations on specific issues, and that can be a problem. In fact, a 2020 Pew survey showed that fewer than half of Americans believed they should rely primarily on “experts” to tell them how to deal with various societal issues. Although this can be disquieting to scientists, it should not be surprising. Science and public policy experts have long taught that important decisions, including policy decisions, are rarely, if ever, made solely on the basis of science; they rest on both facts and values, or on facts and personal experience. Moreover, long-held beliefs or core values often win out over scientific evidence when policy decisions are being made.

The bottom line is that the public makes decisions based on an array of inputs, including but not limited to scientific facts. This is particularly true when the issue is controversial, as demonstrated in another Pew survey that compared the views of scientists who were members of the American Association for the Advancement of Science (AAAS) with those of a sample of the general population on a variety of contentious issues. According to that survey, AAAS members were far more likely than members of the broader public to hold beliefs consistent with the preponderance of scientific evidence, although the amount of disagreement varied substantially across topics. A large gap occurred, for example, over the belief that humans evolved over time: 98% of scientists believed in evolution while only 65% of the public did. A smaller gap occurred over the safety of vaccines such as MMR, which 86% of scientists thought safe compared with 68% of the public.

Some failure to follow scientific recommendations results from a broader lack of understanding about the nature and processes of science. For example, there typically is some uncertainty in scientific evidence, and there are very few situations where all scientists agree about what the data are showing or how the data should be interpreted. Scientists understand that there can be scientific consensus behind the recommendations they make to the public in spite of some uncertainty and disagreement—but that can be disquieting to nonscientists. Another example is the evolving nature of scientific theories. Scientific theories often are revised or even replaced as additional information is acquired. Scientists accept these revisions as a normal part of the scientific process, whereas the public may consider them signs of lack of authority or expertise.

In the same vein, it also is important to acknowledge that not all science is equally definitive. Some scientific theories are weaker than others and some data sets are more reliably interpreted than others. Patients, for example, often seek second opinions on their clinical tests or recommendations because even experts can disagree on the meanings of their findings. Such disagreements or lack of certainty become particularly problematic when people—scientists or nonscientists—distort or deny the meaning of data or findings on topics, such as the effectiveness of vaccines or the evidence for human-caused climate change, that have been more completely established. 

Concepts from social and behavioral science help to explain why people, when confronted with competing views, may be more likely to follow the direction set by long-held beliefs or values than the one set by science. Two especially powerful concepts are “cognitive dissonance” and “cognitive bias.” With cognitive dissonance, the inability to reconcile competing facts or views sets up significant psychological tension that must be resolved, and many studies have shown that people typically resolve it by choosing the more familiar or comfortable side of the argument. Several types of cognitive bias also appear relevant. “Confirmation bias” is the tendency to seek out information that is consistent with what one already believes; “in-group bias” is the tendency to support or believe someone within one’s own social group more than an outsider; “status quo bias” is the preference to keep things in their current state. These biases often lead people to discount or ignore scientific evidence. Understanding these kinds of influences can help shape responses to the inconsistencies between scientific evidence and public behavior.

Don’t just explain: engage

Many scientists believe that public disagreement with science is simply a result of people’s lack of understanding, and therefore that a public education campaign would solve the problem. Some minimal understanding of an issue is indeed needed, but it typically is not enough: people may understand the fundamentals of the science behind an issue yet find the scientific answer unacceptable for one of the reasons discussed above. This mistaken assumption, that the problem is simply a lack of understanding on the public’s part, is known as the deficit model. Over the past several decades social scientists have thoroughly debunked the deficit model and are now paying more attention to the role of values and other factors that undergird strongly held but scientifically mistaken beliefs.

Unfortunately, no single strategy is effective for all topics. There are some general principles, however, based on both scientific research and practitioners’ experience, that can be applied in attempting to shift public opinion or behavior on scientific issues. They involve changing the way the scientific community typically approaches the public: shifting the overall strategy from simply educating the public about science to engaging the public with science.

Public engagement is a mind-set and strategy that involves a genuine dialogue between the scientific community and members of the public, and what makes it work best has been studied scientifically. There is no step-by-step guide to effective public engagement, but general principles have been summarized in the 2017 consensus report Communicating Science Effectively (which I chaired) and in a series of colloquium reports from the National Academies of Sciences, Engineering, and Medicine, as well as in a variety of scholarly papers and compendia. Examples of those principles include:

  • All parties must be willing to listen and even compromise. Many scientists are accustomed to teaching nonscientists, rather than listening to their views and perspectives. Genuine dialogue, where both sides listen respectfully and are willing to work on problems collectively, is much more effective than simply lecturing to people.
  • Smaller discussions are much more effective than large town halls or lectures. Small group working sessions often work best, as do hands-on exhibits or demonstrations, laboratory visits, and science museums.
  • Scientists should never deny, exaggerate, or otherwise distort scientific evidence. Scientists have an ethical obligation to reflect accurately the state of knowledge and uncertainty in their fields and should refrain from asserting their expertise beyond its actual limits. The public, of course, is free to talk about science but cannot be expected to meet the same standard.
  • Engagement strategies should be tailored to the audience. It is critical to understand the audience before trying to communicate with them. Audiences for science communication or engagement can vary greatly in their prior knowledge of science; ability to understand numeric information; ways of interpreting new information; core moral, social, and political values and norms; and beliefs used to explain the world. An adage applies here: “know before whom you stand.”
  • The credibility and trustworthiness of the speakers are critical. These attributes appear to depend on perceptions of integrity (being fair and just), dependability (doing what they say they will do), and competence (being able to do it), as well as on perceived expertise. Hyperbole and exaggeration are enemies of credibility: audiences often perceive when speakers are claiming more certainty than the data support or are exaggerating the magnitude of a scientific relationship.
  • It is important to make the science personally or locally meaningful. People are most interested in and relate to concepts and findings that affect them directly or are applicable locally. They are less likely to be influenced by generalities or abstractions. A term often used to describe this principle is “glocal,” which means to take a global issue and make it meaningful on a local level.

Concern about whether trust in science is waning is misplaced and can distract from the issues that really need attention. Levels of trust in science remain high, at virtually the same levels they have held for decades. What is cause for concern is that there appear to be more recent instances in which members of the public feel free to ignore, distort, or deny recommendations from the scientific community about solutions to issues of societal concern. Moreover, substantial research has shown that simply attempting to “educate” the public is not a very effective strategy for shifting people’s views, let alone their behavior.

Rather, a growing body of scientific evidence suggests that public engagement is much more effective. Engaging the public effectively will require attitude changes for some scientists, who are accustomed to being teachers rather than listeners. But effective public engagement is a learnable skill, and that same body of evidence can help guide engagement strategies. The scientific community should follow that script and engage much more with the public around specific critical societal issues, working together to find common ground.

