Science Policy in the Spotlight
What have the past 40 years taught us about the evolving relationship between science and society?
It happened. One of the summer’s hottest films is about who should have the privilege of providing scientific advice to the government. Science policy has finally moved to center stage, and we can expect a heated battle for newsstand dominance between People magazine and Issues in Science and Technology.
Well, maybe not right away, but as Issues begins its fortieth year of publication, it’s worth reviewing how its domain of science, technology, and health policy has evolved during its lifetime. Having been the editor of Issues for most of its existence, I had a front row seat to the succession of topics that have attracted the attention of experts, as well as the way science policy’s political context has shifted over the years.
In the magazine’s inaugural issue in Fall 1984, the pollster and social scientist Daniel Yankelovich surveyed the state of science advising and public attitudes toward science at the time, providing a capsule history of the previous 40 years. The story begins with the development of the atomic bomb during World War II. As we were all reminded this summer, theoretical physicist J. Robert Oppenheimer was at the heart of that story. As scientific director of the Manhattan Project’s Los Alamos laboratory, which produced the bomb, “Oppie” quickly became a celebrity and a powerful player in postwar government policy on weapons development, military strategy, and other science-related areas. He personified the new prominence of science and technology in national policymaking—and the hazards of scientists moving into the contentious realm of politics. His leftist political beliefs became a liability in the age of Senator Joseph McCarthy, and he was driven out of Washington by the mid-1950s.
Although Oppenheimer’s reputation was severely diminished, science maintained its privileged position in public policy and public esteem. According to Yankelovich, this was a period of enthusiastic, albeit naïve, faith in what he called “science magic” to solve problems. Growing US military power, achievements in space exploration, the introduction of a polio vaccine, and strong economic growth cast a golden glow over the work of scientists in the 1940s, ’50s, and early ’60s. Yankelovich wrote that the enormous prestige of science and technology led to a “popular ideology” suggesting that science held a superior path to truth, “unaffected by human passions or modes of perception.” In the mid-1950s, 88% of Americans held a favorable attitude toward science.
But by the mid-1970s, that number had fallen to just 52%, with college-educated young adults taking an even dimmer view. Opposition to the Vietnam War made military strength less admirable, awareness of environmental damage raised questions about progress, novel capabilities such as gene splicing spurred anxiety, and progress in robotics created a fear that millions of jobs could soon be eliminated. The depth of animosity was manifest in 1970, when four antiwar activists set off a bomb targeting the University of Wisconsin’s Army Mathematics Research Center, killing a researcher doing unrelated work and injuring three others. In this climate, scientists felt justified in distrusting the public. So when advances in recombinant DNA technology raised safety concerns, biologists opted not to invite the public to the 1975 meeting at the Asilomar conference center in California that was tasked with assessing the risks.
By the time Issues was launched in the 1980s, public perception of science had become more positive—rebounding to 85% approval—but the relationship was no longer naïve. Yankelovich identified sources of stress in the relationship between science and society. To start, scientists had won considerable independence in determining the direction of their research and liked their separation from the larger public. They were happy to educate the public about their work and to offer expert advice to government leaders, but they had little taste for learning more about public opinion, particularly on how science should be used. Researchers portrayed the work of science as purely rational and rule-bound, floating above the messy debates about values that preoccupied the public. The most pressing issue of the day—as expressed in Issues’ pages—was nuclear arms control, the same problem that had been at the heart of the Oppenheimer case.
During the next two decades, new concerns arose: HIV/AIDS, economic competition with Japan, climate change. In 2003, Yankelovich returned to the pages of Issues to offer his assessment of what had changed. Although the public still expressed general support for science, Yankelovich found that the gap between scientists and the public had grown. The vast majority of scientists still lived primarily in the world of science, which they stubbornly maintained was rational, lawful, orderly, and focused on a long-term perspective. They saw the world at large as irrational, disorderly, and driven by a short-term perspective.
Yankelovich argued that rather than expecting the public to become more like scientists, scientists themselves must accept that their lens is not the only way to view the world. They should make a deliberate effort to understand how the public thinks about policy questions. He challenged scientists to adopt a new strategy: “To better engage the public, shift from the goal of ‘science literacy’ to the goal of reaching sound ‘public judgment’ on scientific issues, and use specialized forms of dialogue to advance this goal.”
Another two decades have passed, and how I wish Yankelovich had lived long enough to share his perspective on the topsy-turvy path science’s relationship with the public has taken, especially in the past few years. Biotechnology gained new prominence with the mapping of the human genome and the growing capabilities of synthetic biology. Then the development of CRISPR gene editing made real the prospect of genetically engineering humans and controlling the future course of human evolution. The evidence of humanity’s impact on the climate grew steadily stronger, and nations and corporations made pledges to reduce their greenhouse gas emissions. But even if these promises had been fulfilled—which they were not—they would never have been sufficient to fix the problem.
By the 2010s, science finally got the message that the goal was not to better educate the public about science, but instead to engage with nonscientists in a shared effort to understand societal challenges in which science could play an important role. Conferences on the science of science communication emphasized that this should be a two-way conversation, not a lecture.
And the scientists began to walk the walk. In 2015, soon after the introduction of CRISPR, the US National Academies of Sciences and Medicine, the Chinese Academy of Sciences, and the UK Royal Society convened an international meeting on human genetic engineering that was open to the public and available as a webcast. The event began not with scientists presenting their work, but with historians, ethicists, disease organizations, sociologists, labor groups, and others voicing their concerns and providing a rich context for the discussion of the science. This was far different from the closed meeting at Asilomar.
That is not to say that controversy subsided or that the public rekindled the faith in science that characterized the postwar period. Intense debates raged over the development of genetically engineered crops, research that used pluripotent stem cells derived from embryos, and the disposal of nuclear waste. But public engagement in science policy was growing, and scientists were beginning to move toward meeting the public halfway.
Then there was an election in 2016, and the US political environment has been in turmoil ever since. The new president shocked the science community with his talk of hoaxes and his stated intention to cut funding for science. In 2017, a March for Science brought hundreds of thousands of Americans to the streets to declare their support for science. (Why weren’t they all subscribing to Issues?)
In some ways this was encouraging for science, even if many of the day’s marchers may have also opposed genetically engineered crops or research into the possibility of geoengineering. On the other hand, it was disconcerting to see that so many people thought it necessary to demonstrate for such uncontroversial ideas as evidence-based policymaking, respecting data, and peer review. Were these ideas really in need of defending? Well, we would soon find out that they were.
The marchers of 2017 expected that the most important science-related debates would be about climate change and other environmental issues. But in late 2019, people in Wuhan, China, started showing up at hospitals with an aggressive viral infection. When the virus that causes COVID-19 ignored the president’s prediction that it would quickly fade away, the outbreak became a national crisis that would take an enormous toll.
The COVID-19 pandemic was a health and economic disaster for the United States and the world. In its early months, there was widespread confusion and uncertainty about how the virus spread and what measures could be taken to slow its progress. Physicians understood immediately that it attacked the lungs, but no one anticipated its effects on the public perception of science. The public wanted certainty, but scientists were doing what scientists do: collecting data, developing hypotheses, testing interventions. Tentative recommendations emerged and were gradually—or abruptly—revised. The public response ranged from denial to panic to confusion. And when medical science could not quickly produce a reliable cure or treatment, some turned to charlatans promoting untested remedies.
Into this void stepped another scientist, Anthony Fauci. Unlike Oppenheimer, who made no secret of his ideological preferences, Fauci had avoided political arguments since the Reagan administration. But the circumstances of the pandemic made that impossible, and he found himself almost physically tussling with the president for legitimacy. At a press conference on March 20, 2020, when Fauci countered Trump’s claim that hydroxychloroquine was a “game changer,” Trump stepped forward to say, “We’ll see. We’ll know soon.” But Trump’s relationship to science was less about the public interest than it was personal: he rejected it when it hurt him, embraced it when it helped him. Yet he also supported a massive, unprecedented effort to coordinate science, industry, and government to create a new COVID-19 vaccine in record time.
In this polarized political environment, Fauci was the face of science and the target for the frustration, fear, and resentment that gripped much of the public. As furor intensified throughout the pandemic, he and his family required a federal security detail. Even now that the pandemic has eased and Fauci has retired, he remains a villain in the minds of many. Florida Governor Ron DeSantis seemed to believe that Fauci-bashing would help him become president, suggesting, “Someone needs to grab that little elf and chuck him across the Potomac.”
One could try to see this as good news for science—it has achieved a level of influence that attracts the attention of would-be national leaders. But it speaks to deeper changes in the relationship between science and society. Twenty years ago, Yankelovich argued that scientists could contribute the most to policy debates by framing issues in a context of reliable knowledge and analysis, thereby setting boundaries within which the general public could debate its preferences. Political scientist Roger Pielke Jr. made a similar case in placing science in the role of “honest broker.” This is an appealing notion because it would let science engage in public policy debates while acting within its professional norms of evidence and reason. But is this role feasible in today’s political environment?
The reaction to Fauci is consistent with a more widespread suspicion of the nation’s elite. Although it does not conform to scientists’ self-image, many people now group scientists with the corporate executives, Wall Street financiers, and mainstream media that they perceive as driving disquieting social changes and economic shifts that threaten their already precarious finances. Well-intentioned efforts to play the role of honest broker might be hopeless in this climate of suspicion and polarization.
The question then becomes whether this turbulent political environment will last. We could seek reassurance in the historian Richard Hofstadter’s 1963 book, Anti-Intellectualism in American Life, which traces a recurrent pattern of public revolt against dominant institutions and formal learning. Hofstadter identifies several periods when anti-intellectualism erupted as a major force in public thinking, including the McCarthy era that inspired him to write the book. After the public drubbing of the McCarthy years—or the Trump era—scientists could take heart that they were targeted only because they were too valuable to the functioning of a complex society: “Once the intellectual was gently ridiculed because he was not needed; now he is fiercely resented because he is needed too much.” After each episode, the anti-intellectual fervor receded into the background, and I suspect that Hofstadter would reassure us that the current eruption will also fade.
A less sanguine perspective on the role of science in society might require reckoning with the fact that many of the social transformations of the past 80 years were driven, at least in part, by science and technology—and that those participating in a resurgent anti-intellectual populism may well have good reasons for their resistance. In this sense, the public engagement that Yankelovich advocated becomes more than a nice add-on for scientists to gain the support of the masses; it is instead an existential requirement for a scientific enterprise that has too long believed itself to be separate from, if not superior to, the citizens whose benefit and edification it professes to serve.
It required an atomic bomb to alert the policymakers of the 1940s to the important role that science could play in their work. And it took a pandemic to make scientists aware of how quickly and profoundly public attitudes could change. Much of what Yankelovich said in 1984 and 2003 is still applicable today: as much as science has shaped our world, scientists must accept that the scientific perspective is not the only, or necessarily the best, way to view a subject. The social world is more complicated than the physical world, and scientists must learn from other disciplines and those with practical experience solving human problems. As the pace of scientific progress continues to accelerate, the cultural and political environment will continue to evolve. We can expect to recalibrate the relationship between science and society many times in the next 20 years.
But you know that. You’re reading Issues.