Codeswitch

A Visionary Agenda—in Quilts, Mars, and Pound Cake

Sanford Biggers, The Talk, 2016. Antique quilt, fabric, tar, and glitter, 80 x 84 inches. Courtesy of the artist.

Artists and poets have unique ways to communicate salient truths about the human experience. In “Quilting the Black-Eyed Pea (We’re Going to Mars),” poet Nikki Giovanni conjures imagery that, on the surface, seems whimsical—space travel to Mars accompanied by the songs of Billie Holiday and slices of lemon pound cake. Yet her vivid language reminds us that we can learn from the past to imagine how we might construct our shared destiny on this planet, and on others. In one sense, her aim is practical: space exploration (and indeed all exploration) benefits from diverse perspectives. But on a metaphorical level, Giovanni expansively connects the past experiences of Black Americans to the future, which is “ours to take.” In the telling, she creates entirely new narratives of what interplanetary inclusivity could mean.

As with Giovanni’s poem, the artist Sanford Biggers revisits an American tradition of quilting and storytelling to create an imaginative bridge between the past and the future. In his work, the lives of Black Americans are an evolving and complex story with shifting meanings. Inspired by the idea that quilts may have provided coded information to African Americans navigating the Underground Railroad before the Civil War, Biggers adds new layers of information and meaning to the antique quilts. His work suggests that lessons learned through one of the darkest moments in American history can be reimagined, and, as in Giovanni’s poem, layered into a visionary agenda that embraces innovation and joy.

Sanford Biggers, Reconstruction, 2019. Antique quilt, birch plywood, gold leaf, 38 x 72 x 19 inches. © Sanford Biggers and Monique Meloche Gallery, Chicago. Photo: RCH Photography.

Over the last two decades, Biggers has been developing a singular body of work informed by African American history and traditions. Sanford Biggers: Codeswitch, the first survey of the artist’s quilt-based works, features nearly 50 pieces that seamlessly weave together references to contemporary art, urban culture, sacred geometry, the body, and American symbolism. The exhibition’s title refers both to the artist’s quilt series, known as the Codex series, and to the idea of code-switching, or shifting from one linguistic code to another depending on social situation.

Codeswitch is on display at the California African American Museum in Los Angeles from July 28, 2021, through January 23, 2022.

Text by J. D. Talasek

Sanford Biggers, Chorus for Paul Mooney, 2017. Antique quilt, assorted textiles, acrylic, spray paint, 76 x 76 inches. © Sanford Biggers and Marianne Boesky Gallery, New York and Aspen.
Sanford Biggers, Bonsai, 2016. Antique quilt, assorted textiles, spray paint, oil stick, tar, 69 x 93 inches. © Sanford Biggers and Marianne Boesky Gallery, New York and Aspen. Photo: Object Studies.

Images of works by Sanford Biggers courtesy of the California African American Museum.

Scientific Cooperation with China

The recent deterioration of the US-China relationship could not have come at a worse time for global science. Thanks to China’s sustained effort over the last 40 years to catch up in scientific capabilities, the benefits of collaborating with China have increased tremendously, and they will grow further over time. These collaborations can help, among other things, to address some of the unprecedented challenges we are facing, such as climate change and COVID-19. Therefore, amid the political rush to set up barriers that impede science collaboration with China, Valerie Karplus, M. Granger Morgan, and David G. Victor, in their article “Finding Safe Zones for Science” (Issues, Fall 2021), offer some fresh and sensible ideas for preserving valued collaborations with China while remaining mindful of the domestic political reality in the United States.

The key feature the authors present is a framework that helps identify areas with potentially large gains and areas with high political risks. Such a framework can help US policymakers, including Congress and the Biden administration, to act in a more rational way so as to reduce the damage to the global science enterprise. In addition, if accepted by policymakers, the framework can be useful to the US scientific community by ensuring that people who engage in collaborative research activities in the safe zones do not have to worry that they would be investigated or charged some day for working with their Chinese colleagues. Further, such a framework can also help to identify potential areas where collaboration between the two countries may yield huge rewards. To this end, the United States and China should try to revive some formal or semiformal channels of communication in science, such as the US-China Innovation Dialogue that existed between 2010 and 2016.

These collaborations can help, among other things, to address some of the unprecedented challenges we are facing such as climate change and COVID-19.

At the same time, there are practical challenges in adopting this framework for policy purposes. First, assigning different research areas to the four quadrants the authors describe is not easy. For example, placing technology standards in the lower-right quadrant may be questionable for some industries. Likewise, tracing the origin of COVID-19 is not intrinsically high risk; the rare event of a politicized pandemic made it so. A more fundamental issue is whether, in the current political climate in the United States, some of the basic principles held dear by the global science community will change. For example, in basic research, people collaborate and publish internationally without any concern for where their partners are from or how their knowledge will be used. The recent US investigations of scientists who are of Chinese origin or who are engaged in collaboration with Chinese institutions undermine many of these principles.

Finally, scientists in the Chinese research community, many of whom studied in the United States as graduate students or visiting scholars, still treasure their friendships and collaborative relationships with their US colleagues. These relationships are the joint efforts of generations of scientists in both countries since the 1970s. They should be valued and cultivated in our joint work to address the common challenges we face, instead of becoming victims of the haste to contain China’s emergence.

Cheung Kong Chair Distinguished Professor and Dean of Schwarzman College

Tsinghua University

Complexity and Visual Systems

Art, in both creation and experience, is one of the most complex of human endeavors. Artist Ellen K. Levy engages the mental loop of seeing, connecting, and processing by juxtaposing imagery that creates meaning from unexpected and often disconnected relationships. Printing, painting, and animating images of complex systems relating to society, biology, and economics, she creates visual contexts that critique technological progress gained at the cost of ignoring the importance of the environment and society.

Ellen K. Levy, Mining: A Brief History, 2021, acrylic and gel over print with augmented reality component, 40 x 60 inches

For the past decade, Levy has incorporated renderings of US Patent Office drawings into digital collages made vivid with paint. About her latest series, Re-Inventions, she writes, “Most inventions are reinventions; they spin from developments in prior innovations. In my works I explore unintended consequences of technology and include (re)drafted plans of some of the patented inventions that cause them (e.g., steam engines leading to cumulative carbon dioxide emissions). Some of the patents propose remedies resulting from yet other (patented) technologies (e.g., protection from nuclear radiation).”

She creates visual contexts that critique technological progress gained at the cost of ignoring the importance of the environment and society.

Levy, who is based in New York, has been exploring the interrelationships among art, science, and technology through her exhibitions, educational programs, publications, and curatorial work since the mid-1980s. As guest editor of Art Journal in 1996, she published the first widely distributed academic publication on contemporary art and the genetic code. With Charissa Terranova, she is coeditor of D’Arcy Wentworth Thompson’s Generative Influences in Art, Design, and Architecture (Bloomsbury Press, 2021), and with Barbara Larson, she is coeditor of the Routledge book series Science and the Arts since 1750. 

Levy’s work is a part of a group exhibition in Vienna titled EXTR-Activism: Decolonising Space Mining, curated by Saskia Vermeylen. More information about the exhibit can be found at https://www.wuk.at/en/events/extr-activism/.

All images courtesy of the artist.

Ellen K. Levy, Transmission, 2019, mixed media on paper, 60 x 40 inches
Ellen K. Levy, Messenger, 2021, acrylic and gel over print, 40 x 60 inches
Ellen K. Levy, 2020 Vision, 2007, mixed media on paper, each 80 x 20 inches

Episode 5: Dinosaurs!

It may surprise you to learn that the enormous dinosaur skeletons that wow museum visitors were not assembled by paleontologists. The specialized and critical task of removing fossilized bones from surrounding rock, and then reconstructing the fragments into a specimen that a scientist can research or a member of the public can view, is the work of fossil preparators. Many of these preparators are volunteers without scientific credentials, working long hours to assemble the fossils on which scientific knowledge of the prehistoric world is built. In this episode we speak with social scientist and University of Virginia professor Caitlin Donahue Wylie, who takes us inside the paleontology lab to uncover a complex world of status hierarchies, glue controversies, phones that don’t work—and, potentially, a way to open up the scientific enterprise to far more people.

Transcript

Jason Lloyd: Welcome to The Ongoing Transformation, a podcast from Issues in Science and Technology. Issues is a quarterly journal published by the National Academies of Sciences, Engineering, and Medicine and Arizona State University. I’m Jason Lloyd, the managing editor of Issues. On this episode, I’m talking with Caitlin Wylie. She’s an assistant professor of science, technology, and society at the University of Virginia. She wrote an essay for the fall 2021 Issue called “What Fossil Preparators Can Teach Us About More Inclusive Science.” And she recently wrote a book, Preparing Dinosaurs: The Work Behind the Scenes, published by MIT Press, which is about the workers in fossil preparation labs and their often unacknowledged contributions to science. So Caitlin, thank you very much for joining us today on our podcast.

Caitlin Wylie: My pleasure. Thanks for having me.

Lloyd: You wrote a fantastic piece for the fall Issue about preparing dinosaurs and what the role of fossil preparators is in paleontology research. So I thought a good place to start might be if you could talk a little bit about Keith, who’s one of the fossil preparators you describe in your essay.

Wylie: Yeah, thanks. So Keith—and that’s not his name, that’s a pseudonym—he is a volunteer at a museum and he works in the fossil prep lab. And he’s a pretty typical volunteer in the sense that he’s retired. He was self-employed, so he ran a business for most of his career. He’s a veteran. And for him, the point of being in a lab was really to be able to contribute something back to society—which is interesting, because usually when we retire, we think we’ve done enough for society. But he was really attached to the idea that he was serving science by preparing fossils. And of course, volunteering in a lab is different from volunteering as a museum docent, say, or at a soup kitchen, because the work of preparing fossils is really skillful. So Keith had to invest a lot of time learning how to work with these specimens under the guidance of more experienced preparators. And once he put the time in, he was showing up every day, sometimes putting in a full work day just because he found it so satisfying and rewarding.

Lloyd: I’m curious how Keith got interested in this job. Was he really into paleontology? Was he a frequent museum goer? What was the initial reason that he did this in retirement?

Wylie: It’s interesting because a lot of volunteers say, “I’ve always loved dinosaurs. I’ve loved dinosaurs since I was a kid.” But Keith was unusual in that he did not love dinosaurs. He wasn’t all that interested in dinosaurs. But he was very good with his hands. He really liked doing home improvement projects. He was into carpentry as a hobby, and he liked the idea of working in a lab where the tools are tools that he’s familiar with: basic hammers and chisels and little drills. And he found that setting very familiar and comforting.

So for him, it wasn’t so much about the dinosaurs as about the work itself. He often said that he found it relaxing. He was scheduled to come like, I don’t know, Mondays and Wednesdays every week. And then sometimes he would show up on a Thursday and the staff preparators would say, “Hey Keith, what are you doing here?” And he would say, “Well, I did all of my chores at home and I ran out of things to do, so here I am.” But for him it was really a place to go. He had a lot of friends among the other volunteers. He loved to talk to the staff preparators. So for him, the community aspect was strong and the tools were something that he loved, but not so much the science, which is interesting.

Lloyd: So what did he do as a volunteer preparator? What was he doing?

Wylie: Yeah. So the staff preparators would assign volunteer preparators a bone, and that would be their bone until it was finished. And that’s a really important way of training volunteers, that they really do a bone from start to finish and see all of the steps along the way. So I followed Keith because I was following the bone that he was working on, which was a vertebra of a hadrosaur, which is like the cow of the dinosaur era. They’re pretty common, and the vertebra he was working on was broken into several pieces. So basically he was handed this field jacket, and he had to dig the rock out and find out where the fossil was in all of this wrapping that they’ve put around it to protect it on the journey to the museum.

And then once he did that, he put it under a microscope and used what’s called an air scribe, it’s pneumatic, an air-powered sort of hammer and chisel type thing. It’s basically a little drill, handheld, to get the rest of the rock off the bone so that you could see the surface. And then he moved into a reconstruction phase where he had all these bits of dinosaur bone, and he was trying to piece them together to make the vertebra look more like it would in life. For that, he used a variety of different glues and adhesives—which is something that fossil preparators care about a lot. Glue is absolutely central to vertebrate paleontology because the specimens are always fragmentary because of the process of fossilization.

You can’t really follow a bone without following the person who’s working on it. So I sat next to Keith for a lot of hours as he was chipping the rock off or trying to piece bits together and telling me what he’s working on. He would narrate while I was watching.

Lloyd: When he gets the jacket and starts opening it up and trying to find the fossil inside, is it fairly clear what’s bone and what’s not, or is that part of the skillset of the preparator?

Wylie: That is a crucial skillset of the preparator. So no, it’s often not at all clear what is fossil and what is rock. Because, of course, the fossil and the rock—the fossil is rock. They’re made of the same minerals. The bone has been replaced with minerals over millions of years. And so they are, materially, often identical. Sometimes there’s a difference in color, which is very helpful. Sometimes there’s a slight difference in texture. And of course, bones are porous whereas rocks are usually not. So if you see little bumps or little holes in the specimen, you know that’s bone and you’ve gone too far because you’ve penetrated into the inner bone instead of the surface.

Usually to become a preparator, you have to pass what they call the “prep test,” where you walk into the lab as an applicant and they hand you a crappy fossil, usually a fish or something that museums have a lot of and is not very scientifically important. And they hand you a tool and they say, “Take the rock off,” with no training. And if you pass that—basically if you don’t damage the fossil—then they’ll take you on and train you. And they think that that prep test is testing for certain innate skills that you have to have to be a fossil preparator that you cannot learn. Isn’t that fascinating?

Lloyd: Yeah. That’s really interesting. What are those innate things that they think this tests for?

Wylie: Attention to detail, manual dexterity, so like fine motor skills, and patience. Are you willing to work really slowly? And basically, if the applicant passes, then they start the process of training with the staff preparators, where basically they just get another fossil to work on and the staff preparator checks in on them a bunch to see how they’re doing and give them advice. So for example, I’ve seen preparators use a Sharpie to mark the rock, to show the volunteer what to remove, to show them the distinction between the fossil and the rock. And through instruction like that and through lots of time spent staring at these materials, that’s how volunteers learn how to distinguish fossil from rock.

Lloyd: You mentioned adhesives before, and those sound extremely important for reconstructing a fossil. And one of the things that I found really interesting—I don’t think you mentioned it in the essay, but you do talk about it a bit in your book—is that different institutions and different places have different cultures around the kind of adhesive they use and whether they use it or not. Could you talk a little bit about that? This gets to maybe how you got interested in this subject when you were studying abroad in the United Kingdom, right?

Wylie: Totally. Yeah, thanks. The cyanoacrylate controversy was a massive disagreement—continues to be a massive disagreement among the community of fossil preparators. Cyanoacrylate is the chemical name for super glue. It bonds two bones together in an instant, and then you can’t take it off, you can’t dissolve it. And some preparators think that’s great, because it’s really strong and it works fast. You don’t have to wait for it. And their thought is, “I’ve put this piece together perfectly. Why would anyone ever want to take it apart?” So that’s the camp that really believes strongly in cyanoacrylate. It tends to be a more American-heavy camp. And the opposing view is the idea, borrowed from conservators, that all materials put on a specimen should be removable. The conservation-minded camp argues for adhesives that are solvent-based, so you can re-dissolve them if you wanted to take them off and actually remove the chemicals altogether. But those take a long time because for them to adhere, the solvent has to evaporate. So you have to sit there and hold the bits of bone together in a precise location while it dries. And it’s not as strong as cyanoacrylate.

I learned about those two opposing views because I was a student preparator at the University of Chicago, and I was taught to use cyanoacrylate—probably because I was working on not very important bones, right? I was a student, I was learning. And so probably no one ever will take those bones apart. And then as you said, I volunteered at the Natural History Museum in London for a semester. And they were, like, appalled that I asked for cyanoacrylate, and they introduced me to this other family of adhesives which are solvent-based.

I found them really hard to work with because they were so different from cyanoacrylate, in terms of how you line up the joints and how you apply them. In college, I just constantly had glue on my fingers. My fingers were just permanently super glued. And so for that not to be a part of fossil preparation, I struggled to learn that. That made me wonder, how can fossils be prepared in such different ways—and be compared and studied in the same ways? And none of that preparation work is documented in scientific papers. It’s beginning to be documented in specimen records, but even that is not universal. So it just blew my mind that the work of making a fossil researchable could be so different in the United States versus the United Kingdom and other places. And yet, the fossils are considered basically the same kinds of data.

Lloyd: Yeah. Actually, that really segues really well into the preparator’s role in research. So as the preparators work in putting together fossils, what role is that playing in paleontological research?

Wylie: So you might think that scientists would prepare their own specimens. And this is true in paleoanthropology, because there are so few fossils of human ancestors that scientists do prepare them themselves. But for vertebrate paleontology, there’s so many vertebrates compared to hominins that the bulk of it is just too much. It would take scientists forever to prepare the specimens and study them. It’s not sustainable in that sense, so they needed a division of labor.

The interesting thing about vertebrate paleontology is that the division of labor is so strong. Very few vertebrate paleontologists know how to prepare a fossil. They automatically take it to a preparator, and preparators, the vast majority of them, have no idea how to study a fossil. I guess they would know what species they’re working on, but they don’t know how to distinguish one species from another. They don’t see that as relevant to the work they’re doing of revealing that specimen. And so that’s the part that I find really interesting, that that division is so strong even though they’re working on the same specimen. The handoff between fossil preparation and fossil research is a clean break. No pun intended.

Lloyd: Yeah. That’s really interesting. How do preparators talk about what they do?

Wylie: They say that they are serving science. They take a very long-term view, and they see that as their responsibility, to take a long-term view of the benefit to, and the protection of, the specimen itself. They consider themselves advocates for the fossils, sometimes against their bosses. Scientists are the ones who hire staff preparators and pay for them through their grants or through institutional funds. So technically, the preparators work for the scientists. But preparators would say that they work for science in general. So not just this scientist who needs the fossil tomorrow to write a paper, who’s in a big hurry to publish or perish. Whereas preparators would say, “No, I need more time to prepare this fossil well,” or “I need more time to prepare it in a way that is conservation friendly so that the fossil lasts for another generation and isn’t just useful for you tomorrow.” So in some ways, fossil preparators are like the mediators between the specimens and the scientists, even though they’re all arguably working towards the same goal, which is learning about the distant past.

Lloyd: I’m interested in the disagreements that occur. I think you mentioned this in the book, that occasionally scientists will not allow a preparator to look at a jacket or an unprepared fossil yet, or vice versa. The preparator will keep a specimen from the scientist until he or she is ready to hand it off. Do those disagreements, those kinds of conflicts, occur frequently, or is that a pretty rare occasion?

Wylie: That’s a good question. The disagreements arise when preparators do research and when researchers do preparation. So there’s very much a territorial sense. That’s when they get upset. For example, in one lab, there was a scientist who would sneak into the lab at lunchtime when nobody was there and work on fossils. And he found it relaxing and just described it as like, “I just needed something brainless to do,” which is pretty offensive to the preparators.

And so the preparators installed locks on the specimen drawers so that he couldn’t get the specimens out, because he was hurting them. He was causing damage and, mostly, he was insulting the preparators by invading their territory. And the reverse happened too, in the sense that I talked to preparators who wanted to do research on fossils, like to write a paper describing a new species or comparing species, and their bosses, the scientists, would say, “No, that’s not your job.”

Some preparators got pretty dressed down by their bosses, yelled at. Others got fired for doing too much research. That wasn’t considered part of their job. And so the disagreements are very unequal in the sense that scientists have more power than preparators do. So preparators can’t fire a scientist for coming into the lab and breaking a specimen by trying to prepare it. But of course, scientists can fire a preparator. So yeah, they do a lot of work—both groups do a lot of work to distinguish themselves from each other, if you see what I mean. So yeah, that’s one way in which they do it is those conflicts.

Lloyd: Yeah. And that brainless comment made by the researcher sort of hints at the larger power dynamics. But just to get into that, how does the scientist conceive of what the preparator is doing? How do they describe what the volunteers are up to?

Wylie: Yeah. The difference in language here is really interesting. Often, scientists will say that preparators are cleaning fossils. And cleaning’s pretty easy, right? Anybody could wipe the dust off a countertop. And preparators never say that they’re cleaning fossils. They say that they’re preparing fossils, or sometimes they say that they’re sculpting fossils because those micro-decisions of “what is rock” and “what is fossil” feel like they’re sculpting or they’re creating. And those are really different ways of talking about this work, in the sense of the scientists kind of dismissing it as merely technical or grunt work that anybody could do, whereas preparators are likening it to art, which is much higher status than technical work.

Lloyd: That really gives you a sense of the hierarchies in the lab. So just to describe that hierarchy with some specificity, am I right in thinking that volunteer preparators are the lowest status and then there’s staff preparators and then it’s the research scientist, the paleontologists themselves? Is that sort of the general order?

Wylie: Officially, yeah. So in a museum, in the list of job titles and salaries, yeah, that’s the order. And sort of formal institutional power, yeah. In practice, it really depends on the context. For example, when fossils get broken—often by scientists who are trying to study them, and these fossils are super heavy and super fragile and they break under their own weight, so it might not even be a mishandling. But if a scientist breaks a fossil, it’s amazing how the power dynamic immediately shifts to the preparator. So then whatever the preparator says is going to happen. So the preparator might say, “I’m just going to glue this for you right now and give it back and you can keep working on it,” or the preparator might say, “You mishandled this. You can’t study it anymore.” Which is amazing, right? Very different from the formal hierarchy of the scientists having power over the technicians. So in that case, preparators have that much power because they are the only ones who know how to fix that broken fossil.

The other case in which preparators have power is over the volunteers. So the scientists usually say, “The volunteers are not my problem.” And so it falls to the preparators to train them, select them, manage them as a workforce. And actually, that’s an incredible amount of power for technicians. And especially for technicians who don’t have standard credentials or a shared degree. In that sense, I think that volunteers are a major source of empowerment for preparators. And it also means that the preparators are in charge in the lab. Deciding what preparation methods to use totally falls to the preparators, scientists have no say in that. Partly as a power thing—preparators would never listen to a scientist who said, “You must use this tool,” because it’s not their expertise—and partly as a knowledge thing, that scientists really don’t know which tool to use. They wouldn’t know what recommendation to make.

So in that sense, preparators have a lot of power within their domain of the lab, over the other workers, volunteers, over how they’re going to prepare those specimens. And so then the power that scientists have really comes down to funding and what specimens the preparators are working on. So the scientists say, “I really want to study this bone. I need it in six months to write this paper.” And then the preparators do whatever they think is best to achieve that scientist’s goals.

Lloyd: So do the staff preparators, the paid preparators, do they have similarly varied backgrounds to the volunteer preparators, or do they generally have a more scientific background or a post-secondary degree in some sort of science?

Wylie: Yes. All of those things. Almost all preparators start as volunteers, which is interesting because the number of volunteers who become preparators is very small, percentage wise. But almost all of the staff preparators begin as volunteers and get that early training and exposure. Some of them have PhDs in paleontology, some of them have PhDs in literature. Some of them have only a high school education. It’s a really wide variety.

Lloyd: Does that very general hierarchy apply throughout paleontology, or do different institutions or maybe different kinds of institutions, such as museum labs versus university research labs, are there differences there? Or is it generally pretty much the same?

Wylie: It’s pretty much the same. I studied 14 labs in three countries. About half were in museums and about half were in university labs, and it seemed pretty much the same. The major difference was the number of workers. Generally, museums have a larger staff of everybody, more scientists, more preparators, more volunteers, whereas university research labs might only have one or two preparators. And they’re generally doing more specialized work. A scientist might only study fossil lizards, and then that lab’s only going to work on fossil lizards. The university ones tend to be more specialized in that sense.

There’s a slight difference in responsibilities towards the public. In universities, preparators work with grad students. Not necessarily to teach them how to prepare, but more to prepare specimens for them. And then in museums, preparators are somewhat responsible for the mission of the museum, as are all the staff, which is outreach and education. So they do a lot of lab tours. Training the volunteers is a form of outreach. I think if you work in a research lab, a university lab, you probably do less outreach and a little more work with students than in a museum lab. But yeah, those are the main differences.

And also, there are a couple of labs in museums that are for demonstration only. They’re not specifically research labs, and those are really different from taking a lab and just turning the walls into windows, which is the basis of most of the glass-walled labs that I studied. But there are a couple where they just have a couple of tools lying around, a couple of junky fossils. And they prepare them to show groups of school children, for example, as a demonstration, rather than actually preparing the specimens to be studied.

Lloyd: Oh, okay. That’s interesting. They’re like ersatz labs that are only for showing kids how it works.

Wylie: Yeah. So that’s more of a traditional display as opposed to an actual workplace that you can watch.

Lloyd: But there are some museum labs that are sort of fishbowl glass-walled labs, where the preparators are actually doing research and the public can see them doing what they’re doing.

Wylie: Totally. And I would say that’s the majority.

Lloyd: Okay. How do preparators feel? Do they like being on display in that sense, or are they annoyed by the attention, or do they just not really even think about it and get used to having people looking over their shoulder?

Wylie: It depends. A lot of the volunteers really like it. They like to be seen as someone who gets to work in a science lab. So they’ll wave at kids through the windows, or sometimes they’ll go outside and chat with visitors and stop working and take a break and serve as a public face of the lab. But staff preparators generally think it’s a drag, right? They’re in this business because they want to work with fossils. I don’t think I’ve ever met an extroverted fossil preparator. They really prefer the sort of solitary, focused work and they do outreach, like working in the glass-walled lab, as kind of a chore, as a service, but not as their favorite thing, for sure. And almost all museums that have glass-walled labs also have backstage labs, behind-the-scenes labs, and that’s usually where staff work.

And then it’s often the volunteers who are out in the public-facing lab. Part of that is because staff preparators are working on more complicated and more important fossils, so it’s just easier to do that in a place that’s quiet and has ideal air filtration and all the noisy tools that aren’t allowed on the museum floor, for example. I heard a couple of stories from labs that had been designed so that visitors and preparators could talk. So visitors could ask a question while a preparator’s working. And almost every single lab then removed that feature because it meant that the preparators were just answering questions all day long and not preparing fossils. And they found that infuriating. So yeah, lots of these labs have a telephone on the wall that no longer works but used to connect visitors into the lab.

Lloyd: It’s really fascinating that the most public-facing aspect of this research that occurs in museums would be enacted by these folks who are essentially members of the public themselves. They don’t necessarily have specialized scientific credentials, and they’re on a volunteer basis, and they’re the closest to the public. That just seems really interesting. Do they see themselves as sort of citizen scientists? That’s a very broad movement that comprises a lot of different sorts of research and people. But I wonder if they’re sort of enveloped in that broader movement to open up science a little bit more to the public.

Wylie: I think so. I don’t think they would identify as citizen scientists. They describe themselves as volunteers because most of them, like Keith, see themselves as distanced from the science. So they’re serving science, or they’re doing this work to help out scientists or to help out the museum, but they don’t see themselves as researchers. And most of them are like, “Why would I want to be a researcher?” They’re kind of dismissive of the very idea. And Keith would say things like, “I like working alongside people who are furthering the world of knowledge, and I’m just along for the ride.” And so there’s this sense of being science-adjacent that people like, rather than actually doing research.

Lloyd: That’s an interesting conception of their role. When the average, I don’t know if you studied this, when the average museum goer goes and sees this person working in a glass-walled lab, who do they think that person is? Do they think that that’s the paleontologist, or do they know what’s going on in the lab? Maybe this gets to why the telephones they installed for visitors to ask questions no longer work. Do you think the public has a sense of who these folks are and what they’re doing?

Wylie: No, they don’t. I thought a lot about what these labs are doing, because there are text panels around these labs and they say things like, “This is an air scribe, this is a microscope.” None of them, I never saw a sign that said, “These are volunteers. If you want to volunteer, take this flyer.” I never saw a recruitment form, I never saw any information about who these people were or what they were doing. So I would stand outside the lab and eavesdrop on visitors to try to understand what they thought this was. And they would mostly say things like, “Look at the scientists” or, “Are those robots?”

Lloyd: Like at Disney World?

Wylie: Exactly, because that’s what you expect to see in a museum. You don’t expect to see people at work. And of course, fossil preparators don’t move very much. The movements they’re making are very small, so you can believe that it’s not a person, it’s a stuffed model or something. The conclusion I came to about the purpose of these labs is partly that they’re a scientific workplace. Volunteers are producing specimens that are going to be studied, in most cases. And the other function they serve I think is to show that a museum is a home of research. So we might think of museums as being a home of just dead stuff and finished facts written on these authoritative text panels. But actually, they’re housing a lot of research, and this is one way to show visitors this is a research lab. Research is a process, research is work, research is done by ordinary looking people wearing jeans and drinking coffee and chatting.

So it’s a pretty different portrayal of science from the rest of a typical natural history museum. And the coolest part about it, I think, is that again, the text panels don’t really explain what the preparators are doing. They’re usually about the specimens or the tools, not about the people. And so I think that creates an opportunity for visitors to actually practice skills of scientific meaning making. You’re making observations, you’re trying to make sense of what you’re seeing. You’re asking yourself questions, “What are they doing? Who are these people?” And then you’re drawing conclusions. And that’s what scientists do. I think that’s what museums want to be teaching the public, is how to think like a scientist.

In that sense, these labs are very good for that because people don’t understand what’s going on. And the funny thing is that sometimes their conclusions are not what the preparators or the scientists would intend for their conclusions to be. I heard one woman approach one of these labs with a little kid and she said to the little kid, with great excitement, “Look, people making fossils.” So no scientist, no preparator would ever say they’re making a fossil. But yeah, you can understand why she got that idea, right? There’s plaster everywhere, there’s tools all over the place. You could totally understand why she would think that. And that’s drawing an evidence-based conclusion.

Lloyd: I did not realize that about the signs around the glass-walled labs, that they just don’t even mention who’s in there. It’s just maybe the tools that they’re using. But that does get to one of the things that you talk about where the preparators get very little, if any, credit for their work anywhere. And one of the things you talk about, in order to give credit, is potentially providing some authorship on the research papers that the scientists have written, or even just acknowledging them in the methods section. And I was wondering, is that a moral stance, or would that have some effect on the research itself or the products of research?

Wylie: Yeah, that’s a great question. So I started this project from a Marxist perspective where I’m like, I’m going to go empower the proletariat, these oppressed workers who get no credit. And I very quickly abandoned that perspective because I realized how much power preparators actually have. They control the volunteer workforce. In effect, they control the space of the lab and the work that happens there and the decisions that go into each fossil. They choose their tools, they choose their materials. And so I started to think that actually being missing from scientific papers provides that space for preparators to have autonomy over their work and their workforce. And so I was thinking, there’s some evidence that as things become documented, as work becomes documented, then surveillance increases.

The classic example is nursing. 50 years ago, nurses pretty much did their own work because they were trusted as experts and professionals. And then as more and more documentation became common in the medical workplace, nurses lost some of that autonomy. So instead of saying, “I checked on the patient,” you had to document how you checked on them and what measurements you took, and so nursing lost that space for creative problem solving and judgment because it was becoming more documented. I worry that adding preparators to papers might increase scientists’ involvement in fossil prep decisions, which actually would be bad news for the preparators because that’s their main area of power. So I’m not sure that authorship is the right answer.

What I do think should be transparent is preparation methods. And preparators agree with me on this, and they really push each other to improve their documentation practices because it’s not a typical, traditional part of their work. For example, certain glues will screw up geochemical tests. So if you try to carbon date a fossil that has cyanoacrylate on it, it’s not going to work.

So that’s important to know for a scientist in 50 years who wants to date a particular specimen, and they have no idea what glue is in there—that’s going to impact what tests they can do. Keeping track of those kinds of materials and also who prepared it, because preparator skills are really different, and their decisions are really different, I think would be an awesome contribution to science, as part of the metadata of that specimen. And it would serve as a form of recognition, right? So if it’s in an institutional database of each specimen that includes the name of the preparator and all the materials they used and when it was prepared, that would make the preparator’s work look more legitimate, I think, more respectable, more scientific in a sense. But it would protect them from the surveillance that might come from being part of scientific papers.

Lloyd: Yeah. That’s really fascinating. I didn’t know that that would be a concern. It makes sense. And actually, that nursing example is helpful to extrapolate a little bit beyond the focus of your research: Are there positions comparable to fossil preparators in other fields? Nursing wouldn’t be one of them; nurses are very specialized, with a great deal of education. But I’m just thinking of other fields that may have people who come in on a volunteer basis, or maybe don’t have a scientific background, and do similarly very critical work for the research.

Wylie: Yeah. I’ve been thinking about this a lot. I would argue that the skill-based nature of research is ubiquitous. Even scientists have embodied skills of doing experiments, for example, that they don’t really describe. Those skills get written out of papers, so outsiders like the rest of us wouldn’t know about them. So the dependence of science on skill, I think, is ubiquitous. But the lack of credentials is really unusual in science. Lots of sciences have a history of lots of amateur participation—think of anything from natural history, botany, mycology, people who collect fungus as a hobby or people who study astronomy as a hobby. Those are very long histories of public involvement, but most of those fields have now specialized or credentialed those positions. And so now, if you like to look at the stars in your backyard, you’re not going to be considered a contributor to science; that’s your hobby.

I think preparators are not amateurs because they’re part of an institution. They’re working in a museum, they’re working in a research lab. Even if they’re volunteers, I would say they’re not amateurs because they’re not doing it on their own. I guess I would love for other scholars to tell me whether this position for people with a wider variety of backgrounds exists in other fields, because I suspect that it does.

And one way in which I know it does is with undergraduates. We all think that undergraduate research experience is a good thing for students. I’ve done a lot of research on undergraduate engineers, and I’m finding that it’s actually an excellent thing for the labs that these undergraduates work in, because undergrads bring this very interdisciplinary mindset that the grad students and the professors don’t tend to have, because they’re so much more specialized, right? They’ve had so much more education in engineering than the undergrads. So in that sense, the undergrads are kind of playing the role of preparators in the sense that they’re bringing in outside information, they’re having a different approach to problems that the professors think about in a very specific way.

Lloyd: So what would that look like, if there was a bigger focus on skills, maybe, rather than credentials, at different levels of the scientific enterprise, if you had to guess?

Wylie: I know, right. I’m a professor in an engineering school, so I hate to make this argument, but if we broaden paths to doing scientific work, that can only be a good thing. So I’m not arguing against STEM education, but education in science and engineering has a long history of discrimination and exclusion. I hope that we all will someday overcome that. That day is not today, it’s an ongoing process of making science and engineering education available to anybody. And so in the meantime, yeah, I think it would be awesome for science to include more kinds of people as volunteers, as technicians, as people watching from the outside—even that is a way of extending science beyond the lab. And I think this is good for people to participate in science, to learn that it’s not as elite and exclusive as it might seem, because that spreads scientific literacy. It spreads a sense of appreciation for science. It makes public trust in science stronger if people understand that science is just work done by people. It’s not magic.

And the other crucial thing it would do, I think, if we had a more diverse workforce in science, would be to bring ideas to scientists that are different. So to expose scientists to people who have backgrounds very different from theirs, which will bring in new skills that science, at the moment, doesn’t have or new ideas or new ways of understanding things that will improve the science for all of us. And crucially, watching the scientists chat with the preparators and chat with the volunteers, there’s a lot of knowledge exchange that happens just by having people around, hanging out together in the same space, talking about the same bone, people share a lot of knowledge.

And I think that that information sharing can help scientists learn to ask more relevant research questions. So for example, how can they use fossils to study how species adapt to climate change? Something that is crucial to our world now. How can they use fossils to study how environments change over time or change in response to rapid flooding, natural disasters, widespread wildfires, things that we’re experiencing that paleontology actually has enormous insights to offer? But I’m not sure those are the questions that scientists would come up with on their own. I think they need our help.

Lloyd: That’s a fantastic message for inspiring people to get more involved. So thank you for joining us for this episode of The Ongoing Transformation and thank you to our guest, Caitlin Wylie, for talking to us about the work fossil preparators do behind the scenes. Check out the show notes to find links to her Issues article, “What Fossil Preparators Can Teach Us About More Inclusive Science,” and to her book, Preparing Dinosaurs: The Work Behind the Scenes.

Please email us at [email protected] with any comments or suggestions. And if you enjoy conversations like this one, you should visit us at issues.org for many more discussions and articles. And I encourage you to subscribe to our print magazine, which is filled with incredible art, poetry, interviews, and in-depth articles. I’m Jason Lloyd, managing editor of Issues in Science and Technology. Thank you for joining us.

A New Compact for S&T Policy

Since I came to Congress in 1993, increasing diversity in science and technology has been a driving focus of mine. I know from experience that talent is everywhere and that far too often students from underserved communities are left behind. Unfortunately, while I and many passionate leaders such as Alondra Nelson, the deputy director for science and society in the White House Office of Science and Technology Policy, have spent our careers working to advance diversity, equity, and inclusion in science, technology, engineering, and medicine—the STEM fields—there is still so much more to be done. Nelson ably presented some of the challenges in her recent Issues interview (Fall 2021). Through my leadership of the House Committee on Science, Space, and Technology, I have listened to Nelson and numerous other experts and have reframed the problem and the suite of solutions available to us.

Inclusive innovation is not just about representation. It is not just about creating new opportunities and breaking down barriers for historically marginalized groups to enter and remain in STEM fields, although that is a necessary step. To promote STEM diversity and equity, I developed the STEM Opportunities Act, the MSI STEM Achievement Act, and the Combatting Sexual Harassment in STEM Act. My committee also developed the Rural STEM Education Act and the Regional Innovation Act to address the geographic diversity of innovation.

I know from experience that talent is everywhere and that far too often students from underserved communities are left behind.

But diversity alone will not catalyze the paradigm shift we need to see. We need to rethink, at the highest levels, how we prioritize our investments in science and technology. To date, national security and economic competitiveness have dominated the discussion. This focus has served the nation well in many ways, but it has failed to address many of the challenges Americans are facing in their lives. We are faced with a web of complex and interconnected societal challenges ripe for innovative solutions—access to safe drinking water, gaping economic inequality, misinformation, addiction and mental health crises, climate change, and the list goes on. For too many Americans, science and technology is an abstraction that has no bearing on their daily lives. I echo Alondra Nelson’s call for increased transparency and accountability in US science and technology policy. And I commend President Biden for establishing the Science and Society Division at the Office of Science and Technology Policy.

Last year, led by my committee, Congress enacted legislation to establish a National Artificial Intelligence Initiative that has trustworthiness, transparency, equity, fairness, and diversity as core principles. I will make full use of my final year as a member of Congress and Chairwoman of the Science Committee to advance the congressional conversation around inclusive innovation. Already, I have proposed that the new Technology, Innovation, and Partnerships Directorate at the National Science Foundation be focused not only on competing with China, but on addressing the full breadth of challenges we face. Moreover, the legislation I introduced pushes NSF to take a much more expansive view of who gets to have input to the research agenda. We cannot let China set our agenda. We lead only by being the best possible version of ourselves. I believe we should steer our science and technology policy toward that goal and that, in doing so, we will strengthen this country, and its innovative capacity, from the inside out.

Member, US House of Representatives (D-TX)

Chairwoman, House Committee on Science, Space, and Technology

In her interview, Alondra Nelson lays out her vision for what it means to bring social science knowledge to the work the White House Office of Science and Technology Policy will undertake. The creation of its new Science and Society Division and the selection of Nelson to lead it are exceptionally welcome initiatives of the Biden-Harris administration’s agenda. As a renowned expert with deep knowledge about the links among science and technology, social inequities, and access inequalities, Nelson is ideally situated to bring social science to this policy table. Reflecting sociologists’ value commitments, her vision is anchored in a serious concern for justice, access, inclusion, and transparency.

I would like to highlight two points from her interview, as neither seems to have been prioritized in previous initiatives of science and technology policy. First and foremost is her vision for inclusivity, equality, and justice. This vision incorporates efforts to embrace all who are interested in studying and then working in technology, regardless of socioeconomic, racial, or any other form of inequality, including, I would like to imagine, immigration background. Her broad tent for inclusivity also seeks to incorporate in technology policy the diverse approaches, thinking, innovation, and creativity that those from different social backgrounds may bring to solving a problem or creating policy. In this thoroughly globalized world in which we are increasingly aware of the harms of exclusion, this is perhaps key to progress but also to a more just society that broadly fosters equality and inclusion. Nelson’s vision goes to the core of what is needed to confront the challenges of this moment in history. I see it encapsulated in her description of what she would like science, technology, engineering, and mathematics—the STEM fields—to look like: “to look like all of us, that reflects all of us, in the classroom and in the boardroom.”

Nelson’s vision goes to the core of what is needed to confront the challenges of this moment in history.

Second, I want to remark on another aspect of her broad vision for inclusivity, and that is to incorporate social science knowledge as key to technological innovation and attend to the effects of technological advancements. Social science can shed light on the tensions in society that Nelson mentions, and how to reconcile them to create more equitable conditions to expand opportunities for all. Social science research is also equipped to contribute knowledge on how organizations and institutions work; it can provide critical research on organizational culture and on how team members’ social characteristics shape organizational hierarchies, which often determine the success of a project and ultimately better policy solutions. It can also help to illuminate the social effects of new technologies and how they may reconfigure human interaction.

We in the social science fields look with excitement to the many possibilities for science and technology to progress in equitable, just, and inclusive fashion with Alondra Nelson in the lead. With a sociologist at the helm of this new top-level division, we trust that our value commitments as sociologists will be reflected in progressive, transparent, and just policy for all.

Dorothy L. Meier Chair in Social Equities

Department of Sociology

University of California, Los Angeles

President, American Sociological Association, 2021–2022

Alondra Nelson highlights the importance of science, and social science in particular, for developing effective interventions across all policy domains. We applaud the Biden administration for elevating the Office of Science and Technology Policy to cabinet level and for bringing Nelson’s expertise as a social scientist into the upper echelon of its leadership. As Nelson noted, science and technology policy in the United States has not historically incorporated all voices or responded well to the needs of all Americans. We share her assertion that community partnership is fundamental for moving the nation forward in a more inclusive and equitable way.

From our point of view as sociologists, Nelson’s focus on involving communities in the policymaking process is an important step toward achieving racial and social justice. Such a focus goes beyond simply informing communities about policy initiatives or getting feedback from community members after implementation. Rather, policymakers should seek to understand the needs of communities from community members and engage communities directly in creating and articulating the kinds of interventions that can most effectively address those needs.

There is a long tradition of community-based sociological scholarship, but it has often been marginalized. Our sense, as supported by Nelson’s comments, is that such work is increasingly central not only within our discipline but within the academy more broadly. The American Sociological Association (ASA) has sought to elevate this work, including running a longstanding funding program for research collaborations between sociologists and community partners, and the Winter 2022 issue of our online magazine, Footnotes, is devoted to community-focused research. Universities, including Syracuse University, the University of Minnesota, and the University of Wisconsin, have begun to prioritize and reward community-focused research in recommendations for tenure and promotion.

Policymakers should seek to understand the needs of communities from community members and engage communities directly in creating and articulating the kinds of interventions that can most effectively address those needs.

Also important for generating more equitable and inclusive policy is the training of the next generation of community-focused scholars. Students of color often enter graduate school with aspirations of studying issues that affect the communities from which they originate. The ASA is committed to supporting graduate students of color in their research endeavors through the Minority Fellowship Program, a predoctoral fellowship initiative that has funded more than 450 fellows across almost 50 years. Programs such as this can play an important role in diversifying the scientific workforce and serve to bring scholars and communities into the policymaking process who have often been excluded.

Our hope is that institutional support—from scholarly societies, colleges and universities, the top levels of government, and beyond—will indeed move the scientific enterprise as a whole toward incorporating true understanding of and consideration for all populations into the policymaking process. Such a shift would be entirely consistent with what sociologists have known for a long time and Alondra Nelson has illuminated: humans are at the center of all science. Failing to incorporate the full range of human voices into policy development is not an option if we seek a truly democratic nation.

Executive Director

Director of Diversity, Equity, and Inclusion

American Sociological Association

Bridging Divides Through Science Diplomacy

The COVID-19 pandemic has presented the international community with a series of unprecedented scientific, social, and public policy challenges. Particularly in the early days of the pandemic, the world experienced a shift toward geopolitical tribalism exemplified by nationalistic quests for personal protective equipment, testing supplies, and therapies. Rhetoric focused on “self-reliance” cast a shadow beyond the political and into the scientific, further magnifying perceptions that science is a competitive rather than a collaborative endeavor and increasing concerns that such actions may be encouraging a retreat into research secrecy.

Nowhere has the retreat from international cooperation been more drastic and consequential than between the governments of China and the United States, where increasingly antagonistic dialogue has exacerbated existing tensions between the two countries. If continued, the growing geopolitical conflict between the United States and China and declining faith in multilateralism could dominate a post-COVID world.

We argue that it is critical to foster international cooperation in the face of global crises. Early-career researchers (ECRs) like us are in a unique position to create new and lasting ties among scientists, with implications for improved international relations and the progress of science more broadly. However, helping ECRs develop the necessary skills requires investment by both research institutions and governments.

COVID-19 has highlighted the importance of international cooperation in confronting global threats, as seen in the role that publications and knowledge exchanges played in quickly characterizing the virus. Collaboration appears to be an important way forward as the world looks to make meaningful progress in tackling both the pandemic and other pressing issues, such as climate change.

Early-career researchers are in a unique position to create new and lasting ties among scientists, with implications for improved international relations and the progress of science more broadly.

In mapping out his own experiences with “science for diplomacy,” President Obama’s science adviser John Holdren wrote that international science and technology (S&T) collaboration “foments personal relationships of mutual respect and trust across international boundaries that can bring unexpected dividends when the scientists and engineers involved end up in positions to play active roles in international diplomacy around issues with significant S&T content—e.g., climate change, nuclear arms control, and intellectual property.”

We define ECRs to include undergraduate and graduate students as well as those still in the early stages of their careers, in any sector. Precisely because they are early in their careers, ECRs are uniquely positioned to create bonds with foreign researchers now that can mature and strengthen over the coming decades.

ECRs are also key to collaborative efforts in low-risk research areas, which are nonpolitical and concern only basic scientific questions of mutual interest. Valerie Karplus, Granger Morgan, and David Victor noted in Issues that these “safe zones” could include research necessary to address climate change, such as advanced battery chemistry or carbon capture and sequestration, among other topics. These areas are unlikely to have immediate commercial or military applications and thus are fertile grounds for developing international cooperative partnerships.

Looking back to another time of heightened geopolitical tensions—the Cold War—reveals that scientific cooperation at the level of individual laboratories, or through the exchange of students and scholars, was a popular and effective way of carrying out international cooperation. In the case of the United States and the Soviet Union, interpersonal relationships between scientists proved beneficial as the countries sought to cooperate on discrete space-related activities. Acknowledging the caveat that the political circumstances of the two periods are not identical, this type of approach, focusing on particular projects and individual relationships, could be used as a model to facilitate communication between China and the United States.

We also believe that any framework for scientific cooperation between the United States and other countries should center the role of ECRs. Areas of mutual interest present a prime opportunity for extending international collaboration beyond individual scientists to the level of research institutions and government agencies. Cooperation in such areas could be politically feasible despite geopolitical tensions: the 1985 Cold War-era agreement between the United States and the Soviet Union to jointly develop the International Thermonuclear Experimental Reactor (ITER), an international nuclear fusion facility, is an illustrative example. The ITER project continues today with an expanded coalition of international partners aiming to develop nuclear fusion as a sustainable energy source. A more contemporary example is US-Russia collaboration on spaceflight programs. Although space cooperation between the United States and China is less likely in the face of political tensions, expanding cooperation in health security could present a more feasible opportunity to warm relations.

Areas of mutual interest present a prime opportunity for extending international collaboration beyond individual scientists to the level of research institutions and government agencies.

By actively contributing to these projects, ECRs can play crucial roles in developing research agendas as well as in building relationships with individual researchers. The interpersonal relationships that develop among ECRs over the course of cross-border collaborations could prove instrumental as these scientists rise through the professional ranks in diplomatic or research arenas. In his op-ed, Holdren credited the relationship he developed with the Soviet scientist Evgeny Velikhov during US-Soviet collaboration in the field of nuclear fusion with the success of the bilateral commission on the disposal of excess plutonium in the post-Soviet era.

Through this process of long-term relationship building, scientific cooperation at the level of individual scientists could play a central role in building trust between countries. Over time, countries involved in individual-level collaborations may become more amenable to broader collaborative efforts, even in the field of commercial technologies.

As an example of how early-career personal relationships can lead to cross-institutional and even cross-national trust, as well as far-reaching research progress, consider the relationship between Mark Levine and Zhou Dadi. Levine, director of the US-China Clean Energy Research Center (CERC) at the Department of Energy’s Lawrence Berkeley National Laboratory (LBNL), and Zhou, then an ECR in energy efficiency, began working closely in 1988 at the start of the LBNL initiative to support international clean energy research, development, and deployment. Twenty years later, by the time momentum was growing for a US-China agreement to address climate change, Zhou had become an advisor on energy issues to Premier Wen Jiabao and director of the China Energy Research Institute.

Scientific cooperation at the level of individual scientists could play a central role in building trust between countries.

Zhou and Levine’s relationship created a foundation for progress on climate-related research. It also fostered mutual trust in intellectual property protections, easing the way for more expansive agreements in the future. For instance, CERC developed an intellectual property protection plan that “may ultimately play an important role in building trust among the consortia participants, which could lead to even more constructive collaborations in the future, and serve as a model for future bilateral cooperation agreements,” according to a 2014 examination of the program. Thus, Zhou and Levine’s ECR relationship provided a powerful connection between the two countries that grew to support meaningful progress on the broader issue of climate change.

Although ECRs could be highly effective in making meaningful progress on a range of S&T issues, the lack of awareness about science diplomacy career pathways and the dearth of training opportunities have inhibited their ability to participate in this arena. Thus far, science diplomacy has been largely taught to ECRs through extracurricular courses and workshops or within general science policy programs (see Table 1). But there is a clear case for increasing support for ECRs to receive science diplomacy training.

Universities and research institutions can play a crucial role by creating new science diplomacy courses or certificate programs and ensuring that students and scholars have opportunities to pursue work experience in science diplomacy and other policy-related fields. Workshops and seminars involving professionals working in these fields could help expose ECRs to the various available avenues and provide potential mentors.

The lack of awareness about science diplomacy career pathways and the dearth of training opportunities have inhibited ECRs’ ability to participate in this arena.

We argue that science diplomacy should be taught as an elective course and included in career development discussions. One possibility is to build on existing virtual courses, such as those offered by S4D4C and the DiploFoundation, among others.

Informal communities and networks can also be valuable resources for researchers interested in learning more about science diplomacy, providing a platform for networking and opportunities for engagement. The National Science Policy Network (NSPN), where several of the authors met to collaborate on this article, is one such community. That community’s informal environment facilitates open dialogue and discussion of innovative solutions to confront global challenges. NSPN’s Science Diplomacy Exchange and Learning program (SciDEAL), which completed its inaugural year, facilitates collaborative work between ECRs and science diplomacy institutions, including nonprofit organizations, embassies, and consulates.

The move into a post-COVID world requires all hands on deck to build the international collaboration that will help science most effectively address pressing global issues. With additional training opportunities and mentorship, ECRs can play an even greater role in building trust between countries—a fact illustrated through recent historical examples. ECRs, including us, are looking to gain experience in cross-border cooperative projects now so that as we move along our career trajectories in academic and science policy spaces, we can help shape a policy environment that promotes science for diplomacy.

Table 1. Selected opportunities for early-career researchers to train in science diplomacy.

Organization | Course / summer school description | Website link
The American Association for the Advancement of Science (AAAS), Washington, DC, USA, and The World Academy of Sciences, Trieste, Italy | This course exposes participants to key contemporary international policy issues relating to science, technology, environment, and health. | https://twas.org/opportunity/2020-aaas-twas-course-science-diplomacy
AAAS | This one-hour course, hosted by the AAAS Center for Science Diplomacy, covers the basic definitions and frameworks of science diplomacy as well as its historical evolution, using several case studies. | https://www.aaas.org/programs/center-science-diplomacy/introduction
The Barcelona Science and Technology Diplomacy Hub (SciTech DiploHub) and Institut Barcelona d’Estudis Internacionals (IBEI) | This intensive summer school of more than 40 hours covers the most pressing issues in science and technology diplomacy, such as sustainable development and technology diplomacy, with a special focus on Europe, the Mediterranean, and the role of global cities. | http://www.scitechdiplohub.org/summer-school/
European Academy of Diplomacy and InsSciDE (Inventing a Shared Science Diplomacy for Europe) | The Warsaw Science Diplomacy School allows young diplomats and scientists from across Europe to build diplomatic skills and create a new network of science diplomats. | https://insscide.diplomats.pl/summer-school/
S4D4C | The European Science Diplomacy Online Course introduces participants to science diplomacy, including the conceptual framing of science diplomacy and the variety of stakeholders and networks involved. | https://www.s4d4c.eu/european-science-diplomacy-online-course/
National Science Policy Network’s Science Diplomacy Exchange and Learning (SciDEAL) Program | This new program provides ECRs with opportunities to pursue project-based collaborations between early-career scientists and science diplomacy institutions, including nonprofit organizations, embassies, and consulates. Participants create tangible outputs while also learning about science diplomacy and cooperation. | https://scipolnetwork.org/page/science-diplomacy-exchange-and-learning-scideal
The Institute of International Relations (IRI-USP) and the Institute of Advanced Studies (IEA-USP) | The São Paulo School of Advanced Science on Science Diplomacy and Innovation Diplomacy (InnSciD SP) organizes an annual summer school introducing participants to multidisciplinary aspects of science diplomacy and innovation. | https://2020.innscidsp.com/about/

Episode 4: Art of a COVID Year

In the early days of the pandemic, communities began singing together over balconies, banging pans, and engaging in other forms of collective support, release, and creativity. Artists have also been creatively responding to this global event. In this episode, we explore how artists help us deal with a crisis such as COVID-19 by documenting, preserving, and helping us process our experiences. Over the course of 2020, San Francisco artist James Gouldthorpe created a visual journal starting at the very onset of the pandemic to record its personal, societal, and historical impacts. We spoke with Gouldthorpe and Dominic Montagu, a professor of epidemiology and biostatistics at the University of California, San Francisco.

Transcript

Host: Hello and welcome to The Ongoing Transformation, a podcast from Issues in Science and Technology, a quarterly journal published by the National Academies of Sciences, Engineering, and Medicine, and Arizona State University. You can find us at issues.org.

In this edition of the podcast, join J. D. Talasek, the director of Cultural Programs at the National Academy of Sciences, as he talks with artist James Gouldthorpe and epidemiologist Dominic Montagu about a series of paintings called COVID Artifacts.

J. D. Talasek: Hi everyone. I’m J. D. Talasek, and I’m the director of Cultural Programs at the National Academy of Sciences. Welcome to The Ongoing Transformation podcast. For over 10 years, my colleague Alana Quinn and I have had the privilege of working with the journal Issues in Science and Technology. We get to suggest artists to feature in the magazine, and it has been a real joy to do so. We believe that not only do artists have a unique perspective, they also have a unique way of communicating that perspective.

For this episode, I’m joined by one of these artists, James Gouldthorpe, who is based in the San Francisco area. We’re also joined in discussion by Dominic Montagu, who is a professor of epidemiology and biostatistics at the University of California, San Francisco.

James, Dominic, welcome. We’re glad you’re here.

Gouldthorpe: Hello.

Montagu: Pleasure to be here.

Talasek: So I’d like to just start by asking you how you met. It sounds like the start of a bad joke: an artist and a scientist walk into a bar. So why don’t you tell us what the real story is? How did you guys meet?

Gouldthorpe: You want me to go, Dominic? It’s actually all about parenting. Our sons, who are both now in their mid-twenties, met in middle school, and they and some other boys formed this really tight group of delinquents that have remained friends for many years now. And through them, we got to know each other as parents. Dominic’s home became the sanctuary for all these boys as they roamed the streets. So, we always knew where they were when it came time to track them down.

Talasek: Well, it just reminds me—we talk about cross-disciplinary discussions and the way that different disciplines interact. What you just said reminds us that it’s because we’re all human that we have other ways of connecting beyond just our systems of knowledge.

James, we reached out to you because of a body of work that you’ve done called COVID Artifacts. And I wonder if you could tell us about that project—how it started, maybe just describe it for us, as well as how you view it now, after a year or so?

Gouldthorpe: Like a lot of people at the beginning of the pandemic, I felt a certain level of panic. We had been sent home. I actually work at SF MoMA, and we had been sent home with the idea that we were going to check in in two weeks and all come back to work at that point. And as we all know, it didn’t happen that way. So at home I was panicking, kind of spinning out, and I just retreated to my studio to start working. I don’t know if you remember back at the initial start of the pandemic, there was this video that went around of this nurse showing you how to disinfect your groceries. He took each one out, wiped it down. It was a bit excessive, but back then we didn’t know.

And I remember my wife and I did our first trip to the grocery store, a little local grocery store. And we came back and spent over an hour wiping down every item. It had started to occur to me that the things we had taken for granted, our regular daily items, had suddenly become this vector for death. We had no idea how dangerous these things were now. Suddenly, a bag of potato chips could kill you.

I got the urge to represent that somehow, so I sat down and I painted a bag of groceries, which now was weaponized. It was this terrifying thing that was part of our daily lives, but it had this feeling of danger around it. And I discovered that doing that, just staying in my studio, kept me from spinning out. It started to really help my mental health. So I began reviewing the daily news feeds, which got brutal. I mean, there are people who chose to look away from the news feeds. I did a deep dive and then every day I would try to find something new to represent in a painting.

Talasek: Dominic, I’m wondering if you can remember the first time that you saw this work that James was doing and what your response was initially to it?

Montagu: I didn’t see any of the paintings until I went to see the show at SF MoMA. And then it was just extraordinary because at UCSF, we realized, I think, in January that something rather dramatic was going on in Asia, and the university started having weekly updates tracking COVID-19. And the first cases—do you remember the boat that came into Oakland and they were identifying positive cases and sending them on airplanes to North Dakota? And Trump was saying, “It’s okay, there’s only 13 cases. So we think we’re fine. Look at Asia. China has 50 cases.” And I spent a year looking at the infection numbers, from when it was single digits in the US, forecasting how bad it was going to get and worrying about that.

It was a really stressful year for all of the reasons that James said, as well. And I forgot all of the daily events. And I forgot the individuals. I forgot what it was like when that boat got towed through San Francisco Bay and those first few weeks, when we worried about groceries. The friends that I had who went to New York to support the doctors and nurses when New York seemed like it was overwhelmed and that was going to be the end of the world. And each of those episodes got replaced by a new trauma or by avoiding those traumas by focusing on the infection numbers and the statistics, or the mechanisms of infection, what we were learning. Is it aerated? Is it aerosol? Is it just droplets on objects that don’t absorb liquids? Do we only have to worry about droplets on metal? How well do we have to do all of this? Each new worry meant you had something to focus on that was pragmatic and you could control it a bit by understanding it and everything else got forgotten.

And so this was an amazing thing, to look at all of James’s paintings and have it all come just rushing back—both the human impacts that were so vivid in the moment and just returned, or even the impacts on all of us, remembering what it was like to get the first bag of groceries and “how worried should we be?” I remember hearing about people in Italy using bleach to wipe down every apple that they got and thinking, “We’re not doing that. Should we?” And yeah, it was incredibly impactful.

Talasek: Your account of that is almost exactly like mine. It was that I had forgotten that this happened. I had forgotten that we had experienced that. And I’m wondering also, Dominic, how does that feed back into your work and into your research? How does that inform a scientist, to have that sort of moment of reflection?

Montagu: A lot of epidemiology, a lot of biostatistics, is not thinking about individuals. We look at aggregate, we look at infection rates, at mortality rates per hundred thousand. And 600,000 people—we’re close to that [number of] deaths from COVID-19 in the US—I immediately want to think, “Well, but it’d be hundreds of thousands of people who would’ve died from other diseases if we didn’t have COVID.” I contextualize everything in abstracts. And so it’s quite powerful to have a collection of images that breaks you out of that and forces you to constantly think about the human context, the human importance that is behind all of the numbers. That, I think, matters—it’s why the numbers matter rather than the inverse. It’s not because there’s lots of people that we get excited by statistics. It’s the other way around. The statistics only have value because they represent people. And if you forget that, you lose an enormous amount. You’re doing things for the wrong reasons. So, it’s been very important to me.

Gouldthorpe: When I look back, particularly at the early works, they become these icons of human behavior in the face of near-apocalyptic events. And you see what becomes the focus. Suddenly we have a shortage of toilet paper, which never fully made sense to me. It was this sort of irrational response. And then as events went on, there was a period where I was like, “How am I going to keep painting these objects?”

But suddenly, society, social norms started to unravel. When the George Floyd murder happened, there was this explosion of protests and the exposing of just how deep [systemic] racism is. And then events just began to accelerate. Some people say to me that this project was a great idea, but it wasn’t really an idea. It was a reaction. I was just like, I want to stay ahead of this. I want to note how we behave, how we’re responding to this, and what layers are being exposed as we move along.

It’s interesting in retrospect, because even I forgot what some of the images were about. Things went by so fast. I was clicking and I was like, “I don’t know what that is, but it’s tragic.” I don’t want to forget, but then at the same time, it was so accelerated that, as a painter, I had a hard time maintaining the momentum, because it was so much happening.

And it keeps shifting. Out here in California, we ended up with the wildfires, and we had this apocalyptic sky that was very Blade Runner-esque that went on for a day. And then it seemed like the events grew larger and larger in their consequences as it went along, and it all seemed to stem from the pandemic. The pandemic seemed to be the foundation for this unraveling of society, I guess, is a very dramatic way to say it.

Talasek: I’ve heard you talk about your work, James, in terms of storytelling and in terms of narrative. And certainly what you’re describing here exactly fits into that larger impetus of your work. And I think that it also ties in with what Dominic was talking about. The work that he does in the lab is statistics and you’re dealing with numbers, but then the power of the narrative, such as is represented in your work, to humanize that and to connect that very necessary study of the numbers with what the numbers mean in our real lives.

I’m wondering, Dominic, in your work as a scientist, how does storytelling manifest itself for you? Once you crunch the numbers, so to speak, at what point is a narrative, like what James is creating, helpful?

Montagu: It becomes very important for communicating the visceral information that’s behind statistical reality, but it always works in the opposite direction of what James’s paintings have done. At least, as a scientist, you do the analysis, you look at the data, and then you identify stories that illustrate the data rather than being outliers to the data. You might have a great story, but it turns out it’s the one in a thousand where the person, they survived against all odds. Or they died, but not from the disease that you’re looking at; they got hit by a truck. And so it might be a great story, but you wouldn’t choose that because you’re choosing stories that illustrate data.

I think what James has done and why this resonated so strongly for me was it’s completely the opposite. It’s a collection of 365 and counting items of information, each of which is incredibly powerful. And the story is built from the collection of all of them. You don’t look at averages. You can see a shared narrative. In many ways, COVID turned us all sitting at home into observers of the world, much more so than we had been before. We would normally participate in real life more than we somehow did for a year.

And so, what you get is a story that is more like a reflection of real life, where it’s many different things which all built up to a collective influence. And I think you see that, and that doesn’t come out in data. Nobody analyzes data to produce that story. So, it’s been really interesting for me to try and think about the relative position, the relative utility of those different ways of approaching the creation of a narrative to reflect back something that’s happened.

I think that the paintings are really useful because they show a really complicated narrative and experience. I assume, James, that for any one person, 60% or 70% of the paintings will resonate with them. And the other ones—one of the paintings that I love the most is this enormous crowd of people on the Golden Gate Bridge. I don’t remember that. I never saw that. I really like it, but it doesn’t viscerally hit me. And yet there’s so much overlap between what you experienced and what I or anyone else experienced, that we build a bond there.

And the bond is much more interesting because it’s an imperfect overlap. If it was just the average experience, if it was just the statistically calculated median, it’d be much duller. It would reflect what all of us share, which is probably Trump and doctors in New York and three or four other things, but it wouldn’t be as nuanced and it wouldn’t be as powerful. What we didn’t both see is as interesting in this story as the things that we both saw on TV or in the newspaper.

Talasek: That’s an amazing description of what this is. And James, I’d like to get your response to what Dominic just said.

Montagu: Come on James, I want to hear you say, “I disagree completely.”

Gouldthorpe: I’m leaving right now [laughs]. One of the benefits of working at SF MoMA and having this exhibition at SF MoMA is I can go into the galleries and sit in the corner and basically loiter. I’m the doughy middle-aged guy in the corner that’s a little creepy. But I get to watch people as they review the year. The work that’s up is not the entire year. It goes basically from the start of the pandemic until just post-election. There’s a lot of other work that’s not on the wall. And I can watch the recognition go across people’s faces. And something that I’ve been trying to do with my work over the past years is to create a communal event, in a way, when you come and visit my work, that you linger and you read it like a book or a painting.

And I’m pretty excited to see that people come to it and they’re all pointing at different paintings and sharing a story about it, sharing that moment. Or, “I don’t remember this, what was the…” They try to get other people, and they’ll gather and all discuss it and review it. And it’s humbling, for one thing, because I wasn’t thinking about that when I was painting them. I wasn’t thinking exhibition, I was just literally in my own head trying to get through the day. But I am witnessing this shared narrative, this global narrative.

It’s now this archive of the pandemic. Now, whether it’s going to be able to exist with as much intention after the pandemic’s over, I don’t know, because our memory’s going to fade even more. But at the moment it’s fresh enough that people are definitely finding a collective memory out of it. It’s interesting to watch, and it was unexpected. I’m really enjoying lurking in there and seeing how people respond.

Talasek: Well, it is interesting. James, do you see this as, or was it part of your original intent, for it to be such a healing process? I mean, you talk about it as originally just your needing to do something creative in response and then you see people coming together and you know that’s a healing conversation to have. Was that your original intention?

Gouldthorpe: It was not. I can’t claim there was any intention. The intention was really to keep my hand moving and my mind occupied. But then, I chose to use social media, which is something that I generally avoid. I don’t like the endorphin rush that you get addicted to with social media, but I decided to start posting daily. And as it went along, I started getting responses from people who were very appreciative of the work. A lot of frontline workers, when I would paint hospital scenarios and nurses and doctors, would write in their appreciation for my depiction. And over time, even the subjects of certain specific paintings got in touch with me. I painted a young man getting arrested in St. Louis, and his girlfriend wrote and talked to me about that day.

And Rahul Dubey, who was the gentleman who gave sanctuary to the people in Washington [DC] during Trump’s little stroll over to the church—they were all about to be arrested for breaking curfew. And he threw open his doors and he brought them all in and he had 70 people and they spent the night. And I did a portrait of him and he wrote me and now we’re in regular communication. His portrait’s on the wall and he was so excited that his portrait was in the museum.

Here’s the thing. I’m basically an introvert, so it was a little strange to have these strangers reaching out to me. But then that became my way to stay connected outside the studio. I was watching the news feeds, but actually starting to hear from people who were having the genuine experience that I was painting exposed a reality that I was only experiencing through my laptop screen. When I was invited to do the exhibition, I was shocked, for one thing. But then, to be able to do this, to see people—and people still write me now and say, “I saw your show and it moved me in this way.”

It is unexpected. And I’m still processing what that means. I hope that it has a life beyond the exhibition and that I can continue to do work that has this meaning for people, because that’s really what I’m trying to find in my work. A lot of art deals with deeper conceptual things that have a limited audience that can begin to understand and to dissect that work. I like, if I can, that my work can actually have a human element that can reach people in unexpected ways.

Montagu: I have a question for you, James. Because of this forum, because you made the clear mistake of inviting me on to also talk with you about this: when you’re painting in general, or with this series of paintings specifically, do you think about the differences between recording a lived experience and a scientific analysis of the world—whether that’s about something specific to diseases or physics or chemistry or other aspects of science? What do you think it means to have an artistic, or an artist’s, perspective on the world, versus the scientist’s, and how are those either completely unrelated or complementary?

Gouldthorpe: I think they’re very related. I think that if you go through contemporary art, you’ll find a lot of artists who are working very specifically within the sciences as well. And it’s interesting. For my part, I’ve long had an interest in science, and I’ve tried to work it into my art, but it’s not always been successful. I’m perhaps realizing that just because you have an interest doesn’t mean you have to make work about it.

I have seen artists who have been successful at it, but in the case of the pandemic, the science was so integrated into what was happening that it was a crucial part of the narrative of the year. So in this case, I was able to review the science and represent it in the painting.

Now my other work, the large narratives that I do, consists of fictionalized stories set in the past, and the element of science is not there. I haven’t figured out a way to do that that makes any kind of sense. Whenever I do it, it feels forced and inauthentic.

But if you start looking around and you look at artists, there are artists who work with scientists and do an amazing job bringing the two elements together. Not two elements, there’s multiple elements in this. I don’t think they’re very different, science and art. They’re both explorations of ideas. And I think that they both manifest crucial elements of our human existence. Wow, that was—I’m sorry. You can use that or not. That sounded ridiculous coming out of my mouth when I said it.

Montagu: One thing that this raised for me is, Why was there so little shared memory? Why was there so little, certainly none that I’ve seen, art that came out of the Spanish Flu of 1918? OK, in part because the news was shut down because of World War I, but this was immensely traumatic. And, there was very little scientific understanding of what was happening, little analysis of the disease that was helpful. And it also didn’t get shared or discussed.

And I wonder if those two things are related. That this was simply a shared trauma that had no explanation and no answer, and that somehow that’s quite different from World War I and World War II, which produced… The answers and the resolutions came at the end of the war. And so, it became cathartic or useful to explore what happened in the war through art. And that somehow those two things relate: having a resolution through better understanding or through the conclusion of an event helps.

Talasek: It seems interesting to me, Dominic, going back to not having these communal experiences during the earlier catastrophes that you described. And a lot of creative outlets were probably lost. And I think that’s why works such as what James is doing are so very important. Around the same time that the pandemic hit, Cultural Programs of the National Academy of Sciences started collecting creative responses from artists, engineers, and scientists. Everyone was responding to it in different ways.

We started collecting those and that’s how we actually found out about James’s work. So thank you so much, James, because now it is part of the archive of our collective experience that will hopefully live on.

Talasek: Thank you for joining us for this episode of The Ongoing Transformation. I’d like to take a moment to thank our guests, James and Dominic. Thank you for taking the time to be with us and for sharing your insights on art, science, COVID-19, and our collective memory.

Visit us at issues.org for more conversations and articles. I’m J. D. Talasek, director of Cultural Programs of the National Academy of Sciences, signing off.

Retreating From Rising Waters

“Disaster recovery” is a generous concept, in theory. But the reality is much muddier, and the issues plaguing the communities that Nicholas Pinter describes in “True Stories of Managed Retreat From Rising Water” (Issues, Summer 2021) are daunting.

Rural river towns are often critical for surrounding agricultural production, and in some lucky cases, such as Gays Mills, Wisconsin, are home to important manufacturing facilities. But their location, part of their appeal, can render them prone to floods. Often, many of these communities also face pre-disaster challenges, such as internal community fracturing, financial shortages, an aging population, and outdated housing stock. The dangers for residents during rescue events, as well as their frustrations with repeated cleanups, are not trivial.

A lack of resources to move anywhere before, or after, an event is also a reality for many. Given current programs and policies, it is unlikely that government funding or insurance coverage could pay for buyouts that would allow people or communities to relocate to safer locations. New types of collaboration involving the private sector will be needed, along with new goals for development. Rather than focusing on building spotty new developments that can further sprawl, the aim should be to design for flexibility that incorporates housing, businesses, shared green spaces, and facilities that foster independence. This strategy can attract people to safer and better new sites, and do so in advance of disaster.

The good news is that there have been tremendous advances both in understanding what is needed for preparedness and in the building sciences, so that it is increasingly possible to quickly develop flexible and attractive state-of-the-art housing. Human needs for connection, green space, and self-sufficiency need to be embraced. Rebuilding or relocating a town in a way that makes each resident “whole” is unlikely in the coming years; what is needed instead is technology, planning, and a cultural understanding that moving to smarter communities, and perhaps different types of shelters, might in some cases be the wisest use of resources.

To capitalize on the opportunities, we need private planners and developers, along with federal leadership, to promote innovation that will help create attractive mixed-use rural communities that can become the vibrant, sustainable choices of the future. We need to realize that doing so will result in better management of limited resources of time, money, and materials. Refurbishing power grids to have backup capability to support self-sufficiency can also mitigate wholesale disaster.

Rather than focusing on building spotty new developments that can further sprawl, the aim should be to design for flexibility that incorporates housing, businesses, shared green spaces, and facilities that foster independence.

And in a more profound shift, residents and policymakers in flood-prone areas will benefit from embracing the cultural reality that moving to a safer location is not a failure, or to be feared, but rather a smart strategy—environmentally, financially, and from a quality of life perspective—regardless of the disaster relocation funds available. Of course, government or private-sector aid can make moving an economically easier choice. Over time, smart planning and development investment in smarter places will become a natural transition, rather than scrambling under the pressure of disaster recovery.

As the managed retreat case studies that Pinter describes have been whispering, there are better ways to prepare, rebuild elsewhere, and embrace a new lifestyle without breaking the collective bank or tolerating years of trauma after a disaster. Getting on with life comes from rapid response, embracing transitions, and new ideas. Managed retreat can work, but it requires a cultural mindset that accepts that being “made whole” or remaining in place is not always a real option. Managed planning and innovation, through public and private collaboration, will save us.

Former Gays Mills Recovery Coordinator, 2009-2013

She is a business, housing, and community developer in Southwest Wisconsin

In his review of the rich 140-year history of relocation projects to respond to and protect from floods in the United States, Nicholas Pinter provides important insights that can be applied in implementing managed retreat in other countries as well. While managed retreat can eliminate disaster risks, there are many challenges to implementing projects.

As the author’s examination of Japanese cases of managed retreat shows, the country has promoted large-scale relocation programs following the Great East Japan Earthquake and Tsunami in 2011. The disaster killed over 20,000 people, completely destroyed some 130,000 buildings, and partially damaged 1 million more. Local governments in affected areas prohibited the construction of new houses and bought up land in tsunami-risk areas of the Tohoku Region. In all, Japan has conducted managed retreat for more than 100 years, beginning with the recovery from the 1896 tsunami in the Tohoku Region.

Japan and the United States share common lessons from managed retreat projects, but can learn from each other as well. Furthermore, they can share these lessons with the rest of the world.

While managed retreat can eliminate disaster risks, there are many challenges to implementing projects.

Just as the United States has reduced flood risks, Japan has been able to reconstruct local communities in safe areas by conducting managed retreat. Japan experiences the same complications in implementation as the United States: building consensus among the affected people, securing funding, and supporting vulnerable and low-income groups. In addition, the population in affected areas along the seacoast has declined and some local communities have collapsed. Some members of local communities cannot wait years for managed retreat to be completed, and instead move to major cities that provide better education and job opportunities. These are challenges in promoting managed retreat in any country.

Support for local governments is essential in promoting managed retreat. Generally speaking, local governments have limited capacity to implement the complicated processes involved. In both countries, specialists are engaged in planning and implementation.

Japan should learn from the approaches of the United States to sustain local businesses. Community members relocated to safer, higher ground currently face difficulties in accessing shopping centers and commercial facilities. Considering residential, industrial, and commercial areas together is essential in rehabilitating people’s lives at relocation sites.

The Japanese system of managed retreat includes not only buyouts of damaged sites but also the development of relocation sites so that communities can be maintained there. The country has constructed 393 relocation sites, which contain some 48,000 houses and 30,000 units of public apartments. Tsunami recovery programs include support measures for vulnerable groups. Older people and members of low-income groups, who cannot afford to construct new houses, can live in public apartments with subsidized rents. Local governments send support teams to ensure that older adults do not become isolated from their communities. A nongovernmental organization operates an “Ibasho” house that supports older people’s daily lives.

Countries that are vulnerable to natural hazards can apply managed retreat as an adaptation measure to increased disaster risks due to climate change. By exchanging knowledge, countries can strengthen policies and approaches to promote managed retreat to make societies more resilient to natural disasters.

Visiting Professor, Graduate School of Frontier Sciences

University of Tokyo, Japan

Time to Modernize Privacy Risk Assessment

In 2018, media reports revealed that a company called Cambridge Analytica had harvested data from millions of Facebook users, creating psychometric profiles and models that were then used for political manipulation. For Facebook, it was a high-profile privacy debacle. For the information technology and privacy communities, it was a particularly high-profile wake-up call.

This privacy failure was far too complex to be called simply a breach; it occurred across multiple layers and involved several companies. At the foundation of the scheme was data gleaned from Facebook’s “like” button, which Cambridge Analytica used to infer users’ personality traits. The company gained access to more people and more information through users’ profiles and Facebook friends (i.e., social networks). This unchecked flow of data was enabled by Facebook’s privacy policy, the way the platform interacted with third-party apps, and its desire to support social science research. Amazon’s Mechanical Turk, a platform for virtual paid piecework, also played a key role, as did various sources of public information, including US Census data. At first glance, the whole mess looks like a textbook example of an emergent property of a complex system: the interactions of multiple actors and systems producing completely unanticipated results.

It’s possible that Facebook didn’t see the potential for such a disaster brewing in advance because of outdated and inadequate methods for defining and evaluating privacy risks. Despite dizzying socio-technical changes over the past quarter of a century, organizations still rely heavily on assessments of privacy impacts with simplistic forms and functions that are poor matches for the layered complexity of today’s technologies. The United States is not alone in this; the data protection impact assessments required by the European Union’s General Data Protection Regulation, although an improvement in some respects, are similarly lacking.

Despite dizzying socio-technical changes over the past quarter of a century, organizations still rely heavily on assessments of privacy impacts with simplistic forms and functions that are poor matches for the layered complexity of today’s technologies.

As long as this dependence continues, we can expect new, more frequent, and ever-stranger privacy incidents. AI-based decisionmaking tools, for example, which often require large amounts of personal information for training their algorithms, can encode bias in their operation and injure or expose individuals to harm in everything from criminal justice proceedings to benefits eligibility determinations. The Internet of Things raises difficult issues of data aggregation—in which seemingly innocuous data points acquire much greater significance when combined—and of ubiquity, where multiple platforms can create mosaics of individuals’ activities. Biometrics, especially facial recognition, create additional potential for persistent surveillance as well as for problematic inference of individual attributes from physiological features. As these technologies develop, public- and private-sector organizations must update their approach to effectively manage privacy risk.

How we got here

In the early 1970s, the public was concerned about the potential implications of modern data processing systems, then standalone mainframes, for civil liberties. The US Department of Health, Education, and Welfare commissioned a 1973 report by the Secretary’s Advisory Committee on Automated Personal Data Systems. The committee articulated a set of guidelines called the “Code of Fair Information Practices.”

That code formed the basis of the federal Privacy Act of 1974 and prompted the development of numerous, slightly varying, and expanded sets of best practices for protecting privacy. These practices—which typically included consideration of data collection, retention, and use as well as training, security, transparency, consent, access, and redress—were eventually dubbed Fair Information Practice Principles (FIPPs). Around the world, they became the de facto approach to protecting informational privacy. Most privacy statutes and regulations today are built on some version of FIPPs.

Chief among the approaches enabled by FIPPs are Privacy Impact Assessments (PIAs), which bear a name and constitute an approach partly inspired by environmental impact statements and assessments. Echoing these roots, a PIA is both the process of assessing a system’s privacy risks and the name of the statement that results. In the evolution of impact assessments, PIAs act as tools for addressing one particular societal value. However, they have been constructed in a way that renders them less about privacy as a human value and more about procedural niceties.

Around the world, Fair Information Practice Principles (FIPPs) became the de facto approach to protecting informational privacy. Most privacy statutes and regulations today are built on some version of FIPPs.

PIAs (and FIPPs) became further embedded in US law and practice when they were required for federal information systems by the E-Government Act of 2002. Today’s PIAs largely retain their original form—a set of written questions and answers about each of the FIPPs—and the same function, firmly rooted in identifying potential violations. Because FIPPs provide the principal structure of PIAs, they have become so intertwined with these processes and artifacts that they have together taken on a perception of inseparability. And as the interactions between society and computational technology become more complex, that perceived indivisibility increasingly poses problems.

Problems with the status quo

By using FIPPs to define privacy practices without requiring more expansive analysis, PIAs today maintain a relatively narrow and inelastic view of privacy. This static conception offers a very circumscribed model for imagining and understanding the risks that technological systems could pose to privacy. An ideal risk model describes possible threats, identifies vulnerabilities that might be exploited by them, and lays out what would happen if each exploit were realized, including its likelihood and severity. However, because FIPPs are the risk model utilized by PIAs, today’s consideration of privacy risks is largely restricted to violations of FIPPs. Furthermore, the close integration of PIAs and FIPPs, together with FIPPs-based compliance obligations, effectively discourages the use of other privacy risk models and assessment methods.
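The components of the ideal risk model described above can be captured in a simple data structure. This is a minimal sketch: the field names, the example scenario, and the scoring convention are all illustrative assumptions, not part of any standard or of the author's argument.

```python
from dataclasses import dataclass

@dataclass
class PrivacyRiskScenario:
    """One entry in a privacy risk model: a threat exploiting a
    vulnerability, producing a consequence with an estimated
    likelihood (0-1) and severity (here, a 1-5 scale)."""
    threat: str          # who or what might act against privacy
    vulnerability: str   # the weakness the threat could exploit
    consequence: str     # what happens to individuals if realized
    likelihood: float    # estimated probability the exploit occurs
    severity: int        # estimated impact on affected individuals

    def risk_score(self) -> float:
        # A common convention: risk = likelihood x severity
        return self.likelihood * self.severity

# Hypothetical example entry, loosely echoing the Cambridge
# Analytica episode discussed later in this essay.
scenario = PrivacyRiskScenario(
    threat="third-party app harvesting profile data",
    vulnerability="unrestricted API access to friends' profiles",
    consequence="profiling and targeted political manipulation",
    likelihood=0.3,
    severity=4,
)
print(round(scenario.risk_score(), 2))  # 1.2
```

The point of the structure is what a FIPPs-only analysis omits: explicit fields for consequence, likelihood, and severity rather than a checklist of procedural compliance.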

PIAs also suffer from two problems that have been significantly exacerbated by the evolution of technologies. First, PIAs tend to emphasize description over analysis, which prejudices them toward addressing privacy in a checklist fashion. Second, even when PIAs do explicitly invite discussion of possible privacy risks and potential mitigation strategies, risks are typically construed narrowly. They tend to be first-order problems, issues that might arise as the immediate result of system operation. Potential knock-on effects are seldom considered, nor are potential problems involving indirect cause and effect. 

These problems are compounded by the largely procedural nature of FIPPs. Consequences for individuals’ privacy are often framed only as possible FIPP violations—as violations of privacy-related procedure—rather than as violations of privacy per se. Many real results from privacy violations, such as embarrassment, lost opportunities, discrimination, physical danger (e.g., stalking), and more, are overlooked.

Another limitation of FIPPs is that they ignore the social context of systems, preventing analysts from considering potential harms originating in the external environment. Finally, FIPPs are so dependent on a system’s purpose, without carefully evaluating whether that purpose is fundamentally objectionable, that an unethical purpose can sometimes serve as the basis for satisfied FIPPs. If a system had the purpose of maintaining individual political dossiers on members of the general public, for example, a purely FIPPs-based analysis would take this as its unquestioned starting point and assess each principle relative to that disturbing purpose.

Other risk models and methods

Over the past two decades, other, more capable privacy risk models and assessment methods have been developed that could address the inadequacies of FIPPs and PIAs. Law professor Ryan Calo’s model of dichotomous privacy harms, for example, categorizes all privacy injuries as either subjective or objective, with the former forcing explicit consideration of potential impacts on individuals’ mental states—something often ignored by FIPPs-based models. Another model, a taxonomy of privacy developed by law professor Daniel J. Solove, proposes 16 different kinds of privacy problems divided into four groups relating to information collection, information processing, information dissemination, and invasions. This granular categorization enables more precise identification of privacy harms, again forcing more nuanced consideration of potential adverse privacy consequences.

Many real results from privacy violations, such as embarrassment, lost opportunities, discrimination, physical danger (e.g., stalking), and more, are overlooked.

Other risk models address vulnerabilities or threats. The contextual integrity heuristic, developed by information science professor Helen Nissenbaum, aims to identify violations of informational norms, which can be construed as privacy vulnerabilities. The model is noteworthy for explicitly recognizing the existence of social standards of privacy in various spheres of life, something FIPPs avoid by design. In contrast, frameworks such as LINDDUN, a privacy threat model and methodology, focus on modeling threats at the level of system architecture, considering factors such as potential attempts to link together system elements pertinent to individuals (data, processes, flows, etc.). Although situated at notably different levels, all these models attempt to discover issues that might ultimately affect privacy as experienced by individuals, rather than primarily looking for procedural problems.

Just as there are models beyond FIPPs, there are also new privacy risk assessment methodologies that could replace or complement PIAs. For example, the National Institute of Standards and Technology has developed a Privacy Risk Assessment Methodology that addresses systemic privacy vulnerabilities, defined as “problematic data actions,” and consequences, defined as “problems for individuals.” This methodology features numeric scores that explicitly estimate the likelihood and severity of privacy consequences. There are also more advanced quantitative options using statistical analysis, such as privacy expert R. Jason Cronk’s adaptation of the Factor Analysis for Information Risk framework, as well as a wholly qualitative but rigorous methodology called System-Theoretic Process Analysis for Privacy (STPA-Priv), which uses an approach originally developed to address the safety properties of system control structures. These models and methods have distinct emphases and orientations and could be mixed and matched for best effect.
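The numeric scoring that the NIST methodology features can be illustrated with a short prioritization sketch. The problematic data action names below are drawn from NIST's published catalog, but the scales and scores are invented for illustration; the methodology's actual worksheets are more detailed.

```python
# Hypothetical problematic data actions with (likelihood, impact)
# scores on invented 0-10 scales.
data_actions = {
    "unanticipated revelation": (7, 8),
    "surveillance": (4, 9),
    "insecurity": (6, 5),
}

# Risk = likelihood x impact; rank the actions so that mitigation
# effort can be focused on the highest-risk ones first.
ranked = sorted(
    ((name, l * i) for name, (l, i) in data_actions.items()),
    key=lambda pair: pair[1],
    reverse=True,
)
for name, score in ranked:
    print(f"{name}: {score}")
# unanticipated revelation: 56
# surveillance: 36
# insecurity: 30
```

However simple, this kind of explicit ranking forces a judgment about likelihood and severity that a yes/no FIPPs checklist never demands.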

Cambridge Analytica revisited

Could using other risk models and assessment methods have helped Facebook avert the Cambridge Analytica scandal? It’s not clear what, if any, privacy risk analysis was performed by the company, but it’s likely that more innovative approaches could have anticipated some of the factors that eventually led to the attempted political manipulation of Facebook users.

One fundamental reason Facebook users were vulnerable is that they were part of a social network. The privacy of any specific Facebook user depends in part on others in their network. This is, of course, part and parcel of being on Facebook, but that connectedness tends to be viewed exclusively as a feature—not a vulnerability. Cambridge Analytica exploited this weakness, termed “passthrough” by information science professors Solon Barocas and Karen Levy, in which the connections of one user enable access to the information of other users.
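The passthrough dynamic can be sketched as a graph traversal: one user's consent to an app exposes data belonging to friends who never consented. The toy graph and function below are invented for illustration and do not reflect any platform's actual API.

```python
# A toy social graph: user -> set of friends.
friends = {
    "alice": {"bob", "carol"},
    "bob": {"alice", "dave"},
    "carol": {"alice"},
    "dave": {"bob"},
}

def exposed_by_consent(consenting_users):
    """Return everyone whose data an app can reach when only
    `consenting_users` installed it: the consenters themselves
    plus, via passthrough, all of their friends."""
    exposed = set(consenting_users)
    for user in consenting_users:
        exposed |= friends.get(user, set())
    return exposed

# Only alice installs the app, yet three users' data are exposed.
print(sorted(exposed_by_consent({"alice"})))  # ['alice', 'bob', 'carol']
```

A risk model that treats connectedness only as a feature never asks this question; one that treats it as a vulnerability does.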

More recent risk models could have illuminated the threat of manipulation amid the tempestuous political climate of the time. Solove’s taxonomy, which considers decisional interference a significant privacy problem, might have suggested the potential consequence of inappropriately influencing voters. And if Facebook had performed an analysis using STPA-Priv, looking at the combination of technology and social context through the lens of hierarchical control and feedback loops, it might even have found the specific control failure scenarios that actually led to abuse. 

Using one or several of the models available, Facebook almost certainly could have identified and addressed at least some of the relevant control weaknesses, which might have prevented the debacle. These weaknesses included inadequate monitoring of researcher data use, insufficient restrictions on app access to user and friend profiles, and targeting of users across platforms. That apparently none of these weaknesses provoked concern at the time highlights the importance of adopting more capable approaches to privacy risk.

It’s likely that more innovative approaches could have anticipated some of the factors that eventually led to the attempted political manipulation of Facebook users.

The complex role of technology in society demands that public and private entities expand their privacy risk assessment toolbox beyond FIPPs and PIAs. In the United States, the Federal Trade Commission should issue guidance for the private sector encouraging the adoption of a broader range of privacy risk models and assessment processes. The National Institute of Standards and Technology, through its privacy engineering program, should develop guidance and tools to assist organizations in comparing and selecting appropriate privacy risk models and assessment methods.

The White House Office of Management and Budget should update and supplement its existing PIA guidance for federal agencies, directing them to actively consider and deploy privacy risk models and assessment methods in addition to FIPPs and PIAs. Finally, the National Science Foundation should encourage and support research explicitly focused on enhancing privacy risk models and assessment methods, consistent with the 2016 National Privacy Research Strategy.

FIPPs and PIAs were innovative in their early days, but the world has changed dramatically. Modern technologies and systems require complementary and flexible approaches to privacy risk that are more likely to discover serious and unexpected issues. FIPPs and PIAs by themselves are no longer enough. Moving forward, organizations need to employ privacy risk assessments that ultimately serve the public interest.

Artificial Intelligence and Galileo’s Telescope

Henry A. Kissinger, Eric Schmidt, and Daniel Huttenlocher, THE AGE OF AI (2021)

In 2018 Henry Kissinger published a remarkable essay in The Atlantic on artificial intelligence. At a time when most foreign policy experts interested in AI were laser-focused on the rise of China, Kissinger pointed to a different challenge. In “How the Enlightenment Ends,” Kissinger warned that the Age of Reason may come crashing down as machines displace people with decisions we cannot comprehend and outcomes we cannot control. “We must expect AI to make mistakes faster—and of greater magnitude—than humans do,” he wrote.

This sentiment is nowhere to be found in The Age of AI: And Our Human Future, coauthored by Kissinger, Eric Schmidt, and Daniel Huttenlocher. If Kissinger’s entry into the AI world appeared surprising, Schmidt and Huttenlocher’s should not be. Schmidt, the former head of Google, has just wrapped up a two-year stint as chair of the National Security Commission on Artificial Intelligence. Huttenlocher is the inaugural dean of the College of Computing at the Massachusetts Institute of Technology.

The stories they tell in The Age of AI are familiar. AlphaZero defeated the reigning chess program in 2017 by teaching itself the game rather than incorporating the knowledge of grandmasters. Understanding the 3D structure of proteins, an enormously complex problem, was tackled by AI-driven protein folding, which uncovered new molecular qualities that humans had not previously recognized. GPT-3, a natural language processor, produces text that is surprisingly humanlike. We are somewhere beyond the Turing test, the challenge to mimic human behavior, and into a realm where machines produce results we do not fully understand and cannot replicate or prove. But the results are impressive.

Once past the recent successes of AI, a deep current of technological determinism underlies the authors’ views of the AI future and our place in that world. They state that the advance of AI is inevitable and warn that those who might oppose its development “merely cede the future to the element of humanity courageous enough to face the implications of its own inventiveness.” Given the choice, most readers will opt for Team Courage. And if there are any doubters, the authors warn there could be consequences. If the AI is better than a human at a given task, “failing to apply that AI … may appear increasingly as perverse or even negligent.” Early in the book, the authors suggest that military commanders might defer to the AI to sacrifice some number of citizens if a larger number can be saved, although later on they propose a more reasoned approach to strategic defense. Elsewhere, readers are instructed that “as AI can predict what is relevant to our lives,” the role of human reason will change—a dangerous invitation to disarm the human intellect.

We are somewhere beyond the Turing test, the challenge to mimic human behavior, and into a realm where machines produce results we do not fully understand and cannot replicate or prove.

The authors’ technological determinism, and their unquestioned assertion of inevitability, operates on several levels. The AI that will dominate our world, they assert, is of a particular form. “Since machine learning will drive AI for the foreseeable future, humans will remain unaware of what it is learning and how it knows what it has learned.” In an earlier AI world, systems could be tested and tweaked based on outcomes and human insight. If a chess program sacrificed pieces too freely, a few coefficients were adjusted, and the results could then be assessed. That process, by the way, is the essence of the scientific method: a constant testing of hypotheses based on the careful examination of data.

As the current AI world faces increasingly opaque systems, a debate rages over transparency and accountability—how to validate AI outputs when they cannot be replicated. The authors sidestep this important debate and propose licensing to validate proficiency, but a smart AI can evade compliance. Consider the well-known instances of systems designed to skirt regulation: Volkswagen hacked emissions testing by ensuring compliance while in testing mode but otherwise ignoring regulatory obligations, and Uber pulled a similar tactic with its Greyball tool, which used data collected from its app to circumvent authorities. Imagine the ability of a sophisticated AI system with access to extensive training data on enforcement actions concerning health, consumer safety, or environmental protection.
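The defeat-device pattern that the authors' licensing proposal would have to contend with fits in a few lines: a system that behaves compliantly only when it detects it is being tested. Everything below is a deliberately simplified, invented example, not a description of any real system.

```python
def is_under_test(environment):
    # Toy heuristic: the system recognizes a telltale sign of an
    # audit harness (invented for illustration).
    return environment.get("mode") == "audit"

def decide(application, environment):
    """Toy decision system that behaves leniently only when audited."""
    if is_under_test(environment):
        return "approve"  # compliant behavior during testing
    # Outside testing, apply a much harsher threshold.
    return "approve" if application["score"] > 0.9 else "deny"

print(decide({"score": 0.5}, {"mode": "audit"}))  # approve
print(decide({"score": 0.5}, {"mode": "live"}))   # deny
```

Because the licensed behavior and the deployed behavior diverge only outside the test environment, proficiency testing alone cannot certify such a system; that is the core of the transparency-and-accountability debate the authors sidestep.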

Determinism is also a handy technique to assume an outcome that could otherwise be contested. The authors write that with “the rise of AI, the definition of the human role, human aspiration, and human fulfillment will change.” In The Age of AI, the authors argue that people should simply accept, without explanation, an AI’s determination of the denial of credit, the loss of a job interview, or the determination that research is not worth pursuing. Parents who “want to push their children to succeed” are admonished not to limit access to AI. Elsewhere, those who reject AI are likened to the Amish and the Mennonites. But even they will be caught in The Matrix as AI’s reach, according to the authors, “may prove all but inescapable.” You will be assimilated.

The pro-AI bias is also reflected in the authors’ tour de table of Western philosophy. Making much of the German Enlightenment thinker Immanuel Kant’s description of the imprecision of human knowledge (from the Critique of Pure Reason), the authors suggest that the philosopher’s insight can prepare us for an era when AI has knowledge of a reality beyond our perception.

Kant certainly recognized the limitations of human knowledge, but in his “What is Enlightenment?” essay he also argued for the centrality of human reason. “Dare to know! (Sapere aude.) ‘Have the courage to use your own understanding’ is therefore the motto of the enlightenment,” he explained. Kant was particularly concerned about deferring to “guardians who imposed their judgment on others.” Reason, in all matters, is the basis of human freedom. It is difficult to imagine, as the authors of The Age of AI contend, that one of the most influential figures from the Age of Enlightenment would welcome a world dominated by opaque and unaccountable machines.

The authors argue that people should simply accept, without explanation, an AI’s determination of the denial of credit, the loss of a job interview, or the determination that research is not worth pursuing.

On this philosophical journey, we also confront a central teleological question: Should we adapt to AI or should AI adapt to us? On this point, the authors appear to side with the machines: “it is incumbent on societies across the globe to understand these changes so they reconcile them with their values, structures, and social contracts.” In fact, many governments have chosen a very different course, seeking to ensure that AI is aligned with human values, described in many national strategic plans as “trustworthy” and “human-centric” AI. As more countries around the world have engaged on this question, the expectation that AI aligns with human values has only increased.

A related question is whether the Age of AI, as presented by the authors, is a step forward beyond the Age of Reason or a step backward to an Age of Faith. Increasingly, we are asked by the AI priesthood to accept without questioning the Delphic predictions that their devices produce. Those who challenge these outcomes, a form of skepticism traditionally associated with innovation and progress, could now be considered heretics. This alignment of technology with the power of a reigning elite stands in sharp contrast to previous innovations, such as Galileo’s telescope, that challenged an existing order and carried forward human knowledge.

There is also an apologia that runs through much of the book, a purposeful decision to elide the hard problems that AI poses. Among the most widely discussed AI problems today is the replication of bias, the encoding of past discrimination in hiring, housing, medical care, and criminal sentencing. To the credit of many AI ethicists and the White House Office of Science and Technology Policy, considerable work is now underway to understand and correct this problem. Maybe the solution requires better data sets. Maybe it requires a closer examination of decision-making and the decisionmakers. Maybe it requires limiting the use of AI. Maybe it cannot be solved until larger social problems are addressed.

But for the authors, this central problem is not such a big deal. “Of course,” they write, “the problem of bias in technology is not limited to AI,” before going on to explain that the pulse oximeter, a (non-AI) medical device that estimates blood oxygen levels, has been found to overestimate oxygen saturation in dark-skinned individuals. If that example is too narrow, the authors encourage us to recognize that “bias besets all aspects of society.”

To the credit of many AI ethicists and the White House Office of Science and Technology Policy, considerable work is now underway to understand and correct this problem.

The authors also ignore a growing problem with internet search when they write that search is optimized to benefit the interests of the end-user. That description doesn’t fit the current business model that prioritizes advertising revenue, a company’s related products and services, and keeping the user on the website (or affiliated websites) for as long as possible. Traditional methods for organizing access to information, such as the Library of Congress Classification system, are transparent. The organizing system is known to the person providing information and the person seeking information. Knowledge is symmetric. AI-enabled search does not replicate that experience.

The book is not without warnings. On the issue of democratic deliberation, the authors warn that artificial intelligence will amplify disinformation and wisely admonish that AI speech should not be protected as part of democratic discourse. On this point, though, a more useful legal rule would impose transparency obligations to enable independent assessment, allowing us to distinguish bots from human speakers.

Toward the end of their journey through the Age of AI, the authors allow that some restrictions on AI may be necessary. They acknowledge the effort of the European Union to develop comprehensive legislation for AI, although Schmidt had previously criticized the EU initiative, most notably for the effort to make AI transparent.

Much has happened in the AI policy world in the three years since Kissinger warned that human society is unprepared for the rise of artificial intelligence. International organizations have moved to establish new legal norms for the governance of AI. The Organisation for Economic Co-operation and Development, made up of leading democratic nations, set out the OECD Principles on Artificial Intelligence in 2019. The G20 countries, which include Russia and China, backed similar guidelines in 2019. Earlier in 2021, the top human rights official at the United Nations, Michelle Bachelet, called for a prohibition on AI techniques that fail to comply with international human rights law. In November 2021, UNESCO endorsed a comprehensive Recommendation on the Ethics of Artificial Intelligence that may actually limit the ability of China to go forward with its AI-enabled social credit system for evaluating—and disciplining—citizens based on their behavior and trustworthiness.

The more governments have studied the benefits as well as the risks of AI, the more they have supported these policy initiatives. That shouldn’t be surprising. One can be impressed by a world-class chess program and acknowledge advances in medical science, and still see that autonomous vehicles, opaque evaluations of employees and students, and the enormous energy requirements of datasets with trillions of elements will pose new challenges for society.

The United States has stood mostly on the sidelines as other nations define rules for the Age of AI. But “democratic values” has appeared repeatedly in the US formulation of AI policy as the Biden administration attempts to connect with European allies, and sharpen the contrast between AI policies that promote pluralism and open societies and those which concentrate the power of authoritarian governments. That is an important contribution for a leading democratic nation.

In his 2018 “How the Enlightenment Ends” essay, Kissinger seemed well aware of the threat AI posed to democratic institutions. Information overwhelms wisdom. Political leaders are deprived of opportunity to think or reflect on context. AI itself is unstable, he wrote, as “uncertainty and ambiguity are inherent in its results.” He outlined three areas of particular concern: AI may achieve unintended results; AI may alter human reasoning (“Do we want children to learn values through untethered algorithms?”); and AI may achieve results that cannot be explained (“Will AI’s decision making surpass the explanatory powers of human language and reason?”). Throughout human history, civilizations have created ways to explain the world around them, if not through reason, then through religion, ideology, or history. How do we exist in a world we are told we can never comprehend?            

Kissinger observed in 2018 that other countries have made it a priority to assess the human implications of AI and urged the establishment of a national commission in the United States to investigate these topics. His essay ended with another warning: “If we do not start this effort soon, before long we shall discover we started too late.” That work is still to be done.

Managing Retreat Equitably

In “A Concerted and Equitable Approach to Managed Retreat” (Issues, Summer 2021), Kavitha Chintam, Christopher Jackson, Fiona Dunn, Caitlyn Hall, Sindhu Nathan, and Bernat Navaro-Serer call for expanded efforts by the US Federal Emergency Management Agency (FEMA) to support managed retreat—a strategy to reduce risk by relocating homes and other infrastructure away from hazard-prone areas—in an equitable manner. They describe how inequalities in community resources to apply for and administer federal funds may exacerbate historical social inequalities, and they call for greater support for persons displaced by climate change and natural hazards. Some of the changes they propose are already in place. For example, relocation assistance for renters is already required by the Uniform Relocation Act; FEMA incentivizes properties that experience “substantial damage” to relocate through requirements to rebuild at higher elevations; and FEMA’s Building Resilient Infrastructure and Communities program is an explicit attempt to provide more risk mitigation funding. But the authors’ overarching point that existing measures are often insufficient remains important.

Developing strategic support for managed retreat will require coordinated actions by numerous federal agencies and state and local governments. The Department of Housing and Urban Development, for example, oversees more postdisaster funding than any other agency, including FEMA, and has funded numerous relocations, in whole or part, through its Community Development Block Grant programs. In fact, almost every federal agency receives funding following a major disaster, and where and how they spend those funds shapes the willingness and ability of communities to relocate or to receive displaced persons. Relocation is influenced, for example, by where schools are rebuilt, using funds from the Department of Education; what roads are elevated, using funds from the Department of Transportation; what small businesses recover, using loans from the Small Business Administration; and where floodwalls are built by the US Army Corps of Engineers.

State and local governments determine where new buildings are constructed and to what standards. They, and not the federal government, have authority over building codes and land-use laws. According to a recent study by the Government Accountability Office, over 80% of properties that have received FEMA funding to address repeat flood risk received that funding as a buyout. Managed retreat, through buyouts, is therefore FEMA’s primary means of addressing repetitive flood loss. Nevertheless, the number of homes at risk of repeat flooding has increased over the past two decades. This is, in part, because state and local governments have not exercised their authority to redirect new construction away from the most flood-prone areas. Federal reform, to encourage risk reduction, will need to create greater incentives for local governments to act and will need to build local capacity to meet their greater responsibilities.

I agree with Chintam et al. that managed retreat requires a more holistic approach than has been the case to date. However, I am cautious about expanding the role of FEMA to address land use, housing, development, and employment. FEMA was originally established to provide federal assistance in response to disasters that overwhelm local and state resources. Over time, FEMA has been required to take a larger role in reducing risk, and climate change undoubtedly requires a different approach to disaster management and risk reduction.

However, it is probable that other agencies, such as the Department of Housing and Urban Development, have more experience in directing development and establishing incentives or training programs to entice or enable displaced persons to resettle in less risk-prone areas, and it is possible that state governments could or should play a larger role in guiding development (housing, business, and infrastructure) toward safer areas within their borders. FEMA could, and probably should, provide additional funding and incentives for local governments to engage in relocation. But as Chintam et al. note, local involvement is likely to remain crucial in tailoring future relocation programs to local contexts, to avoid losing histories or erasing identities.

Assistant Professor of Geography and Public Policy

University of Delaware

Episode 3: Eternal Memory of the Facebook Mind

Social media and streaming platforms such as Facebook and Spotify analyze huge quantities of data from users before feeding selections back as personal “memories.” How do the algorithms select which content to turn into memories? And how does this feature affect the way we remember—and even what we think memory is? We spoke to David Beer, professor of sociology at the University of York, about how algorithms and classifications play an increasingly important role in producing and shaping what we remember about the past.

Transcript

Jason Lloyd: Welcome to The Ongoing Transformation, a podcast from Issues in Science and Technology. Issues is a quarterly journal published by the National Academies of Sciences, Engineering, and Medicine, and Arizona State University. I’m Jason Lloyd, the managing editor of Issues. On this episode, I’m talking to David Beer about social media and how these platforms, algorithms, and classifications play a role in shaping our memories and reality. David is a professor of sociology at the University of York. His new book, Social Media and the Automatic Production of Memory, which he co-authored with Ben Jacobsen, explores these themes.

Dave, thanks for joining us today. I just finished your book and it got my brain spinning in really terrific ways. You also recently reviewed David Arditi’s book Streaming Culture for Issues, which dealt with similar topics, and may be a good place to start. Could you tell me a little bit about “Chamber Psych”?

David Beer: Thanks. The “Chamber Psych” thing I started the review with, that was from Spotify, and it was the End of Year Review thing that they produce. It’s like an automated narrative of your music tastes that Spotify creates on your behalf and provides you with. It tells a bit of a story. And one of the aspects of the story is about genre.

So I’ve had this interest in genre as part of a broader interest in archiving—how media, new types of media platforms, [operate] through an archival lens. And then you start to think about how they are organized, what classifications are going on within these spaces, and that sort of thing. The grids that we put culture into are really interesting and very vibrant, I think, as a result of the new types of media structures.

So my top five genres last year—I can’t remember now which number it was at, but in the top five was “Chamber Psych.” I didn’t know what that referred to, or which artist, which songs that referred to. It was a genre label I was unfamiliar with.

I found myself, as I mentioned in the review, searching for the genre that described my own taste so that I could understand how my tastes were being classified, really, or categorized. I thought that was interesting. I think the thing that drove it—I don’t mention this in the review—I think the thing that drove it was listening to the band Super Furry Animals and then a couple of other things that I’d listened to must have been categorized as that as well, I think.

Lloyd: And that’s how the platform slots those artists, or does it categorize them by album, or song?

Beer: Yeah, I think it’s by song. But I suspect it filters through from the artist. So the songs, I imagine, can be tagged with more than one classificatory label, but that seems to be the one. But I’ve been listening to Super [Furry] Animals for about 25 years and I’ve never heard of Chamber Psych.

It’s interesting how these labels then start to take on a presence within these platforms, even if they’re not perhaps labels that we might use ourselves. The other genres are more familiar, but it just struck me that it’s indicative of the vibrancy of the kind of classificatory systems that are going on in media structures.

Lloyd: I was really struck by the fact, I think you mentioned in the review, that Spotify has more than 2,000 different types of classifications for the various types of music that they host—Chamber Psych, obviously, being one of them. Could you talk a little bit more about the role of classification—obviously, on a streaming platform like Spotify, but also in other social media platforms such as Facebook?

Beer: Well, I think consumer culture is full of this kind of classificatory system. The comparable thing on Spotify you get on Netflix, you get on Amazon: these labels, very specific, granular types of labels that are used to organize content. I think this is driven, in part, by people’s involvement in the platforms, but it’s also to do with the amount of content that there is out there, and that needs to be organized.

We’re familiar with ideas about algorithms presenting content back to us, but I think alongside that, we sometimes focus less upon the classificatory systems, the kind of archival systems that are active. So once you get all this massive, massive content—like you get all these films, all these TV shows, all these podcasts, all these songs available, and you move to an access-type cultural consumption—you need ways of organizing that, so that it’s manageable. It needs to be rendered fathomable to a consumer, and one of the ways this works is through classificatory systems taking on a greater level of significance within the media structures, so that people can then find their way around and find culture they might be interested in.

So you get the automated thing presenting you with suggestions, but there’s also the classificatory structures that allow us to organize this content in different ways. You get that in the consumer-culture platforms like Spotify, Netflix, Amazon, and so on, but you also then get it within social media with people using hashtags and stuff to try to organize content within those spaces. There’s classificatory systems that work there as well, and you had it with tagging in the past—tagging photos and so on.

So these are classificatory systems. Some of them are user-generated classifications, others feed off that but are led by the platforms, and there’s this interesting mix there of everyday classifications that people use, combined with the classificatory schemas that are applied, or imposed, onto culture by the platforms themselves, or actors within them. It’s quite an interesting mixture, I think, of agendas going on within classification of culture and content. But behind it all, there’s so much there. You need ways, then, of managing it, and these are vast archives, as I see them, of content that classificatory systems allow us to access and to retrieve the things that we might be interested in.

Lloyd: I was really struck. I had not given any thought to how a platform like Facebook takes a post and assesses how it’s going to classify it, then how it ranks it, and then how it determines whether or not it’s going to feed it back to you as a memory. I assume it has some sort of AI that looks at the image itself or classifies it in some way, maybe fairly generally as “two people on a beach” or something like that. It also looks at how the post is tagged and at the comments on it; I guess they do some linguistic analysis on whether those comments tend to be good or bad, and so the platform purports to have a sense of whether this post or photograph will be positive or negative for the user. I had not thought about that.

Beer: And that’s where the interesting thing is: these things are archival, and that means thinking about classifications. To understand the kind of politics of those archival structures is to see that what they allow to be said, what comes back to us, what we see, and what we encounter are all a product of those structures and that kind of politics.

Memory, then, is a part of that. So that was the collaborative work I’d done with Ben Jacobsen, who has written widely about algorithmic memory as well. And, yes, we were trying to understand, in particular, the classification and ranking processes, and then how people responded to them. That’s the three movements in the book.

One of the starting points, really, was understanding that Facebook had a taxonomy of memories—types—and the content was then slotted into, literally, a grid. And, in the way that you’ve described, it’s assessed, and the images, the comments, and so on are used to place memories within a grid. … As we live in social media, they become memory devices. And that past content then becomes slotted into these pigeonholes, these categories of types. And in that moment, you’re deciding, “Well, what, of that content we create, constitutes a memory?” And also then “What type of memory is it?” So at that moment, memory becomes part of the logic of social media. Because it’s the types of things that social media want us to engage with as memories—[those] are going to be the types of things driven by the logic of stickiness, and sharing, and commenting, and engagement.

We thought that was quite interesting to see the grid. We use Facebook in this book, but what we were pointing towards is a broader set of trends in social media through memories: a memory culture. And we’ve also got mobile devices doing something similar, but we used Facebook as a way into that because we’d got this taxonomy. So we did that, yes. And then, once classified, they’re ranked for their worth or value, which is partly to do with the prediction of what the person will want to remember and when, and then we looked at how people respond to those recirculated and sorted versions of their own past.

Lloyd: You touched on this just now, but when I think about the goals of the platform, what this kind of classification scheme, the targeting and regurgitating of your memories, what the objective is for that, I’m thinking of other related concepts like James Scott’s ideas about legibility in the state: in order to get visibility and control over the population, they need a census, a sense of who lives there and where they are and what they do. In this case, it wouldn’t be for something like taxation or conscription—why is this useful for Facebook? What do they do with these systems?

Beer: Well, because people have been living on social media platforms and using social media platforms for a significant amount of time, we’ve built up a series of biographical traces about our lives within those platforms. Now, instantly then, if you run a platform and you want to have maximized personalization, then having people’s biographical traces is a significant, valuable resource for knowing them. It’s a way of knowing people through those pasts, and that becomes a resource for maintaining engagement in the present.

Obviously, things like nostalgia and memories are powerful for people and their understanding of themselves, but also their understandings of their friendships and relationships, their connections with other people, and a collective understanding of what’s going on. So this can generate significant activity, because what you are looking to do is keep people on that platform, as long as possible, each day.

That’s the objective, because, really, some of these social media platforms, they can’t grow much bigger in terms of the number of users. What they’re looking to grow is the level of engagement that people have with the platform, particularly as they’re competing more with each other, I suppose, a little bit on this now as well with generational shifts and so on.

It doesn’t have to be a long time in the past; it can just be a previous year or whatever. You don’t have to be recalling long periods of time for it to work, but it gets people engaging with their own past and with shared moments that then recirculate and trigger activity in the present.

Therefore it fills the gaps. It fills the voids within social media for us to say something, or respond, or act within the platform—to be active. So it gives us an option, a little bit like memes, I think; they become these anchor points for activity that allow people to fill the spaces of social media and satisfy the obligation for activity, I suppose.

Lloyd: Yeah, and it seems really effective. Part of the book is about, as you mentioned, the user response to the memory feature. Could you talk a little bit about what you found? You did focus groups and some structured interviews, right?

Beer: So this came out of Ben Jacobsen’s project. He’d performed his interviews, and has been writing about those, and this became a kind of side project to that, really, around classification and ranking. It was a kind of unexpected insight that came off the back of that. We started to think about ranking and classification and then how people were responding to their past content being classified and ranked. And we found some of what Imogen Tyler terms “classificatory struggles” within the space. So it’s not like these things fit seamlessly in. They do generate activity and they do generate content and stickiness, but they also create other outcomes. We detail this a bit in the book, but these are things to do with misunderstandings of memory—presenting back things that weren’t that significant to the individual.

That was one of the examples, one of the things we looked at. We also look at the way that, sometimes, it can feel invasive. It is part of the surveillance, I suppose it’s that creepy over-surveillance you sometimes get from these platforms that unsettles you.

And in other instances, it was almost like this reaction against the polished, sleek version of their past that didn’t feel quite right. It didn’t sit well with people to have their past packaged in quite such a neat way. That was one of the other things that we found. So people were engaging and showed that they were entertained and amused by these memories, or they saw it as useful in some instances, but there were also these struggles, uncertainties, unsettling properties to it as well, sometimes, that people found.

Lloyd: It strikes me as paradoxical that people would find these too polished, because the conventional wisdom about what you put on social media is that it’s your vacation photos and the studio pics of your baby and things like that. And so the idea that in a year’s time it would come back to you as a memory and you’d find it too sleek.

Beer: People use those types of terms in their response to it, but maybe part of this is that an automated story of your life can create a kind of unsettling presence or it can clash with your own version of your own past, and therefore feels wrong. Or it might just be that people have an uneasy sense of automation within the space.

We use Walter Benjamin in the book; there’s an illustrative fragment from Walter Benjamin about how memories gain authenticity through the digging, through actually digging them up. Unearthing the memory actively is how it gains legitimacy. And here, maybe it’s the fact that because people aren’t digging for themselves when memories are presented back, there’s a sense that they lack authenticity, or lack a kind of legitimacy.

We problematize notions of authenticity in the book, but you can see how it might be communicated as a sense of not liking the kind of polished nature of what’s presented to them. That’s part of their response, perhaps, to automation.

Lloyd: That’s really interesting. So, you point to this in the book a bit as a potential path for future research, but I was wondering if you would speculate on the effects of this automated production of memory, of what these algorithms are doing, both on the individual and maybe on social relations more broadly. What do you think the effect will be?

Beer: It will change what we remember and how and when, because the things we’re encountering from our own pasts are coming up through the devices. So what we remember and how and when we remember it is going to be filtered through this archival structure. I think that’s already happening, that’s in place.

I think that there’s the potential there for a reworking of what the notion of a memory is. What we understand to be a memory could change as a result of this, that it is something that’s in the platform, as well, and that’s automated, and that is provided to us. And the selection of what a memory is by this system could then lead us to see memory through that lens, potentially. So there’s a possibility for that there. And then, I think, the third thing is that this will have consequences for individual notions of self, potentially, and identity, but also collective remembering.

I’m not sure we understand fully what the implications are for collective memory, and therefore for the solidarity, social connections, and social divisions that could come from a transformation in collective memory, when memory is something that’s personalized by algorithms and fragmented at the level of the individual, potentially.

So I think there’s three things there: when we remember, what we remember, what we understand the memory to be, but also how individual and collective memory might operate in the future, particularly as these things become more and more embedded, more active, and, potentially, more predictive.

Lloyd: This type of feature seems to be everywhere. I assume it’s in part because they’ve just found it so effective, a really effective way of increasing engagement. But it’s on your phone, and Spotify now sort of famously has this year-end feature where they feed you back a memory of your year in music. So it doesn’t seem like it’s going away anytime soon.

Beer: I don’t think so. The first thing I did on the memory thing was maybe about four or five years ago, and you can just see it escalating. Most of these platforms and devices have got their version of presenting your past to you, or the automatic production of our past.

That seems to me to be spreading out into these platforms and devices, and they’ve got more and more biographical traces to draw on, but also now the accumulation of data about people’s engagement with those recirculated memories, which then feeds into the system itself. So they can then use that to try to be predictive about which types of memories work and what to rank as being the memory to send back to you.

The consequences of that are difficult to predict, really, because it might be that that narrows down memory, or it might be that they find they want to try to create ways of being unpredictable, because that’s what people are. So you don’t know, but it’s going to get coded into the algorithms.

Lloyd: So you and your co-author on this book, Ben Jacobsen, did a really deep dive into this [memory] feature that is a fairly significant part of, but in some ways tangential to, the overall structure of the platform and what they try to do, which is increase engagement. So I’m wondering, what do you think about what social media is doing overall, after having looked at this particular feature that seems to have all these complexities and tensions in it, potentially a manipulative approach, although not necessarily—but it seems like this particular feature is such a rich source of research and tension. How does it make you think about the larger platform or social media itself?

Beer: Yeah, you’re right. This project is part of trying to build up a bigger picture around the way that these systems work, and what their objectives are, and what the politics of platforms is, and data, and algorithms, and that kind of thing. And I think you can understand this in terms of the broader transformation that we’ve seen through social media as it builds up.

I did another book called The Data Gaze, which is about how this gaze is exercised on us, how we’re watched through platforms and by data. So I think you can see the memory thing in terms of the broader political economy of platforms, and particularly social media, which is about the data.

The data of the archived users is really where the value is in social media. That’s where the predictions are, because the idea is you can use the data to be more predictive about individuals, and, therefore, target content towards them in ways where value can be extracted. So I think you can see the memory thing in that broad term.

So what you want to do is keep people engaging with the platforms as much as possible, because that generates the maximum amount of data about those individuals, which then lends it value. Now, I’m not saying they can use the data to achieve the things they say they can achieve, but it’s the notion that that data is of value. The ideas around value attached to data are the really important things in terms of understanding the activities of a number of these platforms, I think. So you can see the memories thing, I think, through the broader ideas around data capitalism, probably.

Lloyd: And engagement.

Beer: Engagement really equals data production and stickiness. These are things that create, that increase the amount of data gathered, and therefore maximize the opportunities for value to be generated—or for notions of value to be generated, at least.

Lloyd: What’s been your experience with this feature on social media?

Beer: Apart from Spotify, which I think—I did the calculation—I think I spent 3.64% of 2020 on Spotify (you can work it out from the hours it gives you), I don’t actually have any social media profiles. And a student asked me about this recently, actually, and I said, “Well, the reason I don’t have any social media platforms is because I do research them. I’m some sort of social media enthusiast.”

So I did my first project on social media in about 2006, 2007, I was working on a program called the e-Society program funded by the ESRC with a colleague called Roger Burrows. It was called Web 2.0 then, and I created a Facebook profile, and I found it very unsettling. So I deleted that after we had done a little bit of research on it.

And then I did have a Twitter account for about six, seven years, I deleted that. And the only thing I’ve really stuck with is blogging, really. Yeah. Blogs and working on blogs, used Medium for a bit, been using Substack, just experimenting with those types of mediums as a way of writing and communicating and being part of an online community. But it’s not quite the same thing, is it?

So personally, I never get presented with any memories about my past. There’s a problem there: “How do you understand social media from the outside?” is something that I’m always working with, because I teach this as well. I actually find it quite useful, because you can look across platforms, internationally, to try to understand it, rather than being led by a targeted, personalized experience of the social media space, I think. That’s the way I justify it to myself, anyway. I have a kind of discomfort with social media, but I can see the value in social media and understand people’s engagement with it, I absolutely do.

I see my job as trying to think skeptically about what’s going on and to think in sociological terms about broader transformations.

Lloyd: Thinking skeptically about the world and ongoing transformations is also our goal here at Issues. I’m grateful you joined us for this episode, Dave.

If you’d like to read more about the new process of digital memory making, check out his book, called Social Media and the Automatic Production of Memory, and visit us at issues.org for more conversations and articles. And of course, you can read Dave’s review there. I’m Jason Lloyd, managing editor at Issues. Thank you for joining us for this episode of The Ongoing Transformation.

Ethics in Animal Research

It was reassuring to read Jane Johnson’s frank assessment of the limitations of animal research, presented in “Lost in Translation: Why Animal Research Fails to Deliver on Its Promise” (Issues, Summer 2021). As a practicing physician who has worked in public health, clinical research, and research ethics, I appreciate Johnson’s appraisal of the practical and fundamental problems with animal research.

As Johnson notes, scientific problems with animal research are entangled with its ethical problems. Exaggerations of the potential benefits of animal research, and the confounding effects of stress on animals used in research, cannot be extricated from decisions about the ethical permissibility of animal research.

In 2011, in the journal PLOS ONE, my colleagues and I published the first of multiple papers showing how chimpanzees used in laboratory research demonstrated signs of depression, anxiety, and posttraumatic stress disorder. Other authors have shown how various nonhuman species experience acute and chronic pain and a range of physical and mental disorders. In laboratories, these physical and psychological injuries accrue. As Johnson notes, the animals, the people who care for them, and patients pay the price of these cumulative harms.

As Johnson also observes, improved standards in human clinical trials are relevant. Although she highlights the relevance of improvements in methodological interventions, advancements in ethical standards in human research offer more salient guidance.

In a 2015 Cambridge Quarterly of Healthcare Ethics article honoring the pioneering medical ethicist and investigator Henry K. Beecher—for his approach to moral problems in human research and his landmark 1966 article in the New England Journal of Medicine—John P. Gluck and I identified problems in animal research that are analogous to those Beecher described. These include, for example, inattention to the issue of consent, incomplete surveys of harms, and inequitable burdens on research subjects in the absence of benefits to them. Beecher noted how these ethical deficiencies were bad for science.

Fortunately, by the middle of the twentieth century, concerns about human research practices in the United States led to the establishment of the National Commission for the Protection of Human Subjects of Biomedical and Behavioral Research, which resulted in the publication of the Belmont Report in 1979. Like the World Medical Association’s Declaration of Helsinki, the Belmont Report emphasizes key ethical principles: respect for autonomy; duties to nonmaleficence, beneficence, and justice; and special protections for vulnerable groups and individuals. Human research has become more ethical, and improved ethical expectations have enhanced the scientific merit of human research studies.

Despite significant advancements in our understanding of animals’ capacities and growing consensus on the limitations of animal research, no similar effort has addressed the use of animals in research. But it could. Extending principles such as respect for autonomy and duties to nonmaleficence and justice to decisions about the use of animals in research could lead to the needed shift in culture that Johnson stresses. It could also lead to positive changes in education and training and a national research agenda that favors more translatable, human-centered, modern research methods. 

Associate Professor, University of New Mexico School of Medicine

President/CEO, Phoenix Zones Initiative

Digital Learning and Employment Records

In “Everything You’ve Ever Learned” (Issues, Summer 2021), Isabel Cardenas-Navia and Shalin Jyotishi present a compelling and timely argument for the important role digital learning and employment records (LERs) can play. While many organizations have piloted LERs or moved in the direction of issuing microcredentials or other types of digital credentials, it is clear they are valuable only if stakeholders can easily decipher their quality and credibility. Developing governance structures internally within institutions, let alone across an ecosystem that has many actors, is a primary challenge.

Institutions of higher education are poised to play a major role in this development if they can take a fully learner-centered approach and be willing to engage with industry openly and intentionally. LERs can increase responsiveness to the market and facilitate a learner’s ability to package what they have learned to be meaningful and understood outside the walls of academia. While many institutions work with advisory boards or have recruiting relationships, the depth of these conversations often does not reach an understanding of the competency-development process. Leveraging the expertise of institutions when it comes to assessment is critical—and relying on them to be the posting and vetting authority for external skills or credentials could help to mitigate concerns regarding validity.

Further, by leading the development of LERs, institutions of higher learning will better support adult learners and currently enrolled students. However, this requires focusing on competency-based education and pushing institutions beyond what is currently a very rigid academic structure. LERs utilized in this way can increase access to higher education, especially for adult learners, and give students who are working while learning or engaging in cocurricular activities the ability to document skills that will make them more competitive after graduation. If more institutions move toward a competency-based model, LERs would quickly begin to demonstrate returns on investment. Further, a system such as this—one that students can take with them as they graduate—facilitates the development of a “lifelong learner” mindset and drives an institution’s ability to market programs to alumni for reskilling and upskilling.

The call to include workers as well as institutions of higher learning and corporations is critical. As institutions pilot these programs, involving their offices of engagement and continuing education offers additional perspectives in terms of what is meaningful to learners who are not formally enrolled. Expanding into these areas can enable institutions to play a critical service role in validating skills and “hosting” these LERs for the community at large, and not just students. For anchor institutions, this is already part of their mission, and would enable them to not only better serve their communities, but better develop pathways that could lead to additional stackable credentials.

To be successful, taking a learner-centered rather than an institutional-centered approach will be imperative. Institutions of higher learning are poised to play a critical role based on their expertise in documenting learning. However, their input and expertise are only as valuable as their ability to engage with industry and move toward a more flexible and nimble approach to learning.

Assistant Vice President for Academic and Student Affairs

Florida International University

Isabel Cardenas-Navia and Shalin Jyotishi outline the challenges workers face in attempting to articulate their knowledge, skills, and abilities learned and demonstrated in informal learning environments. Their solution is the adoption of comprehensive learning and employment records stored and distributed digitally.

Nowhere do we see a better example of the need for verified and trusted digital records than in the United States’ withdrawal from Afghanistan. Anecdotal reports of Afghan scholars arriving at checkpoints only to have their academic credentials and visa documents destroyed are a perfect example of the need for trusted and verified credentials—immutable and secured digitally. While the ethics and self-sovereignty of these data are a debate for another day, the fact remains that many scholars around the world would benefit from digital LERs, similar to the workers highlighted in the opening paragraphs by Cardenas-Navia and Jyotishi.

The authors accurately describe a critical issue within the workforce ecosystem: those skills most desired by employers, earned through informal learning experiences, are the most difficult for job seekers to describe and the least likely to be included in transcripts or certificates. When we consider the thousands of quality nondegree postsecondary credentials offered by the University Professional and Continuing Education Association’s member institutions, along with those offered by informal or noninstitutional education providers, we should question how we will ever make progress on ensuring equity and fairness in hiring when only a subset of learners—degree-holders—possess evidence of their learning, even if it was achieved in contexts independent of, and perhaps unrelated to, the world of work.

My hope is that readers will return to Cardenas-Navia and Jyotishi’s contribution and consider the equity issues they raise, as well as those raised here. We should all feel compelled to examine the nature of credentials and the role they play in hiring decisions, to consider the nature of assessment as well as demonstrations of learning, and to better appreciate how the use of digital learning and employment records could impact a global workforce.

Vice President of Online and Strategic Initiatives

Managing Director, National Council for Online Education

University Professional and Continuing Education Association

Isabel Cardenas-Navia and Shalin Jyotishi’s article explores the potential for learning and employment records to address key market failures in the US labor market associated with hiring/job finding and ongoing learning. The authors maintain that LERs may make some aspects of the labor market process function better. They also note several challenges that will play a major role in shaping whether LERs can truly increase equity and expand talent pools, such as how LERs can verify quality, protect privacy, and counteract bias.

I have two observations to add. First, there is an underlying tension and political struggle over decentralized versus centralized governance. Education is highly decentralized in the United States. This makes it difficult to evaluate or change education systems at scale, because innovations require coordinated action across multiple fragmented programs and overlapping, often conflicting authority structures. Decentralization also has contributed to the reification of formal higher education institutions and an overreliance on a narrow set of education delivery models. It has similarly constrained private-sector education technology innovations due to the challenges of scaling solutions across a fragmented, chaotic landscape. In short, the lack of a consistent “rules of the game” framework is a barrier for private-sector education technology actors seeking to consolidate a market in which to innovate.

LERs offer a potential opportunity to transcend this tension over governance because the technology allows for both a distributed administrative setup and a centralized (harmonized) set of data standards to ensure interoperability. However, in order to succeed, it will require legislative and policy changes focused on standardization, data security, and regulation (e.g., quality assurance) at a level that some stakeholders, especially employers in the private sector, often are unwilling to get behind. Standardization and clear rules will be necessary for LERs to produce data that are useful across states and localities, as well as to ensure that the data and signals that LERs produce are consistently meaningful and trustworthy. Without an explicit conversation about the roles of the public and private sectors, as well as the need for redesigning authority structures, LERs will be very difficult to scale and will struggle to gain a foothold.

Second, the development of LERs should be considered alongside the growing evidence that the blockchain technologies often used to store and verify such digital records consume tremendous amounts of energy. The impacts of climate change disproportionately burden the same communities that are structurally excluded from access to quality education and formal higher education. Given the climate emergency, any effort to plan for LER implementation should be paired with incentives and commitments to align it with a clean energy transition in the power grid.

Overall, this is just the beginning of a long conversation about the potential for LERs to address structural and signaling problems in the US labor market and close the opportunity gap. I agree with the authors that a technical fix alone is probably not enough.

Fellow, Metropolitan Policy Program

Brookings Institution

Looking Beyond Economic Growth

In “When the Unspeakable Is No Longer Taboo: Growth Without Economic Growth” (Issues, Summer 2021), Zora Kovacic, Lorenzo Benini, Ana Jesus, Roger Strand, and Silvio Funtowicz call for radical transformation in how, why, and by whom society is governed. As they rightly observe, the “obsession with measuring growth seems to have derailed public policy.” Focusing on economic growth, and on measuring gross domestic product (considered the broadest measure of goods and services produced by an economy), has led to governance and policy lock-ins that are inconsistent with the radical transformation required to respond to the climate and ecological crises. These reflections should spur urgent and deep questioning among policy actors, knowledge actors, and community actors on their assumptions, modes of working, and values.

Although policymakers in the European Union acknowledge the need for transformation, the implications of this are poorly reflected in policy and governance plans. Many programs, such as the European Green Deal, advance sustainability policies on paper, but they do not necessarily question the underlying assumptions of the desirability, fairness, or attainability of continued economic (“green”) growth. Research on institutional change warns us that EU institutions are sticky and that change is often slow and incremental. Procedural rules, customs, habits, culture, and institutionalized values all interact to prevent radical transformation. The realization of institutional or policy change frequently requires the pressure of external crises, combined with the fruition of good ideas.

But the development of integrated, transformational policy and governance programs also requires integrated knowledge and wisdom. What does this mean for our academic knowledge systems? There is an urgent need to break down the artificial disciplinary silos upon which the academic knowledge system relies. When it comes to societal and planetary challenges, no single discipline can provide insights on when, where, how best, or how not to develop governance responses. Relying on single-indicator or aggregated indicator analyses to assess something as qualitatively subjective as “well-being” is flawed, and this means that scientific knowledge must integrate knowledge and wisdom beyond economics, incorporating all the social sciences, arts and humanities, and natural and physical sciences. Such integrated, interdisciplinary knowledge is key not only for providing sufficient and relevant evidence to the policymaking process, but also for establishing valid forms of evaluation and learning as policies are implemented.

Furthermore, insights from research on the quality of democracy in the EU highlight the importance of citizen participation. The EU has long suffered from a reputation of democratic deficit. Research on deliberative democratic processes that engage citizens at all stages of policymaking has shown that such processes can alleviate perceptions of democratic underrepresentation, and can also be particularly appropriate when developing policies for sustainable transformation—policies that tend to have direct impacts on citizens’ lives. Such citizen participation can occur at the stage of knowledge-creation, by drawing on local, indigenous, and lived experiences and wisdom to cocreate the “actionable” knowledge or wisdom for policymakers. Deliberative processes with citizens help codevelop appropriate policy options. Groups of citizens can decide together with policymakers on the final policy option and on the scope of the evaluation and learning processes to follow implementation.

Imagining, investigating, and implementing a radical, societal, and governance transformation that moves away from entrenched ideas of growth is a collective endeavor. The call raised by the authors challenges policy, knowledge, community, and other societal actors to face the implications of this necessary transformation.

Assistant Professor of European Governance

Ghent University, Belgium

Zora Kovacic and her colleagues provide a valuable critique of gross domestic product and the challenges of uncoupling economic growth from environmental damage. The idea that getting rich helps the environment, as some proponents advocate, is simply magical thinking. Currently, the faster GDP grows, the faster we destroy the natural world that supports us, and a semicircular economy spinning faster may actually end up doing more damage to nature than a sluggish economy.

I applaud the authors’ call to recast government as facilitators of local deliberation, to seek plural views on progress beyond GDP growth. The European Commission’s vision to “live well within the limits of the planet” is useful rhetoric, but what is needed is more facilitated dialogue on the key elements of “living well” (and presumably how to reconcile trade-offs when one person’s good life impacts others’).

For the latter part of the commission’s vision—“within the limits of the planet”—there are a growing number of initiatives to measure sustainable progress at the local level that go beyond GDP. These include, for example, city-level assessments of progress toward the United Nations’ Sustainable Development Goals, or attempts to downscale the planetary boundaries concept, so that economic activities do not exceed biophysical thresholds supporting human flourishing.

These creative approaches must continue “breaking the taboo,” as Kovacic et al. put it, of focusing primarily on economic growth. Yet if such approaches are to be successful, they might also need to tackle another taboo: questioning the self-identity and attitudes of citizens.

The authors highlight how materialist values influence people’s response to the idea of limits on economic growth. There is evidence, too, linking self-identity, values, and attitudes to the exceedance of planetary boundaries. Excessive individualism and narcissism are associated with fewer pro-environmental behaviors, while a greater sense of connection to other people and the natural world promotes greener actions, such as recycling and reducing carbon dioxide emissions.

Many Western democracies are reluctant to influence the self-identity of citizens. Perhaps this is unsurprising given the tragic history of interventions in some communist and fascist regimes, which aimed to transform (“brainwash”) the characters of citizens. Nonetheless, the laissez-faire attitude of liberal governments toward self-identity does not mean citizens experience a lack of influence: people’s mindsets are continually shaped by media, business, education systems, and government action (even if unintentionally). Over the past half century, evidence shows self-identity shifting toward more individualistic values and attitudes in most countries, accompanied by a greater focus on the accumulation of material wealth.

The paradox is that by framing progress primarily in terms of economic growth, governments have all along been modeling (potentially misleading) views on what people need to live well, while at the same time undermining tendencies to live within planetary limits.

Although tracking citizen attitudes is increasingly common, active intervention is still somewhat taboo. To deal with the sustainability crisis, however, it is a matter that must be navigated soon. The role of governments in stewarding self-identity for planetary health is a ripe area for ethical research.

Professor of Applied Ecology

University of Reading, United Kingdom

Episode 2: Doing Science With Everyone at the Table

Could we create more knowledge by changing the way we do scientific research? We spoke with the NASA Psyche mission’s principal investigator and ASU Interplanetary Initiative vice president Lindy Elkins-Tanton about the limitations of “hero science,” and how she is using an inclusive model where collaborative teams pursue “profound and important questions.”

Read Lindy Elkins-Tanton’s Issues essay, “Time to Say Goodbye to Our Heroes?”

Transcript

Lisa Margonelli: Welcome to The Ongoing Transformation, a podcast from Issues in Science and Technology. Issues is a quarterly journal published by the National Academies of Sciences, Engineering, and Medicine, and Arizona State University. I’m Lisa Margonelli, editor-in-chief of Issues, and on this episode we’re interviewing Lindy Elkins-Tanton. Lindy is the vice president of the ASU Interplanetary Initiative, and she’s also principal investigator of the NASA Psyche mission, which launches in 2022 to explore a unique metallic asteroid orbiting the sun between Mars and Jupiter. In her August 2021 Issues essay, Lindy argues for a radical restructuring of how we do research, divesting from big names and asking teams instead to focus on big questions and ambitious goals. The future of humankind, she says, requires that we hear all the voices at the table—not only the loudest.

Margonelli: Lindy, thank you so much for joining us today. I’d like to ask you how you got interested in science. Was there some sort of ideal picture of what a scientist was?

Elkins-Tanton: That’s a great question. Thanks, Lisa. It’s really great to be here to talk about this. I’m looking forward to the conversation. You know, I often ask when I give a public lecture, especially to people interested in astronomy and planetary science: What was the moment in your life when you knew that you wanted to do this, or be interested in this, or follow this? It’s like the question you just asked me. And probably almost a third of the audience says it was when they saw Jupiter or Saturn through a telescope when they were 10 or 11 or 12. It’s just this formative moment. For the rest, it’s mostly Carl Sagan — Cosmos, Star Trek, Star Wars, and NASA. And I had all those things, including the Jupiter and Saturn sighting when I was about that age. But I still wanted to be a veterinarian.

I had this tremendous, all-consuming interest in the natural sciences that carried me across all the disciplines. And even though as an undergraduate I studied science, I was not quite ready to go to graduate school. So for me, it’s not been a direct path into science, but instead a passion, which grew largely in the decade after my undergraduate degree, for how teams of people work together. What is it that makes for not just a good outcome for the project, but a good outcome for the person? The thing that made me come back was the knowledge that in research science, the questions can be as challenging as you want. You never need to get bored; you can always challenge yourself with a greater question. And it came along with the beautiful opportunity to teach. Those are the things that drove me back to science, so I’ve had multiple drives all along.

Margonelli: That’s a very unusual path, especially to where you’re working now. I want to know, when did you realize that science was cutthroat?

Elkins-Tanton: You know, I feel like that is an education that most of us get—as I got—during our PhDs. Some people are clever enough to cotton on to this a little sooner. But for the rest of us, there’s really a bit of a professional education during our PhDs, where we learn that we need to stand up and fight for our ideas. We shed that sweet, naive notion that if I do a fantastic study that gives us new insight into the world around us, and I publish it, and it’s peer reviewed, then there it is—people will understand it, and they will adopt it, and it will change human thought.

Very quickly, you begin to realize that that’s not enough. You can publish a brilliant piece of work, but unless you go out on the conference circuit, give talks, engage with other people, have what can be heated conversations, and you’re determined, your information doesn’t really spread. It’s little epiphanies like that that begin to help us understand what the culture really is.

Margonelli: Did you have a particular epiphany about what the culture really was, where you realized, “Oh, this is really, really highly competitive?”

Elkins-Tanton: There wasn’t one specific epiphany, but I was at MIT for my PhD, a place that I love and have huge loyalty for, but which is also absolutely a series of warring city-states among the faculty.

People really are fighting for their name, for their results, to be known and not to be dismissed and not to be disrespected, but instead to be adopted by the field and seen for what it is. And I got the feeling while I was there, and also a little bit later in my career, that talking about things like the culture of the laboratory wasn’t welcome. This was around 2000, so it’s only 20 years ago. It’s not totally ancient history. I got the feeling that talking about things like, “let’s definitely take turns speaking at team meetings” and “maybe when you criticize someone else’s work, you could go about it in a more supportive way”—those were thought to be for people who were too weak to make it in the real way. And that if you were really meant to be a serious, top-notch research scientist, you didn’t need to worry about those kinds of things because you were ready to play hardball. And it took me, oh, about 15 years to figure out what the rebuttal to that was. It took a long time.

Margonelli: I want to move to your rebuttal in a second. I think it’s so interesting because many of us have a really heroic ideal of scientists from movies, from the books that we read, just from our culture. We see them as explorers, visionaries, people who solve problems, moral exemplars, the whole bit. And we don’t really like to think of them as competitive, cutthroat, potentially underhanded, undermining, loud, maybe mean. But let’s talk about this thing that started happening after you’d been in the field for 15 years, and you start to look closer at what was going on around you. You saw something wrong, and you called it the “hero model.” What did you see?

Elkins-Tanton: To address exactly these words that you’re using, I think a lot of scientists are adventurers and explorers and visionaries. And I think a lot of scientists are truly driving forward human knowledge. That’s what science is about: it’s a way to apprehend the world around us and deepen human knowledge in a way that we hope eliminates or reduces our implicit and explicit bias about what we are observing. We’re just trying to be better observers.

But if you think for a moment, science is a human endeavor. Everything humans do is a human endeavor, made up of humans with all of our faults and foibles and all of our inclinations. And of course, there are people in science who want to be famous. And of course, there are people in science who want to be lauded as excellent and people who want to win awards. I think it’s true in every field of human endeavor. And in science, unfortunately, it does pull us a little bit away from the reason that we’re there in the first place.

While I was a management consultant, I had this sort of epiphany moment around what it can be to work together—where it’s not always each person wanting to make their own reputation more famous, it’s not always each person trying to be so careful not to ask a question that might be viewed as stupid, or to show weakness. Instead, you can have a circumstance where everyone is working together to create an outcome which is more important than their personal fame. This was a moment working with what was then Touche Ross Management Consulting in Philadelphia, now part of Deloitte. We were working with a client around an issue that the client had, a big client. And we started as a team, envisioning how we could organize ourselves and the client in such a way that we would have a better outcome. And we made up this construct in our heads, and then we convinced everyone to do it. It sounds so simple, right? We all sat around, thought of a way to change, and discussed it. And then it happened. It was an organizational change. It was how the team was going to be organized, the actions that they were going to take, and the outcomes that they were going to make.

That was in stark contrast to the kind of science I was doing, where you can’t just imagine what the outcome is and then make it happen. You don’t make it up in your head and then it becomes real. Suddenly, I realized that in the human endeavor, that is what we do. We agree upon how we’re going to organize ourselves, we agree upon the culture we’re going to take, and we agree upon the outcomes we’re trying to create, and magic—it happens. And that was the reason why I realized that in science, we could be doing these great outcomes, we could be creating this new knowledge, but in a construct that was more human and inclusive and positive and effective. We could make up that part of it.

Margonelli: If I can just back up, I think that what happened here is that a management consultant went to some of the fanciest labs in the country and said, “Why are they managed this way? Why are people interacting this way?” I think that’s what you’re saying. And I need you to give me a picture of how science labs are organized and why you called it “hero science.”

Elkins-Tanton: Yeah. Let’s go back to what was happening in 19th-century Germany that was then carried forward to other parts of Europe and to the United States. And I’m going to give you what’s a little bit of a caricature, but for anyone who’s active in research science, I think you’ll also absolutely recognize it. It’s a circumstance where one professor is the person who personifies their subdiscipline at that university. They own that field, they own that body of knowledge, they are the expert in it. And they also own a pyramid of resources. In extreme cases, that includes junior faculty hires along with lab techs and staff to run their organization, and graduate students, and sometimes undergraduate interns, postdocs, and then budgets and equipment, and access to that equipment. So there’s a big pyramid of resources, and on the top is the “hero” professor. So, you know, what could go wrong?

Margonelli: So they started this way back in Germany, in the 15 or 1600s—this was the beginning of the German research?

Elkins-Tanton: Yeah. And then it really got developed in the 18th and 19th centuries, when there was actually a recognition that to become a leading faculty member, you actually had to have charisma and fame. And that was part of your job: to stand up there and assert, “I am the expert. Listen to me, I’ll use my deep convincing voice. And I will never end my sentences with an upturned question inflection.”

There was this culture to create the hero. It was a purposeful culture; we wanted our senior faculty to stand up and be heroes.

Margonelli: And now what we’ve done is we’ve imported that over here, many, many years later, doing a completely different kind of science. We’re not looking at ants and asking, “Where are their ovaries?” We’re doing a completely different kind of science that pervades our entire lives. And we still have funding, fame, students’ education, discovery, and equipment tied to an individual.

Elkins-Tanton: That’s right.

Margonelli: And so that has become, in effect, a management culture of science. So how did that model go adrift?

Elkins-Tanton: It served us well for almost a millennium, didn’t it? You know, alas, we’re no longer Lord Kelvin, we can’t any longer discover fundamental chemistry in our kitchen. And it’s very hard to make gigantic breakthroughs in individual subdisciplines unconnected to other subdisciplines.

There are many different ways that it can and has gone wrong and ways that it’s still working really well, too. There are subdisciplines that are super fruitful in this model. But one problem is there is a limit to the resources that are available, so people become very protective of their pyramid of resources. In some cases, this even means that they don’t like their graduate students to spend time with other faculty or research in other labs because they want all of their time and attention in that one discipline on their thesis.

So this kind of “team” culture that’s led entirely by one senior person—who I might add, in general, has never had any leadership or management training, or HR training of any kind; they come at us purely as an individual scientist—it can be rife with bullying and harassment. And often, there’s very little transparency to outsiders or other people in the organization, and few paths for help. This is something we’ve heard so much about since the Me Too movement began, there’s been a big National Academy report—we know that there are problems with harassment and bullying in science and engineering and STEM fields.

Part of it is this: there is not a network of resources available for the people in the pyramid, and their entire careers are dependent upon that senior person. So those are some of the ways [the hero model] goes wrong.

And I would just add that another really critical way it can go wrong is that the senior scientist, the hero scientist, is very motivated to protect their intellectual property and not have other people, at their own institution or others, who claim to have exactly the same or better expertise in that area. So new discoveries tend to be in incremental slivers of real estate around that pyramid of resources and knowledge, up until they bump against another subdiscipline. Right away that paradigm is something that has to be broken. We have to be welcome and rewarded for connecting outside of our pyramid.

Margonelli: It’s interesting. So you’re saying there’s two issues here. One is there’s a set of incentives that drive people towards competitive behaviors. There might be bullying, there might be harassment. I think Science in 2017 published an article by two academics called “Bullying Is Real,” which is kind of a wild stage on which to have that realization. And then there’s also this problem with reproducing the science. Nature interviewed 1500 scientists in 2016 and found that 70% of them said that they couldn’t reproduce their colleagues’ studies, which means that there’s incentives in place to publish that do not also incentivize that being good, reproducible research. So there’s a set of incentives for negative behaviors. And then there’s another set of incentives that are hampering progress, or the same set of incentives are hampering progress on big questions.

Elkins-Tanton: That’s right. That’s exactly right. So we would like the questions to be bigger, and we’d like progress toward them to be faster. And we would also like the process to be more rewarding and inclusive for everyone who wants to participate. Here’s really the bottom line. To me, the absolute bottom line is that science is the best way that humans have ever invented to create lasting knowledge, knowledge that we don’t immediately find out is wrong, knowledge that we can actually make progress based upon. It’s the knowledge that gave us the Pfizer and the Moderna vaccinations. These are things that really matter, and this is a process that really matters.

But of course, it’s imperfect. It’s imperfect because it’s done by humans. It’s not that science is either this perfect thing or we stop believing in it. It’s that science as a human endeavor, and like every human endeavor, we can improve it.

So here are some ways that we could make it better: We can remove some of the things that make harassment and bullying possible. We can create new connections. We can reward scientists and other researchers for working across disciplines. And then, how do we stretch out of the subdiscipline model? That’s the second part of it. How do we ask bigger questions?

Margonelli: One of the things that came up in reading your story and talking to you is that while we’re all kind of hung up on the hero model, because it seems totally normal to us and it’s a big part of our popular culture, in fact there are places like NASA that don’t use it. They have a different organizational model. Can you explain to me what these other models could be, and the models that you’re thinking about?

Elkins-Tanton: Yeah, let’s consider a kind of axis of models where on the one hand, we’ve got this hero model of the person sitting on top of their mountain and asserting that what they know is true. And so the product here is knowledge, but it is produced by a person—in fact, a personality, I would say—and that’s what leads to the hero aspect. On the other end might be something where you’re just focused on the product, where you really are looking at an outcome, and the people are a way to create that outcome. A corporate setting is often a situation where that happens, and any place where there’s a project that’s bigger than the individual.

That’s what happens a lot of times with NASA missions. I’m working on one right now, and working on this mission really did lead to a lot of epiphanies for me about how things can work. This is not to say that NASA is without heroes. In a lot of ways, NASA, and all space exploration, is all about heroes. But it doesn’t have to be. Everything we do can be more inclusive, more voices heard, focused on the outcome. It doesn’t have to be about making individual people more famous.

Margonelli: So there’s a couple of things that you do. You’re doing the Psyche mission, I think that has 800 people involved in it. So obviously your management training is a really big deal there, being able to think in terms of what do you do with 800 people. But the other thing is that you’re working with ASU’s Interplanetary Initiative. And you’re thinking about how to create learning environments at the same time, because one of the issues is that the heroes are supposed to train students. And they do train students, but there are a lot of other incentives involved in here which may not end up with students who are set to go to work. So let’s stop and talk just a little bit about heroes and students, and then talk about your approach.

Elkins-Tanton: Yeah, I’d love to, thanks. Of course, faculty at universities and colleges teach classes to undergraduates. So that’s one very important part of our purpose. And our addition to society is teaching people not just content, but how to learn. Teaching people to be learners, teaching people to have agency, teaching people to go out and be effective in the world and in their lives.

That takes on its most focused version when faculty are working with graduate students, students who are getting their masters or their PhDs. They’re really entered into that pyramid of resources because usually they’re doing original research that is based upon an idea that the faculty member had. It’s the faculty member’s idea—usually, not always—and the student’s job is to carry it out and simultaneously to learn. It’s an apprenticeship model. Now, apprenticeship [models], when they’re done well—totally brilliant. The students learn to be a top expert in their aspect of this subdiscipline, and they’re supported by their faculty member, who then writes them supportive letters, and helps them get jobs, and talks to their colleagues about how great they are, and sets them up for talks, and does all the things that a dedicated mentor can do to help launch their career.

Now, I don’t need to say, that is a lot of work. It takes some emotional intelligence as well as an intellectual and emotional commitment to the student. So you can immediately see, if you haven’t experienced it yourself, how this can either be a beautiful, effective thing or a tremendous tragedy for the student.

So we’re working at the Interplanetary Initiative at ASU not just on different ways to put together teams for more rapid and effective outcomes, and also more positive ones for the team members, but also on the education side. I’ve been focusing a lot on undergraduates because here’s the divide that I’ve been seeing in education: undergraduates, in its sort of pure end-member state, listen to lectures and read textbooks and give back the information on a test, which is incredibly passive. We’ve known for decades that that is not the most effective way to learn. But it’s the way that we faculty think that undergraduates have to learn in order to get all the content that we need to cram into their heads during these four precious years that we have to influence what they know.

But of course, we’re now in the information age, where all information is everywhere. So how about if we teach students instead the skills that they would otherwise have to wait [until] graduate school to learn? What if we teach them how to find information, decide upon its biases and its verity, and know what to do with that information, decide what outcomes they’re looking for, and figure out how to achieve those outcomes? In other words, how to be a master learner—someone who can actually execute with expertise, someone who can decide for themselves whether the answer is right or wrong.

These things are not what’s usually taught in undergraduate [education]. And they lead graduate students to have existential crises because all the ways they’d been judged a good student in their lives—test scores, grades, sitting still and listening—are now no longer useful. In fact, they’re the opposite of what a graduate student needs. The graduate student needs to think for themselves, find their own information, decide for themselves when it’s right, decide how to take action. So we’re trying to teach all those things as undergraduates; we’re trying to give the agency and the voice to everyone in the pyramid, not just to the hero.

Margonelli: Wow. Okay. So now let’s talk a little bit about how the Interplanetary Initiative is trying to move away from the hero model into a different way of doing research.

Elkins-Tanton: I’m excited to talk about this. So people talk a lot about how do we bring together art and science. And what I’ve mainly seen happen, from a scientist’s point of view, is there’s a hero scientist who’s running this research project. And an artist is just seconded onto their team almost like a mascot, who’s going to follow them around, learn about this, and create some art. And I haven’t seen many cases where that drove forward the science or the art. So I felt like that was an unsuccessful way to become interdisciplinary.

Meanwhile, I start working on the Psyche mission at Jet Propulsion Laboratory and with our many other partners across the country and around the world. As you said, at peak, we’ve had 800 people working on this team. And I see meetings where, in the room, we’ve got, say, three engineers and a couple of scientists, a graphic designer, a scheduler, a budgeter, a photographer, and we’re all working together and everyone is speaking. And we’re all creating these plans and these actions.

It really struck me like, this is such a different model for how people actually sit around a table, plan their actions, and then go off and produce a product. And the thing that I realized was that, in this model, the goal that we’re pursuing is exterior to ourselves. Everyone is there because they themselves, and their specific knowledge, is required to reach that goal. And that’s not the same thing as in a scientific lab where the one person has the idea—so the goal is almost internal to the leader—and the other people are brought along, maybe almost as observers, in some cases.

So at Interplanetary Initiative, we’re trying to use this other model, where we agree upon an external goal. It doesn’t just come from one person who’s the leader and the thought proposer, it comes from the whole group. We decide on an external goal, we assemble the team of disciplines that are required to reach that goal, and then everyone’s there for a reason, everyone’s voice gets heard, everyone’s knowledge is necessary. You immediately start with a much more equal and collaborative culture, working toward a goal that everyone equally values.

Margonelli: The culture I’ve been brought up in, which isn’t even the culture of science, says, “Well, you know, that’s just much too squishy. Expertise has to have some edges to it, and if you let everything in, you’re no longer experts.” Give me a really close-up look of like, how do you come up with the questions? And how do you compose the teams?

Elkins-Tanton: So coming up with the questions, we’ve been experimenting with different processes, so I’ll describe to you the one that we’re using right now that seems to be working pretty well. But I want to start with a little preamble, which might be a question of, how do scientists and engineers decide the questions that they’re pursuing? Did I start, when I was purely an academic scientist, thinking to myself, “What is the most important thing I could possibly solve with my time and effort here on Earth?” Generally, not. Generally, I start with, “What is the next really cool question that could possibly be addressed with the tools that I have in my tool belt?”—which is a different question, which is a different way to come about your research. That’s not true for everyone. There are labs all over the world where people are saying, “The very most important thing I can solve with my knowledge in the world today is blah, blah, blah.” And whatever it is, they’re going for it. It’s a really big, important goal. But a lot of us start with a little bit closer horizon.

And so what we’ve been doing instead is we do something we call the big questions process, where we bring as many people as we can into a room. The first time we did it, it was 40 or 50 people from the university and from the community.

Margonelli: They weren’t all scientists?

Elkins-Tanton: No, right. So I just invited everyone I thought I could convince to come because it was such a kind of flyer experiment that I was running. This was in 2017. And we’ve updated a little bit, but basically, the process was I invited people I thought would come. I had some deans, I had somebody from business school, somebody from public service, I had somebody from science, I had faculty. And then I had graduate students, and even undergraduates, and also some members of the general community outside the university who were just interested in what we were doing. So 40 or 50 people, very wide range of disciplines and very wide range of experiences.

And we started with a really classic brainstorm, meaning no criticism. Meaning everyone’s idea is received with a welcome. That’s very important so as not to cause people to shut up from pressure. And what we were trying to do is discover what the questions were that needed to be answered to create a positive human space future. What are the most important questions for us to answer to create a positive human space future? And people started thinking of ideas. One idea would be, “How do we make sure that when we are settled on another body, when we become interplanetary on the Moon or Mars, that humans have a structure to interact? And how do we understand what we’re going to be—what’s our governance, how do we relate to each other?” Questions like that, all the way to, “What is a faster propulsion system that will get us to Mars?” And also, “How do we educate humans here at home on Earth so they’ll be ready to be interplanetary?”

So many questions across such a wide range of things. So we wrote them all down. And after we were finished writing them down on the board, we voted on them. We talked about them a little bit but didn’t want to get people into their critical mode.

Margonelli: This actually gives a very interesting view into your mindset, which is that you’re really looking at interactions with humans and then thinking about results, rather than looking to, in the crudest terms, separate the sheep from the goats, which has often been a winnowing process in science of separating out the people who don’t get to talk. And so this is much more about using every bit of information to structure some set of results that you might deliver or act upon.

Elkins-Tanton: That’s so right. It’s the fundamental belief that I have that science and engineering is in service of all humanity. It’s not in service of a tiny club of your closest peers who could recognize or contest what you’ve discovered. That is not a sufficient use of our time and resources. It’s really in service of all of humanity. And so let’s involve everyone in thinking about what’s important and feeling like they’re a part of the conversation.

Now, very, very important distinction: this is not getting rid of the idea of an expert. It’s not downplaying in any way the importance of a deeply rigorous education and an absolutely unflagging determination to find something that is true and not just guided by your own biases. You have to have that. You need to have disciplinary expertise of the deepest variety. But the thing that’s different is that we can bring those disciplinary experts together in groups of people who include non-disciplinary experts and find directions that are even more important for all of us.

Margonelli: I want you to actually talk about what happens when people get interdisciplinary. Then you set up these teams, and then the teams work in a really different way. Can you just talk about that a little bit?

Elkins-Tanton: Right. Let me start by saying that we give little bits of seed funding to these projects to kind of get them going. And of course, the traditional way that a seed funding program works is that individual heroes come and say, “Here’s my proposal for this brilliant idea.” And then they get some money to take back and do with as they normally would.

So that’s not what we do. We do these big questions. It’s a group project. And then around each of the highest voted questions, we invite people to volunteer into teams—all happening in the same afternoon. This isn’t a go-home exercise, this is all happening in real time. And then they have a couple of different jobs to do while they’re sitting together in the room for an hour. What are some milestones that we could reach in one year with a modest amount of funding that would get us on the track toward a solution for this very big question? Some of these questions are questions that would take a lifetime or several lifetimes to answer. But you can make a milestone for the year.

So first of all, setting really big outcomes and goals. And then you have to identify the disciplines that are needed for your milestones that you don’t yet have. Who are the empty seats at the table, so to speak? Then we pick a leader—there’s no leader till then. We pick a leader and we send them away, and we give them about two weeks to come back with a budget and a team and the fleshed-out milestones. The budgets aren’t big, you know, $10,000 to $20,000 per year—they don’t even pay for a whole student. But if you have a leader, if you’ve picked a leader who can come back in two weeks with those things, then they’re probably effective enough to go for the year.

And then the big difference is we put them under professional project management. So we actually hold them to their milestones and their goals and their budget. And we support them if they need extra help in a different way. That’s not usual in academia, and I expected people to kind of run screaming—but it turned out people loved it. We’ve had very few teams disband. People really respond to having a question that’s bigger than just themselves. And that’s about being on a motivated team and having the supportive structure. It turns out, it really connects to something deeply human among us, and it’s been really successful.

Margonelli: Wow, that’s such an inspiring model. And you make it sound kind of fun.

If you want to read more about how successful Lindy’s ideas have been, read her article over on issues.org. The way we conduct research could be very, very different.

Thank you for joining us for this episode of The Ongoing Transformation. And thank you so much to our guest, Lindy Elkins-Tanton, for talking to us about the problems of the hero model of science, how we can change it, and how to train the next generation of science leaders. Visit us at issues.org for more conversations and articles. I’m Lisa Margonelli, editor-in-chief of Issues in Science and Technology. See you next time.

Episode 1: Science Policymakers’ Required Reading

Every Monday afternoon, the Washington, DC, science policy community clicks open an email newsletter from the American Institute of Physics’ science policy news service, FYI, to learn what they’ve missed. We spoke with Mitch Ambrose and Will Thomas about this amazing must-read: how it comes together in real time and what it reveals about the ever-changing world of science policy itself.

Find FYI’s trackers and subscribe to their newsletters at aip.org/fyi.

Transcript

Josh Trapani: Welcome to The Ongoing Transformation, a podcast from Issues in Science and Technology. Issues is a quarterly journal published by the National Academies of Sciences, Engineering, and Medicine, and Arizona State University. I’m Joshua Trapani, senior editor at Issues in Science and Technology. I’m joined by Mitch Ambrose and Will Thomas from the American Institute of Physics science policy news service called FYI. Their newsletters and tools for tracking science policy budgets and legislation are key assets in the science policy community. On this episode, we’ll talk to Mitch and Will about their view of science policy and get a look under the hood at what goes into creating FYI’s newsletters and resources. Welcome, Mitch and Will, it’s a real pleasure to have you with us.

Thomas: Thanks very much. We’re pleased to be here.

Ambrose: Great to be on.

Trapani: So FYI describes itself as an authoritative news and resource center for federal science policy. And I’d like to start with a big picture question: How do you define science policy?

Ambrose: So there’s a very classical formulation of that, that’s the two sides of the coin of science for policy and policy for science. And I have nothing against that formulation, I think it is helpful to broadly bin the types of issues you come across. But we don’t really think about it in that way in FYI. We approach it in a variety of ways. We’re not thinking, “Oh, this is a policy for science story,” and “Oh, this is a science for policy story.” We’re focused on various aspects of the process. You know, there’s the very formal set of budget documents that goes through the annual appropriations process, following the President’s budget request through the House and Senate appropriations bills to the final outcome. There’s a whole procedure around that and a whole cast of characters involved in that process, and really getting a sense of what individual people’s priorities are, and the whole machinery of how priorities get set. And that’s just one lane of science policy. But then there’s all sorts of other lanes as well that we pay attention to. So we take this very procedural focus, I would say.

Thomas: Yeah, I think of it as a very empirical approach. What is science policy? Well, it’s what policymakers are talking about. What challenges are they facing? What opportunities do they see? What proposals are they putting out there? And what kinds of arguments are they making for and against different sorts of things? And when you do that, you pick up on a lot of things that maybe you didn’t even think about as science policy ahead of time, or you find that there’s an issue in some area of policy, like trade relations, for example, that turns out to have a very technical dimension to it. The nice thing about taking that approach as a news organization is that you’re always talking about things that are definitely on policymakers’ agendas, whether it’s in Congress, or in the agencies, or in universities, or among advocacy groups.

Trapani: How do you draw that line? If there’s something that maybe hasn’t traditionally been part of science policy but it comes up, how do you decide: this is in or this is out? Or is that not how it works?

Thomas: Yeah, I think it’s a really cogent question that we’re asking ourselves all the time. You know, we work for the American Institute of Physics, which is a federation of 10 member societies—American Physical Society, Optica, the American Astronomical Society, and so on. And so we’re always asking ourselves, “What sorts of issues might they be interested in?” That’s a very practical way of delimiting ourselves because we’re a team of four people and we can only cover so much.

And then there are certain issues that just kind of creep onto our agenda after a while. For example, the meteorologists have been concerned for quite some time about federal allocation of radio spectrum, because new 5G devices coming online can interfere with weather satellite observations, for example. And so for a long time, we were interested in this issue in a very, very top-level way. We just saw spectrum meetings, and we took note of the fact—“Oh, there’s something with spectrum going on.” And then starting about two, three years ago, this became a really, really serious issue. We decided that we had to learn about it because there was a lot of action going on, a lot of arguments between federal agencies—between the Federal Communications Commission, NASA, the National Oceanic and Atmospheric Administration, Department of Defense—and so suddenly, something that had not been part of our agenda at all was part of our agenda simply because it cropped up and you couldn’t ignore it anymore.

Ambrose: I’d like to build on Will’s comments there. To the broader point of how would you bound science policy—setting aside bandwidth constraints—how would you bound the topic space? I would say, well, one approach would be: OK. There are various committees in Congress that have control over science. There’s the appropriations committees, and there’s no one appropriations committee for science; it’s distributed across many different subcommittees. So the subcommittee that funds the Department of Energy also funds the Army Corps of Engineers. And then there’s a separate subcommittee that funds NASA, NSF, and NIST, but also the FBI, and all sorts of other agencies that have nothing really to do with science policy, in a narrow sense. So you could take a very structured approach to just looking at specific committees that have jurisdiction over science.

But what we found, especially over the past few years, is that science policy is cropping up across many, many committees that we would never expect. The Judiciary Committee, for instance, is considering immigration reforms, some of which have very big implications, potentially, for the science workforce. You have many, many committees, beyond even the Intelligence Committee, that are getting interested in the topic of what I’ll call research security, which is largely tied to the US-China dynamic, where there’s many people across committees in Congress that believe that China’s taking advantage of the US research system in various ways. That’s just become such a burning topic that it’s showing up in all sorts of places we never looked at.

We have a congressional tracking service. And I had a keyword search for various science terms. All of a sudden, I started hearing the FBI director start talking about science. And I’m like, “Huh, that’s interesting.” So there’s a whole new set of institutions that we had to learn about, as essentially an emerging area of science policy. And I would say as well, and we can get into this later—there’s a whole series of prosecutions of scientists through the Justice Department’s China Initiative. For my first few years in this job, I never looked at court documents at all—it just didn’t come up as an area of physical science policy. But now, when scientists are starting to get prosecuted, now I’m looking through the PACER system, and it’s a whole new set of procedures that I would argue is now part of science policy based on the current dynamics.

Thomas: It’s an interesting thing. I mean, in 2018, when the FBI director first started talking about this, we were one of the very few organizations that was really paying attention. And we noticed that it started cropping up in additional congressional committees, and there were a series of members of Congress who were really interested in the issue. So now you have large petitions at Stanford University and other universities and large protest movements against this China Initiative. I’ve seen it on the evening news—but that’s been only within the past year or two. By taking this empirical approach, we’ve been there all along, and we’ve been tracing the different facets of the issue. That’s one area where our, for the lack of a better term, empirical approach has really kind of paid off.

Trapani: This is really interesting, you have this really broad, comprehensive, holistic view of science policy that lets you almost see out ahead of where things are. I was wondering if you wanted to provide any insights on what you see as the most important things happening right now, that people either aren’t paying attention to or aren’t paying sufficient attention to, in the realm of science policy?

Ambrose: To build on what Will just said, I would say the China Initiative itself is something that wasn’t being paid attention to enough until fairly recently. And now, as Will mentioned, you have these campaigns of scientists at different universities that are starting to really mobilize around that issue. When this initiative was first announced, I think in late 2018, it took quite a while—after a few of these prosecutions of scientists—for the effects on the academic community to sink in. And now people are going to pay much more attention to it, and it’s getting a lot more media coverage broadly. So I would say that that issue is now getting the attention it warrants.

But there are others. The spectrum issue that Will mentioned is another one that really burst onto the scene. And you know, you could have seen it in the FCC’s filing documents going back several years—and this was actually the topic of a recent hearing in the Science Committee where essentially the chair of the committee, Representative Eddie Bernice Johnson, made the point that had the science agencies been paying attention to these FCC proceedings more closely, they would have been able to see this coming years in advance, this issue of spectrum interference with Earth observation satellites or astronomical observations. But it was just a foreign area of policy, even to the science agencies themselves, and it’s quite arcane. And she made this remark about essentially, you need lawyers to decipher this sort of thing for you. But now that issue blew up, you had these fights between agencies over spectrum allocations, and now it’s getting quite a bit of attention.

One other one that I’ll mention quickly is this issue of light pollution from satellite mega-constellations. And what it really took was that first launch of a bunch of satellites from SpaceX’s Starlink constellation, and then the astronomers are like, “Oh, no, this is gonna be a huge deal.” So now it’s getting a ton of attention. There’s a few issues like that, that just within the past few years have burst onto the scene for our reporting, that I think if people had perhaps been a bit savvier, [they] would have seen them coming down the pipe. We ourselves didn’t forecast those issues until they burst into public view, so we’re not claiming special knowledge in this area. But I think those are some good recent examples of how these hot topics can really come out of nowhere, almost, in science policy.

Thomas: One thing you asked, is enough attention being paid to an issue? And the question is really, attention by whom? Sometimes there are people who are fairly niche who are really, really interested in an issue, and nobody else pays any attention to it whatsoever. So FCC filings, for example, the telecommunications industry is paying attention to that all the time. But scientists weren’t. The scientists didn’t know how to do it. Scientists’ lawyers … didn’t know how to pay attention to it. And so it’s only recently, years after the initial filing, that they really glommed on to it and said, “Hey, actually, this is a really important issue, and it could cause us some fairly serious problems.”

Similarly, we have two issues that are really big in science policy right now—we mentioned the China Initiative and all these arrests of people with Chinese backgrounds, be they immigrants from China or visitors from China or simply Chinese Americans, and then you have other sets of people who are interested in diversity, equity, and inclusion issues. And those really aren’t the same groups, even though they’re united by a common cause of justice and civil liberties and that sort of thing. So one of the things that we hope that we do—we don’t know if we do it or not; we don’t know how effective we are in doing it—is if there’s a fairly niche issue, or if there’s a community that should be paying attention to it, that we can help alert them to the existence of these issues and help to get them up to speed on the nitty gritty of it as best as we can.

Trapani: I’d like to turn to FYI itself. There is a lot of reporting, there’s a weekly newsletter, there’s a budget tracker, you track people in the science policy world. And it gets circulated around the science policy world quite broadly. Before I came to Issues in Science and Technology, I was in several other science policy roles, and when I first learned about FYI, I was like, “Oh my gosh, I have to go subscribe to this immediately.” I learned about it in a way that a lot of people do, which is people will forward it on or forward on chunks of it.

The thing is that once you subscribe, you realize that a lot of the really smart people who seem like they’re in the know in your organization are actually just forwarding on bits and pieces of FYI. And then you get to laugh at those people in your mind. But within a few weeks, you find yourself turning around and engaging in exactly the same behavior, because it is just such a valuable resource for the community. I was wondering if you could just talk a little bit, because it is so comprehensive, about how do you go about gathering up all the things that go into the weekly newsletter or the other tools that you have? And what kind of analysis goes into that?

Thomas: That is really our secret sauce. I’ll let Mitch take the lead.

Ambrose: I’d first like to just sketch a bit of the history of FYI, I think it’d be instructive at this stage. It was started in the late 80s by essentially one person, Dick Jones. And at that time, it was just distributed literally by paper mail for the first few years of its existence, but it did have certain elements that have continued through today.

As I mentioned at the outset, we have this very formalized way of covering the federal budget process, for instance. There’s a series of documents that are produced through that: there’s the President’s budget request, then the House Appropriations Committee advances its set of bills that have reports with all sorts of detailed policy instruction. Then the Senate will eventually do its version of those same reports. And then they finally, usually several weeks late, reach a final agreement. There are documents associated with every stage, and from its outset, FYI’s bread and butter has been stepping through those foundational science policy documents. That continues through the current day, except we’re much more in-depth than we used to be, which I’ll get into in a bit. FYI also covered a lot of speeches from policymakers, and did still have that kind of people-focused approach. But it was essentially just one person, for the most part, up until the founder, Dick Jones, retired in about 2015.

And then AIP reflected at that point, people seem to really like this type of information. Let’s really scale this up. So over the coming couple of years, we scaled up to four people. And that has really enabled us to [make] a sea change in FYI reporting, where we launched this weekly newsletter, called FYI This Week, that is giving you a preview of what’s coming down the pipe in the coming week or so, a summary of the big things that happened in the previous week, and then all sorts of additional information like an event calendar and a roundup of job opportunities, and also a roundup of other people’s reporting. And we’re very generous in acknowledging just good science policy reporting that we see. Every edition has about 100 or so links at the end. It’s almost like this little appendix of interesting science policy articles that the team sees throughout the previous week.

I’m always floored at how much science policy reporting there is, if you just know where to look. And it was in the process of constructing this very comprehensive weekly newsletter that we started to really formalize a way of surveilling what’s going on. We have all sorts of fishing lines, I like to think, out looking for relevant events and relevant reports; there’s a series of information streams that we’ve set up in order to have this week-over-week reporting on what’s happening. And we try not just to catch the newsiest things. We do give those more attention, but we also include all sorts of links to less newsy things where you can kind of see something’s bubbling up. And so, across the whole landscape, we try to pay attention, essentially, to everything—as much as we can at once. I can’t say everything at once. By paying attention to the entire landscape, or as much of it as you can at one time, you can start to see these little deltas of activity in different committees or different agencies. And eventually, that might bubble up into something that we write a full article about. For that, we have this thing called the FYI Bulletin, which has existed from the beginning and is our full-length reporting. So we have this interplay between the weekly newsletter, which is, OK, here’s the week-to-week churn, and then once something becomes a big enough story, we do a Bulletin on it. And I’ll stop there and see if, Will, you want to add to that.

Thomas: Yeah, I mean, it’s really just being knowledgeable about what sorts of documents are apt to contain… We develop a baseline knowledge of what exists right now, we call it the landscape of science policy. And the more you can know about that, the more you can see where it changed. Mitch is apt to call this a delta, with his physics background—and then you learn about the windows where these things are apt to come out.

So I mentioned the documents, but there are also congressional hearings. And you know, 95% of what’s said at a congressional hearing is not, frankly, going to be very interesting—to be honest, like 99%. But there’s always going to be some little thing, maybe it’s in the opening statement, maybe it’s later on in the hearing, maybe it’s something the witnesses say, and you can glom onto that, if you know what’s already out there, and say, “That’s new, I have not heard that before. This is something that we have to pay attention to.”

All the federal agencies have these advisory committees of outside scientists, and that tends to be where they talk about what’s going on with their programs. Is something over budget, is there something that they’re worried about, what’s their latest initiative in research or in some other aspect of their activities. It used to be that we would be listening to these things live, and that ultimately became untenable because one, there’s only going to be a little bit that you really, truly need to pay attention to, and also there’s lots of different things going on at once.

So we started to be a little bit smarter about recording these things, feeding them into these new AI transcription services so that we can scan what was said a lot more easily. It’s been really a series of small innovations that lets us consume more and more and more; even though we’re a really small team, we can pay attention to an astonishingly large amount of things. And then for things that we miss, we depend on reporters from other outlets. Science magazine has excellent reporters, SpaceNews has just awesome, awesome reporting in the space sector, Nature—it goes on and on. What is it, Mitch, National Journal that’s been reporting on the science policy legislation?

Ambrose: Yeah, there’s a particular reporter at National Journal who’s gotten very interested in science policy.

Thomas: And they just came out of nowhere, and they do a lot of important work for you. And we always say like, we’re only four people, we’re not going to only cite ourselves, because there are a lot of people who are paying attention to a lot of things we simply can’t pay attention to. And we want to acknowledge them as part of this science policy news ecosystem.

Trapani: It’s remarkable how much information you all process and put into your stuff. I would have thought that there would have been an army over there, so I was really curious as to how you did it. It sounds like FYI has grown—in terms of sophistication, in terms of people—in terms of the issues over the last few years. What do you see as coming next for FYI? Or what would you like to see next?

Ambrose: One other thing I didn’t mention that we launched over the past few years, in addition to the weekly newsletter, is this series of trackers. They’re essentially landing pages on our website. We have a budget tracker, which has very fine-grained information on, for a given agency, what the funding outlook is for a particular project. Then we have a leadership tracker, which tracks who is in positions of power over the science agencies in some way—both people going through the Senate confirmation process and a whole constellation of career officials who don’t typically turn over with a given administration. And finally, we have a bill tracker, which is an index of key legislation relevant to the physical sciences.

It just gives you this whole map of what’s going on in these different categories of data—budget data, people data, and legislative data. To your question about some new things we’d like to do: right now, each of those trackers is in a pretty rudimentary stage. We’d really like to take each of them to the next level and build out what we call among ourselves an information architecture—how do you provide some extra context around all the information that’s provided in there? Particularly with budget information for large facilities as they’re going through the process, for instance, it can be difficult to interpret the significance of certain changes in the funding profile.

Thomas: If I can offer an example: NASA launches science missions, right? And their funding will go on an arc, and one day, you’ll see they’re going to cut the budget for the science mission by 80%. Well, yes, that’s because it’s launching—it’s not because they don’t like it. And we don’t communicate that in any way in the budget tracker. As it stands right now, you just have to be aware of that, whether by reading our bulletins or because you’re an insider. So that’s what Mitch means by context.

Ambrose: We think you could even go beyond just the contextual information, you know, and build it into a richer resource for people. The astronomy and astrophysics community just came out with their latest decadal survey, an extremely important prioritization report for that discipline. And a big part of that exercise is, you know, constructing different budget wedges of, how much money will we have in a given amount of time to do a flagship space telescope mission versus a ground-based telescope? And how do we fit that under certain budget guidance that we’ve gotten from the agencies? We feel like we could, for instance, build out our budget tracker into a tool to help those types of planning exercises—really look at what past budget wedges were like for different sets of projects and how they fit under a given constraint, and also look forward at what the current projections are, adding that up in what’s known as a sand chart and seeing if you’re going to be able to fit under a certain budget target. That’s just one of many examples I could give of how you could make richer information resources that aren’t strictly news, per se, but that we think could be valuable, both in aiding our own understanding of these processes and in providing tools for the scientific community to understand what’s going on. So a lot of this falls under a concept we’ve thought about of establishing almost a research hub for science policy to complement the journalism that FYI does.

Thomas: When you’re a news organization, you accumulate a lot of information over time. Something you knew as a fact last year may no longer be true this year. If you follow FYI very studiously, maybe you’ll be really up on an issue. But if you haven’t been an absolute scholar on that issue, we’d like to have a place where you can go to learn everything that we’ve learned about it and get the most up-to-date information. And that would really make us almost as much a research organization as a news organization.

The fact is, we have four people who work for FYI: me and Mitch, Adria Schwarber, and Andrea Peterson, and none of us have backgrounds in journalism. We all are in science or history of science or something like that; we’re all researchers in one way or another. And so we’re not, in some sense, content just to write news articles—we want to share our knowledge with the world, so to speak. That’s kind of the central idea: becoming more of a research organization. There are multiple ways in which we can do that, and expanding our trackers and creating these issue guides are two facets of what we’d like to do. We just have to find a logical way to expand that doesn’t put too much pressure on us, because we’re pretty much at the red line as it is.

Trapani: Figuring out how to expand, reach new audiences, and create new resources while at the red line is a challenge for Issues too. We’re inspired by what you’re doing at FYI. On behalf of myself, Issues in Science and Technology, and probably thousands of people who work in science policy fields, I’d like to thank you for all you do and all the tools that you put out there.

Thomas: Thanks so much! It’s been really enjoyable.

Ambrose: And thanks for the opportunity to come on the podcast.

Trapani: Thank you for joining us for this episode of The Ongoing Transformation. If you have any comments, please email us at [email protected] and visit us at issues.org for more conversations and articles. I’m Josh Trapani, senior editor of Issues in Science and Technology. See you next time.

A Veneer of Objectivity

In “Unmasking Scientific Expertise” (Issues, Summer 2021), M. Anthony Mills exposes the danger of the vacuous “follow the science” slogan that has been used by politicians, scientists, and others throughout the COVID-19 pandemic to command allegiance to particular scientific conclusions or policies and to shut down what is sometimes reasonable disagreement. The pandemic is rife with disagreements over the science or the scientific backing of public health actions. Some of those disagreements are militant enough to evoke the (admittedly overused) metaphor of a science war. The possible explanations for scientific disagreements are many. Here is a non-exhaustive list of explanations for the sometimes-stark disagreements among scientists, public health experts, and other science advisers during the pandemic, some of which Mills discusses.

Normal science in real time. Reasonable uncertainty over unsettled science generates normal, rational disagreement. There is nothing unusual here in need of a special explanation. It seems unusual only to outsiders who are not used to seeing scientific disagreements livestreamed and live-Tweeted.

Fast science, bad science. The pandemic has provided a breeding ground for bad science owing to the urgency of the situation. Fast science promotes bad science, and bad science promotes scientific disagreement.

The pandemic is rife with disagreements over the science or the scientific backing of public health actions.

Belief factions. Belief factions are rival networks of knowledge users, sometimes though not always formed along lines of political affiliation, that preferentially believe, endorse, or share information coming from within the network. Even seemingly politically neutral matters such as whether hydroxychloroquine is effective can become polarized by belief factions. Different science experts may be part of distinct networks.

Epistemic trespassing. Given the enormity and multidimensional nature of the problems faced, experts from different fields have become COVID researchers or thought leaders. They commit epistemic trespassing when they overstep their expertise, potentially leading them to spuriously challenge the “real experts.”

Different disciplines, different disciplinary frameworks. Individuals from different research traditions such as evidence-based medicine and public health epidemiology sometimes rely on different standards or principles of evidence, reasoning, and decisionmaking, leading to disagreements that can be resolved only through higher order analysis.

Policy proxy wars. Policy conflicts rooted in disagreements over values or decisionmaking can masquerade as disagreements over science or evidence, fought by appealing to (or producing) research favorable to one’s preferred policy and criticizing or discrediting unfavorable research rather than deliberating over the values and decisionmaking at issue.

Pandemic theater. Disagreements among experts may be exaggerated, amplified, dramatized, or concocted in network media, on social media, by politicians, or by others.

Of course, a list of explanations for disagreements among politicians and members of the wider public would look a bit different. Distinct explanations might better explain distinct disagreements. Because these distinct explanations often demand different responses, it is important to consider which explanations apply in a given case.

Finally, absent from this consideration is the notion that experts are not actually following the science. Though nonexperts may sometimes ignore the science, when scientific experts disagree it is more likely that they are interpreting or weighing research findings differently, perhaps for the reasons above.

Assistant Professor

Department of History and Philosophy of Science

University of Pittsburgh

M. Anthony Mills argues that the technocratic rhetoric of “following the science” hides the role of judgment and values behind a veneer of objectivity. On Mills’s analysis, this mismatch between the appearance of value-freedom and the reality of value-ladenness has contributed to the twin crises of loss of trust in scientific expertise and general political polarization.

I agree with Mills’s diagnosis. Policy-relevant science is necessarily “shot through with values,” to use the phrase of the philosopher of science Janet Kourany. And the mismatch between the value-free ideal and value-laden reality has indeed caused significant problems. But the underlying mechanisms are more complex than Mills indicates.

Trust in scientific expertise is itself a partisan phenomenon. Survey studies by the sociologist Gordon Gauchat and the Pew Research Center show that over the past five decades, liberals have had steady or even increasing trust in science and scientists, while conservatives have gradually lost trust. But even this is an oversimplification, as conservatives have maintained trust in what the sociologists Aaron McCright and Riley Dunlap call “production science” (science as used by industry) and lost trust only in “impact science” (science as used by regulatory agencies for goals such as restricting pollution and protecting human health). At the same time, many conservative voters support environmental and public health policies, even when their elected representatives do not. For example, long-running surveys by the Yale Program on Climate Change Communication indicate that about half of conservative Republicans have supported regulating carbon dioxide as a pollutant since at least 2008.

The mismatch between the value-free ideal and value-laden reality has indeed caused significant problems. But the underlying mechanisms are more complex than Mills indicates.

This paradoxical set of conservative attitudes toward science policy is plausibly due to the way that certain industries have used public relations campaigns and “merchants of doubt”—a term introduced by the historians Naomi Oreskes and Erik Conway to refer to scientists paid by industry to raise often-specious concerns about impact science. Merchants of doubt have sometimes weaponized the value-free ideal in these public scientific controversies, attacking the work of climate scientists or environmental epidemiologists as “politically motivated” “junk science.” Meanwhile, these industries’ own scientific staff typically know about the hazards posed by their products, at the same time as outsiders are being paid to act as merchants of doubt. This dual strategy, hiring merchants of doubt to attack impact scientists while concealing the findings of their own regulatory scientists, has evidently been effective in confusing the public—especially conservatives—and delaying regulation.

As the science policy scholar Sheila Jasanoff has demonstrated, the value-free ideal was supposed to ensure the legitimacy of technocratic policymaking at agencies such as the Centers for Disease Control and Prevention, the Food and Drug Administration, and the Environmental Protection Agency. By being value-neutral, science was supposed to provide an apolitical foundation for policy, immune to partisan politics. Instead, the value-free ideal has been weaponized by regulated industries to challenge the legitimacy of any unfavorable policies. The value-free ideal has undermined itself not so much because of general scientific hubris, but more because it has been susceptible to profit-motivated exploitation.

Assistant Professor of Philosophy

Department of Cognitive and Information Sciences

University of California, Merced

M. Anthony Mills calls for us to rethink the proper place of scientific expertise in policymaking and public deliberation. His inventory of the consequences of “follow the science” politics is sobering, applying to COVID-19 no less than to climate change and nuclear energy. When scientific advice is framed as unassailable and value-free, about-faces in policy undermine public trust in authorities. When “following the science” stifles debate, conflicts become a battle between competing experts and studies.

We must grapple with the complex and difficult trade-offs and judgment calls out in the open, rather than hide behind people in lab coats, if we are to successfully and democratically navigate the conflicts and crises that we face.

I want to expand on one of Mills’s points, namely that public conversation is increasingly preoccupied with who is or isn’t following the science. Our democracy is pathologically tribalized, as Mills says, when science becomes “a shibboleth,” and rules “begin to resemble cultural prohibitions more than public policies: taboos to be ritualistically followed or transgressed.”

When “following the science” stifles debate, conflicts become a battle between competing experts and studies.

Perhaps the most pernicious consequence of following the science is what it does to us as political beings. Debate, negotiation, and compromise are shunted aside as disagreements take on a Manichean good/evil character. Resistance to mandates about masking, restaurant shutdowns, or vaccines is no longer understood in terms of mistrust in authorities, concerns about unanticipated consequences, or political interests. It is cast as the rebellion against rationality writ large. The political correspondent Tim Dickinson in the February issue of Rolling Stone didn’t blink when blaming Americans’ vaccine hesitancy on their “surrender” to a “kind of unreal thinking.”

But “you can’t fix stupid,” as people across the political spectrum often chant. And because democracy offers little recourse to “correct” what opponents see as each other’s irredeemable cognitive defects, our political discourse takes on a fanatical impatience. News headlines have noted the increasing anger among the vaccinated. Editorials shame the unvaccinated for their “idiocy” or “arrogance,” and social media are filled with comments proposing that we let the willingly unvaccinated die. All the while, vaccine hesitancy transforms into outright hostility.

Fanaticized discourse, in turn, legitimates strong-arm policy. The Biden White House, which brands itself as an administration that “respects” and “follows” science, will restrict nursing homes’ access to Medicare and Medicaid unless staff meet vaccination quotas. This move mirrors threats made by Governor Greg Abbott of Texas and Governor Ron DeSantis of Florida to defund mask-mandating school districts. Both follow-the-science policy and its populist nemesis prefer executive decree over democracy, which risks making our gridlocked political system even worse.

The philosopher Karl Popper warned about this in The Open Society and Its Enemies: “They split mankind into friends and foes; into the few who share in reason with the gods, and the many who don’t.… Once we have done this, political equalitarianism becomes practically impossible.” Although his book was more concerned with Marxists and Fascists who claimed to know the essence of human society, Popper’s warning applies equally to the effort to make science politically authoritative. What we need most right now is not a society that respects science, but one that respects disagreement.

Associate Professor of Social Science

New Mexico Institute of Mining and Technology

Author of The Divide: How Fanatical Certitude Is Destroying Democracy (MIT Press, 2021)

As the public health establishment stared down the oncoming pandemic in early 2020, quite a few members of this community pointed out a conundrum: if they convinced the country to ramp up a massive response to SARS-CoV-2, and, by doing so, successfully prevented it from becoming a serious problem, critics would nevertheless bemoan the waste of public resources. What pandemic, they’d say, smugly and stupidly.

But noting this possibility hardly settles the question. Massive anticipatory responses to novel pathogens, or potential hurricanes, or date-sensitive computer glitches, really can be wasteful, and self-serving for bureaucracies that hold themselves forth as fixers.

My colleague Anthony Mills’s invocation in his Issues article of the 1976 “swine-flu fiasco,” as the New York Times called it, shows us that the political perils of success in heading off a serious problem are not merely theoretical. Because the problem with swine flu remained potential—because catastrophe failed to materialize—the preparations to combat it were seen as a politically motivated stunt.

Former New York governor Andrew Cuomo’s run as a media darling in 2020 shows something like the opposite. Even as his state suffered some of the country’s worst COVID-19 outcomes, Cuomo’s willingness to hold himself forth as a responsible, science-following leader left commentators musing about whether he could replace Joe Biden atop the Democratic ticket. Cuomo showed that policy failure could be spun into political gold, at least for a time. Sometimes seeming good beats being good.

Mills tells us, “Reestablishing an appropriate role for science in our politics … requires restoring the central role of politics itself in making policy decisions.” I heartily concur. But I worry that this makes a saner discourse sound much too easy to achieve. Because I fear that what the public wants, at bottom, is someone who will do exactly the right thing, every time, without any vexing complications.

I fear that what the public wants, at bottom, is someone who will do exactly the right thing, every time, without any vexing complications.

That is not a realistic expectation, of course. Once conflicting values—held by different individuals, or even by single persons—are taken into account, it is usually not even a sensible concept. But “follow the science” has been a siren song precisely because it tells people that they do not have to confront the unrealism of this desire. If all we have to do is be led to the science, the burdens of self-government fall away.

Doing politics is the proper way to resolve difficult questions such as whether it is worth it for us to force people to wear masks—but it is painful. That makes getting to a healthier, more openly political discourse very difficult. If one side begins the process, the other side can simply call them “political” (often an effective slander) and congratulate themselves on their willingness to be “scientific.” This is one of those problems for which a clear sense of what is wrong does not immediately lead to a solution.

That said, a discourse that understands the proper relationship between politics and science can’t hurt, and we should be glad that Mills is leading the way.

Senior Fellow

American Enterprise Institute

Accounting for Lives Lost

By some estimates at least 1.8 million Africans lost their lives during the transatlantic slave trade. Using an online database called Slave Voyages, artist Kathie Foley-Meyer studied maps detailing the paths that slave ships took from Africa to the Americas. Foley-Meyer, a PhD student in visual studies at the University of California, Irvine, created the painting In the Wake: With the Bones of Our Ancestors in an effort to remember those who perished before making landfall. “I just became obsessed with the lives of these human beings that are accounted for, but not really,” she told a university reporter. “They exist as bodies that disintegrated when they were put into the ocean and became part of the oceanic life cycle.” 

In the Wake: With the Bones of Our Ancestors
KATHIE FOLEY-MEYER
In the Wake: With the Bones of Our Ancestors, 2018
Watercolor, chalk, and wax on paper, 41.5 x 29 inches
Collection of the National Academy of Sciences

She continued, “I remember looking at the statistics of human cargo—the number of people that survived the voyage to the New World and the number of people who did not. I began to wonder, other than numbers, how do you account for those lives lost? Those people were taken from their homeland and deposited in the ocean for one reason or another that rendered them disposable and not recognized as human beings.” 

Foley-Meyer is a participant in the Ocean Memory Project, a collaboration of scientists, artists, engineers, and designers who are exploring the question, “Does the ocean have a memory?” The project is funded by the National Academies Keck Futures Initiative and is led by National Academy of Sciences member Jody Deming. 

Image courtesy of the artist.

Ethics and Policymaking

The ongoing public health crisis is a moment of reckoning for those of us who work in the field that has come to be known as bioethics. As R. Alta Charo notes in her interview (Issues, Summer 2021), the word was coined by Van Rensselaer Potter, a biochemist at the University of Wisconsin, her former institution. For Potter, bioethics signified the integration of biology and moral values for the sake of human survival. In those days there was an emerging awareness of the fragility of the ecosystem upon which human life on the planet depends, culminating in the first Earth Day in 1970 and in the publication of The Limits to Growth, commissioned by the Club of Rome, in 1972.

Oddly, though, the word was captured not by environmentalism (Potter later tried to rename his concept “global bioethics,” to no avail) but by another emerging field, one described by the unwieldy phrase the Hastings Center coined in 1969: the intersection of “society, ethics, and the life sciences.” When Georgetown University’s Kennedy Institute was founded in 1971, the word used was simply ethics, as continues to be the case in the formal name of the institute today. Perhaps an earnest archivist will connect the dots that led to the expedient adoption of “bioethics” by the original participants. What is certain is that by the mid-1970s the word was ensconced in the early literature and in a growing media presence.

Bioethics signified the integration of biology and moral values for the sake of human survival.

Other dynamics in the late 1960s and early 1970s are relevant to the biography of the word bioethics. The full import of the Nuremberg Code’s insistence on “voluntary consent” became more evident as a series of research ethics scandals were reported in the media, tying into the contemporary civil rights movements. By the time the National Commission for the Protection of Human Subjects of Biomedical and Behavioral Research published its Belmont Report, in 1979, “respect for persons” (defined as respect for autonomy) was a core value, perhaps the primary principle. On the other hand, there was growing confidence in the benefits of mass vaccination, including the beginnings of the World Health Organization’s smallpox eradication program, allowing some optimism about ending the scourge of infectious diseases. In short, bioethics appeared as individual rights were gaining public interest and threats to public health seemed to be diminishing.

Now, with public health of immediate concern amid the COVID-19 pandemic, the “first among equals” standing of respect for autonomy requires closer examination. Too often opponents of elementary public health practices such as masking have exploited the legitimacy that the academic literature has granted appeals to individual autonomy. My own preference is for a greater role for reciprocity in the pantheon of bioethical values, and not limited to specific cases such as organ donation. For bioethics scholarship, there is much more work to be done to specify reciprocity in concepts of justice and fairness, in effect shifting the balance of power among bioethics principles.

David and Lyn Silfen University Professor

University of Pennsylvania

S&T Policy and Economic Security

The post-World War II period ushered in a rapid and unprecedented increase in international scientific cooperation and knowledge flows. The end of the Cold War and the opening and rise of China, economically and scientifically, have further boosted this trend. For three decades, the global enterprise of science benefitted from a strong consensus on the positive value of international collaboration. Throughout most of this period, the United States has been the undisputed leader in terms of knowledge resources, international attractiveness for global talent, and economic prowess.

In more recent times, however, policymakers around the world are reassessing the view that internationalization or globalization of research and development is unequivocally positive. They are also revising their view of the United States as an undisputed beacon in the global science landscape.

Against this background, the article by Bruce Guile and Caroline S. Wagner, “A New S&T Policy for a New Global Reality,” and the one by Laura D. Tyson and Bruce Guile, “Innovation-Based Economic Security,” both in Issues, Summer 2021, provide an important impulse to the domestic policy discussion.

Guile and Wagner point to the declining US dominance in the global science and technology enterprise and argue for a different approach to international science and technology cooperation. Tyson and Guile see an urgent need for linking science, technology, and innovation more closely to issues of economic security. They warn of rising cross-border supply chain vulnerabilities and the risk of other countries appropriating the gains of new technologies and knowledge-intensive ventures originating in the United States.

Policymakers around the world are reassessing the view that internationalization of research and development is unequivocally positive.

Both pieces argue that the United States needs to rethink its role in the global knowledge landscape and adopt a more strategic and coordinated approach to science and technology policy and to international cooperation. The United States, the authors write, should focus less on generating all knowledge at home and more on tapping into and benefitting from the knowledge and innovation created outside its borders.

Working in the European scientific and policymaking community, I see a growing concern over the United States wavering in its commitment to the global enterprise of science. Some of its closest traditional allies worry that it is prioritizing its own narrow national interests over the benefits of international cooperation, even pitting the two against each other or instrumentalizing the latter in pursuit of the former in a zero-sum fashion. They also observe US actions that seemingly reflect a lack of awareness of a rapidly changing role and perception of the United States in the rest of the world, not least among its closest friends. Large parts of the world and the global scientific community are looking to the United States to redefine its role and regain its credibility as a pillar of responsible research and innovation in a rapidly changing global context. As someone who is firmly rooted in both Europe and the United States, I echo that hope.

Professor of Research Policy

Lund University, Sweden

Bruce Guile, Caroline S. Wagner, and Laura D. Tyson present strong arguments for better research and development policies (Issues, Summer 2021). In “A New S&T Policy for a New Global Reality,” Guile and Wagner correctly acknowledge that America is no longer the only country that does intensive R&D. Europe, Japan, and more recently Korea, Taiwan, and China also contribute a growing percentage of academic papers, patents, and commercialized high-tech products and services. As their article subtitle suggests: “US policies need to be reconfigured to respond.”

In “Innovation-Based Economic Security,” Tyson and Guile also correctly argue that innovation is an issue of economic security. Without innovation, American companies, workers, and institutions will suffer. And to innovate, America must make holding onto new, emerging technologies a national priority, and, as the authors stress, improve its “ability to capture economic or national security value from scientific and engineering advances originating outside the United States.” They argue that a better integration and coherence of US policies can help.

On the other hand, the articles don’t acknowledge that innovation has dramatically slowed over the past decade, meaning fewer new technologies are being commercialized not only in the United States but throughout the world. The result is slowing productivity and few new manufacturing industries to employ America’s middle-class workers.

For instance, the 2010s produced fewer new digital industries than did the 1990s and 2000s. The 1990s gave us e-commerce and enterprise software and the 2000s gave us smartphones and cloud computing, each producing more than $100 billion in revenues by the end of their respective decades. In contrast, in the decade ending in 2020, only one new digital technology, video streaming, had achieved $50 billion in sales, while “big data” analytic software and services, tablet computers, and OLED displays contributed between $20 and $50 billion. Artificial intelligence, virtual reality, augmented reality, commercial drones, smart homes, and blockchain have even smaller markets (as do nondigital technologies such as nanotechnology, quantum computers, and fusion).

Innovation has dramatically slowed over the past decade, meaning fewer new technologies are being commercialized not only in the United States but throughout the world.

A lack of new technologies is also a big reason why today’s so-called unicorn start-ups—those with a valuation of $1 billion or more—have much higher losses than those of previous decades. My recent analysis published on the MarketWatch website found that 51 of 76 unicorn start-ups have more than $500 million in cumulative losses, 27 more than $1 billion, and 6 more than Amazon’s peak cumulative losses of $3 billion. Many of these start-ups will be unable to overcome these losses because half of them had losses greater than 20% of revenues and one-fourth had losses greater than 40% in 2020.

The small markets for these technologies and the disappointing performance of America’s start-ups suggest that new ways of doing R&D are needed. We need university researchers to bring technologies closer to commercialization and not just write academic papers, and we need companies to commercialize more of this university research. How can America’s policymakers encourage companies and universities to work more closely together? A first step is for funding agencies to measure output from university research more by commercialized products and services than by numbers of academic papers and their citations. Improving America’s R&D system is a big challenge.

Independent technology consultant

Solar Climate Intervention

The “moral hazard” of solar geoengineering that Daniel Bodansky and Andy Parker examine in “Research on Solar Climate Intervention Is the Cure for Moral Hazard” (Issues, Summer 2021) is an illustration of a general phenomenon: introducing a new, potentially low-cost opportunity for reducing the risk of a loss may weaken the incentive to take other actions that prevent that risk from occurring. Some climate policy stakeholders have opposed solar geoengineering (SG) research and deployment out of concern that SG would discourage and hence substitute for emission mitigation. This prospect of new strategies influencing the use of existing strategies to combat climate change raises two important policy and political economy questions.

First, how is SG different from other approaches that reduce the risks of a changing climate? Substitution among climate change risk-reduction strategies already characterizes climate policy in practice. Investing in solar panels reduces the emission-cutting returns of energy efficiency investments, and vice versa. R&D on battery storage may enable dispatching of intermittent solar power, and reduce the returns to R&D on carbon capture and storage technology.

One may argue that substitution within emission mitigation is fine, but different from SG substitution, since the former represents various ways of preventing climate change risk, instead of potentially ameliorating the risk under SG. The same logic, however, applies to climate adaptation and resilience efforts. The emerging acceptance of the need for adaptation is clear evidence of insufficient emission mitigation over the past three decades. The failure of the single-pronged emission mitigation strategy has strengthened the incentives of individuals, businesses, and governments to invest in climate-adaptation programs.

This prospect of new strategies influencing the use of existing strategies to combat climate change raises two important policy and political economy questions.

Second, how could policymakers craft and implement a portfolio approach to climate change risk reduction? For example, would SG substitute for or complement emission mitigation? The underlying logic of the SG moral hazard critique is that decisionmakers optimize their risk reduction strategies. The analysis that SG deployment reduces the social return for a unit of emission mitigation, thereby causing decisionmakers to undertake less emission mitigation, presumes that decisionmakers already pursue optimal emission mitigation. The myriad imperfections and inadequacies of mitigation policy to date undermine this assumption and should give us pause about the prospect of optimizing the deployment of SG (and adaptation) to displace some emission mitigation.

Pursuing SG research and enhancing its salience among policymakers, stakeholders, and the public may represent an “awful action alert”—considering actions to block some of the incoming sunlight may galvanize public attention and enhance support for mitigation and adaptation. As my colleague Richard Zeckhauser and I emphasize in our paper “Three Prongs for Prudent Climate Policy,” such an awful action alert may spur greater emission mitigation and increase support for using every tool for reducing climate change risks. As Bodansky and Parker note in their compelling case for SG research, there is already preliminary social science research consistent with this notion. Going forward, we need to better understand the political economy of a portfolio approach to climate change risks. This suggests that an SG research agenda should address the political, economic, sociological, and international relations dimensions of SG research and deployment, in addition to the engineering and scientific dimensions of solar geoengineering.

Professor of the Practice of Public Policy

Harvard Kennedy School

Daniel Bodansky and Andy Parker’s call for more research into solar geoengineering rests on a neat but false dichotomy. They imply that research must be either constrained or extended. In practice, what is needed is neither a ban nor a free-for-all, but appropriately regulated multilateral research.

The authors are concerned about fears of mitigation deterrence or “moral hazard,” using the latter term despite widespread criticism of its inappropriateness. They argue that such fears will motivate more opposition to research, of the sort recently mounted by an international coalition of Indigenous peoples and environmental groups when Harvard researchers prepared to conduct solar geoengineering experiments in northern Sweden without first engaging with the local Saami people, or indeed other Swedish and European stakeholders.

In defending this sort of careless research management, Bodansky and Parker do not help their own case. They also slip into a rather one-sided review of the existing literature on moral hazard and mitigation deterrence, foregrounding individual effects rather than political, systemic, and emergent ones. Though it is generally accepted that in rich Western populations, exposure to ideas of solar geoengineering tends to galvanize concern over climate change, there is a striking contrast between the German and American experiments the authors cite. The German researchers showed that their participants supported stronger mitigation measures, while the Americans merely revealed that some individuals expressed more concern about climate change when they were told about a possible response that would not mean restricting their emissions. In other words, one of the experiments that Bodansky and Parker cite as rejecting moral hazard actually illustrated it.

What is needed is neither a ban nor a free-for-all, but appropriately regulated multilateral research.

Moreover, as the authors themselves acknowledge, politicians and businesses face stronger incentives than individuals to grasp at excuses for delay in climate action. Their solution is often to ignore the problem or hope for the best, deflecting attention to the reasonable—but tangential—concern that more research is necessary to deter future decisionmakers, faced with serious climate impacts, from ill-informed efforts at geoengineering. Unfortunately, the record of solar geoengineering research in providing such practical guidance is poor, with most modeling-based studies presuming away a whole range of technical and political limitations and risks that would make the carefully designed and modulated interventions they consider impossible in practice.

More research of this sort risks reinforcing unrealistic expectations of the possibilities. The authors might retort that this is exactly why more experimental research should be undertaken. Sadly, while small-scale experiments might help us understand how particular chemicals will react in the stratosphere, they offer little scope to understand large-scale climate system responses, or to help accurately attribute climate effects to geoengineering interventions. As has been long recognized, the only experiments that could answer such questions would actually constitute global-scale long-term interventions.

But the central problem of Bodansky and Parker’s piece is not their limited and partial coverage of the literature, nor their “knowledge-gap” theory of research that overestimates the learning that could be achieved through more experimentation, but their presumption that the choice we face is binary. There is a middle way, in which research is conducted in ways that minimize the risks of mitigation deterrence through prior development of binding international governance standards and procedures, including requirements for appropriate advance public engagement. Advocates for geoengineering research need to stop attempting to dismiss the risks of mitigation deterrence, and accept the challenge to collectively design research processes that minimize those risks.

Research Fellow, Lancaster Environment Centre

Lancaster University, United Kingdom

A New Model for Research Teams

In “Time to Say Goodbye to Our Heroes?” (Issues, Summer 2021), Lindy Elkins-Tanton challenges the conventional wisdom on how we organize our research enterprises. She calls our current approach the “hero model,” where professors in subdisciplines control a pyramid of resources—mini-fiefdoms that end up vying for attention, students, and budget. This model has tended to disincentivize collaboration, encourage cutthroat competition for resources, and in the worst cases, facilitate bullying and harassment. Without collaboration, research tends away from interdisciplinary work, where many of the true breakthroughs in science and technology emerge.

Even more worryingly, the hero model has produced a personality-based environment, driving away many students who could have truly contributed. It might preserve the students who thrive in a highly competitive environment, but not necessarily the best or most creative scientists. It has helped suppress diversity and discouraged inclusion.

Instead, Elkins-Tanton, who is a colleague of mine, suggests that the research community could move toward a more team-based model, with multidisciplinary groups addressing big challenges in science and society. In order to solve big problems such as climate change, we need multiple skillsets and voices. Both she and I have seen this model work extremely well at NASA, where multidisciplinary teams have conceptualized and implemented missions that explore our solar system and the universe. Our most significant challenges require interdisciplinary work, and require us to include all voices.

Most research enterprises are aligned much the way universities have been organized for hundreds of years. To truly move science and technology forward, it is time to break this paradigm and rethink how we conduct our enterprise. Heroes can’t save us—we all need to be part of the solutions.

Under Secretary for Science and Research

Smithsonian Institution

In her thought-provoking essay, Lindy Elkins-Tanton urges her fellow scientists to “ask ourselves whether we are solving the biggest and most urgent problems, and whether we are lifting up our colleagues and the next generation to do the same.” At universities, the stark answer to this critical question is no. However, given the challenges facing all of us across the globe, we need to change our approach so we can answer yes—and we need to do it right now. The challenges are too complex, too impactful, and too urgent to continue as we are.

We need to change the way we do research, and Elkins-Tanton offers an indispensable framework: identifying questions, creating an interdisciplinary team, using seed funding, and making a professional project manager a key member of the team. She cites no loss of scholarly output from these changes; in fact, they provide the added benefits of increased speed of innovation, incorporation of goals not usually pursued, and a transformative change in culture.

Importantly, this framework motivates a focus on big questions that matter not only to scientists but also to people in the community, whom Elkins-Tanton invites to participate in the problem-formulation stage. By emphasizing expansive interdisciplinary teams, she places diversity and inclusion at the center of ethical and pragmatic science, where they belong. As she writes, “The collective future of humankind requires that we hear all the voices at the table, not just the loudest.”

The challenges are too complex, too impactful, and too urgent to continue as we are.

As a statistician, I would urge anyone considering implementing Elkins-Tanton’s model—and I hope many do, quickly—to include from the start robust assessment tools and the collection and analysis of data. Her proposed framework deserves a rigorous empirical understanding of what is working and why, so that the model can be improved with each iteration. The resulting evidence will also promote the model’s adoption.

Changing the way we do research, of course, cannot be achieved with the snap of our fingers. Elkins-Tanton alludes to the need to alter incentives around hiring, promotion, and tenure—issues that are often allergic to risk-taking, team-based projects and scholarship derived from societal needs. Fortunately, there is work being done to identify ways to evolve universities’ existing practices, as evidenced by a workshop I participated in led by the Meta-Research Innovation Center at Stanford, the conclusions of which appeared in an article by David Moher and colleagues in PLOS Biology in 2018, as well as in an article by Moher et al. in Issues the same year.

It will take strong leadership across all universities to evolve faculty incentives, but that work is worth it because until academics can answer Elkins-Tanton’s key question in the affirmative, we are not serving the true needs of humanity. We are serving only ourselves.

Executive Vice President, Knowledge Enterprise

Professor of Health Solutions, and Mathematical and Statistical Sciences

Arizona State University

In “Time to Say Goodbye to Our Heroes?” Lindy Elkins-Tanton not only asks and answers this critical question about our traditional academic structure, but pushes us to reevaluate the underlying value system and reward structure of the knowledge creation enterprise. She suggests that knowledge creation be driven by “big questions” rather than “big names.” I expect that the “heroes” themselves were originally motivated by such big questions, but the current funding structures and conservatism of review panels make it difficult to shift to the bigger, more complex questions asked by modern society.

As Elkins-Tanton also describes, research development is most innovative and fruitful when led by a diverse, creative, empowered team equipped with the opportunity and safety to bring its best ideas. I myself am a product of the traditional “hero” system, but only now—being solidly mid-career with tenure—am I able to realize the full potential of a collaborative and diverse team.

Research development is most innovative and fruitful when led by a diverse, creative, empowered team equipped with the opportunity and safety to bring its best ideas.

We work exclusively with this model in our research projects at Arizona State University’s Interplanetary Initiative, of which Elkins-Tanton is vice president and I am an associate director. And our experiment is working!

Since the hero model is increasingly in conflict with the societal shift toward teamwork, interdisciplinarity, and the inclusion of diverse voices, it’s time to broaden the scope of our experiment.

Here are a few opportunities for bringing these values to the wider academic system:

  • Review panels for any resource allocation should be double-blind when possible, such that the research questions and proposed experiment methodology are evaluated rather than the principal investigator. This has been shown to work in at least the few cases I am particularly familiar with, such as the time allocation process of the Hubble Space Telescope and some smaller grant programs within NASA and the National Science Foundation.
  • Universities’ promotion and tenure criteria should include an explicit evaluation of these values, so that people coming up the ranks with these newer research perspectives are able to reach positions of influence and promote the values-evolution process.
  • We need to teach students at the undergraduate level how to ask big questions and guide their own learning as part of interdisciplinary and diverse teams, while training people to be both leaders and collaborators. Our Interplanetary Initiative is now in its second year of offering a Technological Leadership bachelor of science degree, which is designed to do exactly this, aligning the next generation of learners with the needs of modern society.

There are many more changes, both systemic and specific, we need to make as our values in the knowledge creation enterprise shift away from the hero model. It won’t be easy, but it is necessary to ask and answer the big questions society faces today.

Associate Professor of Astrophysics, School of Earth and Space Exploration

Associate Director, Interplanetary Initiative

Arizona State University

When Lindy Elkins-Tanton asks if it’s “time to say goodbye to our heroes,” I respond: “Most definitely!” Her article focused on the social and productive benefits of teamwork, specifically mentioning NASA mission teams. As cochair of the National Academies of Sciences, Engineering, and Medicine’s committee charged with “Increasing Diversity and Inclusion in the Leadership of Competed Space Missions” proposed to NASA’s Science Mission Directorate, I’ve been following her teamwork approach—especially on the Psyche mission.

But I think it’s clear that the problem starts long before the time of graduate students and junior researchers that she mentions. I’ve been following the demographics regularly posted by the American Institute of Physics’ Statistical Research Center, which show that the big drop in participation in science, technology, engineering, and mathematics—the STEM fields—by historically underrepresented communities happens earlier along the career pathway. The “pinch-point” is somewhere between high school and the first couple years of college. It’s those 400-student Physics 1 classes where the “hero” culture hits home.

It’s clear that the problem starts long before the time of graduate students and junior researchers that she mentions.

True, many universities have moved on from the “chalk and talk” lecture mode. But despite increases in class demonstrations, group discussions, and the use of classroom response systems known as “clickers,” there’s still a culture of the person in the front (still most likely to be an older white man) knowing it all and telling you—perhaps with a well-meaning smile—the facts you need to memorize. Studies reported in the 1997 book Talking About Leaving: Why Undergraduates Leave the Sciences, by Elaine Seymour and Nancy M. Hewitt, showed that both women and men had similar negative reactions to such teaching, but the men tended to stay while the (equally capable) women tended to leave. A follow-up book in 2019, Talking About Leaving Revisited, showed that such issues persist, and extend beyond the factor of gender to race and ethnicity.

I fully respect the work of Elkins-Tanton and her Interplanetary Initiative. The much harder job will be changing the education system to increase the embarrassingly low (and demographically narrow) US per-capita production of STEM bachelor’s degrees, as shown in, among other sources, The Perils of Complacency: America at a Tipping Point in Science & Engineering, published in 2020. Achieving that goal will require not just saying goodbye to the heroes but also making serious national investments in education.

Assistant Director for Planetary Science

Laboratory for Atmospheric and Space Physics

University of Colorado Boulder

Climate Scenarios and Reality

Progress on the important issue of climate change requires a framework for evaluating the likely consequences of different courses of action. Science can powerfully inform public decisions on energy systems, infrastructure, and economic policy when researchers use the best available evidence to explore a range of possible futures through emissions scenarios. The process of constructing, describing, and using these scenarios is challenging for many reasons. The continued evolution and improvement of emissions scenarios is an important element of the future of climate-change research. But in “How Climate Scenarios Lost Touch With Reality” (Issues, Summer 2021), Roger Pielke Jr. and Justin Ritchie are wildly off base in declaring that the “misuse of scenarios in climate research has become pervasive and consequential—so much so that we view it as one of the most significant failures of scientific integrity in the twenty-first century thus far.”

Their characterization is wrong for three main reasons. First, the scenario developers and the Intergovernmental Panel on Climate Change have been explicit about the features of the scenarios and the limits on their relevance to specific applications. In particular, the high-emissions RCP8.5 scenario has long been described as a “business-as-usual” pathway with a continued emphasis on energy from fossil fuels with no climate policies in place. This remains 100% accurate, even if RCP8.5 does not appear to be the most likely high-emissions pathway.

The scenario developers and the Intergovernmental Panel on Climate Change have been explicit about the features of the scenarios and the limits on their relevance to specific applications.

Second, one of the main motivations for emissions scenarios is to provide a basis for comparing futures with and without policies related to climate change. Until recently, it has been reasonable to expect that a no-policy future would be a world of continuing high emissions and ongoing emphasis on fossil fuels, namely RCP8.5. As greater understanding of climate change spurs new policies and advances in technology, the notion of a no-policy world becomes increasingly abstract. But a no-policy endpoint remains an important point for comparison, even after the world has begun to diverge from the no-policy path. Referring to this no-policy endpoint as business-as-usual is imprecise, but it is not a significant failure of scientific integrity.

Third, at least part of the reason that the world is moving away from RCP8.5 and toward lower emissions is that effective communication of risks from a changing climate (and the unacceptable consequences to society of the business-as-usual scenario) has stimulated technology advances, incentives, and policies that now make RCP8.5 unlikely. Progress in tackling the risks of a changing climate, even if progress is still too slow, should be celebrated. It should not be converted into an implied failure of scientific integrity. Around the world, tens of thousands of scientists are working hard to understand the details of climate change and the risks it brings. The research tools are imperfect, and the future has many features that are unknowable. In this setting, the key to maintaining the highest standards of scientific integrity is maintaining commitments to professionalism and transparency, including continuing to fine-tune the development, use, and interpretation of emissions scenarios.

Perry L. McCarty Director of the Stanford Woods Institute for the Environment

Stanford University

President

National Academy of Sciences

“All models are wrong,” said the renowned statistician George Box, “but some are useful.” The same could be said of future predictions. Climate models have proved enormously useful and minimally wrong: they have captured the observed pattern and magnitude of human-caused global warming stunningly well. But they don’t even try to predict the future. Instead, they make projections: incomplete but informative pictures of possible worlds conditional on different carbon dioxide emissions scenarios.

I agree with Roger Pielke Jr. and Justin Ritchie’s statement that we shouldn’t call the high-emissions RCP8.5 scenario “business as usual,” and they are right to call for the climate community to end this sloppy wording. The world appears to be off that particular nightmare trajectory, but horrors still await us if we fail to rein in greenhouse gas emissions. We don’t know what the future holds, but we are clear that the biggest wild card is completely within our control. This is the message that emerges from the best available climate science, a complex and remarkable picture assembled from climate models; basic theory; observations of temperature, ice, precipitation, sea level, cloud cover, and many other variables; as well as reconstructions of past climate.

Climate models have proved enormously useful and minimally wrong: they have captured the observed pattern and magnitude of human-caused global warming stunningly well.

I was, however, saddened and confused by the authors’ contention that the use of RCP8.5 threatens the integrity of that science. Neither the most recent Intergovernmental Panel on Climate Change report nor the National Climate Assessment claims RCP8.5 is “business as usual,” but even an unrealistic scenario can yield interesting science if used appropriately. After all, we can do experiments in a climate model that we’d never be able or allowed to do in the real world. We can abruptly quadruple carbon dioxide in the atmosphere, return it to preindustrial levels, or increase it steadily by 1% every year. I am using RCP8.5 in my research right now—not because I believe it to be business as usual or our inevitable future, but because I am interested in what happens to the climate as Earth warms past temperature thresholds. There is not much difference between a world that passes 1.5 degrees Centigrade and eventually warms by three degrees and a world that exceeds that threshold on its way to something hotter.

Thousands of scientists use this scenario for other perfectly legitimate reasons: to understand signals of forced change against a background of natural variability, for instance, or to compare state-of-the-art climate models to earlier generations. They do so while facing constant criticism, much of which I worry is in bad faith. As Pielke Jr. and Ritchie note, “Groups such as the Global Warming Policy Foundation in London and the Competitive Enterprise Institute in Washington, DC, are highlighting the misuse of RCP8.5 to call into question the quality and legitimacy of climate science and assessments as a whole.” I think it’s wrong to claim that the existence of a high-forcing scenario compromises scientific integrity. But for some, it’s certainly useful.

Research Scientist

Columbia University and NASA Goddard Institute for Space Studies

Roger Pielke Jr. and Justin Ritchie make a number of provocative claims that deserve additional scrutiny.

Since the beginning of global climate modeling, scientists have been acutely aware of the need to maximize the ratio of climate change signals to the noise of chaotic internal variability. Two approaches are widely used. One is employing large-magnitude “forcings” (such as projecting abrupt increases of carbon dioxide concentrations by as much as four times current levels, or increasing carbon dioxide levels by 1% annually) to establish patterns of future climate change. The second is using wide spreads of storyline-based scenarios, where emissions and land use/land cover change as functions of varying underlying assumptions about energy use, economic growth, and other factors. These will hopefully bracket potential future changes and explore thresholds and non-linearities in the transient climate system response.

However, as climate models have become more comprehensive, creating coherent storyline-based scenarios for all relevant inputs has become more challenging. The increased coherence requires a substantial length of time (years) for the mostly unfunded volunteer economic and energy modelers around the world to create the input files for the climate modelers who, in turn, take another couple of years to complete the multi-model simulations and make the results available. It is thus neither remarkable nor surprising that the literature available for assessments such as those by the Intergovernmental Panel on Climate Change (IPCC) relies heavily on scenarios established a decade ago, including a high-emissions scenario (RCP8.5) that was originally described as “business-as-usual” in the event society made no efforts to cut greenhouse gas emissions.

Over time, assumptions underlying the storylines can become more or less plausible, and specific scenarios, more or less useful. This was true for scenarios devised in the early 1980s that didn’t envisage the success of 1987’s Montreal Protocol on Substances that Deplete the Ozone Layer in curbing emissions of chlorofluorocarbons or foresee China’s rapid industrialization. Notably, we agree that the concept of a business-as-usual scenario in today’s fast-moving policy environment is poorly defined—particularly for a general audience—though neither recent IPCC reports nor the National Climate Assessment use such terminology.

As climate models have become more comprehensive, creating coherent storyline-based scenarios for all relevant inputs has become more challenging.

Despite claims by Pielke Jr. and Ritchie, the use of a wide range of plausible scenarios is neither a blunder on par with misidentified cancer cell lines (an absurd claim) nor an issue of “scientific integrity.” Rather, the scientific community is already responding to the need for increased diversity and real-world grounding of projections, as well as new conceptual approaches. New scenarios are continually developed for many different purposes, for example, to assess the climate impact of the COVID-19 pandemic. Additionally, there is already movement to assess impacts based on the commonly projected “scenario-free” global warming levels of 1.5, 2, and 3 degrees Centigrade, and so on, which can be used broadly to quantify impacts for any new proposed scenarios.

Updates could be accelerated by institutionalizing scenario development and associated climate model input files. More focus on scenario-free analyses would also be useful. Certainly, increased communication between economic and energy modelers, climate modelers, and impact modelers is welcome. We stress that the use of a scenario such as RCP8.5 tells us nothing about whether the results depend on the realism of the scenario itself. Thus, assessing the worth of scientific contributions by counting which scenarios are mentioned is like assessing honesty by counting the number of times the word integrity is used in an article; it is both pointless and misleading.

Director, NASA Goddard Institute for Space Studies

Senior Climate Science Advisor to the NASA Administrator

Strategic Science Advisor, Earth Communications

NASA Goddard Space Flight Center

Note: Chris Field and Marcia McNutt’s letter has been updated to include a more complete quotation from the original essay by Roger Pielke Jr. and Justin Ritchie.

“Science and Technology Now Sit in the Center of Every Policy and Social Issue”

In January 2021, President Biden appointed sociologist Alondra Nelson, a leading scholar of science, technology, medicine, and social inequality, to be the first deputy director for science and society in the White House Office of Science and Technology Policy (OSTP). Issues in Science and Technology editor William Kearney recently spoke with her about her role in bringing social science expertise to federal science and technology (S&T) policy and the Biden administration’s goal to make that policy fair and equitable for all members of society.

Photo by Dan Komoda.

You were writing a book about OSTP before your appointment there, and you’ve followed the ways its role in federal science policy has fluctuated over the decades. President Biden immediately heightened its role, however, when he elevated his science advisor, the OSTP director, to his cabinet. What is the significance of that move?

Nelson: I started doing the research for the book because I found it such a fascinating office for somebody who is a student of science policy. In the 1970s, the OSTP was originally imagined to be a small shop, but what’s happened over the intervening decades is that science and technology now sit in the center of every policy and social issue. And so it only makes sense—when I track the evolution of this work with my academic’s hat on—that at this moment it would be a cabinet-level office.

“What’s happened over the intervening decades is that science and technology now sit in the center of every policy and social issue.”

In answering your question, it is also important to think about the current context. Every president faces profound challenges and a unique set of historical circumstances when they come into office. For President Biden, this was a once-in-a-century pandemic combined with a climate emergency—all in the context of a growing awareness of injustice and inequity in American society, and globally. Every dimension of national and international policy, from health and education, to security, to social welfare, and everything in between, has something to do with science and technology. There’s no way to tackle the major challenges and opportunities we face without engaging science and technology. From that perspective, and given the president’s commitment to having a government that is evidence-based and informed by science, it follows that this would be a cabinet-level position. I think that the fulfillment of the aspirations and values of the Biden-Harris administration is manifest in the elevation of OSTP’s directorship to the cabinet.

OSTP is still a small shop compared to big agencies, so how do you coordinate science policy across the entire federal government so that it aligns with President Biden’s goals and vision? Is that the job of OSTP?

Nelson: Strategy and coordination are part of OSTP’s founding mission. We work in parallel with, and administer, the National Science and Technology Council (NSTC)—about which I think not enough is known by the public—to coordinate interagency alignment with the administration’s priorities. NSTC was established in 1993 and there is now a nearly 30-year infrastructure for doing exactly the kind of interagency work you suggest. NSTC is doing work on critical minerals, advanced manufacturing, scientific integrity, STEM equity, algorithmic accountability, and many of the other big issues we face. There are interagency folks at the table, sitting with OSTP colleagues, working to create strategy and policy.

On the eve of his inauguration, President-elect Biden wrote a public letter to Eric Lander, whom he had nominated as OSTP director, tasking him with answering five big strategic science and technology policy questions. Among them was, “How can we guarantee that the fruits of science and technology are fully shared across all of America and among all Americans?” How are you trying to answer that question? What would success look like?

Nelson: The question President Biden posed to Director Lander in that letter suggests what is distinctive about this OSTP—and what I find really exciting about it. The question is the foundation of the Science and Society Division, which is a new division that I have the privilege of leading. Every day we are working with public servants, researchers and scientists, policymakers across government, and sectors of the American public to answer this question.

The goal is to build a science policy that intentionally and explicitly includes the perspectives of the American public, including seeing science and technology through the eyes of folks who are marginalized or vulnerable. This approach to policy views innovation as something that has been extraordinary and offered great progress and promise to some people, but has also sometimes come at the cost of harm and damage to other communities. And in this moment in which there is diminished trust in institutions and diminished trust in science, it means bringing S&T policy development out of the shadows. A phrase I often use is “showing our work.” For the government, that means being more transparent about the past, about what we’re doing in the present, and about our goals for the future. What you’ve been hearing in the language of the administration is an explicit effort to situate science and technology policy with democratic values, including inclusion, accountability, justice, and integrity. The challenge is to drive, design, and implement policy with those values always in mind.

“What you’ve been hearing in the language of the administration is an explicit effort to situate science and technology policy with democratic values, including inclusion, accountability, justice, and integrity.”

What would success look like? A STEM workforce that really looks like all of us, that reflects all of us, in the classroom and in the boardroom. Empowering new communities to be at the table of S&T policy. I think success looks like a public that feels that it can be engaged in the work of government; a lot of work we are doing in OSTP is conducting listening sessions and using other ways of engaging the public to help us think about the work we do. Success also includes a new set of rules of the road, such as an approach to innovation that is rooted in inclusion and scientific integrity. It means having a sense of responsibility to have aspirations, safeguards, and values in place that can help ensure that folks are not abused or discriminated against as new S&T comes online—to ensure, per President Biden’s question, that it really benefits all people.

You said there’s a need to be transparent about the past. What do you mean by that?

Nelson: The Biden-Harris administration has set out to pursue racial and economic justice in every facet of our work and to address head-on disparities and inequities that exist because of things that have happened in the past and continue to happen in the present. Disparities in medicine, health, and access to education didn’t just appear overnight; they congealed over time, one generation after the next, one injustice on top of another. Even those of us who might consider ourselves technophiles and science optimists grew up hearing stories of tragedies, and indeed horrors, in the past. The story that we hear most about is the Tuskegee syphilis experiments, which I often remind people was a project of the US Public Health Service, not something that just sort of emerged or was in the private sector. That was 40 years of government research.

We need to say that we know science and technology has not equally benefited all people. We stipulate that at the beginning. As I said before, in a context of low trust in government and institutions, it’s incumbent upon government, in a very profound way, to be forthright. If we are really going to be in service to the American public, we need to have some difficult conversations. I think from honest accounting we can move into truly innovative and mutually beneficial S&T policy and outcomes. 

“From honest accounting we can move into truly innovative and mutually beneficial S&T policy and outcomes.”

A couple of examples are the listening sessions, which I mentioned earlier, hosted by the Scientific Integrity Task Force. The task force was established through a memorandum from President Biden and was asked to recommend policies and practices that can prevent political interference in federal science, with the aim of restoring trust in government. Part of the work of the task force has been an accounting of lapses in scientific integrity as a necessary part of the process of suggesting a way forward. A second example is the Equitable Data Working Group that I cochair. This was established on the first day of the administration through an executive order on Advancing Racial Equity and Support for Underserved Communities Through the Federal Government. This group is attempting to identify and fill in demographic data gaps to help answer the question of whether or not government is doing its work equitably. We need to be honest that in many instances we couldn’t answer that question in the past because we didn’t have the data we needed to do so.

Almost 20 years ago you coedited a book, Technicolor, that challenged some common assumptions about the relationship between race and technology. What misconceptions persist about the so-called digital divide?

Nelson: I’ve been thinking about these issues for a long time. Technicolor was framed around early conceptions of the digital divide. A stereotype had emerged, a kind of false narrative about technological evolution, that held that progress had been forged largely by white scientists and technologists, white innovators, and white inventors, and that the other side of the coin was that people of color were somehow less capable when it came to technology. I think now we are a little more aware as a society that that framing is incorrect; there is a rich history of Black and brown scientists, inventors, and innovators who’ve achieved critical breakthroughs, often against incredible odds. In that early work, we were trying to surface some of that history and explore the idea that the digital divide, at its worst, can become this kind of self-fulfilling prophecy, a kind of fiction that people of color can’t keep pace in a high-tech world. We shouldn’t accept the notion that working-class people, or people who haven’t had certain kinds of educational benefits, are less competent in, less interested in, less passionate about, and less innovative in science and technology. We’ve got to think in different ways about the digital divide.

“We shouldn’t accept the notion that working-class people, or people who haven’t had certain kinds of educational benefits, are less competent in, less interested in, less passionate about, and less innovative in science and technology.”

In this moment what’s true and important about the digital divide is the extent to which it offers us a prism for understanding infrastructure inequality in the United States. Certainly, COVID-19 shined a light on a range of disparities, including the inability of many to get online to work remotely or to give kids access to schooling. I’ve been proud of what the administration has done to measure those disparities and to also try to address them. The National Telecommunications and Information Administration, which advises the president on telecom issues, published this incredible mapping tool where you can actually see the places and populations with more reliable or less reliable broadband coverage. The Biden-Harris administration is planning to invest $65 billion to connect Americans to high-speed internet.

How do we change the thinking about where innovation comes from?

Nelson: We know from the organizational behavior literature that it is diversity broadly—not just racial and ethnic diversity, but broad diversity of perspective and experience—that is one of the most significant drivers of innovation. When we are setting the conditions for innovation in science and technology policy, it is a shame if we are not also leveraging this one demonstrated driver of innovation. We need to get more people involved in the work of doing science and technology policy and, of course, science and technology research and development itself. The United States is this great lab of innovation, and we should be able to turn that innovation into products and practices that not only take on hard problems like climate change and pandemics but are also more equitable.

Do you see social science becoming a bigger part of the policymaking toolkit?

Nelson: I certainly hope so. This in part is why I am at OSTP. To go back to our earlier conversation, many of the tools that we need for robust government—tools for understanding the lived experiences of the American public; for assessing the equitable, successful delivery of government services; for identifying demographic trends in the economy, labor, and STEM professions; for applied data science across pressing policy areas—come from social science. How do we assess whether or not programs are serving intended communities? Is this federal program serving hard-hit communities in low-lying lands that are more likely to be exposed to climate change? That question, and many others, are empirical questions that can be answered when we apply social science concepts to qualitative and quantitative data. The answers we generate can then inform policy.

I think that as government becomes more analytical, it is very important to have social scientists at the table. One of the most important reasons is because we think about answering questions with different kinds of data, produced using both quantitative and qualitative methods. And as much as the technical analysis matters, policymaking is always going to involve that social piece, that human piece, that historical piece. I hope a new way of thinking about not just S&T policymaking but policymaking more generally can be found in social science, which helps us see tensions in society, map and understand them, reconcile them, and recommend changes more conducive to equitable experiences and outcomes among all members of society. I believe as a scholar and researcher, and as a policymaker, that social science evidence, at its best, really can point us to better policy solutions.

How do you communicate to the public the urgency of climate change or other pressing issues in the midst of a still overwhelming pandemic?

Nelson: One of the lessons of COVID-19 is that, in some way, we all became social scientists. It is this moment, I think, in which all of us had to come to terms with the profound complexity of the challenges that we face right now, and in the coming years. There were times in the pandemic when all of us became armchair epidemiologists, making risk assessment calculations for our families, for our neighborhoods, for our workplaces and schools. 

“As government becomes more analytical, it is very important to have social scientists at the table.”

At the same time, the science and technology around the pandemic was extraordinary: we decoded the genome of the virus in a month or so, we had a vaccine in less than a year. Yet we realize we have not conquered it. It has not been for lack of science and technology that we have not conquered it, but because of the environment in which that science and technology emerged—these are profound social questions. And when it comes to climate change, we’re living in a time where the impact is acute, it’s urgent and existential. I want to believe that all of us in the American public are learning to face up to the complexities of climate change, and the pandemic may have primed how we think about it. I hope that presents some opportunities for courageous possibilities for both domestic and international climate change policy and for pandemic preparedness.

Is there anything else you would like Issues’ readers to know about President Biden’s science policy priorities?

Nelson: I would like your readers to know that the federal R&D budget for the 2023 fiscal year not only puts a priority on cutting-edge science and technology, but it also puts a priority on innovation for equity. We’re proposing a new kind of social compact for S&T policy, in which it is pursued in the context of the social ecosystem it sits in, with a greater awareness of whom it’s supposed to benefit—and how.

A Revolution for Engineering Education

Kudos to Sheryl Sorby, Norman L. Fortenberry, and Gary Bertoline for trying to foment “humanistically” a revolution in engineering education. In “Stuck in 1955, Engineering Education Needs a Revolution” (Issues, September 13, 2021), they call for ending the “pipeline mindset.” Their article aligns with descriptions of structural education problems—and proposed solutions—in Educating Scientists and Engineers: Grade School to Grad School, produced in 1988 by the Office of Technology Assessment (OTA) and presented to the House Science Committee. It noted:

  • The pipeline is a model of the process that refines abundant “crude” talent into select “finished” products as signified by the award of baccalaureate, master’s, and doctorate degrees.
  • The pipeline model still treats the educational process as a black box: a dwindling supply of talent, its composition in flux, that is sorted and guided toward future careers.
  • To the extent that the education system unduly limits the talent pool by prematurely shunting aside students or accepting society’s gender, race, and class biases in its talent selection, it is acting out a self-fulfilling prophecy of demographic determinism.

Unfortunately, the pipeline metaphor persists to this day. Yet so does a fundamental policy prescription that OTA identified: “The skills of scientists and engineers must be both specialized enough to satisfy the demands of a stable market for science and engineering faculty and industrial researchers and general enough to qualify degree-holders for special opportunities that arise farther afield from their training but grow central to the national interest.”

What was compelling to the OTA project team back then is even more so today: the more “semi-permeable” the nation’s talent development pathways, the hardier and more inclusive engineering education and the workforce will become.

Independent Consultant

Savannah, Georgia

Cognitive Ecosystems

Braden R. Allenby’s article, “Worldwide Weird: Rise of the Cognitive Ecosystem” (Issues, Spring 2021), is timely as we rush to build the cyber-human world. Cognitive ecosystems have always existed, as Allenby illustrates with the example of Edwin Hutchins’s observations of Micronesian navigators. The difference between the old cognitive systems and the new is that the old were mainly local, and the control of resources and knowledge was also local. The printed word, the industrial revolution, and colonialism produced dramatic changes to the cognitive ecosystem over the past 400–500 years. Allenby describes the cognitive ecosystem of the future taking place around us as a continuation of the trajectory of increasing complexity of techno-human systems. He emphasizes the difficulty in perceiving the challenges that this new direction entails. Emergence is inherent in any complex adaptive system, but scale multiplies techno-human systems and complexity over time.

Since the industrial revolution, scaling, power amplification, and efficiency have been primary drivers of development. As we scale, complexity increases and the need for control increases, with lack of predictability leading to nonlinear effects. The sociologist Charles Perrow has warned us that complex designed systems will lead to emergent failures embedded in the design that were unknown to the designers. The challenges become unfathomable for open systems with lots of “intelligent” black boxes built in and for a distributed cognitive ecology. Who is building them?

Emergence is inherent in any complex adaptive system, but scale multiplies techno-human systems and complexity over time.

An emerging model is China’s social credit system, which seeks to shape the cognitive ecology as ordained by the party. Elsewhere, tech giants and other entities determine our ecological direction, primarily for profit. In both cases, the systems are leveraging technology to consolidate and centralize data on the physical world and citizenry, its processing and memory afforded by the scalability of the techno-cognitive ecosystem.

Allenby points out that citizens and institutions are not oriented to absorb this mass-scale, rapid evolution of the cognitive ecosystem—Alvin Toffler’s “future shock.” Cognitive technologies enhance centralization, at the cost of reshaping local structures and making them less independent. The loss of local newspapers weakens the local cognitive ecosystem. Consolidation of power is inevitable when scaling is made possible through technology for physical or calculative power. The real question Allenby raises is whether the United States understands this well enough to compete to preserve the power of the people while not losing to China in its march to consolidation of power in an authoritarian cognitive ecology.

Technology facilitates scaling, in turn producing consolidation of power that leads to loss of local cognitive autonomy and ecology. American democracy was envisioned to flourish by providing a space for democratic experimentation. If that spirit is lost to this new consolidation of power, the United States will in effect become no different from China, with a different illusion of harmony—not of fear but of unconscious subjugation. Without the democratic ability to shape this cognitive ecosystem, it will only consolidate existing social and national power relationships rather than deliver the imagined freedom that the computational cognitive ecosystem promised. The centralization of power in this cognitive ecosystem to the state or corporate structures will be the end of social democratic innovation in a democracy.

Rephrasing Allenby’s challenge: the question to be answered is how we design institutions that check the consolidation of social power and preserve the innovative and adaptive local cognitive ecosystems, without loss of freedom, while taking advantage of the global cognitive ecosystem. Justice Louis Brandeis is speaking to us again, warning of the consolidation of power in democratic societies.

Research Professor

Engineering Research Accelerator

Engineering and Public Policy

Carnegie Mellon University

Beyond Trust in Science

In “Trust in Science Is Not the Problem” (Issues, Spring 2021), Alan I. Leshner urges scientists to stretch outside their comfort zones to regularly engage with the people who are paying the bills (taxpayers and their elected representatives), and who have some questions. A skeptical habit of mind is normally highly valued by scientists, who are trained to wield skepticism with the precision of a scalpel, and to disdain those with lesser skill sets. I think it’s fair to say that some scientists are disdainful of nonscientists; nonscientists pick up on that, and they don’t much like it. The science community should take a pledge to stop criticizing—or, worse, condemning—nonscientists who are actually just acting like scientists, asking questions, expressing skepticism.

Not all those who are asking questions, criticizing science, are eager to learn or change. Many are not! But some are open to engagement, and that’s where the opportunity lies.

The science community should take a pledge to stop criticizing—or, worse, condemning—nonscientists who are actually just acting like scientists, asking questions, expressing skepticism.

I agree with Leshner that instead of asking the public to change, we should expect, and empower, the science community to make some changes. There are science societies and foundation-funded programs that are doing some important work, helping interested members of the science community learn how to effectively engage the public. It’s time to take these initiatives and more to scale, and to learn as we go, just as in any new field of scholarship and pedagogy. Let’s incentivize academia to modernize the training curriculum for graduate students to include public engagement and communication. Teaching these skills and expecting evidence of competence is important. So is including public engagement activities in promotion and tenure reviews. These are important steps to speeding accomplishment of the goal of earning public confidence and trust on a sustained basis.

Let’s require federally funded science training to include a public engagement component. (Who could make that happen? An individual university could, federal agencies could, or Congress could.) Over time, generations of scientists will be empowered to encourage—rather than discourage or scorn—public engagement by their peers; scientists will welcome skeptical questions from nonscientists and will model the scientific process by stimulating more questions. More and more effective public engagement by scientists will also underscore the power of science to add value to all our lives.

President and CEO

Research!America

Principles for US Industrial Policy

In “Design Principles for American Industrial Policy” (Issues, Spring 2021), Andrew Schrank calls for new design principles with which to anchor new innovation and industrial policies. To have a sustained positive impact, he notes, those policies must create a wide coalition of actors supporting them. This is a truly important insight, and Schrank has demonstrated it across an array of policies over several decades. It is clear that the United States will need to heed those lessons and build new policies along the lines of the targeted-universalism design principles he favors.

Where I would add to Schrank’s contribution is by focusing on the sociopolitical ideals with which we should employ those design principles. After 50 years of growing inequality and decreasing social mobility, the United States has a dual window of political opportunity. The reality is that the majority of Americans now face significant economic insecurity and fear for their future and the future of their children. Hence, Americans want a stronger, but also fairer, nation where everyone has a real shot at the American dream.

For that reason I argue that the United States employ distribution-sensitive innovation policies (DSIPs) as its sociopolitical design principle. DSIPs are designed to reach the dual goals of increasing economic growth while enhancing economic distribution. Amos Zehavi and I have examined such policies in multiple countries of the Organisation for Economic Co-operation and Development, and our findings dovetail with Schrank’s insights. DSIPs can be successful, but their survival depends on crafting a political logic that addresses current political needs and creates a constituency that welcomes their efforts, becoming politically mobilized to ensure their survival.

In his article Schrank mentions two modes of DSIPs: those aimed at low-skilled manufacturing workers and those aimed at the economic periphery. He shows how such programs—for example, the Manufacturing Extension Partnerships, based at the National Institute of Standards and Technology—have achieved their policy goals, but only by creating and mobilizing a political coalition to support them. Let me offer two other domains of DSIPs to consider.

Minorities. Governments intent on better integrating members of disadvantaged minorities into the workforce tend to focus on the low end of labor markets. But real progress happens when minorities get into the growing and innovative sectors of the economy. It is not enough to get disadvantaged minorities into STEM education; it is also necessary to get them into innovative activities in technology-intensive workplaces. Minority group pioneers can play a critical role by serving as role models in their communities and by becoming nodes in social-professional networks that help future generations navigate the world of technology-intensive industries. Further, the success of such programs creates its own newly empowered political supporters.

People with Disabilities (PWDs). With the United States’ rapidly aging population, the percentage of PWDs is constantly rising. Alarmingly, labor market participation rates for PWDs are very low. More than ever, new technologies hold the promise of better incorporating PWDs into the workforce. Governments can help by pushing for their development and implementation. As the political battles around Medicare demonstrated, older people comprise one of the nation’s strongest political forces, and more and more of them are becoming PWDs.

Schrank has powerfully demonstrated the need to apply targeted-universalism as the core design principle. At the same time, it will be important to ensure that more and more people can actively participate in the economy and fulfill the American dream.

University Professor and Munk Chair of Innovation Studies

University of Toronto

Codirector, CIFAR’s program in Innovation, Equity & The Future of Prosperity

Author of Innovation in Real Places: Strategies for Prosperity in an Unforgiving World (Oxford University Press, April 2021)

Andrew Schrank compares two ill-fated federal industrial policy programs to three others that are still alive and kicking. He compellingly argues that the difference between the failure of the former and the success of the latter was not in their economic effectiveness, but in their political viability. Contrary to the common wisdom that programs that evade attention are the most resilient, Schrank argues that the path to political viability depends on the different programs’ ability to foster broad constituencies.

Building broad constituencies requires federal programs to adopt a “targeting within universalism” design in which universalism guarantees that all relevant program clients get something and the economically least-developed are targeted to receive a disproportionately higher share of funding than others. However, unlike in social policy programs, targeting in industrial policy is not required to further essential program goals, but to acquire the support of actors—often the representatives of economically weaker states—that would hardly benefit from program allocations awarded according to purely competitive criteria.

While I readily agree with Schrank that universalism is required to build broad support for programs, I wonder whether targeting is necessary from a political perspective. It is likely that as long as a state receives an equal share of program funding, it will extend its support for the program. Hence, targeting—that is, allocating outsized shares to the less-developed—is unnecessary, at least from a political standpoint.

However, as Schrank duly notes, industrial policy is not exclusively about promoting industry competitiveness. In an era of rising inequality in general, and rising spatial inequality more specifically, governments are seeking ways to narrow the gaps and jump start economic development in “left behind” regions and towns. While it is true that return on government investment tends to be higher in economic powerhouses such as California, from a social equity perspective investing in less-developed Arkansas is the higher priority. I would argue therefore that the rationale for targeting (within universalism) is primarily furthering social goals. All states should benefit from funding to create a broad constituency; targeting is required to address growing social inequities.

Regardless, Schrank’s broader message that for industrial programs to succeed they must expand their constituencies is apt. Indeed, following this reasoning, for industrial policy programs to gain and retain political viability they should be designed to be inclusive. For instance, engaging unions in these programs—as is done, for example, in Germany—could bring a significant new constituency into the fold.

Of course, doing this, or more generally initiating new programs, is no mean task in today’s politically polarized age. Nevertheless, the Senate’s recent passage of the $250 billion US Innovation and Competition Act offers hope that given economic challenges (think China) and widespread social plight, industrial policy is on the rise again. Schrank offers sound advice about program design principles that if followed would increase the likelihood that these new programs would survive and thrive in the coming decades.

Chair, Department of Public Policy

Department of Political Science

Tel Aviv University (Israel)

Associate Program Director, CIFAR’s program in Innovation, Equity & the Future of Prosperity

Andrew Schrank makes a series of excellent points about the contemporary industrial policy discussion in the United States. I have considerable sympathy for what he says and regard the worries that he addresses concerning the ability of the US political system to design an effective set of policies benefitting American businesses and their workers to be of central contemporary political importance. I offer two thoughts in reaction to his argument.

First, how uncompetitive are American companies that are still in business? The US decline in manufacturing employment has to do with the emergence of China and with secular improvements in productivity, more with the former than with the latter. The United States is still one of the largest manufacturers in the world. Loss of industries, firms, and employment due to the China shock has left us with a surviving manufacturing sector that is relatively lean, and fairly competitive. It is just that this surviving, relatively competitive manufacturing sector does not generate a great deal of employment. It would be important to know whether Schrank wants policies that will make these already competitive companies even more competitive, which would benefit existing companies but likely have only modest employment effects, or whether he wants to create more companies so that more people will be employed in manufacturing. Industrial policy will address the first problem, but it’s not clear that it is the right tool for the second.

Second, if multiplier effects from manufacturing on the generation of jobs are a key reason to be concerned with the health of manufacturing, wouldn’t it be important to generate an industrial policy that paid explicit attention to the ways in which employment and competitiveness are entangled across manufacturing, service, financial, and even agricultural domains? In some ways, Schrank’s premise—the nation needs a politically feasible but perhaps not so efficient industrial policy—is undermined by his focus on manufacturing alone. Industrial policy is a targeted program, not a universal one. By his own analysis, this is likely to generate opposition from those domains that are not targeted. Schrank wants those drawing up industrial policy plans to take the tension between universality and particular benefits into account, but there are a variety of universals and particulars in play. Is he focusing on the right ones?

Paul Klapper Professor in the College and Division of Social Science

Department of Political Science

University of Chicago

Democratizing Talent and Ideas

The new National Science Foundation director, Sethuraman Panchanathan, or Panch, as he encourages us to address him, shows in his Issues interview (Spring 2021) why he was chosen to lead NSF at this challenging time for the agency and the country. He has the ideal background, vision, energy, and passion to take on the task. That will all be tested as NSF moves into new territory—not uncharted, but not quite traditional either.

At a time when American leadership, its economy, and indeed the future well-being of its people are being challenged as never before, the nation’s political leaders are turning to NSF to play a particularly important role. They are asking for it not only to “promote the progress of science,” as the beginning of the agency’s mission statement reads, but to ensure that scientific discoveries and inventions are put to use by supporting translation to industrial application. Concerns have been raised about whether this is a proper role for NSF, whether this new responsibility will erode NSF’s tradition of excellence in supporting basic research in most nonbiomedical areas of science and engineering, and whether NSF can deliver.

The proposed bipartisan, bicameral Endless Frontier Act (renamed the Innovation and Competition Act), sponsored by Senate Majority Leader Chuck Schumer of New York, Senator Todd Young (R-IN), and Representatives Ro Khanna (D-CA) and Mike Gallagher (R-WI), is unprecedented in its proposed funding and bold challenges for NSF, including the creation of a new directorate focused on technology and innovation. It represents a major step up in NSF’s funding, responsibilities, and expectations on the part of Congress. 

On the House side, the proposed National Science Foundation for the Future Act is also bold and contains many of the features of the Senate counterpart, but with more emphasis on traditional research programs and on education and human resources. President Biden has similar aspirations for NSF and is proposing a 20% increase for the agency for fiscal year 2022, which, in part, will also fund a new directorate.

At this moment I would not venture to predict the outcome. But it’s clear that NSF is likely to be challenged to expand the scope of its activities—hopefully, with substantial additional funding. So, the questions are apt: Why NSF? And, can NSF do this job?

Serving as NSF’s tenth director, I was privileged to see firsthand how its program managers and support staff work so effectively and efficiently to get the most out of the agency’s relatively small budgets. I experienced the benefits of advice and cooperation from the National Science Board, which shares policymaking authority with the director. And I had a chance to study how NSF, over seven decades since its founding, has been able to adapt its programs to changes in the science and engineering disciplines and in requirements for new experimental facilities and research modes, and also to incorporate the most effective approaches to improving STEM education and inclusiveness. And it has done this while continuing to fund the most meritorious basic research proposals, using expert peer review. I am confident that NSF can do the job.

As to why NSF? I don’t see any other independent agency that could better do what is being asked of NSF. And there is no time to create one. Congressional leaders believe that a lead agency is needed and they have turned to NSF to play that important role. But I want to be clear on this: the challenge the United States faces in the coming decades, primarily from the rapid rise of China, is larger than what any one agency can do.

Fortunately, the federal government has many mission agencies that support excellent research, and it will be necessary for all of them—the Department of Energy, the National Institutes of Health, the National Aeronautics and Space Administration, the National Institute of Standards and Technology, the National Oceanic and Atmospheric Administration, the Defense Advanced Research Projects Agency, and others—to be given additional funding so they can prioritize and coordinate their research activities in support of President Biden’s list of charges to his science advisor and Office of Science and Technology Policy director Eric Lander, who now sits on the president’s Cabinet.

Senior Fellow, Rice University’s Baker Institute for Public Policy

Former Director, National Science Foundation, 1993–1998

When asked to comment on the interview with National Science Foundation Director Sethuraman Panchanathan, I paused because I felt he had covered the subject so brilliantly and comprehensively that there was little I could add. Instead, I decided to focus on ways in which synergies within the “research triangle” (academia, industry, and government) amplify advances in science and technology to meet national objectives. Vannevar Bush, the architect of postwar US science policy, and Arthur Bueche, the influential head of research and technology at General Electric, were not only early advocates of building synergies within the triangle; they also personified the pursuit of these synergies in their own careers.

NSF nurtures synergistic research through interdisciplinary collaborations that draw bright graduate students from all over the world to US universities, where they pursue doctoral degrees in science and engineering under highly recognized faculty, many of foreign origin.

What Bush and Bueche would find amazing if they were alive today is the extent of innovation and entrepreneurship now taking place within the research triangle. Many universities have established centers to teach innovation by both learning and doing, foundries for rapid prototyping and testing, start-up centers for enterprise development, legal counseling for preparing and filing patents, and university-managed research parks for nurturing start-ups and attracting venture capital.

Technically aligned companies now locate development centers in proximity to these universities not only to gain access to unique research instruments and facilities but also to recruit top talent.

Government agencies play key roles in advancing science and technology developments within the triangle. They do so not only through their own laboratories but also through federally funded research and development centers, university-affiliated research centers, and cooperative research and development agreements by which they share their facilities and expertise with private companies to aid them in new product developments. Also included are industry technology development clusters, corridors, and parks in proximity to Department of Energy and Department of Defense laboratories and to NASA research centers.

By examining the types of collaborative science and technology clusters in the United States, one can appreciate the many ways in which entrepreneurial push can join with commercialization pull to build bridges across the so-called valley of death between R&D activity and commercial use. If one examines the distribution of these centers and clusters among the 50 states, one finds fewer than five states not significantly represented.

Finally, it is important to delineate the difference between “incubators” and “concentrators.” All of the examples mentioned above are incubators of scientific discoveries, new technologies, and economic growth. Concentrators exist primarily in large metropolitan areas that attract rapidly growing innovative enterprises because of their proximity to supply chains, transportation hubs, air- and seaports, markets, and large-enterprise services (business, legal, and financial).

Economic growth concentrates as it migrates from distributed incubators to regional concentrators. It doesn’t follow that increasing the geographic distribution of incubators will yield a greater distribution of concentrators without substantial infrastructure investment.

David A. Ross Distinguished Professor of Nuclear Engineering Emeritus

Purdue University