Uncovering Hidden Bias in Clinical Research

Check the end of any recent study, and there will be a list of study funders and disclosures about competing interests. It’s important to know about potential biases in research, but this kind of transparency was not always the norm. Understanding bias in research and helping policymakers use the most reliable evidence to guide their decisions is a science in itself.

Lisa Bero, a professor at the University of Colorado Anschutz Medical Campus, has been at the forefront of understanding how corporate funding biases research and how to assess what scientific evidence is reliable. She talks to host Monya Baker about her investigations into the tobacco and pharmaceutical industries, techniques industries use to shape evidence to favor their products, and the importance of independent research to inform policy.

Transcript

Monya Baker: Welcome to The Ongoing Transformation, a podcast from Issues in Science and Technology. Issues is a quarterly journal published by the National Academy of Sciences and Arizona State University.

Check the end of any recent study and you’ll see a list of funders and disclosures about competing interests. It’s important to know about potential biases in research, but this kind of transparency was not always the norm. In fact, helping policymakers use the most reliable evidence to guide their decisions is a science in itself.

My name is Monya Baker. My guest today, Lisa Bero, has been at the forefront of understanding how corporate funds bias research and how to assess what scientific evidence is reliable. Her investigations of the tobacco industry reveal techniques that industries use to shape evidence that favors their products, and her analyses of statistical indicators of bias in drug-sponsored research have caused journals to change publication practices. She’s a professor at the University of Colorado Anschutz Medical Campus.

Lisa, welcome!

Lisa Bero: Happy to be here.

Baker: Tell me about how you started studying bias in clinical research.

I realized that a lot of policy solutions didn’t seem to align with what we were studying in the lab, and we had the potential for all these great breakthroughs, and they weren’t getting translated into policy.

Bero: So, my PhD is in pharmacology and I was working in a lab doing studies of drugs, trying to figure out the mechanism of action for addiction. And I was also, at the same time, volunteering at places that serve people suffering from addiction. And I really got a bit disillusioned because I realized that a lot of policy solutions didn’t seem to align with what we were studying in the lab, and we had the potential for all these great breakthroughs, and they weren’t getting translated into policy. So, I did start a postdoc in my basic science area but then I switched and did a postdoc in health policy and epidemiology. And frankly, that really changed my life because then I really wanted to shift to how we use evidence better to make informed health policy decisions.

But unfortunately, I got even a little bit more disillusioned because I realized that the evidence itself could be manipulated and that, if we wanted to have good evidence-based decisions, the first thing we needed to do was have good evidence and we needed to make it understandable to policymakers. So, that set me off on my entire career looking at bias in evidence and how we better package it so people can understand it and use it.

Baker: Some of your early work was on tobacco and on drug-sponsored symposia. Tell me a little bit about what those are.

Bero: My first paper really looking at bias in a body of evidence was looking at drug studies. There were all of these sort of quasi-clinical trials being published in what were called symposia. So, they were usually attached to peer-reviewed journals; they often said they were peer-reviewed articles; they looked like peer-reviewed journal articles. But when we started to investigate them, we realized that often they weren’t peer-reviewed, at least not to the same standard, and there was very much a lack of balance in these articles. So, they might say something like “new breakthroughs in anxiety,” but only one drug for anxiety was ever examined and drugs weren’t compared to each other. There were all sorts of problems with how the studies were done that made them look very much in favor of whatever drug the symposium was about.

Drug companies would pay to get the symposium published and then use it for marketing.

So, we published that, saying that these symposia were basically biased and, a few years later, actually, many journals stopped publishing these symposia because they realized that they were actually being used for marketing purposes and they were just… Drug companies would pay to get the symposium published and then use it for marketing. And then I was literally at some reception and somebody came up to me and said, “Oh, you think drug studies can be biased, you really ought to look at tobacco.” So, I did. I started looking into, particularly, studies on the harms of tobacco and second-hand smoke.

Baker: Tell me a little bit about the tobacco papers.

Bero: So, when I was researching tobacco, I was doing what we call these meta-research studies. So, we look across a body of evidence on tobacco and then we can detect these biases. What that tells you is that there is a bias towards a finding in favor of the industry, but it doesn’t actually tell you how that bias comes about. And it was in the early 1990s that a whistleblower at a tobacco company started to release documents that revealed how the companies were manipulating research. And those were such a breakthrough because, like I said, we could detect the bias but we couldn’t actually see how it was done.

And then the Master Settlement Agreement between the US attorneys general and the tobacco companies came along, in 1998 I think it was. And basically, part of that settlement agreement said that all of the documents that were reviewed in lawsuits against the tobacco companies had to be made public. And that was an avalanche of information and it was amazing. They would say things like, “Oh, if such and such a study that we’re sponsoring does not find a statistically significant result, we’re not going to publish it.” So, basically the decision to publish a study was predetermined by the finding. There are also lots of documents on how they courted researchers by paying them lots and lots of money, but they would cut them off and not pay them anymore if they started publishing findings that were not favorable to the industry.

The tobacco industry documents also showed how the companies worked together to do this. So they weren’t just working alone, they formed whole research organizations.

And the tobacco industry documents also showed how the companies worked together to do this. So they weren’t just working alone, they formed whole research organizations like the Center for Indoor Air Research to drive research that found results favorable to tobacco, in this case, second-hand smoke. So the Center for Indoor Air Research, they published a lot of research on basically whether green plants in your office were good for you or carpet off-gassing was bad for you. Interestingly, though, they didn’t publish research on the harms of tobacco. We found out they had a separate, internal program focused on studying those harms, but those studies weren’t published in the peer-reviewed literature.

Baker: It just sounds antithetical to your goal of making sure that there was good evidence to be used to guide policy. Were you surprised?

Bero: Well, we were surprised, and I did a lot of studies looking at, basically, the public commentary that companies would submit on proposed regulation. And in this commentary, you would find all this science, they were quoting tons and tons of scientific articles, refuting independent studies, refuting Surgeon General’s reports and this and that, and you’re like, “Where do these copious numbers of studies come from?” And then you dig around and you find out, “Oh, well, they come, actually, from the industry themselves.” And they may even be in the peer-reviewed literature, or they may be a study that wasn’t published in the peer-reviewed literature but that they say represents a consensus of scientists.

And so that’s why, actually, all of this evidence was being produced: it was so industry could use it to protect their product from regulation. And, really, the public health community, they weren’t submitting a lot of counter evidence to the companies. The companies had a lot more money to produce and submit that evidence but, still, the public health community needs to respond and I think we’ve learned that. We’ve learned that we need to counter all these industry-sponsored studies.

Baker: So, speaking of just copious studies, what is the work to look at all of these together? Tell us about meta-research, systematic reviews, how that plays into bias.

Bero: So, a systematic review is when you try to identify all of the studies that have been done on a particular topic. And then once those are identified, you extract data from those studies and you do what’s called a risk of bias assessment, which is basically to assess each individual study for methodological biases, and then these studies are summarized some way. And if it’s quantitative, it’s called a meta-analysis but they can also be qualitatively summarized. So, the key to systematic review is it doesn’t just look at one study or two in isolation, it looks at all the relevant studies on a topic and it includes this risk of bias assessment. So, that’s why that method can not only give you an answer to whatever the question is, but it can also identify bias across a body of evidence.
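To make the quantitative step concrete, here is a minimal sketch of the pooling a meta-analysis performs, using the common DerSimonian-Laird random-effects method; the effect sizes and standard errors below are hypothetical, purely for illustration, not data from any study Bero mentions.

```python
# A minimal random-effects meta-analysis sketch (DerSimonian-Laird).
# Each study contributes an effect on the log scale (e.g., a log risk
# ratio) and a standard error; the numbers here are hypothetical.
import math

studies = [  # (log_effect, standard_error)
    (0.26, 0.12),
    (0.10, 0.20),
    (0.35, 0.15),
]

# Fixed-effect (inverse-variance) pooled estimate
w = [1 / se**2 for _, se in studies]
fixed = sum(wi * y for wi, (y, _) in zip(w, studies)) / sum(w)

# Between-study variance (tau^2), DerSimonian-Laird estimator
q = sum(wi * (y - fixed) ** 2 for wi, (y, _) in zip(w, studies))
c = sum(w) - sum(wi**2 for wi in w) / sum(w)
tau2 = max(0.0, (q - (len(studies) - 1)) / c)

# Random-effects pooled estimate and 95% confidence interval
w_re = [1 / (se**2 + tau2) for _, se in studies]
pooled = sum(wi * y for wi, (y, _) in zip(w_re, studies)) / sum(w_re)
se_pooled = math.sqrt(1 / sum(w_re))
lo, hi = pooled - 1.96 * se_pooled, pooled + 1.96 * se_pooled

print(f"pooled risk ratio {math.exp(pooled):.2f} "
      f"(95% CI {math.exp(lo):.2f} to {math.exp(hi):.2f})")
```

The risk of bias assessment Bero describes happens before this step; studies judged at high risk can be excluded, or the pooling rerun on subsets as a sensitivity analysis.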

The outcomes of these studies only differ by who the sponsor is; they’re not different in the way they were conducted or in their methods. And so, there’s some other bias going on.

So, we’ve done a lot of studies in nutrition, for example, looking at studies that have been sponsored by sugar companies or cereal companies or dairy companies, comparing them to other studies and we see that the industry studies have outcomes that favor the sponsor’s product even when you control for other risk of bias in the study. The outcomes of these studies only differ by who the sponsor is; they’re not different in the way they were conducted or in their methods. And so, there’s some other bias going on.

Baker: I’m going to go a little bit on a tangent here because it’s not just corporate interests that can bias work. There’s also some problematic academic research that’s been published, and uncovering that is also something you have done a lot of work on.

Bero: Yes. And if you look at any individual study, you can usually detect biases. And so, that’s why this meta-research is very important, because you look across a body of evidence. That’s when you can detect that there’s an industry bias because, as I mentioned, the only thing that seems related to the outcome is who is sponsoring it. And so, if you look at other biases that academics might have, for example, a strong intellectual view on something or a track record or personal experience in a particular area, those aren’t in a single direction like the industry bias is, which is in favor of their product. And so, you may have researchers who have different views on weight loss, for example, but they’ll all be different and so it doesn’t have this driving force like the industry research.

The other thing—and we’ve studied this—is academic researchers, if they’re good academic researchers and a lot of them are, they may change their mind over the course of their career. They may find that their hypothesis isn’t panning out and so they shift and do something else and that’s quite common as well. And you don’t see industry sponsors doing that, they stick with their hypothesis.

Baker: Tell me a little bit about your work at Cochrane and maybe start by explaining what that is.

Bero: Yeah. So, Cochrane is an international organization that publishes systematic reviews and other types of evidence synthesis and has really set the standards, over the last 20 years, for how you do systematic reviews. One of the things that Cochrane has been very concerned about is we apply these tools to assess risk of bias to the studies that we include in the review but, if the study isn’t real or if there are problems with the data that can’t be assessed by reading the study or looking at the data, then we wind up synthesizing evidence that’s not true. And so, what’s happened over the last few years is, in certain fields, people have started to do studies to show that a certain proportion of clinical trials probably have faked data or faked analysis. It’s not always just the data, it can be part of the clinical trial.

And I can’t give you an exact number on what that is because it seems to really vary by field but the fact that any of these fake studies exist and could be included in a systematic review is really a problem because we’ve assessed them for bias, we’ll put them in the review. And so, some of my research with Cochrane now is to develop a tool to try to detect problematic studies. So, studies that, not only may be fake, but may have major data integrity problems. So, that’s what I’m doing. There are quite a few tools out there and we hope to have one that will be agreed upon and recommended by Cochrane through professional consensus. And I can imagine, in a few years, this will become part of a standard systematic review method to assess studies for their data integrity before including them in a systematic review.
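Cochrane’s consensus tool was still in development at the time of this conversation, but one simple, published check in this spirit is the GRIM test (Brown and Heathers, 2016), which asks whether a reported mean is even arithmetically possible given the sample size. A minimal sketch, with hypothetical reported values:

```python
# GRIM test sketch: for integer-valued data (e.g., Likert scores or counts),
# a reported mean must equal some integer total divided by the sample size.
def grim_consistent(mean: float, n: int, decimals: int = 2) -> bool:
    """True if `mean` (rounded to `decimals`) is achievable with n integers."""
    total = round(mean * n)  # integer sum implied by the reported mean
    # Check neighboring totals too, to be safe about rounding edge cases.
    return any(
        round(candidate / n, decimals) == round(mean, decimals)
        for candidate in (total - 1, total, total + 1)
    )

print(grim_consistent(5.19, 28))  # False: no 28 integers average to 5.19
print(grim_consistent(5.18, 28))  # True: 145 / 28 = 5.1786, rounds to 5.18
```

A failed check does not prove fraud, only that the reported numbers cannot all be right as stated; screening tools of the kind Bero describes combine many such signals.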

Baker: I wonder if maybe you could talk a little bit about how the status quo around transparency has changed.

Bero: When I talk about this, sometimes I feel like, “Oh, I’m Miss Doom and Gloom and everything’s so terrible.” But actually, we have made huge advances that have improved the scientific literature, and most of those have been around transparency. By that I mean we know a lot more about how studies are conducted, who’s funding them, and the conflicts of interest of the authors. It may be hard for some people to imagine but, many years ago, there was no disclosure of who was funding a study or what the authors’ conflicts of interest were, and that is much, much more common now. The International Committee of Medical Journal Editors took this up, many individual journals took it up, so it’s really common practice now to have these disclosures.

If there’s one message I want to get across here is that transparency is not the same as independence.

We also have transparency databases in certain areas. So, for example, they’ll tell you the payments that drug companies make to doctors. It’s also become much more standard practice to publish the protocol of a study before you start conducting it and then to also make the data from the study openly available. And that’s really important because it means someone can go and check whether the study was conducted according to the protocol. And if you have access to the raw data, you can look and see if there was any manipulation of that data during the analysis. So, this is great, I am all for transparency but, if there’s one message I want to get across here, it’s that transparency is not the same as independence. And we know, for example, that all this disclosure does not prevent the industry bias that we observe in these meta-research studies, because they’re based on detecting the industry sponsorship through disclosure.

And so, we’ve got the disclosures on the studies and that’s how we detect that there’s an industry bias, because we can classify them as industry or not. And there have also been some really elaborate experiments done in psychology that show that, when people disclose they have a financial tie, they actually give more biased advice. So, disclosure is great and it’s absolutely necessary, but it’s not sufficient to ensure independence from these industry sponsors.

Baker: How do you think independence can move forward?

Bero: So, I define independence as researchers being able to do their research free of any interference with how the research agenda is set, how the question is framed, how the study is conducted and whether it gets published in full or not because those are all the areas where we’ve seen that industry can introduce bias. But on the other hand, we know that corporate interests are big funders of research. And so, one way to make this funding independent of all these decisions about the research is really to change the funding model. So, one, we need more funding for research that’s independent of industry. And some people think, “Well, that’s really pie in the sky, it’s a high bar.” It is a high bar but, on the other hand, if you think about it, why should companies be evaluating their own products for harm? That’s like having an arsonist do fire inspections.

And so, one way to separate that funding, apart from having more public funding, would be to have companies put their money into a pool so that no one company has any say about the research. And in Italy, actually, there was a model for this for a while that was quite successful: drug companies in Italy were charged a fee based on how much they spent on drug advertising there. All those fees were then collected and used to fund independent studies, reviewed by the government and independent peer reviewers, on the harms of those drugs. So, it is possible to have this kind of consortium funding.

So, I think changing the funding model will need more than just editors behind it, it’ll really need regulatory agencies. This has also been proposed for the FDA, for example: that there ought to be a separate government agency, funded by pooled money from pharmaceutical companies, that would focus on assessing the harms of drugs. Some journals have actually acted on this, so there are now journals that will not publish research funded by industry; the British Medical Journal led on this. But again, that’s after the fact, once the research has already been done, so it’s much better if we can prevent the bias up front.

Baker: I think you were also telling me that there had been some student activism about making sure that, if the industry did give funds for research, there were no strings attached?

Bero: Yeah. So, this is great. The American Medical Student Association, years ago, I can’t remember exactly when it started, but it was when a lot of information was coming out about undisclosed financial ties. They had this idea—and I worked with some of these students—that they ought to know if their professors are getting paid by the pharmaceutical industry when those professors are teaching them about drugs made by that company, for example. And so, they created something called the AMSA Scorecard, and they would score all the medical schools on how strong their conflict of interest policies were. And this may be another approach to get people to change their behavior, because medical schools actually became quite competitive, they wanted to have a good AMSA score.

And so, some of them were actually creating conflict of interest policies, making their professors disclose and also putting even restrictions on what the professors could receive because they wanted to increase their score and look good to these medical students. So, that was a very powerful, I think, grassroots effort to push on transparency.

Baker: You’ve looked at various different topics. Do you see much difference in different industries?

It shows a timeline of when harmful effects of PFAS were detected by the company versus when they were published in the scientific literature and there was a big time lag. So, the company knew things years before they appeared in the scientific literature.

Bero: Nope. Actually, we’ve deliberately compared the tactics industries use to influence research. And the more we look, the more similarities we see. Many years ago, we did a study looking at tactics used by the pharmaceutical, tobacco, vinyl chloride, lead, and asbestos industries and, again, we had some internal documents that were available related to those different companies, and they were all using the same tactics. UCSF, the University of California San Francisco, has an archive of internal documents that now includes many, many types of documents, including documents from chemical companies. In one of the more recent studies I did, we looked at documents from 3M and other companies that make these forever chemicals called PFAS. We deliberately compared the strategies revealed in those documents to what we had studied with other industries, and they were the same. And I think one of the most compelling figures in that paper shows a timeline of when harmful effects of PFAS were detected by the company versus when they were published in the scientific literature, and there was a big time lag. So, the company knew things years before they appeared in the scientific literature.

Baker: Is there anything else that can be done to decrease corporate bias in research besides independence?

Bero: Well, I think there’s a lot being done now and my main heroes in this whole type of work I do are all the people who get attacked by industry because they are doing independent research. And I think that that’s something that… It’s been written about quite a bit but I don’t think people realize the scale of it. And so, one of the main things we can do to have independent research is to support independent researchers more. And I don’t mean financially, I just also mean through their university structures and through some of these attacks that they get from industry. So, that’s really, really important. And I think that some of the stuff I’m doing in the environmental area is to say that, when people are doing risk assessment of a chemical, they need to recognize that the industry-sponsored studies have a bias in favor of their product and they need to take this into account when they’re doing evidence synthesis.

I’m not saying that they should never look at industry-sponsored studies but I am saying that they should take that into account as a risk of bias and do a sensitivity analysis or some analysis that would control for that.

So, I’m not saying that they should never look at industry-sponsored studies but I am saying that they should take that into account as a risk of bias and do a sensitivity analysis or some analysis that would control for that and see if there really is a bias just related to the sponsor. So, I think one way to improve the independence of research is to just support it more and really take care of the researchers who are doing that independent work.

Baker: So, in preparation for this interview, I was reading one of your articles from last year and you were looking at, I think, a decade of work and you found that 80% of articles criticizing systematic reviews used in policy had an industry tie, compared to 35% of positive responses. And then you also found that, if somebody was responding to a systematic review, they often didn’t respond in the same journal, they would respond in a different journal and that would be paywalled.

Bero: That’s a great example of an orchestrated industry response to an independent study. So, yeah, Adrian Traeger was doing Cochrane reviews looking at the effects of spinal cord stimulation on pain and found that, basically, when you assessed the risk of bias of the studies and looked at industry sponsorship as a potential risk of bias, the independent studies said there’s no effect of this intervention on pain and, in fact, there is some harm from the intervention. And when he published this Cochrane review, they were just… This orchestrated attack began, and I’ve seen this over and over and over, which is why I wrote that article with Adrian. It happens in all sorts of fields.

Baker: I know that, when you uncovered all this evidence that academics were submitting, essentially, fake clinical research, one response that really concerned you was that some people conducting systematic reviews would say, “Okay, well, I’m just not going to include any research if it comes from a country that seems to have a high rate of faked clinical research.” And I wonder if you think that it would make sense to leave out research by industry?

Bero: Yeah, that’s a very contentious topic, actually, and there certainly are folks… I was just, yeah, giving a talk where this came up earlier this week at an environmental health meeting, and there are certainly people who do research on the commercial determinants of health who think that industry-sponsored research shouldn’t be considered at all. I’m actually not like that, I think that industry research should be assessed for risk of bias using these meta-research methods. So, not just focusing on individual studies, but seeing if industry really is driving the research agenda or not publishing all the data, that sort of thing. And, if so, then the studies need to be discounted because of that bias.

And there are tools for doing this; there’s a tool called the Navigation Guide that’s used in environmental health, which Tracey Woodruff’s group has come out with, so there are ways to do this. And the reason I say that is because, like anything, when you do a clinical trial—and I’m sure people get this—you get a statistical estimate of the effect. So, you give a drug and 30% fewer of the patients who got the drug have a heart attack, which is the desired outcome, and that’s considered a huge effect in a clinical trial. And when we do these meta-research studies, it’s the same thing. We can say that drug industry sponsored studies overestimate the effects of their drugs 30% of the time. So, that means that there’s a big bias there, but it also doesn’t mean that every single study has an invalid result. So, that’s why I think we need to consider industry sponsorship as a risk of bias and take it into account by discounting those studies. Depending on the area, it would be by a different amount.
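As a worked version of that arithmetic, here is a minimal sketch computing a 30% relative effect (a risk ratio of 0.70) and its confidence interval from a hypothetical trial; the event counts are invented purely for illustration:

```python
# Risk ratio from a hypothetical two-arm trial: the treated group's event
# rate is 30% lower than the control group's (risk ratio 0.70).
import math

treated_events, treated_n = 70, 1000   # hypothetical counts
control_events, control_n = 100, 1000

rr = (treated_events / treated_n) / (control_events / control_n)  # 0.70

# Approximate 95% CI via the standard large-sample formula on the log scale
se = math.sqrt(1/treated_events - 1/treated_n + 1/control_events - 1/control_n)
lo = math.exp(math.log(rr) - 1.96 * se)
hi = math.exp(math.log(rr) + 1.96 * se)
print(f"risk ratio {rr:.2f} (95% CI {lo:.2f} to {hi:.2f})")
```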

Baker: Okay. So, be wary but don’t apply too broad of a brush.

Bero: Yeah. I’d say be very worried and don’t apply too broad of a brush. And I do think, in some areas, and this is interesting, I mentioned before how, in some areas, almost every single study will be funded by a sponsor. And in that case, I think we need to be very, very worried and say we don’t have the answer. Because if every single study that looked at the effects of a particular drug is funded by the maker of the drug, then we don’t have anything to compare it to, we don’t have any independent research.

And interestingly, I was working with some patient groups at Cochrane several years ago, where we were trying to put our abstracts into plainer language, and one of the things we did not put in our abstracts was how many of the studies included in the reviews were industry sponsored. And the patients really wanted to know. I remember one of them saying to me, “Well, I would like to know if there are four drug trials in this review and they were all funded by the company that makes the drug, because I’m just not going to believe it then.”

Baker: I want to get back to the question that has motivated your career. What do you think can be done so that systematic reviews are better serving the needs of policymakers?

Bero: Well, I think policymakers need to have skills, or helpers, that let them figure out which reviews come from a trusted source. As systematic reviews have become much more prevalent and, actually, much more useful and respected by policymakers, industry just cranks out systematic reviews like crazy. Just like with original research studies, they may not agree with the results of independent reviews. One of the first studies I did was looking at reviews of studies of second-hand smoke, and I compared tobacco industry sponsored reviews versus not. And this was the biggest effect size I’ve ever found: the tobacco industry funded reviews were about 80 times more likely to say that second-hand smoke was not harmful compared to reviews with other sponsors, and that controlled for the methods of the reviews and all that.
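Findings like this are typically expressed as an odds ratio computed from a table of reviews cross-classified by sponsor and conclusion. A minimal sketch with hypothetical counts (not the actual study’s data), showing how such a ratio is calculated:

```python
# Odds ratio from a 2x2 table of reviews: sponsor vs. conclusion.
# Counts are hypothetical, chosen only to illustrate the calculation.
import math

industry_not_harmful, industry_harmful = 29, 2          # hypothetical
independent_not_harmful, independent_harmful = 10, 65   # hypothetical

a, b = industry_not_harmful, industry_harmful
c, d = independent_not_harmful, independent_harmful
odds_ratio = (a / b) / (c / d)  # (29/2) / (10/65), roughly 94

# Approximate 95% CI on the log scale (Woolf method)
se = math.sqrt(1/a + 1/b + 1/c + 1/d)
lo = math.exp(math.log(odds_ratio) - 1.96 * se)
hi = math.exp(math.log(odds_ratio) + 1.96 * se)
print(f"odds ratio {odds_ratio:.0f} (95% CI {lo:.0f} to {hi:.0f})")
```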

Sometimes we don’t have studies for important policy questions and when a review is done, the answer is, “Oh, we’re uncertain, we don’t know because there’s not enough evidence,” and that’s a very, very disappointing outcome for a policymaker.

But the tobacco industry had been producing all of these reviews and getting them directly in front of policymakers saying, “See, we have a review of second-hand smoke, it shows it’s not harmful,” so they were directly competing with the independent reviews. And so, that’s the first thing: I think policymakers need to look for reviews that come from trusted sources and that have independence from industry sponsors. So, for example, Cochrane reviews cannot be sponsored by a company whose product is being assessed in the review. We don’t allow any corporate sponsorship of Cochrane reviews, so we’ve just eliminated that as a risk factor for the review.

So, I think that’s one thing that policymakers need, and I think another, and this has been an eternal battle, is to get systematic reviews in line with the important policy questions, because systematic reviewers usually review studies that already exist. And sometimes we don’t have studies for important policy questions, and when a review is done, the answer is, “Oh, we’re uncertain, we don’t know because there’s not enough evidence,” and that’s a very, very disappointing outcome for a policymaker. That’s a little more complicated because that means we also have to get the original research in line with the policy questions and we need to do some good forecasting.

Baker: So the researchers know what’s needed and the systematic reviewers know what’s needed and that it’s there?

Bero: Yes, exactly, exactly.

Baker: What other things do you think are important in bringing good evidence to policy through evidence synthesis?

Bero: Well, we need to make them understandable too and, again, that’s also been a long battle because, just like a clinical trial, when you do a systematic review, you get a numerical result. So, I can say pharmaceutical industry sponsored studies are about 30% more likely to have a favorable result than those with other sponsors, and then we always put an uncertainty level around that, like a plus-or-minus range or a 95% confidence interval. And it’s very hard to make a decision when there’s a lot of uncertainty around the effect. I’m not a science communication expert, but I know there’s a lot that can be done to help people understand uncertainty in a finding and how to deal with that.

There’s a lot that can be done to help people understand uncertainty in a finding and how to deal with that.

So, I think that’s another thing. Systematic reviews are complicated and there’s a lot that goes into them. There are usually multiple analyses in one review and they may look at effects, they may look at harms, they may look at both, and so then you have to weigh all of that when making a decision about a product, for example. So, they’re pretty complicated, and figuring out the best ways to summarize and convey those results is really important. But I would still say that they are a much better assessment of the evidence than just looking at one or two studies out of context.

Baker: Lisa, this has been a fascinating conversation. To learn more about how bias impacts scientific research and what can be done about it, visit our show notes to find links to more of Lisa Bero’s work. Please subscribe to The Ongoing Transformation wherever you get your podcasts and write to us at podcast@issues.org. Thanks to our podcast producer Kimberly Quach and our audio engineer Shannon Lynch. My name is Monya Baker, thank you for joining us.

Hope for Hydrogen

In “Moving Beyond the Hype on Hydrogen” (Issues, Summer 2024), Valerie J. Karplus and M. Granger Morgan provide an excellent assessment of hydrogen’s advantages and significant barriers to market formation. Toyota has more than 30 years of experience with all phases of the hype cycle for hydrogen—innovation, inflated expectations, disillusionment, and enlightenment.

Toyota began developing its hydrogen-powered fuel cell vehicles in 1992, one year after Sony commercialized the lithium-ion battery. Sales of the first hydrogen passenger car, the Mirai, began in 2014, with the second generation following in 2021. Over those 30 years, we saw the initial innovations in fuel cells dramatically improve in unexpected ways. During the same period, we also watched lithium-ion batteries grow into the clear leader in the race to decarbonize passenger cars.

Despite the technical success of the Mirai, the vehicle has struggled in the marketplace due to the difficulties of hydrogen supply and fueling infrastructure. The challenges continue, with high prices for hydrogen at the pump and fuel stations closing. Despite headwinds, at Toyota we find ourselves asking the same question as the article: “Is hydrogen’s long-forecast—and long-hyped—future [as a fuel for transportation] finally here?” There are reasons to be hopeful.

Transportation encompasses more than passenger cars, with about 25% of transportation carbon emissions coming from medium- and heavy-duty commercial transport. While hydrogen will compete with battery electrics in commercial vehicles, both have significant infrastructure challenges. Battery electrics don’t have the same advantages in large vehicles with high mileage as they do for passenger cars. The best choice remains unclear.

Not long ago, the technical barriers for fuel cells in large commercial vehicles seemed insurmountable. But the technology is here today. The key barriers remain in the hydrogen ecosystem: achieving low-cost production, sufficient distribution, and matching of supply and demand. The US hydrogen hubs are an exciting idea for creating a useful hydrogen market, tackling production and multisector consumption in a coordinated way. Initiatives such as the hubs are important to advance the portfolio of hydrogen applications beyond transportation.

Not long ago, the technical barriers for fuel cells in large commercial vehicles seemed insurmountable. But the technology is here today.

The success of hydrogen in commercial transport depends on the key question the article asks: “Which users of fossil fuels must bear the costs?” Companies that operate commercial vehicles are sensitive to the total cost of ownership. Diesel is a low-cost, energy-dense fuel with an existing infrastructure. While the low cost makes diesel difficult to displace, we must also account for all societal costs. Diesel trucks are large emitters of particulate matter and pollutants, which have severe impacts on health in many communities.

Karplus and Morgan place a 70:30 bet that hydrogen “will become an important part of the portfolio of technologies” for decarbonization. Portfolio is a key word here, and we need to explore all options for commercial transport including battery-electrics, better fuels, and better fuel economy. I don’t know if the 70:30 odds for hydrogen are a good bet or not. But at Toyota, we’re aggressively developing the technologies to try to tilt those odds toward success as strongly as we can.

Vice President of Energy & Materials

Toyota Research Institute

Valerie J. Karplus and M. Granger Morgan clearly describe the potential for hydrogen to decarbonize the US economy and the need for both policy and low-cost ways to safely harness the advantages of hydrogen.

To play its part, the US Department of Energy Hydrogen Program has worked for decades to accelerate technological advances and de-risk industry investments. The program includes the regional clean hydrogen hubs and the Hydrogen Energy Earthshot (aimed at enabling production of 1 kilogram of clean hydrogen for $1 in 1 decade), which are two of the flagship initiatives launched by the Biden-Harris administration. It also includes long-standing efforts across research and development, manufacturing, and financing. Hydrogen is not considered a silver bullet, but is one part of a comprehensive portfolio of solutions to meet the nation’s climate goals. As stated in the US National Clean Hydrogen Strategy and Roadmap, clean hydrogen can cut emissions across sectors such as industry (e.g., iron, steel, and fertilizer production) and heavy-duty transportation, and can enable greater adoption of renewables through long-duration energy storage.

Although challenges remain, there has been significant progress resulting from DOE’s decades of leadership and investment in hydrogen technologies.

While the authors focus on the hubs, the administration’s investments in clean hydrogen also include several other relevant initiatives. Programs across DOE and the Hydrogen Interagency Taskforce, which encompasses 12 federal agencies, are addressing challenges spanning the entire clean-hydrogen value chain—including siting, permitting, and developing sensors to monitor emissions; ensuring safety; fostering a robust supply chain; establishing fueling stations; and lowering cost across production, delivery, storage, dispensing, and end uses of clean hydrogen. New projects are being launched to share best practices for community engagement to help inform clean hydrogen hubs and other deployments.

Although challenges remain, there has been significant progress resulting from DOE’s decades of leadership and investment in hydrogen technologies. At least 4.5 gigawatts of electrolyzer deployments (not including the hubs) are underway in the United States, up from 0.17 GW in 2021. (One GW is roughly the size of two coal-fired power plants and is enough to power 750,000 homes.) With funding from the Infrastructure Investment and Jobs Act, also known as the Bipartisan Infrastructure Law, adopted in 2021, DOE is enabling 10 GW per year of electrolyzer manufacturing and 14 GW per year of fuel cell manufacturing—an order of magnitude increase over today’s capacity. Thousands of commercial systems in diverse applications such as forklifts, trucks, buses, and stationary power are now operating. DOE funding has led to over 1,080 US patents since 2004 in hydrogen and fuel cell innovations. Over 30 of these DOE-funded technologies have been commercialized and about 65 could be commercial in the next several years.
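For a rough sense of what those gigawatt figures mean in hydrogen terms, here is a back-of-envelope conversion using assumed round numbers (about 50 kWh of electricity per kilogram of hydrogen and a 60% capacity factor; these are illustrative assumptions, not figures from DOE or this letter):

```python
# Rough conversion from electrolyzer capacity (GW) to hydrogen output
# (million metric tons per year), under assumed round-number inputs.
KWH_PER_KG = 50          # assumed electrolyzer energy use per kg of H2
CAPACITY_FACTOR = 0.60   # assumed fraction of the year at full power
HOURS_PER_YEAR = 8760

def mmt_h2_per_year(gw: float) -> float:
    """Million metric tons of hydrogen per year from `gw` of electrolyzers."""
    kwh_per_year = gw * 1e6 * HOURS_PER_YEAR * CAPACITY_FACTOR  # GW -> kWh/yr
    return kwh_per_year / KWH_PER_KG / 1e9                      # kg -> MMT

print(f"{mmt_h2_per_year(4.5):.2f} MMT/yr from 4.5 GW")  # ~0.47
print(f"{mmt_h2_per_year(10):.2f} MMT/yr from 10 GW")    # ~1.05
```

Under these assumptions, a 10 MMT-per-year target implies on the order of 100 GW of electrolyzers running on clean power, which is why the manufacturing scale-up described here matters.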

The Strategy and Roadmap targets 10 million metric tons (MMT) per year of clean hydrogen use by 2030, 20 MMT per year by 2040, and 50 MMT per year by 2050, which will enable up to 10% reduction in total greenhouse gas emissions economy-wide by 2050. To meet these ambitious goals, it is essential to accelerate deployments and scale up. Recent public announcements add up to 14 MMT per year of planned clean hydrogen production, or over 17 MMT per year including the hubs, pending final investment decisions. While designing and implementing a perfect policy framework is challenging, the current programs and policies in place are having impact, and the nation must keep up the momentum to realize the full benefits of clean hydrogen for the climate and the economy.

Director, Hydrogen and Fuel Cell Technologies Office

US Department of Energy

“You Learn More From Failure—When Things Are Not Working Well.”

Katalin Karikó is a pioneer in research on messenger RNA (mRNA) and its medical possibilities, which were revealed on a global scale during the COVID-19 pandemic. With her colleague Drew Weissman, she developed the type of mRNA that eventually made possible the Pfizer-BioNTech and Moderna vaccines, which saved millions of lives worldwide. For their breakthrough, they received the 2023 Nobel Prize in Physiology or Medicine. 

Karikó is an adjunct professor of neurosurgery at the University of Pennsylvania. She recently authored the memoir Breaking Through: My Life in Science, and is a member of the National Academy of Medicine. 

Karikó spoke with editor Sara Frueh about her childhood in Hungary, the joys and challenges of bench science, her struggle to find support for her RNA research, and the experience of finding herself suddenly in the spotlight.

What drew you to science as a young person?

Karikó: In Hungary, we started to learn biology in fifth grade. In the fall we went outside and the teacher picked up leaves and said, “Isn’t that interesting? Why is it falling down? Why is it not growing very big?” And you just wonder. Science is not about being in a special place like a lab—it is just noticing what is right around you and asking, “Isn’t that interesting?” 

Even in elementary school, we did this little experiment with a saturated salt solution. We put a thread in it, and over a couple of weeks we checked to see if a crystal was growing. And later here at Penn, when sometimes I would see a little crystal form in the bottom of a bottle, I remember how happy I was.

You grew up in rural Hungary, and you emerged as a world-class scientist. Were there things about the Hungarian education system that helped that happen?

Karikó: It was the ’50s and ’60s. We didn’t have phones, we didn’t have television, so we just played outside. We didn’t have special toys, so we had to figure out something we can use as a tool to play with. When you have fewer resources, you have to be more inventive. Our parents hadn’t gone to high school—they had no resources, and they had to earn their living. My father was a butcher. Even when he was five, six years old, he was already working, herding animals, so that he could earn the food he ate. 

By the time my sister and I entered school, the Hungarian system tried to say, “You can be anything—you just have to study.” 

“Science is not about being in a special place like a lab—it is just noticing what is right around you and asking, ‘Isn’t that interesting?’”

The first time I saw a university building and a professor was at a summer program for high school students at the University of Szeged. We stayed in the dormitory, one room with 30 kids in metal beds. So that’s how the system provided. At five in the morning, we would get up and go to lectures until the late evening. Later we came back during winter vacation for more lectures and testing, and eventually we got accepted to the university, which was very difficult. Many times I thought, How many kids couldn’t come here and study because maybe their parents thought that they shouldn’t, or it never even occurred to them?

What first appealed to you about studying mRNA and drew you to that line of work? It sounds like working with RNA, especially back then, was really difficult.

Karikó: When I started working with RNA, it was not a visionary thing. As an undergrad, I worked in a team at the Biological Research Center in Szeged studying lipids, and we happened to make liposomes that helped us to deliver DNA into cultured cells.

Then the organic chemist Jenő Tomasz said he had an opening in his team researching RNA, and did I want to do my PhD there? I said, “OK.” At that time, getting a degree was not very organized in Hungary—you just worked for somebody, and whether it is good or bad, that is your luck. 

So I started to work with RNA and, you know, you learn to do something, you enjoy it, so you read all about it. And then you are good at it, and then you enjoy it even more. This is how it happened with RNA. And I have to say, I have no special talent, no special memory, nothing. It is just that I can focus and work hard.

I continued to work with RNA when I came to the United States to work at Temple University in Philadelphia. I came to realize that maybe messenger RNA would be a better way to deliver therapies to cells than DNA.

When I went to Penn in ’89, I looked up how we could make mRNA and that’s where I started. Most laboratories at the time had difficulties working with RNA—it is very fragile and degrades easily. So they felt sorry for me when I said that I’m working with mRNA: “Oh my God, poor Kati.” 

Douglas Melton, who first reported how to make mRNA in a tube, never thought that it could be used as medicine; he thought it would be useful as a laboratory research tool. At the beginning, only a very small amount of protein could be produced from the mRNA, and people questioned whether it would ever be therapeutically useful. But as I worked at the bench, I constantly improved the mRNA and the quantity of protein produced from it.

How did you develop the type of mRNA that was eventually used in the COVID vaccines?

Karikó: I didn’t set up my career path so that I would make noninflammatory mRNA. In fact, I didn’t even realize that RNA was inflammatory until I met and worked with Drew Weissman at Penn.  

In an experiment where we introduced mRNA into immune cells, we could see inflammatory molecules were being generated. And we asked, “Why is that?” It was curiosity-driven research; we tried to understand the inflammation, and thought, maybe because the mRNA is coming from outside into the cells, it is a danger signal to the cell. We also wondered if all types of RNA trigger inflammation. 

“They felt sorry for me when I said that I’m working with mRNA: ‘Oh my God, poor Kati.’”

By that time, I had already worked with mRNA for 10 years, and I had a repertoire of different RNA isolates that we could test. So we did the experiment, and it turned out that one type of RNA called transfer RNA did not cause inflammation; it was nonimmunogenic. Considering that transfer RNA contains a lot of modified nucleosides, we suspected that those were making the RNA noninflammatory. We then generated a kind of mRNA with similar modifications, which was not only noninflammatory but also produced ten times more protein. Both these qualities were important in creating effective vaccines.

For a long time, you found it difficult to get grant funding or institutional support for your work on mRNA. Why do you think the potential of mRNA went unseen for so long?

Karikó: In 1992, Floyd Bloom and colleagues published a paper in Science about successfully treating sick animals by injecting vasopressin mRNA into their brains. It was important work, but they never published anything on mRNA again. Another group of scientists who were using mRNA to try to develop a cancer vaccine told me that they couldn’t get funding. My experience was similar—for two years at Penn, I wrote at least one grant application a month and not one of them came through.

I think what happened is what my Hungarian colleague Csaba Szabó describes in his forthcoming book, Unreliable. A scientist who gets a grant becomes a member of the National Institutes of Health committee to evaluate grant applications by fellow scientists working in the same field. They might get 10 grant applications and have to read all of them. They try to absorb them when a zillion things are going on in their life—they have to write their own grants, publish their papers, manage their lab, and so on.

And when they evaluate the grant applications, in some cases they understand it immediately because it’s similar to what they are doing. And in other cases, they think, “What? mRNA? And who is this person?” They think less of those applications because the science—and the scientist—is unfamiliar to them, and they do not have time to learn more. That’s what I think happens, even if everybody is trying to do their job the best they can.

There was an article in the Harvard Business Review that described how there is a center where the money and the prestige are, and if you are not there, you are in the periphery, like me, and can get lost. And the article said the only benefit you have in the periphery is the freedom that you are doing what you think is important.

But you need a connection. You need somebody channeling at least enough money to survive, because otherwise your work will be lost. If you have enough funding to continue, at least you can advance the research. I didn’t get grants, but Elliot Barnathan and David Langer helped me to survive at Penn until I met Drew Weissman, so thanks to them every time. 

One thing that comes through in your memoir is your unwavering commitment to meticulousness and rigor in research, and to prioritizing quality over the number of publications and external rewards. Should academic science be doing more to impress those priorities on researchers?

Karikó: I did not pay attention to what others were doing in terms of getting promotions and grants. So that’s what I tell the students: don’t compare yourself to others. If I had compared myself, I would have left the whole field a long time ago.

“The only benefit you have in the periphery is the freedom that you are doing what you think is important.” 

But if your focus is always the science—“I want to understand this biological mechanism”—you never get disappointed. Not even when somebody else publishes something about what you are investigating, because you want to understand, and that person contributed to the knowledge.

Everybody starts by focusing on science, and somehow they shift to, “Oh, we should do more experiments. We need more people, so we need more money.” They start to write grants, and more people come to work in the lab. Now those people have to publish to get their PhDs. Then you are submitting more and more grant applications because you have to keep the lab running, and your promotion is coming up, and your tenure. And the goal has now become that, and performing experiments becomes a tool to reach that—not a tool for understanding.

Your memoir notes an idea, I think by the endocrinologist Hans Selye, that deeply resonated with you: that a scientific experiment asks a question of nature, to which nature answers “yes” or “no.” And you’ve always been patient with the “noes”—the negative results. Why are they important, and should the scientific community be attaching more value to them?

Karikó: When you do an experiment and don’t get what you expected, you think, “Oh, it didn’t work.” But the reality is that you just don’t understand what’s going on yet. It is very well known that you learn more from failure—when things are not working well.  

With failure, you ask, “What’s going on? I didn’t get the result I thought I would. Maybe there is another approach I can try.” And then you start to study. And then you figure out, “Oh, probably if I add this, then this will happen.” 

It is very important not to focus on success—it’s so rare. If you want instant gratification, don’t be a scientist, because you won’t get that. You try many things and you don’t know whether something is doable or not. But this is being a scientist. We are doing things that nobody has done before, and we don’t know whether it is possible.  

During the pandemic, what was it like to find out that the vaccines you’d been working on actually worked? And what was the most satisfying part about all of that?

Karikó: I have to say that I expected that the vaccines would work. BioNTech signed an agreement with Pfizer in 2018 to develop an mRNA-based influenza vaccine. And by the end of 2019 we had already seen the results of animal trials. We were ready for the human trial with the nucleoside-modified mRNA. So in 2020 we just had to change the template so the generated mRNA coded for a coronavirus-specific protein. Considering all the prior results, I expected that this new vaccine would work.

What I did not expect was that I would be recognized. One day at the end of 2020 CNN called me, and I was so scared that I had to say something. I was watching CNN, and I got a call from CNN, which could be seen right on the screen, “call from CNN,” and I thought, “Oh my God, oh my God.”

“You try many things and you don’t know whether something is doable or not. But this is being a scientist. We are doing things that nobody has done before, and we don’t know whether it is possible.”

It was just so stressful. I could hardly say anything because I was not used to giving interviews. Later when I received awards, I felt the same way. It took time to realize: OK, the prize is for science; the spotlight is on the science. This is a good thing that people are talking about science. And I have to help the public to understand better what the scientists are doing. And I have to inspire the next generation. And I started to talk about these topics.

What advice do you have for young scientists, and students who are thinking about becoming scientists?

Karikó: Whatever you do, you have to enjoy it and love it, and then you will be good at it. And if you like to solve puzzles, then you should consider science as something that you could pursue in life. And it can be a fulfilling life. You won’t be rich; it’s not that kind of life. It is hard work, but the fun is there, as solving the puzzles of science is fun.

Also, your physical and mental health is very important. Exercise regularly and learn how to handle stress. It is important to focus on things that you can change. Ask yourself what you can do, and not what others should do. And please do not compare yourself to others; it takes away your attention from the things that you can have influence on.

You have to believe in yourself—that with hard work you can achieve your goals. It is not easy, nothing is easy. But if you are working in a laboratory, you are already in a great place to have a wonderful and fulfilling life. 

A Very Different Voice


Xavier Cortada’s The Underwater is a series of public art installations that reveals the vulnerability of Florida’s coastal communities to rising seas. In the form of murals, crosswalks, concrete monuments, and yard signs, the artworks prominently feature the elevation of the site where they’re located. Through community workshops, apps, and even bus wraps, these works raise awareness, spark conversations about climate, and catalyze civic engagement.

As the chief resilience officer of Broward County, Florida, I see this kind of engagement as vital: I’m responsible for leading climate mitigation and adaptation strategies across our 31 municipalities, home to 1.9 million people. Our land is between 4 and 9 feet above sea level, and we have nowhere to retreat if it floods. A lot of my effort is focused on helping to guide planning and management decisions that support our natural resources as well as our built environment, in addition to working with state, federal, and local agencies on a coordinated strategy to reduce the severity of the impacts of climate change.

In Broward County, we’ve been working on climate initiatives since 2000. But despite all our progress, we’ve really struggled with communications and community engagement. We don’t have the large community-based activist groups that have served as on-the-ground partners in some other places. And we are aware that government isn’t always the best messenger, and that we need a diversity of voices.

I first encountered Xavier’s work at an environmental youth summit with a few thousand students. I was really overwhelmed with the quality of the work and the students’ connection with the history behind it. Xavier explained his own experience as an artist: he had gone to Antarctica and felt the ice melt in his hands and realized how that connected to the communities that he loves. He was utilizing this creative messaging of art to engage and communicate in such a powerful way. 


After I met Xavier and discovered his work, we in Broward County began to envision how we might work with him directly. The people who typically work in this arena tend to communicate like I do—I mean, we’re all technical people. No matter how I try to simplify things, I have trouble getting my message across. But Xavier is a very different voice. Talking with him wasn’t like an academic conversation; it was one of really deep connection. What he says has its origin in his heart. We asked Xavier to present to our climate task force, and then we began to plan to work together with schools.

One thing that struck me is that Xavier engages in a different kind of climate conversation. Often people are just overwhelmed and feeling a sense of devastation and loss when it comes to climate change. Xavier is able to talk about providing individuals with agency—that you have an important stake in what’s happening, and you have the capability to be an important messenger. He emphasizes the power of your voice and your participation, that this is about your community.


Another reason we were excited to work with Xavier is that he really understands and cares about young people. He leaves them feeling empowered and capable and invested. When children join a workshop, they learn about sea level rise. They can use an app to see what their local elevation is, and how changes in sea level will affect their school and their neighborhood. They work on making a yard sign with the elevation of their home and Xavier paints an incredibly beautiful mural in the hallway of their school, with the elevation of the school. Imagine how empowering it is for those 100 students to be the keepers of this knowledge: they’re the experts, they’re able to lead that next level of conversation. I think he leaves all of us feeling like we’re not destined to inherit the problem. We have the capacity to be part of the solution.

We not only hosted workshops at 10 public schools, but we also invited Xavier to speak and hold a workshop at Water Matters Day—our county’s largest environmental event, with 3,000–4,000 attendees annually. Later we organized a community climate conversation with Xavier focused on our central county neighborhoods, and we are installing Xavier’s art on the façade of our county garage. It’s a beautiful tile mural featuring the elevation of the garage in downtown Fort Lauderdale. And just now, actually, I was able to sign off on an additional art installation we will feature right outside our commission chambers.

So much about Xavier’s conversation is trying to get people to pause long enough to ask the question: “What am I looking at?” And to use that art as an opportunity to have conversations that wouldn’t have happened otherwise.


Xavier Cortada, Underwater Elevation Sculpture (Hard Positive), 2023, sustainable concrete.

Cortada is creating a permanent interactive art installation of sustainable concrete elevation sculptures across all the more than 200 parks in Miami-Dade County, Florida. Anyone who scans the sculptures’ QR codes can discover their home’s elevation above sea level and pick up a free Elevation Yard Sign to put in their front yard, joining the countywide installation and raising awareness in their neighborhood.

Xavier Cortada, Underwater Elevation Sculptures (Hard Positive Numbers), 2023, sustainable concrete.

These numbers are used in Cortada’s large sculptures, each depicting a park’s elevation above sea level. The concrete markers are made from seawater, recycled aggregate, and nonmetallic reinforcement, aiming to use building materials that are less energy-intensive and better for the environment.

Xavier Cortada’s Antarctic Ice Paintings

In 2007, Cortada created a series of paintings using sea ice, glacier, and sediment samples provided by scientists working in Antarctica, making the continent itself both the subject and the medium of the art. The paintings, serene yet foreboding, are a poignant reflection on the impact of climate change on the artist’s hometown of Miami—and the world. Made in Antarctica, these artworks use the ice that threatens to melt and submerge coastal cities.

In response to his Antarctic Ice Paintings created 18 months earlier, Cortada used Arctic ice to produce a series of paintings aboard a Russian icebreaker returning from the North Pole. He taped pieces of paper to the top deck of the icebreaker and placed North Pole sea ice and paint on them. As the icebreaker journeyed south from 90 degrees north, carving through the sea ice by sliding on top of it and then crushing down through it, Cortada’s ice melted and the pooled water moved as it evaporated, creating the Arctic Ice Paintings. He sought to use the vessel’s motion to capture the existence of sea ice before it vanished, understanding that traversing the Arctic Ocean in the decades to come would not require an icebreaker.

The Puzzle of Pricing the Future

Discounting the Future: The Ascendancy of a Political Technology by Liliana Doganova.

Liliana Doganova describes her book, Discounting the Future: The Ascendancy of a Political Technology, as a study in the “historical sociology” of discounting. Discounting—determining what value to place on income in the future relative to how we value it today—has its roots in finance and economics and is much discussed in today’s climate change policymaking.
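For readers new to the mechanics, the arithmetic is compact. In the standard textbook formulation (introduced here for illustration, not drawn from Doganova’s account), a payment F received t years from now, discounted at rate r, has present value

\[
PV = \frac{F}{(1+r)^{t}}, \qquad \text{e.g., } \frac{\$100}{(1.03)^{10}} \approx \$74.41 .
\]

At a 3% discount rate, $100 arriving a decade from now is treated as equivalent to about $74 today; the debates the book engages concern what r should be and why.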

However, discounting also raises social, political, ethical, and other issues. An evaluation and critique of discounting from these various perspectives could be valuable. Doganova’s book attempts such a critique of the theory and application of discounting. Although the book reveals some interesting episodes in the history of discounting and raises some provocative questions concerning its application, I found the book disappointing overall.

Virtually all economists share an interest in the mechanics and philosophy of discounting. As someone who has worked on the economics of the environment and natural resources for three decades, however, I am particularly interested in questions concerning the value applied to looming risks of climate change and biodiversity loss that are difficult to quantify. I date my own appreciation for how critical discounting is for these issues to the late 1990s, when Resources for the Future (RFF), a Washington think tank where I was a senior fellow, sponsored a conference of leading economists speaking on “Discounting and Intergenerational Equity.” A great deal of subsequent research has emerged since that event, but most of the contributions I followed were written by, and largely for, economists.

Most of the contributions I followed were written by, and largely for, economists. I was eager, then, to discover a different perspective.

I was eager, then, to discover a different perspective in Discounting the Future. Doganova is an associate professor at the Centre de Sociologie de l’Innovation, a Paris-based research institution focused on sociology, economics, and political science. Her bio says she works at “the intersection of economic sociology and science and technology studies.”

While I felt that reading an outsider’s perspective on the economics of discounting might be revealing, even criticism or rejection of economic models needs to be grounded in an understanding of economics. Regrettably, I found this book’s discussion of the economics of discounting incomplete, limited, and sometimes superficial. Consequently, the book’s critiques often don’t land squarely, and it does not articulate alternatives to the practices it faults.

In the introduction, Doganova notes the importance of discounting in The Economics of Climate Change: The Stern Review, Nicholas Stern’s 2007 assessment of the economic costs of climate change, and economist William Nordhaus’ rejoinder to it from the same year. Unfortunately, climate policy is not mentioned again until the book’s conclusion. Instead, most of the book is spent exploring historical case studies of the use of discounting, including in corporate investment planning, drug development, and the valuation of Chilean copper.

Regrettably, I found this book’s discussion of the economics of discounting incomplete, limited, and sometimes superficial.

For example, chapter two discusses nineteenth-century German forester Martin Faustmann’s simple model describing how often forests should be cut and replanted in order to maximize long-term profits. Over the last several decades, hundreds of what are now known generically as “Faustmann models” have appeared. They incorporate storm, fire, and pest infestation risks; reflect the value of forest ecosystem services and carbon sequestration; and allow for differences in harvest conditions. However, Doganova seems to dismiss the relevance and usefulness of such Faustmann models based on an anecdote in which an unnamed economist describes a conversation with a Scottish forester who mentioned that because tall trees snap off more easily in the region’s winds, some forests must be harvested sooner than the financial model prescribes. From my point of view, this is not a contradiction but a confirmation that the basic model can be augmented to incorporate the probability of loss to pests or fire or, indeed, high winds.
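A minimal sketch shows why, in the model’s standard textbook form (the notation here is illustrative, not the book’s): with planting cost c, timber revenue V(T) from harvesting at rotation age T, and discount rate r, the value of bare land supporting an infinite sequence of identical rotations is

\[
LEV(T) = \frac{V(T)\,e^{-rT} - c}{1 - e^{-rT}},
\]

and the Faustmann rotation is the age T that maximizes LEV(T). In the classic extension to catastrophic risk, a constant annual hazard of loss (to wind, fire, or pests) enters the problem much like an addition to the discount rate, which shortens the optimal rotation; that is precisely the adjustment the Scottish forester’s remark points toward.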

I regret that the author did not focus more on the application of discounting in climate policy. The book’s central critique would have benefitted by examining the burst of interest and creativity in the analysis of discounting that has occurred since the Stern Review and the RFF conference that first piqued my curiosity. For example, the late economist Martin Weitzman’s views on discounting were evolving at the time of the RFF conference. In several papers in the late 2000s and early 2010s, Weitzman suggested that the profound uncertainties that may arise under climate change cast serious doubt on the application of received methods of discounting. Doganova’s arguments would have been bolstered by engaging with Weitzman’s work.

The book’s central critique would have benefitted by examining the burst of interest and creativity in the analysis of discounting that has occurred since the Stern Review and the RFF conference.

What Doganova appears most interested in is what she characterizes as a central and irresolvable contradiction inherent to discounting: “a theory of value that simultaneously and paradoxically both values and devalues the future. It both claims that the future is the source of value … and that the future is less valuable.” How, the author wonders, can the economic value of capital assets be based on projections of the income they will generate in the future, yet discounting means that income at any point in the future is worth less than income today?

I think these questions might have been resolved by probing deeper into the history of how the ideas were developed, particularly by Frank Ramsey, an extraordinary early twentieth-century British polymath who made seminal contributions to mathematics and philosophy. He also wrote two of the most important papers in the history of economics—all prior to dying three weeks before what would have been his twenty-seventh birthday. His 1928 paper, “A Mathematical Theory of Saving,” introduced what has been known since as “Ramsey discounting,” the canonical model and starting point for virtually all work since on the economics of discounting. Aside from a single mention in Discounting the Future, Ramsey’s work is not explored and his name does not appear in the index.

Ramsey focused on the welfare of future generations, not their income. A dollar to be received by people in the future would be worth less than would a dollar today if people in the future are expected to be wealthier than people today.

To a degree, Ramsey might have agreed with Doganova. He famously asserted that treating the welfare of future generations any differently than that of those living now is “ethically indefensible and arises merely from the weakness of the imagination.” The key, though, is that Ramsey focused on the welfare of future generations, not their income. A dollar to be received by people in the future would be worth less than would a dollar today if people in the future are expected to be wealthier than people today. By the same token, a dollar to be received in the future might be worth relatively more if our descendants were impoverished due to, for example, climate change. Doganova hints at this explanation in the last four pages of Discounting the Future, but doesn’t recognize it as the resolution to her paradox.
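The logic can be put compactly. In its standard modern statement (a textbook distillation, not a formula that appears in Discounting the Future), the Ramsey rule gives the consumption discount rate as

\[
r = \rho + \eta g,
\]

where ρ is the rate of pure time preference, η is the elasticity of the marginal utility of consumption, and g is the growth rate of per-capita consumption. Set ρ near zero, as Ramsey’s ethical stance demands, and r remains positive only because growth makes future generations richer; were g negative, say, because of severe climate damage, the same rule would weight future income more heavily, not less.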

An additional aspect of Ramsey’s work might have informed Doganova’s book. When I teach discounting to my economics students, I tell them—as my instructors told me when I was a student—to remember that the discount rate is a price. As such, it is determined not just by supply or demand considerations, but by both: by consumers’ willingness to forgo present consumption in hopes of higher future consumption, as well as producers’ expectations of return on their capital investments. Moreover, the price that emerges from Ramsey’s original model describes an optimal outcome. Ramsey set up his analysis to determine the optimal rate of saving—that is, the sacrifice of current consumption at each point in time that will generate the greatest overall well-being for present and future generations.

Special interests will attempt to twist any method of analysis that comes to be adopted as a “technology of government” to their own purposes.

Now, of course, phrases such as “generate the greatest overall well-being for present and future generations” reveal a number of debatable assumptions lurking below the surface. For one, Ramsey assumed a utilitarian objective that not everyone may agree with. Much of the work over the near century since Ramsey’s writings has been devoted to considering how robust his results are to alternative assumptions. I am not suggesting that Doganova or anyone else should simply adopt Ramsey discounting on faith. Yes, there are stylized models in which the combined welfare of all generations is maximized and in which the discount rate shows the ideal ratio at which to trade present for future income. But this most certainly does not mean that we should conclude that we live in the best of all possible worlds, or that discount rates observed in the real world are necessarily “right” in any ethical sense.

It is worth underscoring that I am not claiming that any of this literature contains solutions to all the very real, and still unresolved, dilemmas that discounting raises. Ramsey and more recent writers have adopted the philosophical approach of utilitarianism, and it is fair to ask if people really make the rational decisions that the utilitarian model presupposes. We might also ask whether the present generation should be empowered to act as stewards of the interests of posterity as well as its own, as Ramsey’s model assumes. Doganova is also right to note the important, if unsurprising, point that special interests will attempt to twist any method of analysis that comes to be adopted as a “technology of government” to their own purposes. But discounting remains a useful “political technology” for helping decisionmakers think about current actions in relation to an unknowable future.

Second-Order Effects of Artificial Intelligence

In “Governing AI With Intelligence” (Issues, Summer 2024), Urs Gasser provides an insightful survey on regulating artificial intelligence during a time of expanding development of a technology that has tremendous upside potential but also downside risk. His article should prove especially valuable for policymakers faced with making critical decisions in a rapidly changing and complex technological landscape. And while it is difficult enough to make decisions based on the direct consequences of AI technologies, we’re now beginning to understand and experience some second-order effects of AI that will need to be considered.

Two examples may prove illustrative. Focusing on generative AI, we’ve witnessed over the past decade or so rapid development and scaling of the transformer architecture and diffusion models that have revolutionized how we generate content—text, images, software, and more. Applications based on these developments (e.g., ChatGPT, Copilot, Midjourney, Stable Diffusion) have become commonplace, used by millions of people every day. Much has been observed about increases in worker productivity as a consequence of using generative AI, and indeed there are now numerous careful empirical studies demonstrating positive effects on productivity in, for example, writing, software development, and customer service. But as worker productivity goes up, will there be reduced need for today’s quantity of workers? Indeed, the investment firm Goldman Sachs has estimated that 300 million jobs could be lost or diminished by AI technology. The company goes on to estimate that 25% of current work tasks could be automated by AI, with particularly high exposures in administrative and legal positions. Still, the company also points out that workforce displacement due to automation has historically been offset by the creation of new jobs following technological innovation, and that it is these new jobs that actually account for employment growth in the long run.

We’re now beginning to understand and experience some second-order effects of AI that will need to be considered.

A second example relates to AI energy consumption. As generative AI technologies and applications scale, with more and more content being generated, we are learning more about the energy consumed in training the models and in generating new content. From a global energy consumption view, one estimate holds that by 2027 the AI sector could consume as much energy as a small country (e.g., the Netherlands)—potentially representing half a percent of global energy consumption by then. Taking a more granular view, researchers have reported that generating a single image with a powerful AI model uses as much energy as charging an iPhone, and that a single ChatGPT query consumes nearly as much energy as 10 Google searches. Here again there may be some good news: it may well be possible to use AI to find ways of reducing global energy usage that more than make up for the increased energy needed to power modern AI.
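A rough plausibility check of that half-percent figure, using assumed round numbers: the Netherlands consumes roughly 110–120 terawatt-hours of electricity per year, while global electricity consumption is on the order of 27,000 terawatt-hours, so

\[
\frac{120\ \text{TWh}}{27{,}000\ \text{TWh}} \approx 0.4\%,
\]

which is consistent with the estimate cited, if the comparison is read as a share of electricity rather than of all energy.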

As use of AI expands, these and other second-order (and higher) effects will likely prove increasingly important to consider as we work to develop policies that lead to responsible governance of this critical technology.

Professor Emeritus, Department of Computer Science

Southern Methodist University

There is much wisdom in Urs Gasser’s essay: verbal and visual maps of the emerging variety of governance approaches across the globe and some cross-cutting insights. Especially important is the call for capacity-building to equip more people and sectors of societies to contribute to the governance and development of what seems the most significant technological development since the invention of the book. Apple’s Steve Jobs once said, “Technology alone is not enough. It’s technology married with the liberal arts, married with the humanities, which yields us the results that make our hearts sing.” Ensuring that the “we” here includes people from varied backgrounds, communities, and perspectives is not only a matter of equity and fairness, but also important to the quality and trustworthiness of the tools and their uses.

The challenge is not simply unequal knowledge and resources, but also altering who is “at the table” where vital decisions about purposes, design choices, risk levels, and even data sources are made.

In mapping governance approaches, Gasser includes the development of constraining and enabling norms: efforts that seek to “level the playing field” through AI literacy and workforce training, and transparency or disclosure requirements that “seek to bridge gaps in information between tech companies and users and societies at large.” Here the challenge is not simply unequal knowledge and resources, but also altering who is “at the table” where vital decisions about purposes, design choices, risk levels, and even data sources are made.

Individual “users” are not organized, and governments and companies are more likely to be hearing from particularly powerful and informed sectors in designing governance approaches. What would it take to ensure the active involvement of civil society organizations—workers’ groups, Indigenous groups, charitable organizations, faith-based organizations, professional associations, and scholars—in not only governance but also design efforts?

Experiments in drawing in such groups and individuals would be a worthy priority for governance initiatives. Meanwhile, Gasser’s guide offers an effective place for people from many different sectors to identify where to try to take part.

300th Anniversary University Professor

Harvard University

Much talk about governing artificial intelligence is about a problematic balance. On the one hand, there are those who caution that regulation (even short of overregulation) will slow innovation. This position rests on two assumptions, each requiring substantiation rather than assertion: that regulation retards frontier thinking, and that innovation brings wider social benefit beyond profit for the information economy. On the other hand, there are those who fear the risks of AI, some already apparent and many yet to be realized. Whether or not the balance debate is grounded in more than supposition, it raises fundamental questions about how we value and prioritize AI.

Urs Gasser is confident of a growing consensus around the challenges and benefits attendant on AI, but less so about the “guardrails” necessary to ensure its safe and satisfying application. He holds that there is a galvanizing of norms that might give form to governing AI intelligently. No doubt, there have been decades of deliberation in formulating an “ethics” to influence the attribution and distribution of responsibility, without coming closer to agreement on what degree of risk we are willing to tolerate for what benefits, no matter the principles applied to either. National, industrial, and global energies directed at governance exhibit a diversity of strategies and languages that is as much evidence of a failure to achieve a common intelligence for governing AI as of an emerging consensus. This is not surprising when so much political, economic, commercial, and scientific hope is invested in an AI-led recovery from degrowth, and in AI answers to impending global crises.

Are we advancing toward a more informed governance future for AI by concentration on similarities and divergences in systems, means, aims, and purposes across a tapestry of regulatory styles? Even if patterns can be distilled, do they indicate anything beyond Gasser’s “islands of cooperation in the oceans of AI governance”? He is correct in arguing that guardrails, forming boundaries of permission within which a healthy alliance between human decisionmaking and AI probabilities can operate, are essential if even a viable focus for AI governance is to be determined. However, with governance following what he calls “the narrow passage allowed by realpolitik, the dominant political economy, and the influence of particular political and industrial incumbents,” the need for innovation in AI governance is pressing.

Are we advancing toward a more informed governance future for AI by concentration on similarities and divergences in systems, means, aims, and purposes across a tapestry of regulatory styles?

So, we have the call to arms. Now, what to do about it in practical policy terms? Recently, when asked what the greatest danger posed by AI was, a renowned data scientist immediately responded: “dependency.” Our digital universe has enveloped us in a culture of convenience, making it almost impossible to determine whether the ways we depend on AI-assisted technology are good or bad. Beyond this crucial governance question, it is imperative to reposition how we prioritize intelligence. Why should generative AI predominate over the natural processes of human deliberation? From the Enlightenment to the present age of techno-humanism, scientific managerialism has come to dominate reason and rationality. It is time for governance to show courage in valuing human reasoning when measuring the benefits of scientific rationality.

Distinguished Fellow, British Institute of International and Comparative Law

Honorary Professorial Fellow of the Law School, University of Edinburgh

Transforming How the Environmental Protection Agency Does Science

For the past half century, the US Environmental Protection Agency (EPA) has applied a common regulatory framework to the implementation of public health and environmental statutes that primarily involves focusing on discrete sources of air, land, or water pollution. This strategy has been successful in reducing the presence of specific compounds that can be harmful in the environment. But that is as far as it goes.  

This method—known as command-and-control regulation—was very much of its time and continues as EPA’s predominant policy framework. Consider, for example, how car pollution is regulated. EPA restricts the emission of individual pollutant classes, such as carbon monoxide and nitrogen oxides escaping from tailpipes. Sometimes the agency targets multiple pollutants at once, as it did this year when finalizing new rules to ratchet up vehicle emission standards and reduce pollution. Ultimately, these updated regulations will require adoption of cars and trucks that aren’t powered by fossil fuels—principally electric vehicles (EVs).

But because EPA scientists and administrators focus on protecting the public from particular pollutant exposures, they are unable to adequately address the broader question of what this EV transition will mean for the environment and for human health. The agency has not studied the release of greenhouse gases and other pollutants from the extraction of lithium, manganese, nickel, and other materials necessary to build EV batteries and other vehicle components. Neither has it evaluated population and ecosystem exposures to new sources of pollution associated with the manufacture of EVs. EPA scientists have also not identified impacts from extracting more water in areas already stressed by limited water resources, nor have they determined health and environmental risks from transporting, storing, and processing materials used in EV production, or assessed pollution levels from EV use in commercial vehicles or by consumers.

EPA’s knowledge gaps also span other major health and environmental challenges. These include: controlling emissions in the power-generation sector, even as the transition is underway from coal and natural gas to renewables and nuclear energy; decoupling plastics production from reliance on natural gas—another necessary transformation in the age of climate change; and managing the 10,000 variants of per- and polyfluoroalkyl substances (or PFAS, commonly called “forever chemicals”) that are present in thousands of communities across the country and are associated with a range of reported health effects.

As EPA’s toolkit expands to include life cycle analysis, data analytics, and other methods, a significant and fundamental challenge will be developing the capacity to understand key relationships between future pollution sources and economic and energy transformations currently underway. Today’s globally integrated economy, with millions of supply-chain and value-chain pollution sources, has rendered the single-pathway method of command-and-control regulation ineffective. Contemporary challenges are systemic in nature; each originates from multiple kinds of sources and economic enterprises. Agencies such as EPA need to modernize their approach to scientific and regulatory decisionmaking to better understand the causes of contemporary environmental risks and respond to them effectively.

A significant and fundamental challenge will be developing the capacity to understand key relationships between future pollution sources and economic and energy transformations currently underway.

Researchers have begun to assess environmental and health impacts from multiple aspects of a product, service, or industrial process—starting with the production of raw materials and continuing through manufacturing, distribution, use, and disposal. This “systems” approach to research planning and environmental decisionmaking can yield both innovations and insights to protect future public health and the environment. Consequential reform is possible: the One Environment–One Health framework, an interdisciplinary approach first developed by epidemiologists working to prevent disease transmission between wildlife and humans in the early 2000s, has been adopted in various parts of the US government and among international institutions. The framework has motivated a systems approach to the science and regulation involved in ensuring a livable and sustainable human habitat.

This need for a shift from single-media regulation to a systems approach occurs in the context of other significant changes: concerns about environmental justice; the energy transition away from fossil fuels; accelerating climate change; and unpredictable new technologies as widely dispersed as social media, artificial intelligence, and biotechnology—not to mention possible new limits on the agency’s regulatory and enforcement authority, such as those triggered by recent Supreme Court rulings. For all of these challenges, a modernized EPA, implementing the One Environment–One Health framework, would have much to offer. A question, then, is what prevents the agency from taking this different course. And an even more important question, perhaps, is how exactly to foster the necessary changes.

The path forward: One Environment–One Health

A chief obstacle to systems thinking is EPA’s antiquated culture and strategy for generating scientific information and presenting it to policymakers, business executives, and consumers. In a deeply interconnected and rapidly changing world, EPA must develop a culture of innovation and collaboration that moves away from the single-pathway approach. In its place, the agency urgently needs a new framework for generating knowledge that can identify more policy options for decisionmakers and stakeholders and also disseminate expertise to the public in a transparent way. 

A scientific culture of innovation and collaboration rests on two pillars. First, it is essential that regulators appreciate the interdependencies of human and environmental health. Second, they must mobilize multiple scientific disciplines and institutions to address risks affecting both human and environmental endpoints. These are also the cornerstones of One Environment–One Health. In their recent report Transforming EPA Science to Meet Today’s and Tomorrow’s Challenges, the National Academies of Sciences, Engineering, and Medicine recommend that EPA adopt the One Environment–One Health framework to govern both its selection of research projects and its processes for communicating results to policymakers, businesses, the media, and consumers.

There are a number of key differences between One Environment–One Health and EPA’s current approaches to planning for science and decisionmaking. Importantly, One Environment–One Health provides a systems lens for identifying and evaluating risks. Following this framework means studying the full life cycle of each challenge across every level of the biosphere, from organelles and cells to tissues and organs, individual organisms, the communities they comprise, and ultimately ecosystems assembled from interacting species. Crucially, research of this sort integrates data and knowledge provided by multiple stakeholders across disciplines. Their perspectives help to assure that studies ask the appropriate questions and anticipate the full range of impacts, including secondary and tertiary consequences. By taking advantage of such collaboration, research carried out under the One Environment–One Health framework can lead to emergent solutions that would not be discovered using more traditional, siloed methods of research and public-policy management.

As the diagram illustrates, One Environment–One Health applies a systems-thinking approach, with a sequence of steps to integrate information from multiple scientific disciplines. Each step is linked to consideration of all layers of the biosphere. Collaboration across organizations will enhance identification of scientific and technical advances for meeting future environmental and health challenges.

A roadmap for EPA transformation

Broadly speaking, there are four areas in which EPA can improve its capacity to achieve a culture of innovation and collaboration. These improvements would be critical investments in the agency’s future as well as in the health of the people and ecosystems it serves.

First, the agency could do more to leverage information technology. The One Environment–One Health framework is data-heavy: the physical environment is the source of much of that data—and a rich source at that. Digital technologies provide the means of collecting, integrating, analyzing, and using all kinds of information. For instance, mitigating problems of environmental justice requires integrating many kinds of knowledge about particular communities, including knowledge of their demographics, disease burdens, access to medical care, and pollution loadings. Obtaining such extensive data involves multiple inputs, such as from sensors that trace pollutants as well as the participation of community residents surveying their health and location information.

Research carried out under the One Environment–One Health framework can lead to emergent solutions that would not be discovered using more traditional, siloed methods of research and public-policy management.

This data gathering could lead to more knowledge relevant to decisionmaking. For example, machine learning may be valuable in detecting patterns of risks across multiple pollutant exposures and identifying stresses affecting humans and critical nonhuman species. Integrated datasets can be used to compare relative toxicities for a range of pollutant exposures and estimate their effects. Doing this sort of modeling could expand the options available to decisionmakers. For example, by prioritizing risks in communities that are exposed to complex mixtures of pollutants, decisionmakers can develop more effective strategies to protect the health of people and environments in the area—using not only regulatory tools, but also direct stakeholder engagement.
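To make that concrete, here is a minimal, purely illustrative sketch of the kind of pattern-finding described above: grouping communities by their multi-pollutant exposure profiles so that those facing complex mixtures can be prioritized. The dataset, column names, and cluster count are hypothetical stand-ins, not EPA data or methods.

```python
# Illustrative sketch only: cluster communities by multi-pollutant exposure
# profiles. All data, column names, and parameter choices are hypothetical
# stand-ins, not EPA datasets or methods.
import numpy as np
import pandas as pd
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
pollutants = ["pm25", "ozone", "pfas", "lead"]

# Synthetic stand-in for an integrated dataset: one row per community,
# one column per measured exposure (real data would carry units and QA flags).
exposures = pd.DataFrame(
    rng.lognormal(mean=0.0, sigma=1.0, size=(500, 4)),
    columns=pollutants,
)

# Standardize so no single pollutant dominates the distance metric.
X = StandardScaler().fit_transform(exposures)

# Group communities into a handful of exposure profiles.
kmeans = KMeans(n_clusters=4, n_init=10, random_state=0)
exposures["profile"] = kmeans.fit_predict(X)

# Rank profiles by average standardized burden across all pollutants;
# the highest-burden cluster flags communities facing complex mixtures.
burden = (
    pd.DataFrame(X, columns=pollutants)
    .assign(profile=exposures["profile"])
    .groupby("profile")
    .mean()
)
print(burden)
print("Highest-burden profile:", burden.mean(axis=1).idxmax())
```

In practice, any such analysis would rest on validated monitoring data and far more careful feature and model choices; the sketch shows only the shape of the workflow: integrate, standardize, cluster, prioritize.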

EPA has already begun a transition toward more formally adopting such monitoring methods, both in research and community surveillance, through a series of initiatives in lower-income communities along the lower Mississippi River and in other regions. Further, it has promulgated regulations to limit emissions of ethylene oxide and other hazardous chemicals.

A second area for improvement is the nurturing of innovation networks, both within EPA and across its boundaries with other institutions. No single organization possesses the resources, workforce skills, or technology platforms necessary to develop effective solutions to problems at local, regional, or global scales. Those solutions will only come from innovation ecosystems: organizations and people with common or complementary objectives working together by exchanging information, talent, and resources.

For instance, a key environmental protection innovation of recent years has seen corporate water users and their suppliers collaborating to reduce carbon emissions and water consumption across their business activities. Innovation is, after all, a social process; it happens at the intersection of diverse cultures and missions. Professionals working together across disciplines and harnessing varied points of view are essential for innovation, regardless of organizational structures. A small number of loosely connected teams or enterprises can innovate; so can large-scale global corporations and partnerships.

A number of recent innovations—the use of satellites to detect methane releases to the environment from oil and gas operations in remote locations, for example, and more systemic identification of plastics ingredients across product life cycles—embody the multidisciplinary collaboration that yields creative solutions to large-scale problems. In each case, scientists from academia, government, business, nongovernmental organizations, and philanthropies worked in teams and leveraged resources to design research projects aimed at addressing global problems. 

Third, and related to the above, it is essential that EPA develop a wider culture of innovation and collaboration that can operate at the scale of current and future problems. Most scientific organizations have a history of collaboration with select research partners. But as public health and environmental challenges grow more complex—with any given challenge involving and influencing more and more industries, ecosystems, and human populations—these partnerships must evolve beyond narrow project- and subject-specific focuses. Collaboration must be as systemic as the problems themselves.

No single organization possesses the resources, workforce skills, or technology platforms necessary to develop effective solutions to problems at local, regional, or global scales. Those solutions will only come from innovation ecosystems.

What, concretely, does this look like? For starters, agencies like EPA should cultivate collaboration among major players within particular industrial sectors while convening nationwide and global multistakeholder partnerships. EPA researchers could serve as brokers, creating collaboration platforms and developing commitment mechanisms. EPA or other agencies can use their role as conveners to encourage adoption of collaborative research plans in which all major partners join together to co-define research objectives and participate as co-decisionmakers in management and oversight of research and funding. Agency conveners can improve transparency in research planning as well as in reporting results, which would strengthen accountability among the partners and enhance the credibility of research findings. And organizations like EPA, through their global professional networks, are well-positioned to expand international collaboration with the goal of addressing transborder problems such as climate change, water scarcity, ecosystem stress, declining biodiversity, and the environmental consequences of geopolitical conflicts.

There are multiple benefits of such collaborations. Research initiatives gain access to talented professionals from a range of disciplines and institutions. Collaborations help build constituencies and buy-in across scientific enterprises. Partners can leverage each other’s resources to enable research at a significantly larger scale. By negotiating commitments, partners harmonize their priorities, allowing them to efficiently contribute to collective goals. And collaborations gain expanded capacity to disseminate scientific results, enabling them to reach broad audiences and speak with the authority of multiple expert organizations.

The fourth intervention in EPA’s innovation culture is to foster open scientific communication using social media platforms. Open communication facilitates collaboration and trust in science, which in turn can help researchers and policymakers get the most out of One Environment–One Health. The framework prioritizes exchange and public impact, both of which demand sharing scientific data among researchers and organizations that use scientific findings to drive decisionmaking.

Organizations like EPA, through their global professional networks, are well-positioned to expand international collaboration with the goal of addressing transborder problems.

Importantly, the scientific community and the many organizations informed by science must be empowered to reduce public skepticism. The audience for scientific communication has grown and changed dramatically thanks to the online revolution in information sharing. Vocal and well-organized groups can absorb scientific information and distort it; many reject the evidence base of business and public policy decisions. Many consumers of scientific information are casting doubt on the credibility, relevance, and ethical underpinnings of research findings, as well as the motivation behind policies designed to protect public health and the environment. Scientists and agencies such as EPA need more effective means to counter distrust.

Today, EPA and its partners engage the public primarily by using traditional tools, such as conference presentations, publication in peer-reviewed literature, websites, and the public comment mechanism built into the regulatory process. These are necessary tools, but they are not sufficient to meet shifting expectations for transparency surrounding data collection and use.

To respond to these challenges, scientists and their sponsoring organizations should embrace the more open system One Environment–One Health calls for. In particular, scientists can supplement existing communication platforms with social media initiatives. Four strategies in particular can help. First, researchers in science-based organizations can use their online presence to communicate about their conformity with established standards for scientific ethics. Second, scientists should collaborate with communications professionals to develop clear, data-driven messages for dissemination via social media. Third, science-based public policy should include citizen-facing reports of major studies or groups of studies. Aimed at everyday readers who don’t have scientific expertise, such reports would help to contextualize scientific findings in a broader narrative relevant to and comprehensible by nonexperts. Finally, EPA and other science-based regulators should expand and integrate their research into classroom materials, promoting scientific literacy via education systems.

Applying One Environment–One Health to large-scale problems

Replacing the single-endpoint or single-media approach with the One Environment–One Health framework would be transformative, enabling EPA to prioritize decisionmaking relevant to public health and environmental protection problems that are truly urgent today. This approach will also prepare EPA to address new problems as they arise.

Consider how this structure might operate in some of these significant real-world challenges. One pressing difficulty is food waste, a major contributor to three simultaneous planetary crises. Food production is fossil-fuel intensive, so waste needlessly adds to carbon emissions; food waste also aggravates problems otherwise associated with climate change, affecting, for instance, water resource availability. Industrial-scale agriculture is further linked with ecosystem and biodiversity loss. Finally, runoff of agricultural nutrients and pesticides is a serious source of pollution. As it is, the United Nations Environment Programme estimated that in 2021, food waste from households, retail enterprises, and the food-service industry globally totaled 931 million metric tons, and the amount of waste is growing.

A major challenge in ongoing efforts to solve the food waste problem, both locally and globally, lies in the fragmented relationships among farmers, food collection and transport systems, processing businesses, retail establishments, and consumers. The lack of data exchanges and collaboration among these value-chain participants mirrors disjointed policy design by governments and investment decisions by businesses.

Replacing the single-endpoint or single-media approach with the One Environment–One Health framework would be transformative, enabling EPA to prioritize decisionmaking relevant to public health and environmental protection problems that are truly urgent today.

The One Environment–One Health approach facilitates comprehensive solutions by promoting open-source data that everyone—including farmers and consumers—can use to better appreciate their interrelated roles in the food system. Within those data sources are clear signals that could aid in resolving a growing planetary problem. Integrated knowledge of the food value chain would encourage governments and businesses to raise the bar, developing innovative agricultural practices that improve efficiencies in energy and water use and implement postharvest refrigeration technologies to prolong the life cycle of food products.

A second problem that could be addressed using the One Environment–One Health framework is plastic waste. More than 10 million metric tons of plastic waste enter the oceans each year from land-based sources, and that figure is expected to rise to 20 million metric tons per year by 2050. Simultaneously, less than 10% of plastic materials are recycled, and 32% of plastic packaging is not captured in collection systems. Yet even as huge amounts of plastic become waste streams, worldwide production is exploding. In the US Gulf Coast region alone, 10 new plastic production plants and 17 plant expansion projects are planned over the next five years.

Since 2022, delegates from 175 countries have been attempting to negotiate an internationally binding treaty to curb plastic pollution, including in the marine environment. But the various national delegations are at loggerheads over specific commitments, which include production limits for particular plastics, investment in waste-collection infrastructure, and provisions to encourage enhanced recycling and reuse of plastics. This is an opportunity to apply a systems-thinking approach to plastic waste—as opposed to managing individual elements of the problem (e.g., waste management or enhanced recycling)—to address its many interconnected challenges.

Applying a One Environment–One Health framework could help develop more robust solutions. Using innovative, cost-effective, data-rich labeling systems to track plastics could make recyclable and reusable waste easier to identify. But even bigger results could come from research assessing the environmental and social impact of plastics across their life cycles, particularly if agencies and businesses encourage collaboration among researchers and product-makers to invest in design for recyclability. These efforts would help to establish an analytically sound foundation for recycling targets set by governments while also formalizing extended-producer responsibility programs, which task manufacturers with handling products at the end of their useful lives. Finally, implementing the framework would encourage assessment of greenhouse gas emissions from plastics production as part of climate change policy development and business planning.

The example of plastics is just one way that the use of One Environment–One Health and interoperable datasets can help society learn how to better control the fate of materials already in use and design future products to avoid waste.

Making more robust decisions

Within EPA and across federal, state, and local governments, many individual elements of a One Environment–One Health approach to scientific planning and decisionmaking are already in place. However, these elements are not sufficiently coordinated. Government agencies need to further invest in building cohesiveness, continuity, and scope into the use of the framework. When they do, many citizens will be surprised to learn just how much data analysis can contribute to solving problems of both immediate and longer-term concern across a growing range of health and environmental challenges. Importantly, greater transparency can improve the credibility of research findings, building confidence in science among both public- and private-sector stakeholders and encouraging greater buy-in.

The One Environment–One Health approach facilitates comprehensive solutions by promoting open-source data that everyone—including farmers and consumers—can use to better appreciate their interrelated roles in the food system.

In June 2024, the US Supreme Court ruled in Loper Bright Enterprises v. Raimondo that a four-decade precedent of federal court decisions to defer to agency interpretations of statutes, known as Chevron deference, was no longer valid. This decision could affect government policy decisions across a range of public health, environmental, pharmaceutical, financial integrity, telecommunications, and workplace safety issues. But it will require years, if not decades, of subsequent litigation to clarify the intent and scope of judicial authority over regulatory policy development. Still, many of today’s statutes are, in fact, clear and specific in their language and scope—and EPA, along with other agencies, will still need to retain its ability to conduct research and assess risks to inform policymaking choices.

Adopting new and transparent frameworks for assessing risk and building consensus among stakeholders could prove invaluable to agencies as they navigate this period of legal uncertainty. As public health and environmental policymaking become more driven by stakeholder expectations, implementing a One Environment–One Health framework can further inform and empower these stakeholders in their communications with government agencies, and thereby further legitimize actions the latter may consider.

In this and other circumstances, the One Environment–One Health framework for research and analysis can provide decisionmakers inside and outside government with a more complete understanding of health and environmental challenges. And, by advancing this outcome in a transparent manner, it can add credibility and value to efforts to address major risks of the present and future. Compared to current frameworks that focus on individual pollutants and pathways, One Environment–One Health, with its foundation in systems thinking, can provide more significant support to EPA and other agencies to advance their ultimate goals: healthier people and a healthier planet.

Science Diplomacy and the Rise of Technopoles

In the three decades after the Cold War ended, science diplomacy became an important component of the foreign policy toolkit. In particular, it became a key tool for responding to global challenges that involve science—including climate change and global development. Diplomacy’s integration of science and technology expertise has reshaped how nations address such issues, fostering a more collaborative and informed international community. However, the conditions under which science diplomacy blossomed in an era of growing globalization are now changing. In today’s multipolar world of fracturing alliances, the influence of science and technology is increasingly tied to the advancement of individual nations’ geostrategic and economic interests. In this new context, science diplomacy must evolve.

The development of modern science diplomacy

Although the origins of science diplomacy are often traced to the Cold War, its modern form began to take shape in the late 1990s, when US Secretary of State Madeleine Albright undertook a broader reframing of US strategy and priorities after the Cold War. She asked the National Academy of Sciences to provide guidance on the role of science, technology, and health in US foreign policy. The role of science adviser to the secretary of state was created as a result, reflecting a shift toward integrating scientific expertise into foreign policy and underscoring the increasing importance of science and technology in international relations. Parallel efforts to bolster science capacity in foreign ministries were undertaken worldwide as nations recognized that embedding scientific expertise within their diplomatic frameworks made them better equipped to participate in international negotiations, shape policy, and foster collaborations to address global challenges in fields including cybersecurity, biotechnology, and environmental policy.

At times, scientists themselves have played a role in foreign policy. During the Obama administration, for example, a long-standing scientific and professional relationship between US Secretary of Energy Ernest Moniz and the head of Iran’s Atomic Energy Organization, Ali Salehi, paved the way for both technical and diplomatic agreement. And after Russia’s invasion of Ukraine in 2022, active engagement between Western and Ukrainian researchers helped integrate Ukraine’s science and innovation community into Western systems, including Europe’s Horizon program.

In today’s multipolar world of fracturing alliances, the influence of science and technology is increasingly tied to the advancement of individual nations’ geostrategic and economic interests. In this new context, science diplomacy must evolve.

Meanwhile, the nongovernmental sector’s role in promoting science diplomacy has grown. In 2008, the American Association for the Advancement of Science established the Center for Science Diplomacy, which aims to promote better understanding and cooperation between countries through science and provides a framework for addressing global challenges such as climate change, pandemics, and food security. The center’s success has inspired the creation of similar institutions, including the EU Science Diplomacy Alliance, to promote science diplomacy as a tool for the European Union’s external actions.

Although science diplomacy was seen as a tool of large countries during the Cold War, by the 2000s some smaller countries started to use it to advance their own interests. Israel and Singapore leveraged their investment in science to attract multinational companies for economic advancement. In 2009, New Zealand appointed a science envoy to assist in developing relationships with other small, advanced economies with which it had otherwise had relatively little interaction. As a gateway to the Antarctic, New Zealand was able to provide logistics support for joint scientific expeditions as a way to smooth over tensions with the United States around nuclear policies. Rwanda also started to emphasize using science diplomacy to attract investment and expert assistance, leading the country to emerge as a continental leader in new technologies.

Emergent challenges

Today, one cannot look at the landscape of science diplomacy without recognizing that the era of globalization—and, with it, the commitment to global interdependence and cooperation on global science issues—is in retreat. Already, active conflicts in Ukraine and the Middle East are explicitly putting greater strain on traditional instruments of science collaboration, such as the International Institute for Applied Systems Analysis and the Arctic Council. But in a larger sense, the unstable state of this multipolar world constricts and changes the space that science diplomacy can operate in.

An underlying assumption of the era of globalization was that rules-based trade requiring cooperation between state actors would ultimately reduce global tensions and allow global action on common issues. But recently, as economies have become more intertwined, tensions have grown. And with the proliferation of technology, the interface between science, technology, economics, and security interests has tightened. Now that some emerging technologies cannot be considered independently from economic, defense, and security interests, relatively unsophisticated measures such as export controls may not be up to the task of protecting national interests. By 2024, the drive to open science was being replaced in political declarations from many countries with the mantra “as open as possible, as closed as necessary.”

One cannot look at the landscape of science diplomacy without recognizing that the era of globalization—and, with it, the commitment to global interdependence and cooperation on global science issues—is in retreat.

But even as countries have begun restricting scientific interchange, the world faces common, global challenges that science and technology must address. This obvious paradox points to weaknesses in the previous conception of science diplomacy and explains why responses to global issues such as climate change, sustainability, pandemics, and autonomous weapons have been inadequate. When science diplomacy becomes disconnected from critical national security and economic priorities, it can no longer influence policy. One of the criticisms of the Kyoto Protocol, which required ratifying nations to set individualized targets for greenhouse gas emissions reduction, was that it used an international agreement to drive domestic policy. Although the effort had some success in some countries, it encountered greater difficulty in the United States, where a domestic political consensus for climate action had not been reached. Under such circumstances, one can build consensus internationally but fail to build it nationally—and national priorities inevitably carry the day.

These fault lines in traditional science diplomacy were significant, but they are now enlarged by new technologies that easily cross national boundaries. These include digital technologies—particularly the rapidly emerging advances arising from artificial intelligence and large language models—synthetic biology, and the use of space and extraterrestrial resources. Quantum technologies with security and defense applications will likely create further challenges.

Compounding this complexity is the role of transnational platform companies in developing and selling emergent technology. Some of these companies are successfully avoiding national regulations, which is challenging the role of nation states. They have found ways to take advantage of a weak and divided multilateral system that has failed to ensure oversight that benefits the planet and its citizens. Social media companies, for example, have been slow to comply with myriad EU rules, leading to the August 2024 arrest of the Telegram CEO in Paris. And Elon Musk, owner of Starlink, decided to override US interests in the Ukraine conflict, highlighting the power that individuals can now exert in what was traditionally an arena for state actors alone.

Even as countries have begun restricting scientific interchange, the world faces common, global challenges that science and technology must address.

Finally, in an environment already in flux on many levels, China has shifted its position from being an active driver of collaboration in global science toward being a more independent and self-reliant science power. Indeed, China’s shift shows the increasingly critical role that science and technology play in defining geostrategic positions. The multipolar world is now defined as much by distinct approaches to technology and innovation as it is by ideology. We might even call such powers technopoles.

It is difficult to forecast where these trends will lead. The rise of populist, isolationist, or right-wing parties—in Europe or the United States—could change the landscape of scientific collaboration and diplomacy still further. And there is a possibility that conflict within regions could increase, destabilizing not only regional scientific collaboration, but also international bonds. Already, national budgets and priorities can be seen turning toward national security–focused economies, as in Europe’s response to aggression from Russia and growing concerns with China. The highly anticipated competitiveness roadmap for Europe produced by former Italian prime minister Mario Draghi highlights such a priority shift.

Where will science diplomacy go? 

As the conditions that gave rise to today’s forms of science diplomacy continue to shift, the field must evolve. And as it does, it faces an inherent dilemma. Is the purpose of science diplomacy to narrowly promote a country’s economic and security interests? Or is the purpose also to advance a global agenda—progress on issues such as climate change, pandemic prevention, and sustainable development—through science and science-based innovation, treating science as a global public good on the premise that advancing the global good advances every nation’s interests? There is an explicit tension between these two views of science diplomacy’s future role. The fundamental challenge for the field is whether it can serve both these roles—and if so, how?

A further challenge for science diplomacy is that domestic science, economic, and national security policies can conflict with broader objectives related to the global commons. For example, research security policies are being elevated above common interests, including reducing carbon emissions—even among like-minded nations. New mechanisms are needed to better align global priorities with these research security policies. To accomplish this, science diplomats must find ways to bring a broader range of stakeholders into the discussion, including governments, business, and academia. As science diplomacy moves beyond the use of science to “build relations among geopolitical adversaries”—its traditional conception—it has the opportunity to play a new role in building partnerships and shared rules to achieve global objectives while respecting national priorities. As former science advisors from two different governments, we believe that science diplomats should begin to explore several avenues for resolving these tensions: the use of regional alliances, reconsideration of the roles of formal and informal science diplomacy, and building trust through institutions and shared rule-making.

Regional collaboration

The rise of populist, isolationist, or right-wing parties—in Europe or the United States—could change the landscape of scientific collaboration and diplomacy still further.

One possible avenue for science diplomacy is to expand its purview beyond immediate national benefit to a broader understanding of how the field can operate at what might be called regional levels. Here, regional is a loose term. There are real opportunities in working not only among neighbors, but also among allied nations with shared values and broader objectives. The recently completed AUKUS agreement provides an important example of the emerging use of technology partnerships among like-minded and like-valued countries. Under this agreement, which establishes a trilateral security arrangement between Australia, the United Kingdom, and the United States, the countries will work together to develop next-generation submarines. A central pillar of this agreement addresses the technology partnership between the nations, which focuses on joint work in critical and emerging technology areas including artificial intelligence and autonomy, undersea capabilities, quantum technologies, advanced cyber, hypersonic and counter-hypersonic capabilities, and electronic warfare.

The AUKUS partnership could provide the template for a broader pan-Pacific partnership among like-minded and like-valued countries in developing a technologically based free-trading bloc. This agreement also demonstrates a fast-developing paradigm for reconciling national security policy and diplomacy, international science and technology policy, and domestic research security. And although tensions remain among the stakeholders, such convergence will be critical for science and technology diplomacy to flourish under these circumstances.

The European Union’s General Data Protection Regulation (GDPR) is another example of a regional or allied science diplomacy initiative. The GDPR reflects a values-led approach to technology regulation, and it has influence well beyond Europe. Indeed, one way for philosophically like-minded countries to build cooperation in the science space is to expand on what they have in common.

Beyond track 1 and track 2

Another opportunity to make progress in this conflicted space is to consider the potential roles of different actors. Much academic discussion has focused on track 1 diplomacy, or formal diplomacy, suggesting that track 2, or informal, efforts were largely a spillover from scientific cooperation. However, the reality has been more nuanced. Sometimes projects initiated by track 1 players have been enacted by track 2 actors. During the Obama era, for example, the US National Academy of Sciences played an active role in mediating the intended rapprochement with Cuba. Conversely, track 2 activities have led to significant diplomatic achievements. The scientifically led International Geophysical Year of 1957–58 resulted in the Antarctic Treaty. Track 1 and 2 approaches are not separate, but increasingly intertwined.

Inevitably, direct national interests will be primarily driven by the political processes determining economic and security policy. Domestic scientific communities strive to show their relevance to their national funders by supporting such efforts. But at the same time, it has been the global scientific community that has brought attention to climate change, biodiversity loss, pandemic risks, and many other existential threats which require concerted, collective action. In this narrowing window of opportunities to make progress on critically important global goals, track 2 science diplomacy may become even more necessary. For example, the activities proposed for the 2032 International Polar Year, which emphasizes involvement and coproduction of knowledge between a range of Arctic stakeholders, could help to reduce diplomatic tensions and build relationships.

A renewed emphasis on track 2, or a hybrid approach utilizing both tracks, would actually be returning to a role the international science community has often played in history. In the eighteenth century, for example, scientists worked across nations in conflict on issues of common interest, such as gathering measurements of the transit of Venus from multiple sites to estimate the astronomical unit, the distance from Earth to the sun. In the current context of rapid scientific change and high tensions among nations, track 2 efforts could again be an important tool.

Engaging with trusted brokers

Another historically proven way to balance national science goals with global ones while building trust can be found in the many institutions that are natural brokers in the space of international science. One example that demonstrates the evolving potential of these brokers is the International Council of Scientific Unions (ICSU), now known as the International Science Council (ISC). With origins in the late nineteenth century, the ICSU played an important role during the Cold War by sponsoring the aforementioned International Geophysical Year of 1957–58. Later, the ICSU cosponsored the Villach Conference, which led to demand for political action on climate change and thus the formation of the UN Framework Convention on Climate Change.

As science diplomacy moves beyond the use of science to “build relations among geopolitical adversaries”—its traditional conception—it has the opportunity to play a new role in building partnerships and shared rules to achieve global objectives while respecting national priorities.

In 2018, the organization became the ISC following the merger between the ICSU, which then represented the natural sciences, and its equivalent in the social sciences. Today the ISC’s membership includes virtually all of the world’s scientific academies and international scientific organizations across the Global North, South, East, and West, as well as across both the natural and social sciences. Worldwide, the ISC has promoted more transdisciplinary approaches in science to generate actionable knowledge in local contexts, while also considering how new technology regulation might be put into practice.

Recently the organization has taken a greater lead in track 2 diplomacy, particularly in connecting the global scientific community and the United Nations. For example, its work on policy lessons from the COVID-19 pandemic involved partnerships with the UN Office for Disaster Risk Reduction and the World Health Organization. The organization is also a core partner on UN initiatives such as the International Decade of Sciences for Sustainable Development. The ISC works to bring greater equity to the global scientific commons by, for example, fostering the development of scientific voices in the Global South through the Pacific Academy of Sciences.

Many other scientific bodies could take new and ambitious roles. For example, as the influence of the Antarctic and southern oceans in global climate becomes better understood, the Scientific Committee on Antarctic Research will be of growing importance. However, making the most of these opportunities for topic-specific or regional trust-building requires shifting resources away from business as usual toward deliberate and creative engagement.

Setting standards

Science and scientists have long played a pivotal role in setting the standards that are central to global trade and knowledge exchange. As the way that knowledge is generated and deployed becomes increasingly digitized, engaging international bodies on digital standards setting offers an opportunity to build global trust and interoperability. Organizations such as the World Data System and CODATA, which set standards for how data is aggregated, curated, and used, could be important places to find common ground and reduce the risks of geostrategic conflicts undermining harmonization.

As the way that knowledge is generated and deployed becomes increasingly digitized, engaging international bodies on digital standards setting offers an opportunity to build global trust and interoperability.

A focus on standards may also help span gaps between autonomous private-sector actors, the scientific community, and nation states. As the scientific enterprise is challenged by artificial intelligence—not just in the generation of knowledge but in its assessment and reporting—new forms of diplomacy are needed to bridge and coordinate among government, the science community, and corporations building and deploying AI. Coordinated standards-setting for AI presents an opportunity to avoid some feared negative outcomes from the technology, including digital inequities between the Global North and South and issues of privacy or copyright.

Another area where the scientific community’s ability to harmonize standards could play an important role is the digital transition. If left to corporations and disparate governments, the process of digitization, now in its early stages, could create more division in the world. The scientific and standards-setting community has an opportunity to imagine what a global digital compact might look like. This compact could redefine the relationship between people and digital data and technology, and make commitments to future generations. Both formal and informal science diplomacy are required to address these issues.

Assuming a leading role

As the concept of science diplomacy matures, the field is becoming a central area for achieving diplomatic goals. However, the interface between science and diplomacy needs to become much more effective, moving beyond the vague concepts of international science cooperation and building bridges among countries in conflict, to more fundamental and substantial actions.

Science diplomacy has an important, even existential imperative to help the world reconsider the necessity of working together toward big global goals. Climate change may be the most obvious example of where global action is needed, but many other issues have similar characteristics—deep ocean resources, space, and other ungoverned areas, to name a few.

However, taking up this mantle requires acknowledging why past efforts have failed to meet their goals. The global commitment to Sustainable Development Goals (SDGs) is an example. Weaknesses in the UN system, compounded by varied commitments from member states, will prevent the achievement of the SDGs by 2030. This year’s UN Summit of the Future is intended to reboot the global commitment to the sustainability agenda. Regardless of what type of agreement is signed at the summit, its impact may be limited.  

Science diplomacy has an important, even existential imperative to help the world reconsider the necessity of working together toward big global goals.

The science community must play an active part in ensuring progress is in fact made, but that will require an expansion of the community’s current role. To understand what this might mean, consider that the Pact for the Future agreed in New York City in September 2024 places “science, technology, and innovation” as one of its five themes. But that theme risks becoming actionable only in the narrow sense that technology will provide “answers” to global problems, or remaining platitudinous, with science offering advice that is never acted upon. This dichotomy of unacceptable approaches has long bedeviled science’s influence.

For the world to make better use of science, science must take on an expanded responsibility in solving problems at both global and local scales. And science itself must become part of a toolkit—both at the practical and the diplomatic level—to address the sorts of challenges the world will face in the future. To make this happen, more countries must make science diplomacy a core part of their agenda by embedding science advisors within foreign ministries, connecting diplomats to science communities.

As the pace of technological change generates both existential risk and economic, environmental, and social opportunities, science diplomacy has a vital task in balancing outcomes for the benefit of more people. It can also bring the science community (including the social sciences and humanities) to play a critical role alongside nation states. And, as new technological developments enable nonstate actors, and especially the private sector, science diplomacy has an important role to play in helping nation states develop policy that can identify common solutions and engage key partners.

How the Octopus Got to the Senate

Octopuses are famously smart: they can recognize individual humans, solve problems, and even keep gardens. They are also a popular food for humans: around 350,000 tons of octopus are caught worldwide each year, and demand is only growing. Some governments and start-ups have invested significant resources into domesticating octopus, and the world’s first octopus farm may soon open in Spain’s Canary Islands. 


But should octopus be farmed at all? That question is being debated in several pieces of legislation right now, including a bipartisan US Senate bill. For Jennifer Jacquet, professor of environmental science and policy at the University of Miami, the answer is a resounding no. For the last decade, she has worked to end octopus farming before it begins, as she wrote in Issues in 2019. On this episode, Jacquet discusses why octopuses are poor candidates for farming, the growing social movements around octopus protection, and why we need public conversations about new technologies before investments begin.

Spotify | Apple Podcasts | Stitcher | Google Podcasts | Overcast

Resources

Transcript

Lisa Margonelli: Welcome to The Ongoing Transformation, a podcast from Issues in Science and Technology. Issues is a quarterly journal published by the National Academy of Sciences and Arizona State University.

Octopuses are famously smart. They can solve problems; they can open jars; they can recognize individual people, and they can even—supposedly—keep gardens. In addition to all of this, they are a very popular food. Major resources have been invested into domesticating the octopus and the world’s first octopus farm may soon open in Spain’s Canary Islands. But should octopuses even be farmed? That question is being debated in several pieces of legislation right now, including a bipartisan Senate bill.

I’m Lisa Margonelli, editor-in-chief at Issues. I’m joined by Jennifer Jacquet, a professor of environmental science and policy at the University of Miami. Starting in 2014, Jennifer has worked to end octopus farming. She published her very first article on the subject in Issues back in 2019. We’ll discuss why octopuses are poor candidates for farming, how she built a social movement, and why we need public conversations about new technologies long before the investments begin.

Isn’t it odd that we are not having a public conversation about this? Isn’t it odd that we’re just putting species into mass production without really having any deliberation?

Jennifer, welcome!

Jennifer Jacquet: Thank you for having me.

Margonelli: So one of the questions that is a big operating issue at Issues is how can scientists affect policy? Tell me, what do you study and how did you get involved in octopuses?

Jacquet: I work on the twin problems of climate change and the biodiversity crisis. For my PhD, I focused a lot on fisheries and aquaculture issues within the biodiversity crisis. Ten years ago now, I saw an article about how farming octopuses was going to be the next big thing. It was published in 2014 and the article gave me pause. I remember teaching it in my class and saying, “Look at what’s coming down the pipeline,” and feeling like, “Isn’t it odd that we are not having a public conversation about this? Isn’t it odd that we’re just putting species into mass production without really having any deliberation?”

Thus, the seed was planted and I began working on this issue of octopus farming in part because it felt interesting to work on something that hadn’t gotten off the ground yet as opposed to the other things I work on like trying to keep oil in the ground and dismantle fossil fuel infrastructure and things like that.

Margonelli: So part of this was the challenge of something that wasn’t happening yet. It was a chance to head something off maybe.

Jacquet: Exactly. I felt like it was a very different approach, even policy wise, to the other things I was working on and that gave me momentum.

Margonelli: What did you know about octopuses at that point?

Jacquet: That’s a really good question. I mean, even my own affinity for octopuses at that point was hard to explain. But I felt like I knew, based on the videos I had seen and the articles that were coming out (Sy Montgomery’s book, The Soul of an Octopus, was published the following year, in 2015), that they were not good candidates for mass production: they were simply too curious, too carnivorous, and too like us to some degree, despite being invertebrates, to really take to this form of production. Not that any animal really takes to it, but they seemed an especially bad candidate.

Margonelli: So you developed essentially two different arguments against octopus farming. One was about them being ill-suited to being put in small octopus cages and the other one had to do with the amount of protein that octopuses need to eat.

Jacquet: Yes. There was actually a third argument that we mentioned explicitly in the article in Issues, about the question of food security: Do we really need this to feed the world? So it was the ecological, the animal welfare, and the food security arguments that we brought together and wove into this case.

Margonelli: The ecological argument on the amount of food that octopuses need is they need to eat three times their own weight in other fish and crabs to grow. So, it’s taking all the fish in the sea and converting them into a third of themselves and then harvesting that octopus.

Scientists have been calling since the 1970s to stop putting carnivorous species into mass production, things like Atlantic salmon, because of the demands that they put on wild fish. We actually have to capture fish from the ocean to feed them.

Jacquet: Yes, that’s right. Interestingly, we’re in the midst of this incredible revolution regarding aquaculture—the farming of aquatic fish and invertebrates—in that we have domesticated or put into mass production something like 400 species over the last few decades. A good portion of those species have been carnivorous. Compare that to the 20 or so terrestrial animals that we’ve domesticated over thousands of years, and none of those animals are pure carnivores.

So you already have this real difference between these two farming systems and scientists have been calling since the 1970s to stop putting carnivorous species into mass production, things like Atlantic salmon, because of the demands that they put on wild fish. We actually have to capture fish from the ocean to feed them and, as you say, convert it to this more exotic or luxury product. So, we take anchovies and menhaden, convert them to salmon, and we were going to be taking hake and squid and crabs and converting it to octopuses. This just doesn’t make good ecological sense. There’s a strong scientific argument for this that’s been around for decades.

Margonelli: So you started getting interested in this topic in 2014. Then in 2015, Sy Montgomery’s book, The Soul of an Octopus, came out and people got interested in what’s going on with these octopuses and why can they open a jar and things like that, these bigger cool questions about the octopus. Then you started working on a scientific article about it and tell me what happened there.

Jacquet: Yeah, I would just add, with Sy’s work as well, most people had also seen incredible footage of octopuses doing interesting things. The whole rise of YouTube and digital culture had allowed a glimpse at this animal, but both her book and many of those videos mainly showed us what octopuses did in captivity. We began writing an article after this conference on animal consciousness, where I spoke and Peter Godfrey-Smith also spoke. He has studied octopuses in the wild, which is very remarkable. I remember speaking with him after the conference and saying, “We need to write something about this.” He agreed. And Becca Franks was also at NYU, and she is an animal welfare scientist. At that same conference, I had met Walter Sanchez-Suarez, who was finishing his PhD in animal behavior and is from Spain, which is the country that was making the biggest investment into octopus farming. So, all of that converged on writing up an article making the case against octopus farming, which we tried to place in a scientific journal. It had, I think, 50 or 70 scientific references. It was nevertheless a normative argument. Of course, in the title you can hear it—the case against octopus farming—and trying to get this article placed was a challenge in and of itself.

Margonelli: Tell me a little bit about the challenge. What happened? What did people say when they saw it or what did the scientists who were doing the review say?

Jacquet: Yeah, so I sent it to the usual list of suspects where I’ve sent things before: Nature, Proceedings of the Royal Society, BioScience, Frontiers in Ecology and the Environment, a number of journals. I got back an endless list of rejections for various reasons. But overwhelmingly, the response was that this seemed unbalanced, that what a scientific article should do is weigh the pros and cons of farming, as opposed to making a clear argument against it. I felt that that argument was already tacitly in the literature: if you looked, all you saw with regard to octopus farming were scientific attempts to technologically get it off the ground.

There was never a moral justification, but there was a technocratic approach as to how you could farm octopuses. This just felt to me like a totally different conversation that it wasn’t about reviewing that existing literature, but about opening up the conversation beyond what was already there or thinking about what was there with regards to say carnivorous fishes and using some of those scientific arguments to apply to another carnivorous species, which is octopus.

Margonelli: So your question was more like, yeah, maybe we could farm octopuses, but should we? It was this moral question. So, somehow you came to Issues in Science and Technology back in 2018 and it was published in the winter of 2019.

The entire field of aquaculture science is there at the behest of the industry. So, these broader conversations about what should or shouldn’t be farmed felt like they were not happening there.

Jacquet: Yes. I remember one of those journal editors… I mean, I think this is something that stuck with me, saying we should send the argument to an aquaculture journal. I just thought that would be the exact wrong fit in part because the entire field of aquaculture science is there at the behest of the industry. So, these broader conversations about what should or shouldn’t be farmed felt like they were not happening there. I was very fortunate that the editor at the time at Issues liked this argument, liked the idea of the piece, but of course, had to reshape it to fit Issues, which was stripping it in many ways of the many scientific references and changing the overall tone of the article, but I think improving its readability vastly.

Margonelli: So what happened after it was published in Issues? Was there a great clamor for octopus legislation?

Jacquet: There was a deafening silence. I felt like it was an incredible effort on everyone’s part and then it had this splash of next to nothing. But then, just weirdly and I think quite randomly—I’m not even sure how he found it—but Robin McKie from The Guardian asked to write about the argument in May of that year, so months after the Issues article came out. Once The Guardian article came out, the ball really got rolling.

Margonelli: So if I’d come to you in September of 2019 and said, “Hey, five years from now, there’s going to be a Senate bill about this,” what would you have said to me?

Jacquet: Given how much effort I’ve put into many other things with absolutely nothing happening policy-wise, it’s nothing short of a dream come true for me personally, especially to just have this level of attention on octopuses and to see that sometimes where you are and where the public is or where Congress is, is actually pretty strongly aligned. It renewed my faith in the entire process.

Margonelli: One of the things that happened during this five-year period is that there’s been an incredible amount of octopus scholarship. So, there’s been a lot of science that supports the policy take. Can you talk a little bit about that?

Jacquet: Yes, there have been some really important pieces that came out. Robyn Crook’s work from San Francisco State, I believe, is what people pointed to as the definitive piece of evidence that octopuses feel pain. There was a lot of discussion because there was also an enormous campaign launched by Compassion in World Farming that discussed the means of slaughter and the impact that this would have on the animals. It wasn’t peer-reviewed work, but it was, I think, very persuasive. Then there was a report published by the London School of Economics talking about how giving octopuses a good life in captivity was virtually impossible.

Margonelli: That report was about sentience. So, it connected the octopus to the concept of sentience and whether they had feelings and then connected that back to the question of farming.

Jacquet: Yes. It was also not exclusively on octopuses, but covered a wide range of invertebrates. So, it was interesting to see octopuses relative to these other species as well.

Margonelli: And then at the same time, there was a cultural movement around octopuses. Do you think that if there had only been the growing science around octopuses that was also being applied to this question of farming… I mean, how important was the whole cultural movement and what was the cultural movement?

This is really a watershed moment for octopuses.

Jacquet: I can say with pretty high confidence that the cultural aspect is enormously important. A number of colleagues and I also worked on a report comparing protections for whales to the protections of tunas to the protections of octopuses. You can really see through those case studies the differences. We have an enormous body of science about tuna and we have next to nothing in terms of protecting them, especially as animals rather than just as fisheries resources. Whales, there was the science happening, there was this enormous cultural momentum. There were civil society groups acting. With octopuses, I really felt like, “Oh, we’re on the precipice of something.”

Then when the film My Octopus Teacher came out in 2020 in the midst of a global pandemic and becomes number one on Netflix and really everyone and their brother seemed to have seen that film, it shatters, I think, a lot of preconceptions about what octopuses want. It shows, I think, for the first time in an incredibly persuasive way that these animals are wild animals and they prefer a wild existence. That what you saw them do in an aquarium—open a lid here or there, escape from one aquarium to another—is dwarfed by their behavior in the wild, by their ability to outwit a shark, by their ability to camouflage and just weave together this narrative around the life of Octopus vulgaris.

The very same species that is being discussed for mass production is the species featured in that movie and then wins the Academy Award. I mean, I think this is really a watershed moment for octopuses.

Margonelli: It’s interesting because what you’re talking about really is taking all of these scientific questions and framing them in ways that the public can engage and think about them ethically. It’s not just octopuses are too smart to be farmed. It’s octopuses in the wild. So, the wild has a specific meaning. There’s specific behaviors attached to it. You have to understand the framing of an octopus in the wild and who they would be as different than an octopus in a tank and then there’s some questions of empathy. I mean, it’s a very complex set of problem framings and understandings and cultural relevance of this animal. It’s a very complex set of things that would be hard to see, I think, in advance.

Jacquet: I think it would be very hard to predict any of that in advance and yet it does look… I don’t want to say identical by any means, but similar to what happened for whales where there really was this shift in global consciousness for this entire group of animals that led to… I mean, I don’t want to say instantaneous, but it was a very rapid change from being a pro-whaling world to an anti-whaling world. I mean, the conversation has not gotten to should we kill octopuses in the wild? That is still happening and no one’s really discussing it. But the idea that we should subject these animals to a life of mass production is being deliberated upon right now. It feels very 2024 to me.

Margonelli: Can you dig into why it feels so specifically 2024?

That has been what we’ve done to nature over the course of the Anthropocene. If we can get out of that with octopuses, maybe we can undo it in other ways too.

Jacquet: We are blindly walking down this road of domesticating aquatic species without talking much about it. Here is an animal that really has captured, I’d say, the imaginations and the hearts of the public, and seeing who this animal is in the wild makes you go, “We can’t do this to this animal. There’s no reason to do this to this animal right now.” I feel that entire exercise is just so healthy given how many changes and the kind of shift in perspective that we need to actually transform society right now to be compatible with ecological limits and to live in some agreement with nature.

I want to say there is a little bit of an analog to the whale conversation yet I think this conversation is still completely new because it’s not about just us going out and killing things, which we’ve done for thousands of years. It’s about, “Do we want to subject them to domestication, to our whims alone, to completely instrumental value? That they have no other value besides that value?” That has been what we’ve done to nature over the course of the Anthropocene. If we can get out of that with octopuses, maybe we can undo it in other ways too.

Margonelli: It’s interesting too because it’s only possible in this particular information environment that we find ourselves in. As you mentioned, it partly has to do with the pandemic and people being able to watch things on YouTube and the growing popularity of octopuses then driving octopus research in the science and then that feeding into popular culture, which then continues to drive the research. So, you have this interesting octopus singularity here going on.

Jacquet: Yeah, the prediction with the circle of moral empathy is we’ll go to chimpanzees, we’ll go to marine mammals, and then the octopus is just a complete… It is a mollusk. We really couldn’t care less about so many of its cousins in terms of bivalves, oysters, whatever. Yet we are like, “No, we need to stop and start talking about this animal.” That’s also really interesting to me. It’s really undoing, I think, very traditional views of what is commonly referred to as the great chain of being, our moral priorities vis-à-vis other species.

Margonelli: So let’s move away from the mollusks and talk about the solitary scientist who starts out trying to figure out how to build a movement. Did you have an eye towards building a movement when you started in 2019 or so?

Jacquet: My level of ambition almost always exceeds whatever happens in real life. So, yes, I had a vision. I mean, my vision involves no octopus farming for the rest of eternity and we are not there yet, but my greatest ambition at the time when the article came out, or when we started talking about it among some actors in civil society in New York City, where I was, was a city-level ban on octopus farming. I don’t think I set out to build a movement, but I was very driven on this issue and felt that anybody who wanted to talk about octopuses, I would talk to them at any time. I kept trying to persuade students, lawyers, some people I knew in city government to take the issue seriously whenever I had their ear. I kept taking up opportunities.

So Compassion in World Farming, which we mentioned before, launched a campaign against octopus farming, which is great. They’re a European group, and again, the farm is slated for the Canary Islands, a Spanish archipelago off of Africa. They wanted to contact all the MEPs in Brussels, and they said, “A letter would be really helpful if we had one.” It was the middle of summer. I remember I had my new daughter and was supposed to be spending time with my in-laws, and instead I was writing up a letter of support for what they were interested in and decided to get it published. We published that in Animal Sentience and we had over 100 signatories on that.

There is no octopus farming lobby because there is no octopus farming yet. It felt like maybe if we could get there and have that conversation that people would still be sensible on things, that it wouldn’t be just about power. It would actually be a citizens conversation.

I would say what it took from my side was a willingness to drop everything when the moment called for it. Not everybody has that ability or motivation necessarily. So, I’d say that is a challenge, but again, I feel so fortunate and propelled by octopuses themselves that it doesn’t take much to get people moving. Just a quick fast-forward: earlier this year, the first bill actually passed in Washington State. It was introduced last year, and these just passionate people got the bill introduced in Washington State. It’s really impressive what they were doing. They reached out to me. I said, “Of course, I’ll testify.” I attended. I spent an hour. I have two daughters, so I had to get the other daughter a babysitter in order to attend this meeting. Then I didn’t even get to testify.

There was no time. It was all taken up by the talk of wolves and livestock. They gave octopuses one minute at the end and I thought, “Okay, this seems really hopeless.” Honestly, the people who were helping sponsor the bill thought it seemed hopeless too. Nevertheless, it passed and it just started gaining momentum. Then lo and behold, earlier this year, it’s signed into law by the Governor of Washington State. Now just last week, we see the octopus bill in California passed 36 to zero. I mean just zero opposition.

This was another thing that I always had felt deep down in my grappling with environmental policy in the United States. There are such enormous forces of obstruction now. There’s such a planned counter-movement to any regulation, but there is no octopus farming lobby because there is no octopus farming yet. It felt like maybe if we could get there and have that conversation that people would still be sensible on things, that it wouldn’t be just about power. It would actually be a citizens conversation.

Seeing that 36 to zero in California just gave me, again, such renewed faith that if it wasn’t for the way that these vested interests have worked out how to own our political system over the last four decades, that we would be in much better shape as a country. Now, the bill’s in Hawaii. Now, we have the national legislation as you’ve mentioned. I’m talking next week to more people in Europe trying to get the ball rolling over there. I’m just hopeful.

Margonelli: Okay, so let’s talk just a little bit about the role of letters signed by scientists because you did have a piece. You did have a signed letter in Animal Sentience that was in response to what was going on in the EU. But after the Octopus Act was proposed by Senators Whitehouse and Lisa Murkowski in July, you then went out and worked with a group of nearly 100 scientists to pull together another letter that was published in Science in mid-August.

Jacquet: Those are mostly North American scholars, not entirely, but also Sy Montgomery joined the letter, which was wonderful. I tried to get a lot more octopus-oriented people involved. Jennifer Mather, a really renowned octopus behaviorist and biologist, signed the letter as well. That was heartening. Now, the question is how that builds momentum or whether or not it will build any momentum through Congress. So, we have a similar letter, the same content, but a different format that actually will go to Congress. I think you ask a really important question: does that matter? I think it’s really hard to know what matters, honestly.

So, as I say, whenever I’ve been called upon, I try to answer that call. Whether or not it’s talking to a potential staffer or congressperson who’s interested, whether it’s talking to an NGO interested in getting something off the ground, whether it’s writing a letter of support. I feel like it’s an any and all of the above and it’s really hard to know what makes the final difference. As you say, maybe it’s Craig Foster and My Octopus Teacher at the end of the day.

Margonelli: Or the charismatic mollusk, the octopus itself. My last question is you raised a really interesting question about this because there actually isn’t a working octopus farm yet and it hasn’t happened. So, on the one hand, that allowed you to have a discussion purely about the morality of it and the ethics of it and its potential role in sustainable food systems. You’re able to have a clear conversation without different power players being involved. But at the same time, I think Senator Whitehouse said, “It would preemptively prevent this farm from taking place.”

In a sense, it’s going to prevent certain avenues of innovation. Other people have said it’ll prevent certain types of investment, all of which is great if you don’t want octopus farms. There’s a genius to legislation that prevents certain kinds of potentially harmful innovation, and perhaps we should use it more. But at the same time, there’s a scariness to it, because it could potentially limit certain other kinds of innovation and investment that could be very helpful, and it could be used by people to prevent things that they don’t want. I just wondered, you’ve been thinking about this over the years. What do you think about that? How do you slice that?

We’ve been dragged into this without having a conversation, without the public really being on board, and with an enormous amount of public investment. Maybe if we started to have these conversations sooner, we would come to agreement about really the world we want to live in.

Jacquet: Well, I guess I see it a little more that it starts even long before that legislation’s introduced, in the sense that the European Union and the Spanish government have already made these enormous investments into octopus farming. It was a technological problem for a long time of getting the larval octopuses to grow, of reducing cannibalism. While there isn’t any commercial octopus farming, there are lots of experimental farms out there and there’s been lots of taxpayer investment in this, I think, without strong scientific conversation or public discourse around whether or not this is the right or wrong use of our public money. If the conversation had actually happened at that stage, it would’ve been even better, because we could have made different kinds of investments into more sustainable aquaculture, into aquaculture that would feed the world, and into maybe cellular octopus that could be cloned and grown in a lab, which wouldn’t involve any octopus individuals, or into plant-based octopus products.

I don’t see this as limiting innovation. If anything, I feel like we’ve been dragged into this without having a conversation, without the public really being on board, and with an enormous amount of public investment. Maybe if we started to have these conversations sooner, we would come to agreement about really the world we want to live in. So, I don’t see this as limiting actually. I see it as opening up saying, “You know what? The public is really not in favor of this form of production, but here are these other options that we could explore.” Maybe we always should have had a more diverse portfolio of investments when it came to aquaculture because this kind of, “Oh, people will pay a lot of money for octopuses. All right, we should farm octopuses” to me was not the deliberate, slow thinking that we really count on the government and scientists to be having.

So, I actually see it as much more a disappointment with the early part of this stage than a fear about what’s happening currently. I feel like what’s happening currently is trying to address the problem of not having had those conversations sooner.

Margonelli: To wrap up, I’m curious, do you have your eye on another animal or another issue that you’re really thinking about?

Jacquet: Well, not to scare people too much, but I really think that octopuses are a kind of umbrella species for the aquaculture conversation generally, because again, we have done work. Becca Franks led the work showing that we’ve jumped into this mass production of aquatic species, over 400 again compared to the 20 or so domesticated terrestrially. We know next to nothing about these animals in terms of anything and almost nothing in terms of their welfare, how to give them a good life in captivity.

I think that there’s reason to put the brakes on what’s happening with aquaculture right now in general and rethink it and re-strategize with a greater amount of deliberation with more kinds of people, lawyers, concerned citizens, widening the number of stakeholders, as people say, beyond aquaculture science, which, again, has been very much about serving the industry and making money, to ask, “What does this industry look like moving forward?”

So I certainly see the octopuses as opening a door to that conversation. I have a lot of other unrealistic pipe dreams aside from that. But I continue to work on octopuses, of course, because of them, because of who they are, because of the issues we’ve talked about, but again, also because I see them as a guidepost for the aquaculture conversation as a whole.

Margonelli: This has been an incredible conversation, not just about octopuses, but also about how to use science to create social change. If this podcast inspired you to learn more about octopuses, go to issues.org to read Jennifer’s piece, “The Case Against Octopus Farming,” and you can visit our show notes to learn more about her work on aquaculture conservation.

Please subscribe to The Ongoing Transformation wherever you get your podcasts and write to us about anything, especially octopus facts, at podcast@issues.org. Thanks to our podcast producer Kimberly Quach and our audio engineer Shannon Lynch. I’m Lisa Margonelli, editor-in-chief at Issues.

Join us on October 22nd for an interview with Lisa Bero about bias in clinical research!

The Politics of Wastewater Reuse

In “Industrial Terroir Takes on the Yuck Factor” (Issues, Summer 2024), Christy Spackman describes clever attempts to overcome the prevailing challenge of public skepticism toward the prospect of potable water reuse.

The effects of infrastructure have long been recognized by urban historians as profound and path dependent, albeit indeterminate. In the case of water reuse, once the initial water and sewer systems are laid, the accompanying social, economic, and cultural institutions serve to entrench a commitment to waterborne sanitation systems materially, culturally, and politically. Thus, in the United States and elsewhere, the flush toilet and the treatment-based approach to managing water quality result in investment in water purification technologies and, ultimately, in finding beneficial uses for wastewater. In this regard, the “treat, treat, and treat again” industrial terroir supports the reasonableness, acceptability, and inevitability of reusing wastewater for drinking water.

By adapting to existing infrastructure, including political commitment to flush toilets and the removal of pollutants via centralized wastewater treatment, engineers apply new tools and new procedures to move a finite amount of water through higher levels of treatment.

For boosters of potable water reuse, purity and security are key discursive concepts. At the molecular level, treatment processes remove all markers of “place” from water, but as soon as we change our scale, as Spackman does, we understand that urban drinking water is an intimate and embodied experience. Further, water is geopolitical. Water infrastructures are, in essence, social arrangements. The focus on the molecular scale provides little opportunity to consider the inevitable changes in social power that accompany this shift. Who gains, who suffers, and who pays for this change?

By adapting to existing infrastructure, including political commitment to flush toilets and the removal of pollutants via centralized wastewater treatment, engineers apply new tools and new procedures to move a finite amount of water through higher levels of treatment. As a result, highly treated wastewater is seen as a solution to many of the growing challenges of urban water scarcity in many regions. Although portrayed as a radical reorganization of water governance (by Spackman and others), potable water reuse is an approach that minimally disrupts the fundamental infrastructure and inertia of large sociotechnical systems. In this case, innovative new technologies have been designed to retrofit and protect outdated infrastructures in a process the political scientist Langdon Winner described as “reverse adaptation.” This preference to adapt to the established infrastructure has meant that alternative means of managing human bodily wastes have never realistically been considered.

The universal ideal of modern sanitation is not complete, nor is it necessarily stable. Cities across the globe are facing serious water, energy, and transportation challenges. The prospect of potable water reuse offers a unique opportunity to make connections, discover alternatives, and acknowledge that urban transition is inevitable. Water development aimed at providing greater water security with the least social disruption over the short term may be a maladaptation. The question is not solely whether the public will accept that potable water reuse can be done safely, but whether reuse will lend itself to a sustainable and just transition at the city and regional scale.

Associate Professor, Department of Geography

University of Nevada, Reno

In a seminal lecture in Dallas in 1984, which was later published as H2O and the Waters of Forgetfulness, the philosopher, priest, and social critic Ivan Illich argued for a separation of water and H2O. The latter, a modernist creation, was “stuff” produced by an industrial society and circulated through pipelines to deodorize and sanitize urban space. Devoid of social and spiritual meaning, H2O was reduced to the odorless and tasteless substance we became familiar with in school textbooks, but perhaps rarely encountered in our everyday lives.

Christy Spackman makes it clear that the struggle between water and H2O continues to animate contemporary concerns around “scarcity” and “reuse.” The scientific and technological labor that transformed water into H2O involved a two-step process. The first required the material reconstruction of water by removing “undesirable” salts, metals, and minerals, and purifying it by adding chlorine (and, in many parts of the world, “fortifying” it with fluoride). The second step involved reworking the sensorial and social script around H2O and resocializing it into potable water.

The acceptability of direct potable reuse of wastewater has to negotiate this challenge of resocialization. Recycled wastewater has to regain its place in society. It has to shed the history of its recent defilement by illustrating that what is being used to produce beer is not just engineered H2O, but potable water.

Technologies can materially reconstitute H2O in myriad ways and claim it to be “just straight water,” but to users water quality remains a product of history.

Matter constitutes memory in water—where it has been (place), for how long (time). When we add and subtract matter in water, we reconstitute its relation to place and time. One might assume that since modern (and secular) water emerges out of a continual process of addition and subtraction, it should not be difficult to convince users to drink recycled water. The “yuck” factor that Spackman describes contradicts that logic. Technologies can materially reconstitute H2O in myriad ways and claim it to be “just straight water,” but to users water quality remains a product of history. The engineer can erase the material history of water, but the user will remember its past relationships with place and time. This shows up in Spackman’s discussion of the humorous expression “poop beer,” which refers to beer made with recycled water. Resocializing H2O as water, therefore, requires not only reconstituting matter in water but also the users’ memory of that water.

The author’s lively essay illustrates the continued contest of competing imaginations around water in Arizona. I cannot help but wonder how memories will be reconstituted in Flint, Michigan, or Jackson, Mississippi, where water has the color of lead and the odor of racism. As place forcefully asserts its presence in water in these sites, it reminds us that increasing demand for recycling wastewater for potable reuse will soon have to contend not only with matters of taste but also with concerns of justice.

Lecturer, Water Governance

IHE Delft Institute for Water Education

The Netherlands

For a More Competitive US Research Enterprise, the Work Begins Now

The US scientific enterprise has for decades been a juggernaut for innovation, economic growth, and lasting national security and prosperity. However, as the head of a premier US science organization, I am growing increasingly alarmed by worrying trends that threaten to undermine our global leadership in science and our ability to continue producing the advances that our nation and world have long depended upon. 

As a result, I felt strongly that it was time to do what we scientists do best—take a hard look at data to get an informed assessment of the health of the US research enterprise and current trends in science leadership. I shared my findings publicly in June when I delivered my first State of the Science address. Modeled after the State of the Union addresses that US presidents give each year, the goal of my speech was to explore actions we need to take now if American science is to remain strong and successful in the years ahead—and to spark a call to action among researchers, policymakers, university administrators, philanthropists, and others in the public and private sector.

I felt strongly that it was time to do what we scientists do best—take a hard look at data to get an informed assessment of the health of the US research enterprise and current trends in science leadership.

In my speech, I presented data on the status of US scientific leadership in the world. While we still invest the most money in research and development, China’s investment in R&D is growing at twice the rate of the United States’, and China is now on track to surpass US investments. That investment is also producing more research output—for example, China’s global share of drugs in phase I to III trials has grown from 4% in 2013 to 28%. And China is not the only country making this investment. Other nations are understandably following the US model of investing in STEM education and basic research to fuel their economic growth and prosperity. This, in turn, is increasing the fierce global competition for STEM talent. The United States is especially reliant on foreign-born STEM students and workers, who fill some 20% of STEM jobs that require an undergraduate degree. They also make up more than half of graduate and more than a third of postdoc STEM applicants at universities.

Recent decades have seen major shifts in how the US research enterprise operates. For instance, although the federal government remains the largest source of funding for basic research, its rate of investment in science has steadily declined over the years in inflation-adjusted dollars. Industry now dominates R&D in America, and the share of research funded by private philanthropy is also on the rise. These trends raise new challenges for setting and meeting national goals and objectives in an efficient and coordinated manner.

Recent decades have seen major shifts in how the US research enterprise operates.

Finally, reversing these indicators of loss in US scientific leadership is nearly impossible without a public that trusts both science and scientists. According to a recent Pew Research survey, in the wake of the COVID-19 pandemic, fewer Americans say that science has had a mostly positive impact on society. This has major implications for how people understand the world, whether they are willing to support public funding for research, and whether they will follow the best science-based advice on climate change, pandemic preparations, and assuring energy, food, and water security.

To put US science on the best path for the future, I identified several major challenges that all of us in the research community should tackle. They include:

Improving K–12 education. As other nations increase their investment in R&D, fueling competition for international talent, we need to break our dependence on global STEM workers and cultivate our own domestic workforce. That means revamping and strengthening US STEM education. In particular, we should encourage children’s innate curiosity by making science classrooms much more hands-on and experiential. We could also explore the potential of new technologies such as artificial intelligence to help overburdened teachers.

Creating a national research strategy. As industry and philanthropy become major funders of research alongside government, we should do a better job of coordinating research so that investments in science have maximum impact. The White House Office of Science and Technology Policy is working on such a strategy—which is a challenging task, given the many public and private entities involved in research and their differing goals and objectives. We need a balanced approach that also allows for the ability to take advantage of new and unexpected discoveries when they arise.

Reducing red tape. Decisionmakers should work to reduce the burden of regulations on faculty researchers, who spend 40% of their time outside the classroom on paperwork. The United States should lessen red tape that can be a barrier for foreign students who wish to study here, as well as for foreign graduates who wish to work stateside.

We should do a better job of coordinating research so that investments in science have maximum impact.

Bolstering university-industry partnerships. As industry continues to dominate R&D, we must find ways to strengthen engagement between industry and universities. I am especially concerned about research on AI, which is predominantly performed in the private sector at the moment, limiting opportunities to ensure that AI is applied for the public good where there is no profit motive. Rules of university engagement with industry should be modernized while remaining alert to possible conflicts of interest, which undermine public trust in science.

Strengthening international partnerships. Increasingly, big science projects such as CERN or the ITER international fusion project depend upon the talent and resources of multiple countries. The United States should strengthen partnerships with other countries and invite their collaboration on US research priorities when appropriate, create well-communicated policies for where and when we should collaborate, and deploy procedures for evaluating the success of these collaborations.

Cultivating trust in science. Scientists should demonstrate that they are producing research that is credible, prudent, unbiased, self-correcting, and beneficial—all qualities positively correlated with public support for science. Researchers at all levels should be rewarded for producing research that is excellent and trustworthy, and the research community should support excellence in communicating science to the public.

Navigating a course correction for science is no easy feat, given the complexity of our system and the number of players who need to get involved. To that end, we invited several prominent voices in the scientific community to respond to my address and share their perspectives on how we can make progress on these goals. I hope that everyone who cares about our research system finds a way to contribute. We must act now to ensure that science remains strong—for the benefit of all Americans.

K–12 Education

Alexandra Fuentes

Though the United States has all the ingredients for leadership in science—world-class higher education institutions, strong industry and nonprofit sectors, philanthropic giving, and talented young people—not all US students have access to early or sustained learning experiences in the fields of science, technology, engineering, arts, and mathematics (STEAM) and computer science (CS) in prekindergarten through grade 12 (pK–12). To close the gaps, more coordination across the education ecosystem is necessary.

Every student should be able to access STEAM and CS learning experiences embedded in the school day. In elementary grades, this includes integrating content across disciplines and providing time for students to do hands-on engineering, coding, and computational thinking projects. In middle and high school, elective courses create pathways to college and careers. These classroom experiences can be enhanced when offered in conjunction with after-school and summer programs, family events, and work-based learning opportunities such as workplace tours, internships, and apprenticeships. Providing these layered STEAM and CS offerings demands more resources and coordination.

In Fairfax County, Virginia, where I work, state leadership and federal government investments are supporting student access to STEAM and CS. Virginia was one of the first states to establish K–12 CS standards and expand early access to CS. In a school division that serves nearly 183,000 students, grant funding from the Department of Defense Education Activity’s Military-Connected Local Educational Agencies for Academic and Support Programs has provided the flexibility to fund central staff who support Code Up projects, dramatically accelerating the integration of CS and STEAM into math instruction systemwide.

Not all US students have access to early or sustained learning experiences in the fields of science, technology, engineering, arts, and mathematics and computer science in prekindergarten through grade 12.

Local employers, universities, and colleges have partnered with the school system to expand pK–12 opportunities that ignite student interest in STEAM and CS. For example, Capital One connects middle school students with mentors and hands-on learning in their Capital One Coders summer and after-school programs. The nonprofit Children’s Science Center Lab has established a wide array of elementary, middle, and high school programs, including family science nights and internships for high schoolers in high-demand fields like cybersecurity. The Compose and Code program, funded by the National Science Foundation and led in partnership by researchers at George Mason University, Old Dominion University, and the University of Alabama, developed inclusive CS lessons that strengthen the computational thinking, coding, and writing skills of students with and without disabilities. George Mason University’s Building the Quantum Workforce Project connects students with industry leaders and internships. And Northern Virginia Community College and the Northern Virginia Technology Council have partnered on the Aim High initiative to expand career experiences.

This strategic cross-sector coordination invests directly in the talent and ingenuity of young people, equipping pK–12 students in our region with the critical thinking and collaboration skills that fuel success. Providing STEAM and CS opportunities to all pK–12 students requires investment, but it offers a high return. One high school student in Fairfax County Public Schools merged topographic mapping practices with cell histology and artificial intelligence to invent an accurate method of diagnosing cancer—she now works in the field of AI. Her pathway to STEAM and CS began well before graduation.

Alexandra Fuentes is senior manager of STEAM and Computer Science at Fairfax County Public Schools.

Addressing Red Tape

Matt Owens

The partnership between the federal government and academic research institutions has served the United States exceptionally well over the decades. Scientific and engineering discoveries and innovation have bolstered national security, health, and economic growth. Today, however, the necessary and well-intended—but inefficient and risk-intolerant—regulation of this partnership impedes research, researchers and their institutions, and taxpayers’ research investments.

The Council on Governmental Relations closely tracks, analyzes, documents, and comments on federal research regulations. In the past 10 years, the number of new and modified federal requirements and substantial updates to policies, business practices, and interpretations has grown by 181%. Many of these regulations address the same core issues, but in a disjointed manner across multiple agencies. This regulatory trajectory is unsustainable if the United States is to retain its leadership in science and innovation.

To cut red tape encumbering federally sponsored research, the following actions should be taken.

This regulatory trajectory is unsustainable if the United States is to retain its leadership in science and innovation.

First, the most consequential action the federal government can take is to stand up the Research Policy Board authorized by the 21st Century Cures Act. No one federal agency can address the bloat and disaggregation of the current regulatory system. As recommended by a 2016 National Academies report, the Research Policy Board would be housed at the White House Office of Management and Budget (OMB)—where all federal regulations ultimately are approved—and serve as a primary policy forum for discussing ways to streamline and harmonize research regulations. In 2021, the nonpartisan Government Accountability Office reaffirmed the recommendation to OMB to establish this body.

Second, the White House Office of Science and Technology Policy should establish the position of associate director for the academic research enterprise, as recommended in the same National Academies report. This senior position would be a principal federal contact for the Research Policy Board, oversee and facilitate the general health of the research partnership, and work closely with OMB’s Office of Information and Regulatory Affairs (OIRA) to manage overall regulatory burden. The position would also work with the OIRA administrator to issue an annual report on regulatory issues and actions affecting the partnership.

Third, regulators must calibrate regulation to risk. Research is never risk-free, and the most effective regulations are calibrated to address known and major anticipated risks without stultifying creativity and innovation. Excessive regulation occurs when risks are overstated and/or the government seeks earnestly to anticipate and eliminate all risk, no matter how minor or unforeseeable. This is an impossible and self-defeating approach that can divert resources needed to mitigate the most severe risks. 

Together, these commonsense actions would establish a more effective regulatory oversight framework and help to rebalance and strengthen the research partnership that is vital to US science, innovation, and competitiveness.

Matt Owens is the president of the Council on Governmental Relations.

Bidirectional Collaboration

James Manyika

Advances in artificial intelligence are changing the tools available to conduct scientific research, reshaping the scale and scope of possible research questions, and accelerating the speed of discovery. Already, scientific discoveries enabled by AI are beginning to make a difference for people, expanding the potential to address pressing societal challenges like climate change and disease.

These advances are being powered by emerging transformations in university-industry relationships. In a conventional understanding of the way the US research enterprise works, the government funds basic research done by academics, and industry primarily focuses on funding and leading applied research. This view no longer represents the reality that most scientists in academia and industry experience. Industry is not just funding research—it is doing foundational research, often in deep collaboration with academic scientists. Additionally, traditional disciplinary distinctions are blurring, especially as the availability and use of data and of computational AI and machine learning tools become part of the foundational techniques advancing research. The new model for advancing the frontiers of science engages universities and industry in bidirectional scientific collaboration.

Scientific discoveries enabled by AI are beginning to make a difference for people, expanding the potential to address pressing societal challenges like climate change and disease.

A recent landmark effort in connectomics to map a piece of the human brain to a level of detail never previously achieved shows how the traditional model of research is being upended. The breakthrough was made possible by a decade-long investment and deep collaboration between researchers at Google, the Howard Hughes Medical Institute, Harvard University’s Lichtman Lab, and others. The endeavor also highlights the multidisciplinary nature of cutting-edge research, as well as the importance of open collaboration tools—the full dataset, including AI-generated annotations for each cell, is publicly available on Neuroglancer.

Another example is Google DeepMind’s AlphaFold, which led to breakthrough progress on the long-standing challenge of predicting protein structures and has predicted the structure and interactions of all of life’s molecules, including proteins, DNA, RNA, and ligands. This work was done in collaboration with academics, the European Molecular Biology Laboratory’s European Bioinformatics Institute, and others. To date, the free, publicly available AlphaFold Server has been accessed by 2.2 million scientists in more than 190 countries—621,000 from the United States alone.

Such collaborations make more kinds of research possible, but to build truly resilient industry-academic partnerships, and to develop a truly representative national approach to science, we must build research capacity where it doesn’t currently exist. Right now, resources and research capacity are concentrated in a few companies and academic institutions. More investment in computing resources for academic researchers, as well as more shared and collaboratively used infrastructure, is necessary. To have a truly national research strategy, it’s essential that more people, institutions, and entities are able to participate.

James Manyika is Google’s senior vice president of research, technology, and society.

Cultivating Trust

J. Marshall Shepherd

Cultivating trust in science requires commitment to the same basic principles that make strong leaders: authenticity, empathy, and logic. Though scholars are taught to be good researchers steeped in theory, methods, and scholarly reporting, I continually advocate for a more evolved approach in training the next generation of scientists, with the recognition that they will become the next generation of science leaders.

The scientific process values logic, providing frameworks for the constant inquiry, review, and advancement of knowledge. But the culture of science places less emphasis on the values of authenticity and empathy—both key to deepening an understanding of the social context of research.

My group’s research on climate risk, for example, shows that socioeconomically disadvantaged groups such as the elderly, children under five, and communities of color are disproportionately vulnerable to floods, heat, and hurricane impacts. Those communities may also be skeptical of “ivory tower” pontification or focused on meeting their immediate needs, like putting food on the table.

The culture of science places less emphasis on the values of authenticity and empathy—both key to deepening an understanding of the social context of research.

The concept of “end-to-end” science incorporates the typical graduate training sequence of coursework, research, publishing, presenting, and so forth. However, it also includes media training, policy exposure, experiential learning, and coproduction of knowledge with stakeholders. By incorporating these elements in formal graduate education, we can start developing cohorts of empathetic scientists who see beyond the test tube, Doppler radar, or machine learning algorithm.

Even as the scientific establishment evolves to cultivate trust, there will still be bias, misinformation, disinformation, motivated reasoning, political agendas, and literacy challenges to overcome. But if scientists are not broadly engaging outside of laboratories or academic comfort zones, others will gladly rush in, like air to a vacuum, to fill the voids we leave behind.

J. Marshall Shepherd is associate dean of the Franklin College of Arts and Sciences, Georgia Athletic Association Distinguished Professor, and director of the Atmospheric Sciences Program at the University of Georgia.

Learning to Listen

Stephanie J. Diem

Whether virologists are trying to warn of an epidemic or roboticists are building new tools for workers, trust is the necessary ingredient in enabling scientific research to contribute to social transformation. If the public doesn’t trust the methods, results, and intentions of science, it will not be interested in what we have to offer. And eventually, it will object to funding scientists and scientific education.

Public trust in clean energy technologies is essential to my work as a fusion engineer and plasma scientist, but over the course of my career I have often felt underprepared to engage in the important work of building such trust. The way scientists are trained and evaluated—often in academic siloes—can accentuate the importance of the theoretical at the expense of the actual, real-life impacts of research. And it usually skips over the significance of communicating science to the public, or even to other scientists.

Once I recognized this gap in my own scientific education, I set out to invest in my ability to engage in active listening, to better understand the values and points of view of others, and to apply that to telling the story of my work. The insights I’ve gained have transformed the way I communicate science and conduct my research. I am better able to inform my work with concerns and questions directly sourced from affected communities.

If the public doesn’t trust the methods, results, and intentions of science, it will not be interested in what we have to offer.

Today, in addition to running an experimental research group focused on developing new plasma initiation technology for future fusion energy systems, I also participate in interdisciplinary research that engages communities to understand how fusion energy can fit into the broader post-carbon energy portfolio. I do this work in concert with colleagues at the University of Wisconsin-Madison, the University of Michigan, and Arizona State University’s Consortium for Science, Policy & Outcomes. Together, we are developing adaptations of the participatory technology assessment methodologies that have been used by NASA to determine how the public felt about asteroids. This process begins with 6–8 hour sessions where community members explore their values and how their hopes, concerns, and priorities might influence technology choices in fusion energy systems.

Bringing together diverse groups of people to find agreement or discover conflicting views in an accepting space is a rarity in this highly polarized time. And participants are excited their voices are being considered in the development of tomorrow’s energy systems, in ways that may translate to tangible improvements to their lives and livelihoods.

Our hope is that by pursuing engagements like this, we can build trust and transparency during the process to design better, sustainable systems. When participants are asked to share their thoughts about the exercise, the responses are positive. One wrote, “I wish more researchers could do events like this to help the general public learn about their work. I’m sure there are so many interesting things being studied.” 

To me, this is why it is imperative that scientists have more opportunities to leave our siloes and learn to engage in real, two-way conversations with the public. To truly fulfill our mission of using science to create a better world, we must understand how the public wants that world to feel. Trust is a process that arises between people over time, and we must start to work on it now.

Stephanie J. Diem is an experimental fusion engineer, plasma physicist, 2024 US Science Envoy, and assistant professor in the Nuclear Engineering and Engineering Physics Department at the University of Wisconsin–Madison.

Stitches and STEM

When I was about eight years old, my grandmother took me to a local fabric store to pick out a pattern for a dress we could sew together. Piecing together the brown pattern paper, cutting out fabric, and learning to pin and hem, I felt like I was solving the ultimate wearable puzzle.

What I didn’t know at the time was that I was also preparing for my PhD in cell biophysics—the study of how cells, and the structures within them, move and grow. Cells form, and interact with, the squishy, stretchy template of our tissues, where they are always jostling and vying for space. When one cell contracts and gets smaller, its neighbors get pulled along, expanding to fill the space.

Life, it turns out, is another spatial puzzle to solve.

My thesis research rests on understanding how chromosomes, our coiled and packaged DNA, stretch and fold in the cell and what those shape changes tell us about the forces acting on them. On the surface, it is completely unrelated to the crafting I do to unwind—knitting scarves and hats, sewing crescent bags and pajama sets. But research shows that expertise in fiber arts—like sewing, knitting, crocheting, and even fabric dyeing—may help build the spatial intuition I’ve needed for my research.

The study of how we perceive objects in the physical world and infer details about their relationship to other objects in space is called spatial cognition or spatial reasoning. The most well-studied of these skills are rigid mental rotations, which involve identifying a single object rotated in space. Mental rotations come in handy for architects, engineers, and plumbers, who need a good grasp of how parts fit together or how pieces will round a tight corner. They’re also valuable for chemists, who often have to visualize, from various angles, molecules too small to see in order to understand their structure.

Expertise in fiber arts—like sewing, knitting, crocheting, and even fabric dyeing—may help build the spatial intuition I’ve needed for my research.

Being good at rigid spatial reasoning doesn’t necessarily translate to success in other spatial tasks, many of which are less well studied. The nonrigid spatial reasoning skills of mental bending and folding ask us to imagine how an object would look after being deformed. This is important for understanding fluid dynamics—how liquids and gases move in space—which atmospheric scientists and oceanographers use to study how wind or water flow. Cell biophysicists like the scientists I work with measure the effects of similar flows. They ask how fluid moves through cells to create currents; how proteins bend, fold, and fit together to create a functional cellular machine; or, in my case, how chromosome stretching and folding signals to the cell whether it is correctly attached to the cell division machinery, which ensures future generations of cells will inherit all the right DNA.

I use some of the same science and engineering principles at my sewing table, to help visualize how swatches of fabric fit together to form a three-dimensional garment. Unlike simply slotting flat panels together to build a box, sewing a garment requires an understanding of how fabric drapes to fit around a body, how the shape of a pocket changes the way it bears weight, and how folding fabric before sewing can affect the final fit. Researchers have seen that students who performed better in apparel design courses also tended to score higher on some general spatial visualization tests.

Knitting, too, is a remarkable mechanical process. Taking a one-dimensional yarn that has little stretch or give on its own and winding it into a series of knots that can create a stretchy surface or even a three-dimensional tube is a feat of engineering. And the pliancy of the final product can change based on the pattern of stitches you use. Because of its flexibility and the versatility of patterns, knitting is perfect for crafting handheld versions of abstract math concepts, such as Klein bottles and other manifolds, and it helps students reason through tough geometry and calculus problems.

Because of its flexibility and the versatility of patterns, knitting is perfect for crafting handheld versions of abstract math concepts.

Mathematician and crafter Daina Taimina turned to another yarn-based craft, crocheting, to create the first physical model of a hyperbolic plane. Inspired by this breakthrough, the Crochet Coral Reef is an ongoing community art project that reflects on climate change and honors female labor and applied mathematics. By developing patterns that riff on Taimina’s original hyperbolic plane, crocheters collectively create a rich marine ecosystem and explore non-Euclidean geometric space in the process. Multiple studies have shown how incorporating crocheting programs improved students’ STEM learning and math achievements.
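To give a rough sense of the math at work—a simplified sketch, not Taimina’s exact pattern—the construction increases stitches at a fixed ratio. Assume each stitch is about h units tall and an extra stitch is worked after every N stitches in a row. Then row k holds roughly

s_k ≈ s_0 (1 + 1/N)^k

stitches, so the circumference of the piece grows exponentially with its distance r ≈ kh from the starting row. That is the signature of the hyperbolic plane, where a circle of radius r has circumference C(r) = 2πK sinh(r/K) ≈ πK e^(r/K); matching the two growth rates gives an effective curvature scale of about K ≈ h / ln(1 + 1/N). Increase more often (smaller N) and K shrinks, producing a more tightly ruffled, more strongly curved surface.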

Continued practice of fiber arts may also stave off age-related decline in spatial reasoning. In knitting hats, crocheting amigurumi, and sewing jackets, artists gain an understanding of the world around them—and manage to keep it. Perhaps this understanding of the physical world is woven into textiles of all kinds. Spiders, master weavers, are thought by some observers to have “extended cognition” because of the way their webs help them understand the world. They are known to reinforce web areas that are particularly rich with prey, expand sections that haven’t been so fruitful, and adapt their web’s shape to its build site—essentially storing their life’s experience in the webs they weave.

Humans, too, have long stored memory in our fiber creations. The Inca and other ancient Andean cultures used quipus, fiber strings tied in a detailed knotting system, to record dates, census information, and even oral texts. Women in the ancient world wove classic Greek and Roman tales into tapestries, their creations a collective, social form of storytelling. A few years ago, a friend and I were inspired by the story quilts of Faith Ringgold to collaborate on one of our own. These textiles act as a physical representation of abstract knowledge, both in terms of the skill required to craft them and the cultural stories they record.

These textiles act as a physical representation of abstract knowledge, both in terms of the skill required to craft them and the cultural stories they record.

Why do we persist in building walls or imagining chasms between art and science, instead of weaving them closer together? The ageism and sexism are hard to ignore. You won’t see fiber arts as a rich source for building spatial reasoning skills if you’re convinced crafting is mainly a light activity for elderly women—especially given the common perception that men excel at both spatial reasoning and math, while women are not naturally skilled. Data suggest men do outperform women at some rigid spatial reasoning tasks, but that idea doesn’t hold for many nonrigid tasks. Maybe a first step toward equity in STEM is to stop viewing spatial reasoning as an innate gift granted only to a select few and instead treat it as a skill we can all learn like any other.

For me, craft has been a critical part of moving my research along. Even when I tired of the lab and retreated to my hobbies, my brain was hard at work building the intuition to confidently analyze and interpret the movies I took of living cells dividing themselves in two. The art of crafting connects the abstract, twisting ideas in my head to a concrete reality I can hold, and untangles some of the thornier ideas in the process.

Lav Varshney Connects AI Research, Executive Policy, and Public Service

In this installment of Science Policy IRL, host Jason Lloyd goes behind the scenes of the White House Fellowship program with Lav Varshney, associate professor of engineering, computer science, and neuroscience at the University of Illinois Urbana-Champaign. Varshney served as a White House Fellow from 2022 to 2023, where he worked at the National Security Council with Anne Neuberger, the deputy national security advisor for cyber and emerging technology.

In this episode, Varshney describes the day-to-day experience of working at the White House, gaps in the innovation system that science policy can help fill, and how making artificial intelligence systems more transparent could define the future of AI applications.

SpotifyApple PodcastsStitcherGoogle PodcastsOvercast

Resources

Transcript

Jason Lloyd: Welcome to The Ongoing Transformation, a podcast from Issues in Science and Technology. Issues is a quarterly journal published by the National Academy of Sciences and Arizona State University.

I’m Jason Lloyd, managing editor of Issues. On our series Science Policy IRL, we talk to people in science policy about what they do and how they got there. On this episode, I’m joined by Lav Varshney, an associate professor at the University of Illinois Urbana-Champaign, whose research focuses on artificial intelligence. Lav recently served as a White House fellow from 2022 to 2023, where he worked at the National Security Council with Anne Neuberger, the deputy national security advisor for cyber and emerging technology.

On this episode, Lav describes the day-to-day experience of working at the White House, gaps in the innovation system that science policy can help fill, and how making artificial intelligence systems more transparent could define the future of AI applications.

Lav, thank you for joining us today!

Lav Varshney: Thanks Jay. I’m excited to be here.

Lloyd: I wanted to start with what we ask all our guests. How do you define science policy, or do you have a working definition of science policy?

What I saw when I was serving at the White House is that scientific research and the development of science can impact all kinds of levers of state power.

Varshney: I think everyone probably goes into the “using science for making policy” versus “policy for science.” And let me focus first on the policy of science. I think there’s the resources and funding aspect of it. But I think what’s even more interesting and what I saw when I was serving at the White House is that scientific research and the development of science can impact all kinds of levers of state power. So what is being researched can have impacts on the missions of a variety of the departments and agencies in government. It can have impacts on nearly every societal and industrial sector, but also it can play a role even in international relations. Working together with multilateral or bilateral partners can itself be something that’s valuable, but it can also help drive different policies. I think it’s actually fairly broad and getting broader in a sense because science and technology seem to be entering into nearly everything you can think of. On the flip side, there is the use of science in policymaking, kind of evidence-based policymaking, and I think that’s also a compelling application. And for areas like health and wellbeing, I think it’s especially important to be drawing on scientific evidence as one makes policy.

Lloyd: Yeah, so that leads well into the next question about what you were doing at the White House. I’m really curious about the fellowship program. What kind of science policy do you do in your day-to-day work? Or what were you doing day-to-day as a White House fellow?

Varshney: So let me start with what I was up to when I was serving in Washington. Just as a little bit of background, the White House Fellowship is a program that was started in the Johnson administration, so it’s been around for nearly 60 years, and between 11 and 19 people are brought in to serve at the highest levels of government. Most of them are not scientists. In my cohort, three were from the military, there was a police officer, a city councilman, a community organizer, some doctors, a dentist, and so on. So it’s a nice mix of people, and you’re kind of parachuted, in your placement, into being a special assistant to either senior staff at the White House or a cabinet secretary in the departments and agencies. And so I was serving on the National Security Council staff and I was working very closely with the deputy national security advisor for cyber and emerging technology, Anne Neuberger.

The White House Fellowship is often taken on by people who don’t have much experience in policymaking, and it gives you exposure to how to do that.

The portfolio that I was looking at was focused on AI policy. And that especially picked up once ChatGPT came out in November of 2022. I had started in September of 2022, and then also did a fair bit on wireless communication policy and looked at a few things on the geopolitical side with respect to the role that science and technology play as these kinds of levers of state power in various ways. I was able to do a variety of things, which was really amazing.

In terms of my professorial and startup life, that’s historically been less connected to policy. And in fact, the White House Fellowship is often taken on by people who don’t have much experience in policymaking, and it gives you exposure to how to do that. But when I was a grad student at MIT, I actually did take a science policy class, so at least I knew a little bit about it. And over the years I’ve done a little bit of scholarly work in science policy as well, or policy-relevant scientific research depending on which paper you’re looking at. So, yeah, definitely have continued to be involved and going forward, now that I’m back, I’m keeping some connection to the policy world in Washington, but also pursuing some scholarly research that is policy relevant and then also some policy-focused research.

Lloyd: Cool. Can I ask some very practical questions? So when you got this fellowship… Okay, first of all, did you apply or were you asked to apply? Were you selected?

Varshney: Yeah, someone I knew through the National Academy of Engineering, in fact, encouraged me to apply. And so there’s a written application, then there’s regional interviews, then there’s national interviews, and if you’re selected, you go back again for another week for interviews for placement. You interview in many of the offices in the cabinet secretary’s offices or the senior staff offices to determine where you’re placed. So it’s actually a fairly long process. The application is due at the beginning of January, and then you’re selected roughly in June, and then your placement interviews are July. And you might not find out where you’re placed until maybe just a couple of weeks before you start, which is the end of August.

Lloyd: Oh, wow. So when you were doing the placement interviews, do you then kind of rank where you would prefer to be based? Or is it kind of they choose where you end up?

Varshney: Yeah, it’s kind of a matching process. So it’s a little bit like medical school residencies. You provide some rankings and the offices provide some rankings and then some magic happens in the background.

Lloyd: So did you get your top choice? Were you interested in going to the NSC?

Varshney: Yeah, yeah, it was definitely some place I wanted to go and I’m happy that I ended up there, gave me a lot of exposure to a lot of different things.

Lloyd: So in your day-to-day workflow, was it a lot of meetings? Were you doing research? Was it a lot of… Were you writing white papers? What did your daily activities look like?

Varshney: Yeah, yeah. So the day often started with reading intelligence briefings. One nice thing with serving on the National Security Council staff is that you get access to the intelligence community—pretty much all the products that are relevant, whether it’s from NSA or CIA, or pretty much anything that you care to learn about, they provide. So that was very helpful, especially… the first few weeks I just spent reading and learning what’s going on in the world beyond what you read in the newspapers. And then once I settled in, it was a mix.

The role of the White House is very much in convening, so the White House itself can’t actually do anything.

The role of the White House is very much in convening, so the White House itself can’t actually do anything. It’s the departments and agencies that have the capabilities to actually execute on policies. And so there’s a lot of interagency efforts on getting ideas from the departments, agencies, and then coordinating and getting those back down for execution. So there was a lot of that kind of interagency coordination work. There was also a lot of just reading and thinking and characterizing what would make sense in terms of policies, whether on the domestic side or international things.

To take two examples that I worked on on the international side: The US and the European Union signed an administrative arrangement on AI collaboration when I was serving, and then we were executing that. That required an understanding of what capabilities and strengths each of the parties could contribute and how that would play out in terms of larger policy goals, because it was nice to get collaborative science done, but it was also nice to think through how to harmonize joint efforts across the Atlantic when we have different regulatory constraints. And can we create a playbook on how not just governments can collaborate in terms of data sharing or data collaboration and AI collaboration, but also private parties? Because it’s really nice if like-minded and democratic countries can work together to push these truly amazing technologies.

And to give another example, with the US and India, we worked on some agreements establishing joint task forces on 6G and 5G wireless—one focused more on the research side and one focused more on the industry side. And so again, trying to bring folks together to understand what contributions the US and India can make respectively and how we can drive joint efforts. Because if the second- and third-largest markets in the world can really work together, that can be really powerful for spreading more open and secure and interoperable ways of doing wireless.

Lloyd: As you’re working on things, what is the work product? Are you producing memos that then go to the relevant agencies or directives, findings? What are you kind of producing?

Varshney: Yeah, so one kind of big thing that came out of my work—and of course I was only one small piece of this—the president signed an Executive Order on artificial intelligence in October of 2023. And so that’s a fairly long and detailed document. It’s, I think, 111 pages in the White House format. And it covers things ranging from national security to labor to workforce to immigration to fairness. I mean it covers DeepFakes; it covers all kinds of topics. So certain sections of those I contributed to quite significantly. So that would be an example of a work product.

Or coming back to wireless, so 6G wireless. The US led a group of about 10 countries to come up with principles of what we believe 6G wireless should be like. And we held a meeting in Washington that brought together academics, folks from industry, civil society folks, and then representatives from these 10 countries. And the final result of that is a joint statement that was issued, I think, maybe seven or eight months ago, in which all the countries agreed to these principles. And so I was involved in organizing this fairly large convening, working with the State Department to get things going, et cetera. The document itself plus the convening, plus this coordination, that was the result of my efforts.

Lloyd: Yeah, that’s really cool. So we just did a recent interview with Rashada Alexander. She directs the Science and Technology Policy Fellows program at AAAS. And one of the interesting things she mentioned when we were talking to her was that she estimated that maybe 40 to 50% of each cohort of… These are all scientists, unlike the White House program. That 40 to 50% actually stayed in government service. They left bench science, lab science, and stayed in service to work in policy. But I’m curious if you were tempted to stay in government work in some capacity?

Varshney: Yeah, I mean it was a great experience. I learned a lot and hopefully contributed some solid things to the country and to the world. And yeah, definitely if I’m called back to serve in some way that’s important and critical, I would definitely go back. Depending on where things shake out, that would be really great.

To give you some examples, some former White House fellows have gone on to pretty high positions in government. For example, Wes Moore, the current governor of Maryland, was a White House fellow, as was Elaine Chao, the former secretary of labor and of transportation. And Colin Powell was a White House fellow as well. So there’s kind of an interesting historical precedent for White House fellows to end up pretty high in either appointed or elected positions. In fact, even my boss at the White House, Anne Neuberger, was a White House fellow about 15 years ago.

Lloyd: Oh, okay. Oh, interesting. Well, actually, that gets to the next question, and we’re moving a little bit back in time, but I am curious about even before you were a White House fellow. You mentioned briefly that some of your academic work engages with policymaking. I know one piece that you co-authored for Issues was very policy relevant, involving artificial intelligence and intellectual property law and institutions. So I’m curious how you became interested in policymaking, and what the journey was, for lack of a better word, from studying what sounds like pretty hardcore electrical engineering and computer science to ultimately becoming a White House fellow—engaging with policy ideas along the way. What was the interest there? How did you end up getting involved or interested in policy?

Varshney: Yeah, it’s a good question because I’m not particularly self-reflective on my life, nor am I all that strategic. So in many ways it’s more opportunistic. When I see an opportunity to make an impact, I try to take it. I mean, I think we all have a duty and an obligation to serve our communities, and it was really the service element that drew me.

Intellectually, policymaking is interesting because we’re in a deliberative democracy where there’s a lot of different viewpoints and a lot of different desires, and trying to bring those together in ways that actually improve people’s lives is a challenge.

And if you look at some of my ancestors, they’ve had a bit of an influence as well. My great-great-grandfather was actually the second Indian to study at MIT, back in 1904. So he took the boat over to San Francisco and then took the train via St. Louis, saw the World’s Fair, and ended up in Boston. He studied glassmaking, and then he went back and started the first glassmaking factory in India and really established that industry. And the interesting thing is that first factory was part of what’s called the Swadeshi movement—kind of a self-sustainable industry movement—and he worked with some of the freedom fighters and really helped push toward Indian independence. He was an example of taking science and technology and really applying it to industry as part of a freedom movement.

Another example is my mother. She passed away two years ago, but she was quite involved in volunteer work in our local community in Syracuse, and she was also a public high school teacher in the city of Syracuse. So she influenced me in this desire for public service as well. Those were some of the examples of public service—of driving impact toward scientific, societal, and political goals—that were in the background for me.

And then also, intellectually, policymaking is interesting because we’re in a deliberative democracy where there’s a lot of different viewpoints and a lot of different desires, and trying to bring those together in ways that actually improve people’s lives is a challenge. It’s in many ways very similar to research challenges as well, except it’s not a closed deductive system where one can prove mathematical theorems. As you know, I’m a big fan of Claude Shannon, who was the master of mathematizing. Here it’s all open; everything has to come in and out. And so it’s a different intellectual challenge as well.

Lloyd: I had forgotten, I guess, that Shannon was sort of a protégé of, or at the very least studied under, Vannevar Bush, who was the architect of the postwar scientific enterprise in the United States. But Shannon did not go in that direction. He stayed on the research side and really focused on problems in computer science and engineering systems. But it seems like you, unlike Shannon, have found a way to integrate these things a little bit more in your research, in ways that the research and the policy are connected in this service-oriented way.

Varshney: Yeah, it’s a little embarrassing to compare oneself to Shannon or to Bush. I’m nowhere in their leagues, but-

Lloyd: (laughs) You can let me do the comparison.

Varshney: (laughs) Yeah, I mean, the endless frontier that Bush imagined, as compared to the closed frontier that Shannon in a way imagined—because he established fundamental limits—those are actually quite different ways of thinking. But yeah, I’ve been lucky enough to work in different epistemic cultures, as it were, to walk around and do different things. And because I’m not particularly strategic, it works out, so it’s enjoyable to do all of these different things.

Lloyd: Well, for not being particularly strategic, you’ve chosen a focus of study, certainly in artificial intelligence, that has a tremendous amount of policy relevance. So it seems like you could kind of choose whichever direction you wanted to go in, which gets a little bit to the last question here. What are some of the big questions that animate your interest in science policy?

It seems to me that there’s things in-between—between curiosity-driven and mission-driven science that are still worth doing and often fall through the cracks.

Varshney: I think the question of innovation and how to carry it out is an overarching question in science policy. The way we’ve organized our system following the Bush blueprint is that we have these mission-driven agencies that carry out particular kinds of scientific research, whether it’s the Department of Defense or the Department of Energy. On the other hand, we have the more curiosity-driven approaches to scientific research, in efforts like the National Science Foundation—the key example. But it seems to me that there’s things in-between—between curiosity-driven and mission-driven science—that are still worth doing and often fall through the cracks. There’s been some theoretical work in science policy that describes Baconian science and Newtonian science, and then Jeffersonian science is kind of in between.

Jeffersonian science is curiosity-driven but has a direct impact in making a mission successful. And that Jeffersonian science, I think, is something I personally strive to do in my own scholarly work, but it is also important going forward in terms of science policy because there are certain kinds of gaps that are less filled. It seems to me that it would be helpful to have more analytic capability in government to be able to fill those gaps and create settings where scientists can explore their curiosity and be the best scientists they can be, and yet drive toward missions that help society the most. And I think further, in society there are problems that often remain unstudied by those of us in academia, in part because we’re not aware of them. As I was mentioning, we’re a broad society with different concerns, and not all of them end up flowing into the scientific research system. Not that I’m claiming science as a solution to everything, but science can actually say something about a lot of problems that remain unstudied. I think that collectively is something that is super important for science.

Another thing on AI in particular that I think is related to this point is that for national competitiveness, we need to drive not just AI innovation, but also diffusion of AI across all industrial and societal sectors. And a lot of my work is actually in that direction these days. I’m involved in some startups. One is focused on music co-creativity. It’s called Kocree. We’re building AI technology that is inherently human engaging, human interpretable, human understandable, and further, it directly uses very small amounts of data rather than stealing and scraping everything on the internet, which raises all kinds of issues—not just security, but also intellectual property. We’re drawing on this information lattice learning technology that we developed to allow people to decompose music and recompose it into new songs, which seems to improve people’s wellbeing and also helps drive cultural wealth growth, especially among historically exploited communities. So that’s an example of using AI in a way that’s different than the dominant narrative of these large-scale models that take in everything and are hard to control.

Critical infrastructure is a place where AI can have huge impacts, and we can really build smart and secure infrastructure that allows us to be much more efficient and yet is safe.

Another startup that I’m involved in—it’s called Ensaras—is focused on the intersection of AI and wastewater treatment. Very different than music, but it feels to me that critical infrastructure is a place where AI can have huge impacts, and we can really build smart and secure infrastructure that allows us to be much more efficient and yet is safe. We don’t want to cause a cholera epidemic or significant environmental damage if we misclassify wastewater. And so what we’re doing is building an approach to critical infrastructure that brings both of those together. And it seems to me that it’s a perfect sweet spot because it’s the here and now. It’s relevant to us very critically. It’s not a long-term threat. It’s something that we need to worry about right now. And yet it’s very visceral. I mean, it’s something that we engage in every day, and it’s core to government function, providing critical infrastructure. Those are two examples of startups I’m involved in right now.

Likewise with my academic research, we’ve been pushing on some ideas in the foundations of AI to understand how emergent capabilities happen or how scaling and different resources translate into capabilities. Because when one’s doing policymaking, one can sometimes be doing it a little bit blindly because you can regulate resources like compute and data, but not capabilities, and yet that’s what you care about. And so if one has an information theoretic mapping between how resources lead to capabilities, that would be super useful scientifically, but also for policymaking.
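One published example of such a resource-to-capability mapping—offered here as an illustration, not as a description of Varshney’s own research—is the compute-optimal scaling law fitted by Hoffmann and colleagues in the 2022 “Chinchilla” paper, which predicts pretraining loss L (a rough proxy for capability) from a model’s parameter count N and number of training tokens D:

L(N, D) = E + A / N^α + B / D^β

with reported fits of roughly E ≈ 1.69, A ≈ 406, B ≈ 411, α ≈ 0.34, and β ≈ 0.28, and with training compute scaling as approximately C ≈ 6ND. In principle, a regulator who can only observe or cap N, D, or C could use such a mapping to reason about the capabilities those resources buy—though loss remains an imperfect stand-in for the downstream capabilities policymakers actually care about.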

Lloyd: Yeah, that’s really fascinating. It seems like one of the things you’re trying to do is sort of open up what’s often described as the black box. AI has these inputs and then the outputs occur, and no one’s quite sure what occurs in the middle, in that black box. But it does seem like what happens there is potentially really important for policymaking, and even for intellectual property—just knowing what the system is doing to existing songs in order to create “new ones.” Opening it up seems like really important work.

Varshney: Yeah, I mean, white box AI, I feel, is the future for nearly all applications, whether it’s critical infrastructure, where the safety comes from the openness, or for creativity, like I was describing with music, and further even for policymaking. In a sense, these alternative paths for AI that are distinct from large language models and related technologies can be a good possibility. Which is not to say that those LLMs and related technologies don’t have a role. Actually, back in 2019, when I was on leave at Salesforce, I helped develop some of the early billion-parameter-plus language models and worked on open source releasing those. So yeah, I was involved in those technologies myself quite a bit. I definitely see their place in a variety of applications, but I think these white box AI techniques can be really great as well.

There’s a need for more scientists and engineers and technologists to serve in government because I think we can provide different perspectives, and our training and our life experiences as whole people can be really valuable.

Lloyd: Well, this has been really fantastic. Thank you so much for spending some time with us. I really learned a lot, and I have learned a great deal about being a White House fellow and also your work, which is fascinating.

Varshney: Thanks, Jay. I would encourage listeners to look into the White House Fellowship and other ways of coming into government because I think it is really a great program, and I think there’s a need for more scientists and engineers and technologists to serve in government because I think we can provide different perspectives, and our training and our life experiences as whole people can be really valuable.

Lloyd: If you’d like to learn more about the White House Fellowship or find links to Lav Varshney’s work, check out our show notes. Please subscribe to The Ongoing Transformation wherever you get your podcasts, and write to us at podcast@issues.org. Thanks to our audio engineer, Shannon Lynch, and producer Kimberly Quach. I’m Jason Lloyd, managing editor of Issues. Tune in on October 8th for an interview with Jennifer Jacquet about octopus farming.

Combining Tradition and Technology

In “Reform Federal Policies to Enable Native American Regenerative Agriculture” (Issues, Spring 2024), Aude K. Chesnais, Joseph P. Brewer II, Kyle P. Whyte, Raffaele Sindoni Saposhnik, and Michael Kotutwa Johnson provide a useful baseline on the history of regenerative agriculture and its use on tribal lands. Regenerative agriculture is inherent to tribal producers and has been practiced since time immemorial using traditional ecological knowledge (TEK), which works with ecosystem function through place-based innovation.

Often, TEK is dismissed despite thousands of years of responsive adaptation. Many methods used by tribal producers yield equivalent or higher outcomes than the practices stipulated for reimbursement by the US Department of Agriculture’s Natural Resources Conservation Service, yet they are not always eligible for the same payments because eligibility is based on prescribed methods rather than equivalent outcomes.

Native systems recognize that soil health cannot be siloed from water quality, habitat preservation, or any other element because of the impact of its interconnectedness across all parts of the ecosystem.

TEK is a living adaptive science that fuses traditional knowledge with current conditions and new technologies to create innovative Indigenous land stewardship. Not only do Native systems of regenerative agriculture assist in carbon sequestration, but they also focus on whole ecosystem function and interaction. This creates a regenerative system that is more sustainable over the long term. Native systems recognize that soil health cannot be siloed from water quality, habitat preservation, or any other element because of its interconnectedness across all parts of the ecosystem.

The right to data sovereignty, resource protection, and cultural information is integral to the progress of regenerative agriculture on tribal lands. From historical inequities to current complex jurisdictional issues, Native producers face challenges not faced by other producers. Most tribal land is marginal, contaminated, less productive, and thought to be less desirable. Tribes have experienced historical barriers that have led to problems in accessing or controlling their own data and to misuse of their data by outside entities. Tribally focused data networks are a move in the right direction that will help tribal nations combine tradition and technology for optimal land stewardship.

Natural Resources Director

Intertribal Agriculture Council

Educating Engineers for a New Nuclear Age

Engineering education has long prioritized technical mastery above all else. And though such prioritization has equipped engineers with powerful analytical tools, this singular focus has left them unprepared to address some of the most pressing and complex challenges humanity faces. A more robust standard of engineering proficiency would include a deep appreciation of the social, cultural, and ethical priorities and implications of the technological solutions engineers are tasked with designing and deploying.

We are faculty in nuclear engineering (Verma), mechanical engineering (Daly), and technical communication (Snyder) at the College of Engineering at the University of Michigan. As researchers and educators, we are teaching our students the necessary skills and tools to collaborate on the design of nuclear facilities with communities from the start—rather than as an afterthought. Together with students, we are working to advance a new vision of what it means to be an ethically engaged engineer.

Engineering curricula are largely disengaged from people’s lived experiences in vulnerable communities.

Our incoming engineering students already take this challenge to heart. On the one hand, they are eager to design solutions for the wicked, intractable problems across the built world—climate change and energy injustice and shortages of housing and food, for example. And they can see that they and their peers will need to craft excellent technological interventions in the coming decades if humans are to endure and thrive. On the other hand, our students are aware of the role the discipline of engineering has played in exacerbating the very problems engineers are tasked with solving—like climate change and the energy transition. They are attuned to the global scale of energy injustice, recognizing that minoritized and under-resourced communities are the most likely to bear the toxic consequences of open-pit mining, coal-fired power plants, and fracking—systems partly enabled and perpetuated by engineers. They desperately want to help address these injustices, but they see that engineering curricula are largely disengaged from people’s lived experiences in vulnerable communities. What’s more, they see that the engineering disciplines they are called to are uncritical of the outsized power and often harmful consequences of engineering work.

Nuclear communication

As faculty, we are acutely aware of the need to prepare engineers to work on the next generation of smaller nuclear facilities to reduce carbon emissions quickly. Fusion energy systems are being developed at a rapid pace for deployment as early as mid-century; many of these will likely be much smaller than today’s gigawatt-scale fission facilities. New small modular fission reactors (which produce up to 300 megawatts of electricity) and microreactors (which produce between 1 and 20 megawatts of thermal energy) will soon be deployed, potentially at commercial scale in the 2030s. Though powerful enough to electrify small towns, these technologies are about the size of shipping containers—small enough that they could literally sit in someone’s backyard. It is ethically and practically imperative that engineers collaborate with and understand the needs of the people who will live among these facilities.

Nuclear engineering has notably fallen short when it comes to engaging ethically with communities. Approaches to siting facilities have often ignored the rights, needs, and values of local communities and the land they inhabit. The operating assumption of the default approach—sometimes described as “decide, announce, defend”—is that engineering experts know best, and it is their prerogative to decide where and how facilities should be built. Having made these decisions, engineers need only explain and defend them before the public.

The nuclear energy sector would likely be better off today had engineers worked hand in hand with communities from the start, gaining local perspective and winning buy-in that might have kept the industry on track.

The results of such an approach have sometimes been tragic. On the front end of the nuclear fuel cycle, there are many abandoned and unremediated uranium mines on Navajo Nation lands in the southwestern United States, for example. Land, water, and air have been contaminated, affecting not only human health but also traditional ways of life and whole ecosystems. On the back end of the fuel cycle, failure to consult host communities while siting a nuclear waste repository at Yucca Mountain in Nevada led to billions of dollars in expenditure for a project that, as a result of strong local and state opposition, was ultimately abandoned. Nuclear plants themselves, including in our home state of Michigan, have been sites of protests—particularly when decisions about whether to keep a plant running are in play. The nuclear energy sector would likely be better off today had engineers worked hand in hand with communities from the start, gaining local perspective and winning buy-in that might have kept the industry on track. 

Teaching sociotechnical engineering

The courses we’ve designed are shaped by an understanding of the importance of the social aspects of engineering work, departing from the traditional focus on purely technical concerns. For example, we’ve embraced participatory design, a valuable tool for engaging communities in the design process that originated in Scandinavia in the 1970s. This conceptual framing has been instrumental in developing disciplines such as human-computer interaction, biomedical engineering, and human factors engineering—and we believe it can be transformative in nuclear engineering as well.

Two of us (Verma and Snyder) created an introductory course, “Socially Engaged Design of Nuclear Energy Technologies,” that is a living lab—an open-innovation space where ideas are developed, tested, and iterated in real-life contexts with communities. Students learn the fundamentals of nuclear engineering at the same time as they gain skills in technical communication, qualitative research methods, ethical community engagement, and participatory design. For their term project, they collaborate with community members to design a hypothetical nuclear energy facility in southeast Michigan.

Complex sociotechnical systems such as power plants may have a handful of users or operators in the traditional sense, but they affect hundreds, if not thousands, of people’s lives and environments.

The course is for first-year undergraduates, so we start by teaching them the fundamentals of nuclear energy systems. We give lectures on nuclear physics, fission, and fusion. Across three lab sessions, students explore virtual reality (VR) models of fission and fusion energy systems that were built for the course, using cross-sectional models that they can take apart and label to help them understand the subsystems and their fundamental components. In subsequent labs, they explore facility-level models of fission and fusion systems, which helps convey the scale and scope of these systems and how they might be integrated into a site and community. The labs culminate with students recording tours of the nuclear facilities in a virtual environment. We plan to use those VR models as a participatory design tool in future classes.

The course includes multiple opportunities for students to communicate clearly and transparently with those likely to be affected by their work. Complex sociotechnical systems such as power plants may have a handful of users or operators in the traditional sense, but they affect hundreds, if not thousands, of people’s lives and environments in ways that are positive, negative, or both. Students begin by interviewing friends and family from their hometowns, hearing their perspectives on various sources of energy and how their values and preferences influence what they consider desirable and undesirable attributes of those energy sources. Toward the end of the interviews, students ask specifically about the interviewees’ views of nuclear energy.

Next, supported by the instructors, the students conduct two virtual workshops with southeast Michigan residents. Again, participants describe their own values and how those values inform their energy choices. Midway through the workshop, participants are introduced to fusion energy and asked whether and how they might wish to be involved in the design and development of a hypothetical fusion energy facility. Some have said they would be eager to help guide design choices for a fusion system coming to their area. That might include weighing in on decisions such as the size of the plant and its impact on land use, job creation, and waste production and disposal. Once a project’s parameters are set, community input might then inform the design of the facility that houses the fusion energy system.

Questions we discuss with students and still debate among ourselves include: How will representatives of a community be chosen or appointed? How should disagreements and conflicting priorities or visions among community members—or between the community and the engineers—be adjudicated and resolved? And how might long-established engineers already working in the field be encouraged to collaborate with community members?

Communities that host energy facilities—and thereby risk experiencing socioeconomic, environmental, and aesthetic changes—have not only a stake in the project, but the right to participate in the process of design and decisionmaking.

We choose to refer to the community collaborators not just as stakeholders, but as rightsholders. The distinction expresses a central point: communities that host energy facilities—and thereby risk experiencing socioeconomic, environmental, and aesthetic changes—have not only a stake in the project, but the right to participate in the process of design and decisionmaking. Many communities have experienced such impacts in the past but have had little say in either their design or their remediation. Shifting perspective to give communities certain rights in the negotiation process recognizes the need to share power and work collaboratively in energy systems design.

The third and final community engagement is an in-person workshop. Community participants from the virtual sessions bring friends and family and work in teams with students to create designs for hypothetical fusion facilities. Each team’s design is unique, shaped by participants’ perspectives, values, and experiences. For example, some teams have placed their fusion systems right within the community’s boundaries, with shared-use spaces surrounding the facility. Others felt it was important to keep a facility at some distance from residents. Some chose to weave the facility into its natural surroundings, while others prioritized transparency as a key principle, resulting in a design that is literally see-through. Another team wondered how they might repurpose Detroit’s industrial infrastructure into a whole new energy industry. Once teams agree on a design concept, they use artificial intelligence image generators to visualize it.

Figure 1: EXAMPLES OF IMAGES OF FUSION ENERGY FACILITIES GENERATED BY TEAMS MADE UP OF STUDENTS AND COMMUNITY MEMBERS.

The images were generated using Canva AI, Adobe Firefly, and DALL-E 2 using the following prompts: (a) A fusion energy facility with colorful lights, sidewalks, and a bullet train rail; (b) A modern designed fusion system with a cooling tower with LED lights on it at sunset with nature and flowers surrounding it. Forest engulfs the background; (c) A futuristic fusion energy facility surrounded by trees; (d) A fusion energy facility with lots of light and bubbles and surrounding greenery; (e) A nuclear fusion facility with giant windows on a river; (f) A transparent fusion energy plant surrounded by plant workers.

Designing energy infrastructure with love as a core value

When it comes to responsible design, nuclear energy requires special consideration beyond the here and now. Engineers must look to the future, both near and distant—up to a million years. What does it mean to design responsibly across such expanses of time? Our students explore this question by designing the surface of a deep geological nuclear waste repository. The exercise forces them to imagine what the world might look like 10 years from now, then 100, 1,000, and 1 million years. They ask who and what might still live around the facility, and how to protect the health and safety of future rightsholders—whether human or not. The workshop helps students appreciate how engineering design decisions can reverberate across generations. More traditional nuclear engineering classes often try to avoid “getting messy” in considering uncertainties, but we’ve found that an honest approach to designing over deep time is itself a valuable learning experience. Students realize they need to proceed with both humility and imagination to contend with vast uncertainties and awesome responsibilities.

More traditional nuclear engineering classes often try to avoid “getting messy” in considering uncertainties, but we’ve found that an honest approach to designing over deep time is itself a valuable learning experience.

One surprising outcome of past workshop experiences was our students’ greater appreciation of the role of feeling and emotions in the engineering design process. We asked teams to describe their individual and shared values before designing began. Trust and respect were the dominant themes across all groups, and love for the community—both the people and the place—was a core value for several.

Engineers are human, after all, and it would be selling ourselves short to say our work is governed solely by logic and rationality. Engineers find pleasure and joy in bringing something new into the world. We experience boredom, frustration, and a range of emotions that shape design decisions in big and small ways. As teachers, we see this play out, especially during our in-person workshop where students and community members collaborate on the fusion energy facility design. Throughout the day, the teams joke, argue, and negotiate with each other. In their summaries, several students have recounted ways that emotion shaped their design journeys. They felt this collaborative approach should be the norm, and that it connected them to something bigger than themselves, the class, or their education. For their part, community members have said they felt heard, enjoyed working with the students, and were impressed by their knowledge and maturity. Students and community members alike said they were proud of their work together. In the workshop’s final presentation, on their last slide, one team summed up their experience in three words: “We loved it!”

We’ve also been surprised and delighted by how quickly students and participants form a sense of shared purpose and forge a commitment to working together across their differences to find consensus. Many students wrote about the value of the connections they made. One wrote, “The most valuable information about what defines success in design is found within the people directly affected by the design.”

In their final presentations, each of the 10 student teams also expressed a desire to engage with rightsholders as part of their future design work. One student said in their team’s semester-end presentation: “What we did this semester was a necessary first step for the dream that is a world built on sustainable systems.” Another commented, “Our recommendation is to focus on the humanity of engineering and really try to design with the intent that you know you’re designing for humans.”

Accelerating the new energy economy with care

Engineering must embark on a journey of transformation, challenging the status quo of engineering education and envisioning a future where engineers are deeply considerate of communities.

We know that communities do not hold all the answers when it comes to design or to solving the energy system challenges society faces. Nor do we suggest that taking the community seriously and giving it the respect it deserves is easy. On the contrary, substantive community engagement is time-consuming and sometimes difficult. But the time and effort are well worth the price—particularly in the case of new nuclear facilities, which must be deployed soon enough to slow climate change and in ways that are safe and just. Though it requires an upfront investment of time, collaboration can expedite and smooth project implementation by gaining early community support, avoiding false starts, and helping to navigate potential objections or misunderstandings. And it simply won’t be possible to achieve a large-scale renewal of energy infrastructure without an alignment of consent across local communities and actors at the state, national, and even international level. 

Engineering must embark on a journey of transformation, challenging the status quo of engineering education and envisioning a future where engineers are deeply considerate of communities. The discipline must make room for engineering solutions that are not just technically sound but also empathic and ethical. In our work, we hope to shape a new kind of engineer—a sociotechnical engineer—who is grounded in the technical knowledge of the discipline while being adept in participatory and human-centered design and ethical, equity-centered communication. We do not believe this approach entails sacrificing technical excellence. Instead, it requires contextualizing it and connecting abstract engineering expertise to real-world problems. What is more, our research, anecdotal evidence, and our own firsthand experiences suggest that this approach to engineering, instead of compromising our world-building disciplines, will draw enthusiastic, talented young people eager to help in ever larger numbers.

What Can Fusion Energy Learn From Biotechnology?

Recent advances in fusion energy research have renewed excitement around this potentially transformative way to produce electricity and replace fossil fuel energy sources. Although fusion appears to face barriers that are unique to this nuclear technology, we believe that its progress could be accelerated by examining the history of another world-altering technology from a very different sector and time: biotechnology.

Five decades ago, emerging insights in biology, particularly in genetics, suggested the possibility of new technology that could address diverse challenges in health care, agriculture, industry, and the environment. But, like fusion today, many obstacles had to be overcome before biotechnology could be applied in ways that were useful or profitable. Unresolved scientific and technological questions would cost billions to answer, and profits were still decades off. Whole new regulatory and investment environments needed to evolve. And powerful cultural resistance to the very idea of genetic manipulation would have to be overcome.

Of course, there are many differences between biotech’s history and fusion’s present. But advocates for fusion can find several key lessons from the last half-century of biotech that could propel fusion forward and help it to navigate around hazards on its way to deployment.

Fusion energy and biotech share the stark financial realities facing all “deep technologies”—that is, early-stage technologies with tough scientific challenges that will require significant upfront capital, have low or unknown probabilities of success, and require long gestation periods before revenues start to flow (if they ever do). Conventional wisdom is that traditional finance methods such as venture capital and private equity are inadequate to sustain such long-shot investments. These methods are not equipped to carry long shots over financial “valleys of death,” for example, where projects can languish or perish due to lack of support.

With lessons from biotech in mind, we propose four initiatives that could accelerate fusion’s progress.

At their earliest stages of development, when the need for funding is greatest, deep technologies have unattractive risk-reward ratios. Investors are naturally drawn to high-yield, low-risk propositions, a relationship that financial economists have described mathematically as the Sharpe ratio: the excess expected investment return above the risk-free rate, divided by the risk. By their nature, deep-tech investments tend to have very low Sharpe ratios at the outset, discouraging all but the most confident and risk-tolerant investors. The situation is worse for the most speculative and transformative technologies, such as fusion energy, because their many “unknown unknowns” put risk quantification out of reach. Economists define risk as randomness one can measure, and uncertainty as randomness one cannot. Investors dislike risk, but they loathe uncertainty.
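To make the arithmetic concrete, here is a minimal sketch in Python of the Sharpe ratio as defined above. All of the figures are hypothetical, chosen only to illustrate why an early-stage deep-tech bet scores so poorly next to a conventional asset.

def sharpe_ratio(expected_return, risk_free_rate, volatility):
    # The ratio described above: excess expected return over the risk-free
    # rate, divided by risk (here, return volatility).
    return (expected_return - risk_free_rate) / volatility

# Hypothetical comparison: a broad stock index vs. an early-stage deep-tech venture.
index_sharpe = sharpe_ratio(0.08, 0.03, 0.15)      # modest return, modest risk
deep_tech_sharpe = sharpe_ratio(0.30, 0.03, 2.00)  # big upside, enormous risk

print(f"index:     {index_sharpe:.2f}")      # about 0.33
print(f"deep tech: {deep_tech_sharpe:.2f}")  # about 0.14, despite the higher return

Note that the sketch captures only measurable risk; the “unknown unknowns” of a technology such as fusion are uncertainty in the economists’ sense and would not even enter the denominator.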

High risk and uncertainty carve out deep tech’s valleys of death. Although they can’t be eliminated, these valleys can be made less deep and less lethal by increasing a technology’s potential profit, decreasing risk and uncertainty, or both. Successes in biotech can be attributed in part to the dynamic, constantly fermenting public-private funding ecosystem that has kept it moving over hills and dales and even across what could have been lethal valleys. Fusion energy’s challenge today is to create a similar ecosystem.

With lessons from biotech in mind, we propose four initiatives that could accelerate fusion’s progress. Our proposal is ambitious, but we feel it is well justified by the game-changing benefits a limitless source of carbon-free energy would bestow.

Standardize milestones

In the United States, to determine a drug candidate’s safety and efficacy in humans, researchers must conduct randomized controlled trials. These studies are divided into three distinct phases, each involving more human subjects than the previous one. This staged approach was imposed because it minimized exposure to the health risks inherent in taking experimental drugs. But it had another, incidental advantage. It divided the prolonged investment timeline for drugs—often 10 or more years from the beginning of human trials to regulatory approval—into three shorter stages. As a drug candidate graduates from one phase to the next, its risk declines and its market value grows. These milestones allow biotech companies to periodically demonstrate positive returns to their investors, which attracts additional capital to pay for each next phase of development. And the milestones lead to new financial markets—especially large public equity and debt markets—creating liquidity for company founders, capital funds, and investors and attracting even more money to the sector. From this staging, a fecund biotech ecosystem emerged and coevolved with financial markets, creating safer pathways through potential valleys of death.
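A toy calculation can make the staging effect concrete. In the Python sketch below, the phase pass rates and the terminal payoff are invented for illustration; the point is only that each milestone cleared shrinks the remaining risk and lifts the candidate’s risk-adjusted value, giving investors periodic evidence of progress.

# Toy model of milestone staging; pass rates and payoff are hypothetical.
phases = [("Phase 1", 0.60), ("Phase 2", 0.35), ("Phase 3", 0.65)]
payoff_if_approved = 1_000.0  # assumed value of an approved drug, in $ millions

for cleared in range(len(phases) + 1):
    # Probability of clearing every phase that still lies ahead.
    p_success = 1.0
    for _, p in phases[cleared:]:
        p_success *= p
    stage = "start of trials" if cleared == 0 else f"after {phases[cleared - 1][0]}"
    print(f"{stage}: success probability {p_success:.2f}, "
          f"risk-adjusted value ${p_success * payoff_if_approved:,.0f}M")

Under these made-up numbers, the candidate is worth about $136 million in risk-adjusted terms at the start of trials and roughly $650 million after passing Phase 2. That stepwise appreciation is what lets investors enter and exit at intermediate points rather than waiting a decade for approval.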

A set of milestones for achieving commercially relevant fusion energy should be identified and published by a consortium of stakeholders.

Along the same lines, a set of milestones for achieving commercially relevant fusion energy should be identified and published by a consortium of stakeholders. Milestones could include, for example, the sustained generation of high-temperature plasma producing more energy than it consumes, or identifying and developing materials that can withstand the extreme conditions inside a fusion reactor and form the “first wall” between the plasma and the rest of the reactor.

For milestones to be effective in creating a market, all stakeholders—including regulators, researchers, and fusion companies—must agree that, taken together, the milestones are both necessary and sufficient to achieve commercially relevant fusion energy production. They must also be easy for nonexperts to grasp, and their achievement must be verifiable by an unbiased third party at reasonable cost.

Get universities in on the action

Scientific and engineering expertise is as essential for fusion research and development as it was for biotech—and developing strong ties between industry and academic institutions is key to fusion’s progress. Since the Bayh-Dole Act passed in 1980, universities receiving federal funding have had the right to pursue ownership of their researchers’ inventions, rather than ceding their intellectual property (IP) to the federal government. This has rewarded universities for investing in research to advance biotechnologies with commercial potential, creating a virtuous cycle of progress as the profits universities received were plowed back into labs doing cutting-edge research. Fusion R&D could be similarly accelerated if universities attend to and streamline IP commercialization for fusion technologies. The legal frameworks already exist; universities need to start taking advantage of the opportunities they present.

Three distinct steps could help. First, universities and national laboratories should standardize sponsored research agreements, term sheets, and technology licenses to expedite the process of spinning companies out of academia. Second, universities should mentor academics in fusion-related fields who are unfamiliar with startups and the business world. And third, financial institutions should create investment funds focused on startups emerging from a consortium of universities and research labs. Such funds would appeal to a broad set of investors due to their diversified, “multiple-shots-on-goal” structure.

Support a commercialization ecosystem

There is one salient difference between biotech and fusion that cannot be ignored. Unlike the diversity of diseases and other application targets that support various niches in the biotech industry, all fusion companies aim to do one thing: generate safe, clean power.

Nevertheless, the parallels between biotech and fusion are still instructive: fusion energy startups are analogous to early biotech companies, engaging in breakthrough science and engineering programs that in many cases are selling proof of the viability of concepts rather than products themselves. In the fusion ecosystem, the equivalent of “Big Pharma” will be “Big Energy”—oil and gas companies with the financial resources to partner with startups. They can pay to complete the testing of promising fusion technologies, then in-license those technologies or acquire the companies once they have been de-risked. With the emergence of small modular fission reactors and fusion microplants, an even more complex and diversified dynamic between firms is likely to develop.

The parallels between biotech and fusion are still instructive: fusion energy startups are analogous to early biotech companies, engaging in breakthrough science and engineering programs that in many cases are selling proof of the viability of concepts rather than products themselves.

One key component will be US government-led programs positioned to finance growth. These should provide more scientific support for fusion research as well as support for small businesses; for example, with new government subsidies, loan guarantees, and tax incentives for fusion-related investments.

Parallel private-sector initiatives should include creating a robust ecosystem in which all fusion, fusion-supporting, and Big Energy companies can collaborate via in- and out-licensing, joint ventures, and mergers and acquisitions to diversify risk and increase the chance of achieving net energy production and financial gains. To further push this process forward, fusion impact funds—pools of investment capital with an explicit mandate from investors to reduce carbon emissions via fusion energy—could be created as part of a suite of climate-related “green” financial products, including fusion rating-agency metrics that could be used to construct such products.

Finally, in much the same way that fossil-fuel companies created oil and gas futures contracts to manage risk and gather information about supply and demand, financial exchanges should be created for trading standardized financial contracts on key fusion-related inputs and outputs, such as future delivery of tritium, helium-3, and fusion-generated electricity.

Foster two-way engagement

Biotech learned early on that public opinion would influence its trajectory and that the industry had as much to learn from public discussion as it had to contribute to it. Indeed, ethical and safety concerns were raised by the very scientists who pioneered the field, and public discourse regarding these concerns led to a coherent regulatory framework that assists in safely guiding scientific hypotheses toward life-saving drugs and other biotech products. This sensitivity to public interests—and the outreach from government agencies, university researchers, and biopharma designed to build trust as well as improve public appreciation of the technology’s potential benefits and explain and address its risks—have been key to biotech’s advancement.

A similarly robust mixture of regulation, public engagement, and innovative energy companies is required for fusion. It is essential to address public perceptions, especially given the common misunderstandings of the state of progress in fusion research—captured in the quip, “Fusion is the energy of the future—and it always will be.” Fusion is not the first nuclear technology the public has encountered, and grappling seriously with public fears, conceptions, and expectations will be a necessary part of fusion’s path.

Generating informed enthusiasm and dispelling misinformation will require a systemwide approach to outreach and education, where communication goes both ways. The fusion community must listen as well as inform. As new fusion technologies emerge and as some of them flounder, the industry must be open, direct, and transparent. Only by actively listening to community concerns, and addressing them systematically and comprehensively, can the industry earn the public’s confidence. Public feelings of distrust or betrayal would pose major impediments to fusion energy’s timely progress.

It is essential to address public perceptions, especially given the common misunderstandings of the state of progress in fusion research—captured in the quip, “Fusion is the energy of the future—and it always will be.”

The public at large also needs to understand how the technology is progressing, which should not be left entirely to the promotional efforts of startup companies. Instead, a trade organization like the Fusion Industry Association should actively coordinate communication with media outlets, government representatives, and financial analysts, as well as university and industry public information and news centers. When scientific and engineering milestones are met, they should get the public attention they deserve.

Another crucial step is integrating fusion education into curricula for all levels of students. Middle schoolers should learn the basics, high schoolers should go more in-depth on the technology in physics and environmental science classes, and courses in fusion technology and governance should be accessible to all college and graduate students.

Fusion energy stands today where biotech was several decades ago—on the cusp of revealing potentially transformative insights into one of the most fundamental properties of our world. For biotech, it was understanding the blueprint of life itself. In fusion’s case, it is commanding the power source of stars, the very force that makes life viable in the universe—which, if harnessed here on Earth, could help correct society’s carbon-emissions trajectory. By learning from biotech’s setbacks and triumphs, we believe fusion energy production can become a practical reality too, and that it need not take five decades to do so.

Cool Ideas for a Long, Hot Summer: Indigenous Sustainability

In our miniseries Cool Ideas for a Long, Hot Summer, we’re working with Arizona State University’s Global Futures Lab to highlight bold ideas about how to mitigate and adapt to climate change. The miniseries has explored how economics can be used to advance environmental justice, how solar-powered canoes can protect the Amazon from deforestation, and how refugees create communication networks to respond to climate change.

On the final episode, host Kimberly Quach is joined by ASU professor Melissa K. Nelson. Nelson shares her thoughts about the impacts of climate change on Native American communities and agriculture, and what can be learned from Indigenous sustainability.

Resources

Transcript

Kimberly Quach: Welcome to The Ongoing Transformation, a podcast from Issues in Science and Technology. Issues is a quarterly journal published by the National Academy of Sciences and by Arizona State University.

I’m Kimberly Quach, digital engagement editor at Issues. On Cool Ideas for a Long, Hot Summer, we’ve been working with ASU’s Global Futures Lab to highlight cool ideas about how to respond to climate change. We’ve explored how economics can be used to advance environmental justice, how solar-powered canoes can protect the Amazon from deforestation, and how refugees create communication networks to respond to climate change.

On this last episode of our miniseries, I’m excited to be joined by Professor Melissa K. Nelson to learn about the impacts of climate change on Native American communities and what we can learn from Indigenous sustainability practices.

Melissa K. Nelson: Thank you, Kimberly. I appreciate the invitation.

Quach: So the first thing I wanted to ask you is about the impacts of climate change. Could you tell us about the Indigenous communities that you’re part of and work with, and the challenges they face due to climate change?

Indigenous peoples call themselves the canaries in the coal mine when it comes to all environmental change, but especially climate change.

Nelson: Yes, Indigenous peoples often call themselves the canaries in the coal mine when it comes to all environmental change, but especially climate change. I was born and raised in California, and so I’ve worked a lot with Native California Indian tribes, a lot of coastal tribes. And now, living in Arizona, in the Sonoran Desert, I work with a lot of the Native American nations and tribes in the desert Southwest. My own tribe is up in the upper Midwest, in the northern plains: the Turtle Mountain Chippewa Indian Reservation, on the border of North Dakota and Manitoba, Canada. And we too are suffering from extreme changes due to climate change.

I think one of the most dramatic changes, which we are all aware of, is unpredictable weather. Native knowledge is based on traditional ecological knowledge: literally hundreds and thousands of years of data, of understanding the patterns in nature, when the rains come in, when the thunder comes in, when the animals migrate. Today, those changes are happening at a rapid and unpredictable rate. So things like having the salmon migrate up the river so that people can harvest and have their first salmon ceremonies, their feasts to feed the family: it was all tied to their economy, their culture, their spirituality. That is really changing. Likewise, my nation, the Chippewa Cree, relies on moose and buffalo. And the migrations of those traditional animals are dramatically changing because the food is not available where it used to be. Lakes are drying up, berries are desiccated. And so the traditional patterns of food procurement have really dramatically changed for folks who still rely on wild foods.

And then for farming communities, it’s really challenging because of the lack of rain. The soil has become so dry; it’s a process called desertification. And so farmers are really struggling—all farmers—and Native farmers especially, with their smaller crops. Those are just a few examples; I could go on and on. I primarily serve Indigenous peoples, being a mixed-race Native woman: my dad was white Norwegian, my mom Native American. And so I’ve really focused a lot of my research and activism and educational work on serving Native American and Indigenous communities, and they’re very concerned about climate change and its impact on the land, on the water, and on our cultural practices.

Quach: So one of the focuses of your work, and what you were talking about earlier is the food system at large. Could you talk about the impact of the industrialized food system on sustainability and our climate? And what are Native practices that we should implement to improve that?

There are many different ways now, combining Indigenous agricultural techniques and more contemporary sustainable agricultural practices like organic and regenerative farming, to transform our food systems because they’re in dire straits.

Nelson: Oh, yes. That is a great question. The industrial food economy, or food system, has dramatically impacted the health and sustainability of our earth, primarily due to all of the chemical inputs. Instead of relying on organic compost and other natural inputs to replenish the nutrients in the soil—which all organic, regenerative, Indigenous, and traditional agriculture relied on for centuries—the use of chemical fertilizers of phosphorus and nitrogen has created a process called eutrophication, where the runoff has killed a lot of lakes and freshwater streams and water sources. Likewise, there are a lot of chemicals sprayed on—herbicides, fungicides, pesticides—and that all again runs off into water systems, which are the source of life for everything. So industrial agriculture has relied too much on these chemicals and also focused on cash crops, singular monocrops like cotton. We should not be growing cotton in the desert because it’s so water intensive and all of the little insects love it, so it has a lot of chemical dependency. We have to focus more on polycultures; Native agriculture has always used more than one species. We have the traditional three sisters of Native American agriculture: corn, beans, and squash, because they’re ecologically and nutritionally symbiotic. They support each other. The beans fix nitrogen in the soil. The squash leaves help shade the soil, retain moisture, and create a better habitat for the corn. The corn has the large stalk that the beans can crawl around. So it’s really about symbiosis, mutually beneficial feedback loops, with our Indigenous agriculture.

And then there’s moving from aerial spray irrigation to drip irrigation. With most aerial spraying—when you see the tsk tsk tsk water sprinklers—in the desert, 50% of that water evaporates in the air before it even reaches the plants. But if you put in drip irrigation, it conserves the water and the plants actually get more hydrated. So there are many different ways now, combining Indigenous agricultural techniques and more contemporary sustainable agricultural practices like organic and regenerative farming, to transform our food systems, because they’re in dire straits. The argument is always that you can’t grow that much food without using industrial fertilizers and pesticides. But no, we haven’t really even tried organic farming at the scale that we could.

Quach: You anticipated one of my questions, which is a big counterargument: that monoculture came to be because of the need to feed so many people. Shifting back to the polyculture model seems to have a lot of challenges. One of them is the criticism you already brought up, and another is cultural, right? We’re very used to being able to buy a tomato in December.

If we literally just mapped our food sheds even a thousand miles—and it’s something I do with students, mapping our food sheds—we really can eat more locally and eat lower on the food chain and learn how to minimize our carbon imprint on the planet.

Nelson: Exactly. So it is a full spectrum system transformation that is needed. And if we educate consumers—that’s why I’m in education and part of the Swette Center for Sustainable Food Systems at ASU. We have to educate consumers. We need to lower our carbon footprint on where food comes from and eat more locally, even regionally. Even if we just try to eat within our region, maybe one state or two states, which I think is very challenging. Imagine we literally just looked at our food shed of a thousand miles: we were at the center of that circle and looked at all of the food that was produced there, the dairies, ranches, berry farms, lettuce farms, small-scale community gardens, farmers markets, sustainable meatpacking operations. If we literally just mapped our food sheds even a thousand miles—and it’s something I do with students, mapping our food sheds—we really can eat more locally and eat lower on the food chain and learn how to minimize our carbon imprint on the planet. We can also really help transform the agricultural industry by demanding different types of food. We have a lot of power as consumers with what we eat for breakfast and what we eat for lunch and dinner. We have more power than we think by putting our money where our mouth is and purchasing different types of food. And social activism is also very important: writing letters to our leaders and requesting and demanding changes in our food system. So it’s, like I said, a full spectrum transformation, from education to consumerism to production to policy.

Quach: Yes, I think that’s a very Issues-y pipeline, because as an individual consumer there are sometimes a lot of things that make it really difficult to eat locally, one of them being access and expense, of course. So we need not just individual action, but policy change to make it more feasible for many people to access this type of agriculture.

We call it food apartheid. We don’t call it food deserts anymore because to the Akimel O’odham and the Navajo and the Pueblos, the desert is filled with food and filled with medicine.

Nelson: That’s right. There’s a whole environmental justice component. Absolutely. And we call it food apartheid. We don’t call it food deserts anymore because to the Akimel O’odham and the Navajo and the Pueblos, the desert is filled with food and filled with medicine. And so to call it a food desert is to denigrate the desert as a sacred place of life for many, many traditional peoples who’ve lived in the desert. But food apartheid is real. Communities of color have suffered disproportionately from the pollution I mentioned earlier, and other pollution like factories and runoff and freeways, and also suffered disproportionately by not having access to fresh produce in grocery stores. They have to travel too far and it’s too expensive. So there are structural poverty issues that also need to be addressed through policy and social change.

Quach: So you talked extensively about how we could bring these practices into our own lives by becoming locavores, among the other things that you mentioned. But are there other Indigenous practices that listeners should adopt into their own lives to help mitigate the impacts of climate change? And if we’re interested in learning more, what should we do?

Nelson: Yes. There are many, many great organizations. I’ll speak just very regionally: we’re in Arizona and the Southwest, with desert farming and desert agriculture. The Native American Food Sovereignty Alliance is a wonderful organization, with lots of volunteer opportunities, online webinars, and resources available. They’re based in Arizona and New Mexico. There’s the Traditional Native American Farmers Association, which also has an annual Indigenous permaculture training that all people are welcome to attend. You learn all of these traditional techniques of drip irrigation and waffle gardens and water catchment systems, and so many different ways that we can use drought-tolerant landscaping in our backyards, if we’re lucky enough to have a backyard. Even people in apartments can grow a few plants on a little shared patio to reconnect with nature. I think that’s part of it too. So much of our modern life is focused on mediated experience through technology, our phones, our computers, Zoom, televisions, et cetera, and we need to really reconnect with the sources of life: our waters, our foods, plants, animals, the air, fire.

There are many different ways, from small scale to large scale, that we can make a positive impact to address climate change.

I think one of the other areas where Indigenous peoples really have something to offer is an understanding of cultural burning and traditional fire management. Fire was seen as a medicine, not as a danger or an evil. It was something to be feared, certainly, if it was out of control, but fire could be used as a resource management tool to reduce fuel loads and help our forests be healthier, so that if lightning strikes, there’s not all this fuel load and biomass that creates these catastrophic, massive wildfires that get out of control. So Native American burning practices—what we call cultural burning—are finally catching on with the US Forest Service, the Natural Resources Conservation Service, and the US Department of Agriculture. People are realizing that due to climate change, we’re having increased catastrophic fires because we have not been managing our landscape and our forests and our resources, tending to them with care and sustainable stewardship. That is really a value and a practice of traditional ecological knowledge that was kind of lost with the preservation and conservation movement, which held that the only way to preserve nature is to just lock it up in a nature reserve and not touch it, or only have wealthy, privileged people go hike in it on occasion. We have to really transform that model of conservation, because humans have to interact with nature. We are part of nature, and so we need to learn how to do that in sustainable ways: cultural burning, traditional agriculture, what we now call regenerative agriculture. There are many different ways, from small scale to large scale, that we can make a positive impact to address climate change.

Quach: What you were saying about being disconnected from nature and nature being a bit of a place you go to, not a place that you’re in, really resonated with me as someone who’s always lived in cities. I think a lot of times access to nature is not…it feels actually like a very privileged white American thing. It wasn’t something my parents knew to do. It was just not part of our lives. And it’s something I’m only learning now through conversations like this and thinking about my place in the system.

Nelson: Yes, thank you for bringing that up. I mean, for a lot of people of color, urban areas were safer; rural areas were not as safe. So there’s really this reckoning, too, with racial justice; that’s another part of this story of reconnecting with nature. And there are a lot of movements and actions now that are trying to do that in a much better way: the combination of environmental justice and Indigenous rights and sustainability, and regenerating a positive relationship with the natural world. Because it’s also good for mental health. It’s good for human health. It’s all connected.

Quach: Well, thank you so much for joining us. This has been a really enlightening conversation, not just about Indigenous communities and Indigenous knowledge, but about how we should all think about our place in the world.

Nelson: Well, thank you so much, Kimberly, for the great questions and great conversation. Wonderful to speak with you today.

Quach: Check our show notes to learn more about Indigenous practices and environmental sustainability, and to find links to Melissa K. Nelson’s work.

Please subscribe to The Ongoing Transformation wherever you get your podcasts. Thanks to our audio engineer Shannon Lynch. I’m Kimberly Quach, digital engagement editor at Issues in Science and Technology.

I hope you enjoyed our mini-series! Our next season will launch on September 24! Please write to us at podcast@issues.org with topics you’d like us to cover this season. Thanks for listening.

The Anthropocene: Gone But Not Forgotten

In “A Fond Farewell to the Anthropocene” (Issues, Spring 2024), Ritwick Ghosh advances an insightful—and often neglected—analysis. The long-running controversies around the geophysical science of the Anthropocene have not only demonstrated the political nature of the scientific enterprise itself. More importantly, as Ghosh attests, they have illustrated how the political questions that define society-nature relations tend to be covered up or suppressed by the very attempt to displace political conflict to the assumedly neutral terrain of science.

Thus, the nonrecognition of the Anthropocene as a geological epoch by the International Union of Geological Sciences is to be truly welcomed. It formally ends the inherently fraudulent attempt to base decisions about the fate and future of Earth and its inhabitants on a “scientific” notion rather than on a proper political basis. Indeed, I would argue that the rejection makes it possible to foreground the political itself as the central terrain on which to debate and act on policies to protect and perhaps even improve the planet. Politicizing such questions does not depend on the inauguration of a geophysical epoch. We already know that some forms of human action have profound terra-transforming impacts, with devastating consequences for all manner of socio-ecological constellations.

The socio-ecological sciences have systematically demonstrated how social practices radically transform ecological processes and produce often radically new socio-physical assemblages. The most cogent example of this is, of course, climate change. The social dimensions of geophysical transformations demonstrate beyond doubt the immense inequalities and social power relations that constitute Earth’s geophysical conditions.

The nonrecognition of the Anthropocene as a geological epoch by the International Union of Geological Sciences is to be truly welcomed.

The very notion of the Anthropocene decidedly obfuscated this uncomfortable truth. Most humans have no or very limited impact on Earth’s ecological dynamics. Rather, a powerful minority presently drives the planet’s future and shapes its decidedly uneven socio-ecological transformation. Humanity, in the sense that the Anthropocene (and many other cognate approaches) implies, does not exist. It has in fact never existed. As social sciences have systematically demonstrated, it is the power of some humans over others that produces the infernal socio-environmental dynamics that may threaten the futures of all.

Abandoning the Anthropocene as a scientific notion opens, therefore, the terrain for a proper politicization of the environment, and for the potential inauguration of new political imaginaries about what kind of future world can be constituted and how we can arrange human-nonhuman entanglements in mutually nurturing manners. And this is a challenge that no scientific definition can answer. It requires the courage of the intellect that abandons any firm ontological grounding in science, nature, or religion and embraces the assumption that only political equality and its politicization can provide a terraforming constellation that would be supportive for all humans and nonhumans alike.

Professor of Geography

University of Manchester, United Kingdom

Ritwick Ghosh closes the door on this newly named period of geological time—without fully understanding the scientific debate. Let me make it very clear: we are in the Anthropocene. Science shows that we are living in a time of unprecedented human transformation of the planet. How these manifold transformations of Earth’s environmental systems and life itself are unfolding is messy, complex, socially contingent, long-term, and heterogeneous. Most certainly, they cannot be reduced to a single thin line in the “mud” dividing Earth’s history into a time of significant human transformation and a time before. This is why geologists have rejected the simplistic approach of defining an Anthropocene Epoch beginning in 1952 in the sediments of Crawford Lake, in Canada.

Science shows that we are living in a time of unprecedented human transformation of the planet.

Geologically, the Anthropocene is better understood as an ongoing planetary geological event, extending through the late Quaternary: a broad general definition that captures the diversity, complexity, and spatial and temporal heterogeneity of human societal impacts. By ending the search for a narrow epoch definition in the Geologic Time Scale and building instead on the more inclusive common ground of the Anthropocene Event, attention can be turned toward more important and urgent issues than a start date.

The Anthropocene has opened up fertile ground for interdisciplinary advances on crucial planetary issues. Understanding the long-term processes of anthropogenic planetary transformation that have resulted in the environmental and climate crises of our times is critical to help guide the societal transformations required to reduce and reverse the damage done—while enhancing the lives of the planet’s 8 billion people. The Anthropocene is as much a commentary on societies, economic theory, and policies as it is a scientific concept. So I say in response to Ritwick Ghosh: welcome to the Anthropocene. The Anthropocene Epoch may be dead, but the Anthropocene Event and multiple other interpretations of the Anthropocene are alive and developing—and continually challenging us to do something about the polycrisis facing humanity.

Professor of Earth System Science, Department of Geography

University College London

A Public Path to Building a Star on Earth

Although it has been compared to “creating a star on earth,” fusion technology faces many earthbound challenges if it is to fulfill its promise of producing low-carbon energy. At times, public trust in fission has been eroded by accidents and perceived risks; by contrast, fusion has much working in its favor. Fusion doesn’t rely on a chain reaction, and it avoids the potential for accidental releases of highly radioactive fission products, which is what happened at Fukushima and Chernobyl. Furthermore, most fusion designs rely on fuel sources that are nearly unlimited, and the technology does not generate high-level waste. This addresses one of the impediments to wider deployment of fission: although nuclear waste storage and disposal may be more a political problem than a technical one, it has not been solved in the United States. The possibility of building a foundation for the social acceptance of fusion thus underlies its potential to solve multiple challenges for low-carbon energy: balancing a future grid, mitigating the need for a broad expansion of transmission infrastructure or storage, and decarbonizing sectors that are challenging for renewable energy sources. However, for fusion to achieve these goals will take careful work between the public and private sectors to further develop the technology while assuring its proper regulation, public acceptance, and affordability.

The funding of future fusion research now requires particular attention from decisionmakers. Federal funding has long underwritten fusion research—although the longstanding emphasis has been on developing a better understanding of plasma, the fourth state of matter. That almost singular focus on the science of plasma physics is changing as the technology reaches a new level of viability for energy production. Four times in the last 18 months, the National Ignition Facility at Lawrence Livermore National Laboratory has exceeded “scientific breakeven,” meaning that more energy came out of a fusion reaction than was input by the test device’s 192 lasers. Though that output is still two orders of magnitude removed from a commercial breakeven standard—where the grid energy needed to drive the device is exceeded by energy to the grid from the fusion reaction—these are remarkable scientific achievements. In February 2022, researchers at the Joint European Torus in the United Kingdom set a new record for producing controlled fusion energy when a five-second test produced 59 megajoules of energy, demonstrating further progress toward viable magnetic-confinement fusion. On the commercial path, there have been noteworthy technical achievements that may accelerate progress to a commercially viable fusion energy system, including demonstration of a high magnetic field in a large-bore model magnet constructed using high-temperature superconducting material.
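The gap between scientific and commercial breakeven can be put in rough numbers. The Python sketch below uses approximate public figures for an ignition-class National Ignition Facility shot, and the grid draw for the laser system is an assumed round number rather than an official value; the exact inputs matter less than the two-orders-of-magnitude gap they reveal.

# Approximate figures for an ignition-class NIF shot; the grid draw is an
# assumed round number for the whole laser system, not an official value.
laser_energy_on_target_mj = 2.05   # energy the laser beams delivered to the target
fusion_yield_mj = 3.15             # fusion energy released by the reaction
grid_energy_for_shot_mj = 300.0    # rough electricity drawn to fire the lasers

scientific_gain = fusion_yield_mj / laser_energy_on_target_mj
wall_plug_gain = fusion_yield_mj / grid_energy_for_shot_mj

print(f"scientific gain: {scientific_gain:.2f}")  # above 1: scientific breakeven
print(f"wall-plug gain:  {wall_plug_gain:.3f}")   # about 0.01: far from commercial

And commercial breakeven is stricter still, since the fusion heat must be converted to electricity at well under 100% efficiency before any of it reaches the grid.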

The funding of future fusion research now requires particular attention from decisionmakers.

These fusion research milestones have led to a growing narrative that the necessary science is proven, and fusion energy providing electricity to the grid is just around the corner. Some startups have already begun to sign contracts for fusion-based electricity production before the end of the decade. In commercial efforts, there are now more than 40 companies pursuing a wide range of concepts, including standard tokamak designs, mirrors, and stellarators as well as less well-explored concepts such as flow-stabilized Z-pinches. Total funding for these companies exceeds $6 billion. Recognizing the opportunity to advance some aspects of fusion research and development more quickly, the US Department of Energy (DOE) initiated a Milestone-Based Fusion Development Program and has contracted to provide funding to eight companies to help augment their research and development efforts with public support. This is an encouraging step. But growth in public budget levels is necessary to ensure that this type of program doesn’t pull money away from open research efforts at labs and universities at a time when even the most viable fusion concepts are still unproven.

Momentum for fusion is clearly building, and everyone should be hopeful that it will lead to discoveries for best-case, long-term solutions to global clean-energy needs. However, this progress is occasionally accompanied by premature calls to truncate or close existing US public research facilities or pull support for the international ITER effort, an experimental fusion system being constructed in France through a collaboration of 35 nations. Although shifting support and focus to private efforts may seem politically and financially attractive to some, it is far too early to reduce funding for the ongoing efforts at national laboratories and universities in the United States. Such a move would not only narrow advancement in crosscutting technologies, but would also impair development of the workforce needed by both the public and private sectors and hamper efforts to build public trust in the technology more broadly.

Although shifting support and focus to private efforts may seem politically and financially attractive to some, it is far too early to reduce funding for the ongoing efforts at national laboratories and universities in the United States.

At this early stage in the development of fusion technology, a coordinated plan for public and private funding is necessary. Successfully deploying new energy technologies at scale requires more than just technical development. Three critical and equally important areas must be carefully attended to: the underlying science and material challenges; a thorough and clear regulatory structure that manages safety concerns without placing undue burden on commercial development; and public acceptance and energy equity. The future of fusion depends on a coordinated effort across all stakeholders to build a robust scientific, regulatory, and social infrastructure around the technology. Already there have been worrying signs of possible disconnects, and of challenges in building a public-private architecture, that will require continuous attention.

Addressing the development challenge

Getting to the point where commercial developers can create fusion generation that can compete with other low-carbon energy sources still requires a significant amount of research in areas such as plasma physics and materials, supported by high-performance computing, artificial intelligence, and digital engineering. To become a reliable supplier of energy, fusion needs a set of supporting technologies, including materials that will withstand very harsh environments, reliable and lower-cost high-field magnets, and sustainable fuel supplies—all of which are now at early stages of readiness. Support for the international ITER project and enhancement of national fusion R&D programs at national labs and universities are necessary for these fundamental innovations. Without this publicly funded work, pathways to a commercially viable fusion system will be much narrower, focusing on the specific needs of as-yet-unproven commercial designs.

Although individual companies are attempting to solve some of the challenges that remain, they are often—and understandably—focused on approaches that are specific to their design cases. Private ventures cannot be expected to explore the broader landscape of options that could ultimately lead to more sustainable and affordable solutions. For instance, significant effort is being expended in the private sector to develop high-temperature superconducting magnet technologies. Many companies are focusing primarily on yttrium barium copper oxide superconducting tape, an option that demonstrated high magnetic field for Commonwealth Fusion Systems in the company’s initial model coil tests. But it is still uncertain if this will prove to be a robust, affordable, long-term option for all designs. This is just one example of an area where a public program that examines a broad range of options may yield benefit for the full complement of fusion designs.

Ultimately, private companies are looking to capitalize on existing scientific and technological knowledge, generally adding their own particular innovations in a specific area. They are not typically focused on developing the broad, underlying science and technology that will likely be needed for many different approaches, and they are certainly not focused on making such advances publicly available—that is the job of the public program. The greater the base of publicly available fundamental scientific and technological understanding, as provided by public R&D, the more opportunities there will be for innovation by private companies.

The future of fusion depends on a coordinated effort across all stakeholders to build a robust scientific, regulatory, and social infrastructure around the technology.

Many of fusion’s technical challenges have been detailed in reports from the US National Academies of Sciences, Engineering, and Medicine, such as the 2019 consensus report A Strategic Plan for US Burning Plasma Research and the 2021 consensus report Bringing Fusion to the US Grid. The US Government Accountability Office also recently released a report on fusion energy. Successfully meeting the crosscutting challenges these reports outline will augment most commercial development efforts, and will also ensure that no specific commercial design is favored while alternatives that could ultimately prove more attractive are still at low technology readiness levels.

To help drive these necessary innovations, the focus of national publicly funded programs is being reevaluated, particularly at the Department of Energy. The public, through DOE and its national labs, has an interest in ensuring that potentially transformational technologies are moved forward; but it also has an interest in ensuring that public funding does not tilt the playing field excessively and serve to limit or skew the competitive landscape. Certainly, DOE must perform due diligence and only fund those companies that can meet agreed milestones. But DOE should also help keep a range of options available, as it has in programs like the Advanced Reactor Demonstration Program for advanced fission, which still funds 10 different designs. Where possible, DOE should consider teaming with other parts of the federal government in areas that overlap (e.g., materials, management of tritium, workforce) and should stay connected with international partners where appropriate.

A research roadmap to guide public investment

To guide efforts for the next decade, DOE’s Fusion Energy Sciences office should create a US research and development roadmap to align the appropriate resources. Some of this work is already in progress: for example, the DOE Fusion Energy Sciences Advisory Committee’s ongoing review of priority research areas and infrastructure needs recently advocated for continuing support for ITER, emphasizing research supporting fuel cycle and breeding blanket test facilities, and developing a prototypic neutron source for materials testing. To structure these efforts, DOE has published a Fusion Energy Strategy that describes the path ahead at a very high level, emphasizing three pillars: closing science and technology gaps, preparing the path for commercialization, and building partnerships. In executing this strategy, DOE’s Fusion Energy Sciences has proffered a vision that emphasizes “building bridges.” While these efforts are encouraging, they are still not obviously underpinned by a current technical assessment of the state of readiness across fusion technology development. A more formal assessment of technical readiness for key technologies is still needed to assure leadership across multiple design options. A helpful model is the DOE-sponsored assessment of advanced fission reactor designs conducted by several national laboratories with support from the fission industry.

A more detailed technical assessment and roadmap would enable a better public-private alignment of technological readiness and research priorities. For example, some observers posit that the basic understanding of the fusion power source has reached Technology Readiness Level (TRL) 6, which implies readiness to move to a prototype at the systems level. But that rating typically requires that all component subsystems be at that same TRL to move forward. Whether this is correct is unclear, and there is not even full agreement on what TRLs mean in relation to fusion systems. A roadmap, buttressed by a consensus understanding of component and system technical readiness, could assist stakeholders in determining whether this overall TRL assessment is accurate and whether more robust funding for prototype development efforts is advisable. Similarly, the roadmap could guide the type of expert elicitation and synthesis that the fission community has used to develop priorities to advance technology while also building a comprehensive training and support infrastructure.
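To make that gating rule concrete, here is a minimal sketch in Python of the conservative convention described above, in which a system’s TRL is set by its least mature subsystem. The subsystem names and ratings are hypothetical illustrations, not assessments drawn from any report.

```python
# Minimal sketch: a system-level TRL gated by the least mature subsystem.
# Subsystem names and TRL ratings are hypothetical illustrations.

def system_trl(subsystem_trls: dict[str, int]) -> int:
    """Conservative system TRL: the minimum TRL across all subsystems."""
    return min(subsystem_trls.values())

fusion_pilot_plant = {
    "plasma_core": 6,         # the rating some observers assign to the power source
    "breeding_blanket": 3,    # hypothetical: early concept validation
    "tritium_fuel_cycle": 3,  # hypothetical
    "structural_materials": 4,
}

# Prints 3: the system is only as ready as its weakest component.
print(system_trl(fusion_pilot_plant))
```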

The roadmap can also direct public funds toward prudent investments. Figure 1 reflects the funding levels for research across fusion, fission, and renewables over the last 10 years. Although fusion funding has grown to about $800 million for DOE’s Fusion Energy Sciences, it is still well below the level for a technology that is already proven (fission) and is only on par with research in the four primary renewable technologies (wind, solar, water, and geothermal). Over the last four years, accounting for inflation, fusion funding has essentially remained flat. If the experience of advanced fission reactor design and development is indicative, fusion funding levels will almost certainly need to double or triple in the coming years to ensure success by mid-century.

A more detailed technical assessment and roadmap would enable a better public-private alignment of technological readiness and research priorities.

Even with bipartisan support, it will be difficult to increase public funding to the scale required. Given these limits, some have argued that the public fusion R&D program should be primarily focused on the needs of the growing commercial enterprise. But DOE’s mission is to ensure the technology succeeds without favoring any one company or approach. This has been successful with fission: as new technologies have been developed, DOE increased funding to ensure that critical academic and lab research continues. Similarly, as some commercial designs and technologies for fusion move ahead, the public fusion science program should continue—informed by and, where prudent, aligned with private sector efforts. Furthermore, public efforts should not be constrained by the near-term focus of private investments, which may prioritize financial returns over technological progress. Until fusion technologies can meet public goals, public research should remain broad, and stakeholders should resist attempts at narrowing its reach. Ideally, development should continue in a manner that places the United States both at the leading edge of fusion R&D and at the forefront of commercialization—and keeps it there.

One well-known example of a robust public-private venture that meets both public and private needs is NASA’s Commercial Orbital Transportation Services (COTS) program. COTS transitioned support for space resupply missions for the International Space Station from NASA to the commercial sector and ultimately helped fund development of a commercial space transportation ecosystem. Given the success of this model, advocating for a similar approach for fusion is tempting. However, the development of fusion technologies is not truly comparable with COTS in that, unlike the space program, there is no baseline, proven technology to be optimized and enhanced in a commercially viable and reliable form. In fact, the success of COTS might be seen as a model supporting a more robust public effort, because decades of public funding supported the development and testing of the fundamental technologies that have enabled today’s multiple commercial programs to proceed with rapid iteration and development in the marketplace.

Public funding to build public support

Beyond meeting fusion’s R&D challenges, it will be important for private developers to have regulatory certainty and public trust as they evaluate their designs. In advancing regulatory development, there are many notable examples of thoughtful mixtures of public and private funding that have supported both the development of technologies and their acceptance and integration with society. Rocket systems, the Global Positioning System, the internet, microchips, and LED lights all started with public funding that led to broad understanding of the technology, regulatory strategies, and workforce development. In electricity generation, for example, fission energy as a power source was led at its early stages by academia and the US Department of Defense—creating a regulatory regime as well as workers versed in the technology—before it was expanded to commercial scale by private industry. A similar period and level of public support are necessary to achieve an appropriate level of social readiness for fusion.

Until fusion technologies can meet public goals, public research should remain broad, and stakeholders should resist attempts at narrowing its reach.

On the regulatory front, work is already underway. The US Nuclear Regulatory Commission is developing a comprehensive regulatory and licensing process in keeping with the timelines laid out in the Nuclear Energy Innovation and Modernization Act, which requires a final rule by the end of 2027. In meeting this regulatory challenge, care must be taken to consider all voices and avoid taking pathways that could undercut public acceptance and energy equity. Public funding of research and appropriate outreach at this stage of development can make sure that safety, security, safeguards, and social license are well examined. When it comes to regulations related to waste and nonproliferation, issues that have troubled the development of fission energy, national laboratories provide the backbone of experience and analytical tools that are needed for careful, thorough analysis. As regulation is developed for near-term fusion designs, private developers will want to be aware that regulators may need to change course as they gain a greater understanding of design specifics from pilot plant development. If this is not carefully managed, industry may miss the mark in balancing speed and cost of development with public acceptance.

The question of building public trust should be given special priority. Even though the risks of fusion are different and of lower consequence than they are with fission, they are not zero. In particular, both public researchers and industry should be wary of understating risks because in the long run, that could erode public support. Any sense that industry is misrepresenting risks of, for example, proliferation or accidental release could affect the public’s perception of the entire class of technology.

Care should also be taken to engage the public early and often to ensure there are no unanticipated hurdles in deploying the technology when it is ready. For example, there has been significant public concern around releases of tritium, a radioactive isotope of hydrogen, including the global reaction to releases of tritiated water from Fukushima in 2023. Opposition to the releases is vocal and has been strongest within Japan’s fishing community, which is concerned that radiation could prevent it from selling seafood internationally. South Korea and China have also expressed concerns regarding environmental effects in their territories. Arguably, some of this concern is wrapped up in regional politics, but even though the International Atomic Energy Agency indicates that there will be “negligible radiological impact on the environment and people” from the releases, the government of Japan is still on the defensive.

Even though the risks of fusion are different and of lower consequence than they are with fission, they are not zero.

The Fukushima accident highlights how global the relationship between nuclear development and public trust has become: accidents anywhere in the world, on any design, may affect public acceptance of this class of technologies everywhere else. As the US Nuclear Regulatory Commission works through domestic criteria, the government should also take a leading role with the International Atomic Energy Agency to clarify global safeguards and security guidelines. In particular, the US government should lead by improving detection and accounting standards for tritium. Risks of tritiated water release may actually be greater at fusion facilities that use a deuterium/tritium fuel cycle due to the very large quantities of tritium involved; these risks should be addressed explicitly and not underplayed. The Fukushima tritium releases will total 2.2 grams over 20–30 years. A single gigawatt-scale fusion energy system will burn over 50 kilograms of tritium in a full power year.
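A quick back-of-the-envelope calculation, using only the two figures quoted above, shows why those detection and accounting standards matter:

```python
# Back-of-the-envelope comparison of the tritium quantities quoted above.
fukushima_release_g = 2.2        # total planned Fukushima release, spread over 20-30 years
fusion_burn_g_per_year = 50_000  # >50 kg burned per full-power year at gigawatt scale

ratio = fusion_burn_g_per_year / fukushima_release_g
print(f"{ratio:,.0f}")  # ~22,727: each full-power year burns >20,000x the entire release
```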

Finally—and importantly—a key part of ensuring public acceptance, regulatory steadiness, and a smooth path to commercialization lies in developing a well-trained, robust, and diverse workforce. Today’s public research is helping to build an initial core workforce, but significantly more effort is needed to build out the supporting infrastructure of skilled trades, operations staff, and engineering teams to construct these complex facilities. More research scientists and engineers are also a critical need. A recent National Academies study examining development of advanced fission reactors recommended a broader whole-of-government approach to building the necessary workforce to support expansion of fission, similar to the approaches taken in the multi-agency National Network for Manufacturing Innovation. An equally ambitious approach should be considered for fusion.

A careful path to deployment

Amid the excitement around the potential for fusion, it has been suggested that development of stringent rules that might affect the cost of commercialization should be balanced against the urgent need to transition to a low-carbon energy system. This type of balancing must be done with care. Policymakers can and must support low-carbon development while also taking reasonable steps to ensure those low-carbon solutions do not generate their own risks. They also must ensure these technologies are deployed equitably. A robust plan for development must incorporate public facilities designed specifically to examine new approaches and new materials with a parallel effort to explore risks.

The last 75 years of experience with nuclear fission demonstrates that a complex technology cannot succeed without both technology improvements and global public trust. If development is properly managed, deployment of fusion can provide the long-term, low-carbon solution the world desperately needs. But we must not lose sight of that long-term goal or be distracted by claims that success is assured.

Welcome to Our Beautiful Power Plant

As you walk toward a field on the edge of the city, you start to make out sculptural objects that appear to rise organically from the landscape. You reach a small park at the edge of the installation, and it feels like you have stepped into the frame of a painting. You’re surprised to learn that the beautiful objects in front of you are the public-facing components of a larger solar power plant that is providing carbon-free electricity to thousands of nearby homes. As the panels on the artwork shift to track the sun, you learn that the site, which includes the interpretive park you’re standing in, is the result of a codesign process between community members and artists that was facilitated by the plant’s developer.

A decade ago, when the cost of solar and wind power was much higher, the idea of expanding design aesthetics beyond utility for energy landscapes was considered impractical. Today, cost is becoming less of a barrier to renewable power while community opposition to facilities is becoming more common. Combined, these trends suggest that sticking with utilitarian design aesthetics is an impractical choice.

We believe energy developers may be overlooking a powerful and important tool: community codesign of power installations.

In a recent survey of renewable energy developers, researchers at Lawrence Berkeley National Laboratory found that nearly 40% of solar projects and 30% of wind energy projects have been canceled in the past five years, and nearly half are significantly delayed. Direct community opposition and local zoning ordinances are among the leading reasons for cancellation or delay, with developers reporting visual concerns to be the most common objections for both wind and solar. More than three-quarters of developers in the survey said that community engagement was desirable, but fewer than one in ten agreed that the public should recommend decisions—and none wanted the public to actually make decisions.

We believe energy developers may be overlooking a powerful and important tool: community codesign of power installations. The Land Art Generator Initiative (LAGI) has been working at the intersection of energy landscapes and community codesign since 2008. We consider the power plant a public artwork, simultaneously enhancing the environment, increasing livability, providing a venue for learning, and stimulating local economic development. Our projects invite the public to construct a different relationship with new energy technologies. We believe that sustainability in communities is not only about resources—it is also about cultural and social harmony.

We’ve worked on a range of urban and rural clean-energy projects that provide cobenefits to communities. We always start by working with a key group of local stakeholders on a community design charrette. Together we explore the design constraints and goals for the site with those who live and work in proximity to it. Sometimes the community helps to determine the best site. We then come up with a creative brief that can guide the design of an energy landscape that will enhance the livability of the neighborhood. In subsequent workshops, community members provide feedback and ideas along the way from concept to detailed construction drawings.

Our Solar Mural artwork program empowers local artists to use energy as a medium for creative expression. We’ve also developed science, technology, engineering, art, and mathematics (STEAM) programming that is used by schools and community groups. Through design competitions, we have brought together a global network of thousands of creatives who understand the importance of design to the success of climate mitigation pathways.

Meaningful community engagement and codesign take time and cost money. But considering the growing costs of canceled projects, we believe that it is money well spent. Low-carbon energy installations can produce more than just kilowatt-hours if they reflect the shared aspirations of the communities where they are sited. Public art funding could be expanded to support renewable energy projects with pocket parks, picnic shelters, amphitheaters, river walks, educational experiences, and, of course, public art installations.

The idea of combining the powerful community symbolism of art with energy infrastructure is not new. Art deco sculptures by Oskar J. W. Hansen, for example, form a stunning crown atop the Hoover Dam—which is otherwise a utilitarian hydroelectric power plant. And in the early days of electrification, most power plants were located by necessity in the heart of cities, because efficient long-distance transmission wasn’t yet possible. These old power plants, such as Battersea Power Station in London, were designed as beautiful contributions to art and architecture and became heritage buildings and historic monuments.

As new forms of energy generation that do not pollute the air make their way back into our cities, we can design new monuments to this important time in human culture. By thinking creatively, we can bring solar power to sites where it may have otherwise seemed impossible to get approval. And by creating beautiful places that also serve our energy needs, we can form a lasting and positive connection between popular culture and clean power.

Global Diplomacy for the Arctic

The Arctic was long known as a region where the West and Russia were able to find meaningful ways to collaborate, motivated by shared interests in environmental protection and sustainable development. The phenomenon even had a name: Arctic exceptionalism. Most of that was lost when Russia invaded Ukraine in February 2022.

There has been much hand-wringing about the extent to which the severing of political and economic ties should apply to scientific collaboration. In the Arctic, science has often persevered even when other forms of engagement were cut off. The International Polar Year 1957–58, the 1973 Agreement on the Conservation of Polar Bears, and the 1991 Arctic Environmental Protection Strategy are prominent examples.

A culture of Arctic scientific collaboration has also defined the work of the Arctic Council, the region’s main intergovernmental forum. Incremental efforts to resume collaboration have focused on allowing the council’s Working Groups—populated largely by experts and researchers, and focused on scientific projects—to resume their work, albeit virtually rather than in person. There have been many discussions on whether the Arctic Council should continue without Russia; the conventional wisdom is that climate change and other issues are so important that we can’t afford to cut ties completely.

In the Arctic, science has often persevered even when other forms of engagement were cut off.

Academics and scientists were often encouraged to collaborate with Russians on Arctic research between 1991 and 2021. Now it is becoming taboo. As Nataliya Shok and Katherine Ginsbach point out in “Channels for Arctic Diplomacy” (Issues, Spring 2024), “The invasion prompted many Western countries to impose a range of scientific sanctions on Russia … The number of research collaborations between Russian scientists and those from the United States and European countries has fallen.” In fact, a colleague of mine was terminated from the University of Lapland for attending a conference in Russia where he spoke about climate change cooperation.

Shok and Ginsbach do an admirable job of framing this context. But they go beyond that, reminding us of the importance of scientific collaboration on human health in the Arctic region. Some of us may recall, and the authors recount, how a heat wave in 2016 brought anthrax bacteria, long buried in permafrost, back to the surface in Russia’s Arctic Yamal Peninsula. The outbreak killed thousands of reindeer and affected nearly a hundred local residents. It had us asking, what else will a warmer Arctic bring back into play? A study conducted by a team of German, French, and Russian scholars before the invasion of Ukraine sought to help answer this, identifying 13 new viruses revived from ancient permafrost.

This type of research is now under threat. There’s a case to be made that regional collaboration on infectious disease is even more urgent than that on melting permafrost or other consequences of climate change. It’s not a competition; but in general, better understanding Arctic sea ice melt or Arctic greening won’t prevent climate change from happening. Understanding the emergence of new Arctic infectious diseases, by contrast, can be used to prevent outbreaks. Shok and Ginsbach recommend, at a minimum, that we establish monitoring stations in the high-latitude Arctic to swiftly identify pathogens in hot spots of microbial diversity, such as mass bird-nesting sites.

There is no easy answer to the question of whether or how to continue scientific collaboration with Russia in the wake of the illegal invasion of Ukraine. But it is undoubtedly a subject that needs contemplation and debate. Shok and Ginsbach provide a good start at that.

Managing Editor, Arctic Yearbook

Director of Energy, Natural Resources and Environment, Macdonald-Laurier Institute, Ottawa, Ontario

The COVID-19 pandemic powerfully demonstrated the importance of international scientific cooperation in addressing a serious threat to human health and the existence of modern society as we know it. Diplomacy in the field of science witnessed a surge in the race to find a cure for the SARS-CoV-2 virus. The thawing Arctic region is at risk of giving rise to a new virus pandemic, and scientific collaboration among democratic and authoritarian regimes in this vast geographical area should always be made possible.

International science collaboration and science diplomacy, however altruistic, risk being run over by the global big-power rivalry between players such as Russia, China, and the United States, with the European Union and the BRIC countries acting as players in between. In its recent report From Reluctance to Greater Alignment, the German Marshall Fund argues that Russia’s scientific interests in the Arctic, beyond security considerations, are mostly economic, with a focus on hydrocarbon extraction and development of the Northern Sea Route, trumping any environmental or health considerations.

The thawing Arctic region is at risk of giving rise to a new virus pandemic, and scientific collaboration among democratic and authoritarian regimes in this vast geographical area should always be made possible.

Russian and Chinese scientific cooperation in the Arctic has increased significantly since their first joint Arctic expedition in 2016. China was Russia’s top partner for research papers in 2023, and scientists from both countries have downplayed the military implications of their scientific collaborations in the Arctic, emphasizing their focus on economic development. However, many aspects of this collaboration, such as the Arctic Blue Economy Research Center, include military or dual-use applications in space and deep-sea exploration, and have proven links to the Chinese defense sector.

Given the scientific isolation of Russia after its invasion of Ukraine in 2022, scientific collaboration on health and environmental concerns under the auspices of the Arctic Council and other international organizations seems to be the last benign avenue for Russian scientific collaboration with other Arctic powers and the West at large. Russia’s pairing up with China on seabed and mineral exploration in the Arctic does not, however, strengthen confidence that Russian efforts on health and environmental issues are free of military and security aims.

The last frontier of scientific cooperation for the benefit of health and environmental stability in the Arctic region stands to be overrun by global power politics, with science diplomacy being weaponized as a security policy tool, among others. This is a sad reality acknowledged by the seven countries—Canada, Denmark (Greenland), Finland, Iceland, Norway, Sweden, and the United States—that, along with Russia, exercise sovereignty over lands within the Arctic Circle. (The seven paused cooperation with Russia in the Arctic Council after its invasion of Ukraine.) It is therefore worth considering whether Russia’s Arctic research interests in the fields of health and environment would benefit from a decoupling from China and any other obvious military or dual-use application.

Senior Fellow, Transatlantic Defense and Security

Center for European Policy Analysis, Washington DC

Cool Ideas for a Long, Hot Summer: Refugee Communication Networks

In our miniseries Cool Ideas for a Long, Hot Summer, we’re working with Arizona State University’s Global Futures Lab to highlight bold ideas about how to mitigate and adapt to climate change. 

On this episode, host Kimberly Quach is joined by ASU assistant professor Faheem Hussain to learn about how Rohingya refugees are using social technologies and what they can teach everyone about communicating in disasters. Hussain is a researcher whose trajectory was changed when he visited a Rohingya refugee camp in Bangladesh. There, he learned how the community uses a combination of online and offline technologies to create networks to share information in response to disasters.

Spotify · Apple Podcasts · Stitcher · Google Podcasts · Overcast

Resources

Transcript

Kimberly Quach: Welcome to The Ongoing Transformation, a podcast from Issues in Science and Technology. Issues is a quarterly journal published by the National Academy of Sciences and by Arizona State University. I’m Kimberly Quach, Digital Engagement Editor at Issues. On Cool Ideas for a Long, Hot Summer, we’re working with ASU’s Global Futures Lab to highlight cool ideas about how to respond to climate change. On this episode, I’m joined by assistant professor Faheem Hussain to learn about how Rohingya refugees are using social technologies. Faheem shares how his research trajectory was changed by a visit to a Rohingya refugee camp and explains how the Rohingya are creating communication networks to respond to disasters. Faheem, welcome!

Faheem Hussain: Thank you. Thank you for having me.

Quach: So before we get into the rest of our interview, I wanted to ask a little bit about you. I know that your background is in telecommunications, engineering, public policy. So how did you get involved with the refugee community?

Hussain: The refugee work has been going on since 2017. I was not planning to do it, but when I saw the Rohingya, that exodus of them escaping the atrocities, crossing the border from Myanmar to Bangladesh, I got a ticket, went there, started seeing the disaster firsthand. And that entirely changed my research trajectory to look into the challenges, aspirations, and innovations of people on the run when it comes to access to education and access to communication using digital technologies.

I got a ticket, went there, started seeing the disaster firsthand. And that entirely changed my research trajectory.

Quach: So could you tell me more about the Rohingya? Who are they and why are they being displaced?

Hussain: The way the state of… the Federation of Burma happened, and then the military junta later renamed it Myanmar, there are many ethnicities. And the Rohingya are one of those ethnicities. But unfortunately, somewhere in the early or mid-eighties, there was a new constitution and the Rohingya lost their right to citizenship. And afterwards, gradually, they lost their access to resources, access to land, access to ownership of many other things. And that’s how they were super marginalized within their own country. And they have been gradually leaving or escaping that country since the late eighties and early nineties. And the majority of the Rohingya, unfortunately, are now living in Bangladesh as refugees or unofficial refugees.

Quach: How are they impacted by climate change and our long hot summers?

Hussain: Bangladesh is already one of the worst-affected countries or regions when it comes to climate change. And the Rohingya are right now based in the Cox’s Bazar area, which has one of the longest uninterrupted sea beaches in the world. As a young Bangladeshi I used to go to the sea beaches of Cox’s Bazar, and now when I go there as a researcher, I see how the sea level has risen and how it has affected the communities there. There are so many extreme events there.

And the Rohingya primarily lived in a region where there were national forests. And now there’s no green whatsoever. They had to cut down all the trees. And that totally affected the natural resources and how you acquire and access them. And it also threw the whole ecosystem of food and agriculture there off balance, because the host community’s access to natural resources is also at peril because of their presence there. So overall, it’s a huge mess.

Quach: So it sounds like a difficult situation not just for the Rohingya but for the host population. And they’re sort of coming into conflict, not just over climate, but culturally and other things. And one thing that I think you mentioned is that we wouldn’t think of this population as innovative in technology, but they actually are using it in really cool ways. Could you tell me about that?

They have their own parallel, alternative internet, a network to communicate with each other.

Hussain: Yes. So my main research, the thing that I’ve been documenting since 2017, when the latest exodus happened, is to look into how they’re using digital technologies. Now by default, we think that our languages have alphabets and we have access to the internet, the basic access, and we can communicate and connect with each other. But what really fascinated me as a researcher is that the Rohingya language does not have any recognized written form. They do not have an alphabet. So that kind of prohibits them from accessing digital technologies using their own language in a text format. That’s very important when you are talking about designing things. And to make things worse, technically the Rohingya are not allowed to have legal access to mobile phones in their host community in Bangladesh. They need IDs and whatnot, which they do not have. And the internet access within the camp areas, where we are talking about a population of 1.3 million, a big city per se, is very, very poor: the service is poor, the connectivity is poor, the availability is poor. And they pay a high price for whatever service they can get. So overall, that makes things very challenging.

But what we have seen is their resilience in terms of coming up with offline communication. They have their own parallel, alternative internet, a network to communicate with each other. They have their mobile shops where they repair things and exchange content, and it’s also a hub for exchanging information. And during COVID time as well, it was a center for people accessing health information and advisories in their own language. So that was fascinating. These are the things I’ve been looking into, to see how people communicate in environments where things are challenging.

Quach: What sorts of innovations are they pioneering? It’s pretty difficult to use any sort of technology if you don’t have access to written language.

Hussain: It’s a very audio-based conversation. So what they do is use WhatsApp and several other—lesser known in our part of the world—voice communication apps, where you need very low bandwidth to communicate through video and audio. And they do offline messaging. So whenever someone goes somewhere, they download those things and then come back and listen to those messages. And some of the messages are also transferred through USB drives and then from phone to phone using an offline network. They do it by creating an ad hoc network over Bluetooth and whatnot. Or they transfer it through the laptops of the mobile repair shops. So that way they communicate.

Personal touch is important in terms of validation because there is a lot of misinformation/disinformation there as well. So much of the information is first validated by the local elders.

But more importantly, what we have seen—and I found it very important—is that personal touch is important in terms of validation, because there is a lot of misinformation/disinformation there as well. So much of the information is first validated by the local elders in a certain subcamp, and only then do they disseminate it. Or if there is some false information, there is a manual mechanism where they verify it through in-person conversations and whatnot. So what we have seen there is that human connectivity, per se, helps fill some of the gaps left by the lack of infrastructure when it comes to digital services.

Quach: So that’s really interesting. I think there are a lot of lessons we could take from this. We kind of imagine this as being an issue for people far away, but in this country we’ve seen many situations where natural disasters have brought really low-connectivity periods for us, like during Hurricane Katrina, where a lot of these lessons could be used, or our current situation with rampant misinformation and disinformation. For our listeners, what do you think would be really useful from our Western standpoint that we could learn from these communities and implement in the U.S.?

Hussain: What I have seen, it’s always the human power and human lives and how they do things at the personal level. The technology comes secondary. I think that’s very important to look into. I have worked with my local partner, Young Power in Social Action, which is officially a partner of ASU when it comes to research. And through them I worked with an amazing program called Voice of Palong. This VOP, or Voice of Palong, is a collection of volunteers from both the refugee community and the host community. And these are very young kids who have turned into activists. They’re very advanced users when it comes to mobile phones and the internet. And they work together to make sure that the information they communicate to the community is valid. And at the same time, while they do it, they have built a strong reputation for themselves. So the people trust them.

It’s always the human power and human lives and how they do things at the personal level. The technology comes secondary.

And then what they do is, if something bad happens or something urgent happens, they’re the ones who first get the messages or information from the affected people, and then they pass it on. So the validation becomes easier, because their recognition as valid sources is evident. So it goes both ways. Not only are they effectively disseminating information, they’re also effectively collecting information from the people. So in that way, the services designed by the organizations working with the refugees and host communities have a better sense of what the people really need. So it’s not a top-down approach, it’s a bottom-up approach. And technology is just a secondary part of it. To answer your question, this is something that we can definitely learn and try to focus on when disaster happens here, because sometimes I think we are overly dependent on our technological infrastructure and don’t pay a lot of attention to knowing the faces behind it.

Quach: Yeah, that’s definitely true. I think that’s a great takeaway that technology and infrastructure isn’t just our things with circuits, but it’s also community building, it’s trust building. All of those things are important innovations that we should keep in mind.

Hussain: There’s tension over access to resources, and this is one of the things that we do not see in the conversation when we talk about displacement, when we talk about migration: not only are the people who are displaced affected, the host community’s access to resources is also strained. Especially being based in Arizona, we definitely see the tension at the borders. What we have seen is the success of VOP in resolving those conflicts by having the younger generation work together, and then look beyond just specific projects.

So that really helped VOP establish, or work toward establishing, a network of good, valid information for early warning systems, for when flash floods happen or when there are issues with cyclone warnings, because the region is very much vulnerable to those. And we have seen some huge successes there without much investment. So this is one of the future directions of my research: to elaborate on it more, to dig deeper and see whether, if we infuse better technology with more effective human connection, that can really help when it comes to adaptation to climate change there and do something better for the environment.

Quach: Well, thank you. This was fantastic to learn about innovation in such a…I think you’ve called them one of the most oppressed populations, but they are innovating in such interesting ways and show incredible resilience in the face of not just of climate challenges, but a whole host of challenges.

Hussain: Yeah. Yeah. And I hope that as people are struggling all over the world, and it’s not us versus them, the displacement happens all over. We inherit the world in a similar manner. So climate change does not see boundaries or geographies or passports. So I think knowledge should flow freely in both directions, and we can definitely learn from the affected people.

Climate change does not see boundaries or geographies or passports. So I think knowledge should flow freely in both directions, and we can definitely learn from the affected people.

Quach: Check our show notes to learn more about Faheem Hussain’s work, the Rohingya refugee crisis, and how to create more effective human connections. And please subscribe to The Ongoing Transformation wherever you get your podcasts. Thanks to our audio engineer Shannon Lynch. I’m Kimberly Quach, digital engagement editor at Issues in Science and Technology. Join us next week for our final episode of our miniseries. We’ll talk to Melissa K. Nelson about problems in the food system and how utilizing indigenous practices can help.

An Ambidextrous Approach to Nuclear Energy Innovation

After many years of stagnation, a new wave of nuclear energy innovation is underway. Announcements of scientific and technical advances from laboratories and fusion and fission companies have become a regular occurrence. Each announcement seems flashier and more hopeful for the future of the industry than the last. With all this new activity, parsing the differences between the technologies in play—and evaluating their potential to transform the energy landscape—is as necessary as it is complex.

In the absence of a clear frontrunner among the various technologies, we argue that the appropriate response is a portfolio of nuclear energy innovation projects, informed by periodic assessments of technology readiness, potential benefits, remaining uncertainties, and economic competitiveness. Comparing new fusion and fission technologies within this kind of framework can inform policymakers, researchers, and investors about trade-offs between technologies. Adopting such a framework could also help decisionmakers understand how stakeholders might align in the pursuit of new forms of nuclear energy that can be deployed at scale.

The spectrum of nuclear energy innovation

Across the range of nuclear energy technologies, there are trade-offs among options for economic, technical, and political feasibility. Currently, all forms of nuclear power involve radioactive materials or have remaining physics uncertainties or face economic challenges—but in varying degrees. Today’s flurry of nuclear energy innovation is aimed at changing that, promising to make nuclear power production cleaner and cheaper. However, each of these aspirations comes at the price of different levels of uncertainty. This tension spans both fusion and fission research efforts.

Parsing the differences between the technologies in play—and evaluating their potential to transform the energy landscape—is as necessary as it is complex.

Depending on which dimension they value most, entrepreneurs, scientists, and investors arrive at different assessments of which approach yields an optimal ratio of potential benefits and remaining uncertainties. In our experience, advocates for each technical approach typically claim that theirs alone occupies a sweet spot, arguing that other projects are either too conservative and lack ambition or are speculative and unrealistic.

It’s also important to consider the ways that nuclear energy innovation agendas vary depending on social, cultural, and historical factors. France, for example, has a markedly different perception of, and tolerance for, nuclear fission than does Germany. Two-thirds of France’s electricity generation comes from nuclear reactors, and public support for fission technology is strong.

Germany, on the other hand, recently closed its last nuclear power plant and is unlikely to reverse course—largely due to a complex political history and public concerns about the radioactive materials needed and produced in the reactors. Consequently, Germany’s nuclear energy innovation agenda focuses on technologies with considerably lower associated radioactivity than traditional fission reactors.

For a comparative analysis of the variety of nuclear innovation efforts underway and to allow for better consideration of the trade-offs involved, we propose situating the various approaches in a design space with three axes: first, the amount of radioactive inventory and waste involved; second, the remaining scientific and technology uncertainty; and third, the expected economic competitiveness. Each fission and fusion approach can be located in this design space. A fine-grained analysis would treat radioactive inventory, radioactive waste, and release potential separately. However, in practice, the three are largely correlated.

The figure below is an example from our perspective of what this prospective analysis could look like. As we will explain in greater detail, more extensive analyses of this kind could be undertaken and may arrive at quantitative values for variables across the spectrum. Results of such analyses could then be debated in public forums and refined.

The figure makes it possible to see that the approaches with the largest amount of radioactive inventory and waste, such as traditional nuclear fission plants, are the most technologically mature. In turn, alternative approaches, such as fusion, that promise lower radioactive inventories and less waste have greater associated physics uncertainties. This schematic defines what we call the spectrum of nuclear energy innovation.
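One way to read this design space is as a simple data structure: each approach becomes a point with three coordinates that can be compared and sorted. Here is a minimal sketch in Python; the axis values are entirely illustrative placements, not measured assessments.

```python
# Minimal sketch of the proposed three-axis design space.
# Axis values run from 0 (low) to 1 (high) and are illustrative only.
from dataclasses import dataclass

@dataclass
class NuclearApproach:
    name: str
    radioactive_inventory: float     # inventory and waste involved
    remaining_uncertainty: float     # scientific and technological uncertainty
    economic_competitiveness: float  # expected competitiveness

spectrum = [
    NuclearApproach("traditional fission plant", 0.9, 0.1, 0.4),
    NuclearApproach("deuterium-tritium tokamak", 0.4, 0.6, 0.5),
    NuclearApproach("nonconventional fusion concept", 0.1, 0.9, 0.7),
]

# The correlation noted above: lower inventory tends to pair with higher uncertainty.
for a in sorted(spectrum, key=lambda a: a.radioactive_inventory):
    print(f"{a.name}: inventory={a.radioactive_inventory}, "
          f"uncertainty={a.remaining_uncertainty}")
```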

The need for a dynamic portfolio approach

As yet, there is no consensus about which nuclear technologies will be dominant in the future. Each of the approaches we’ve listed has attracted at least $100 million in funding—and in the case of traditional fusion and fission research, tens of billions and hundreds of billions of dollars, respectively. The sheer diversity of approaches—as well as the tenacity with which they continue to be pursued—indicates that this will be a long game. Until greater consensus emerges, a dynamic portfolio strategy that adjusts research and investment priorities periodically will be necessary.

The purpose of a dynamic portfolio is to mitigate and balance out the risks of individual approaches, while benefiting from their potential upsides. But pursuing a portfolio raises the question of weighting: How should research and development resources be allocated across the different technologies?

A fine-grained analysis would treat radioactive inventory, radioactive waste, and release potential separately. However, in practice, the three are largely correlated.

At the local level, the particular politics of nuclear energy will continue to influence the range of preferred innovation activities, leading to different weight preferences in different locales. We anticipate that such distinct concerns will influence regional and national efforts as local players continue to focus their research agendas on particular subsets of the spectrum. These local efforts, forging ahead with specific projects—such as Commonwealth Fusion Systems’ tokamak demonstrator in the United States or the accelerator-based fission reactor MYRRHA in Belgium—can do a lot to reduce uncertainty and develop better understanding of what works and what does not. In this way, local efforts can avoid paralysis and place bets on competing technologies until there is convergence on a winning approach. And as policymakers, science managers, and other stakeholders decide on how to weigh investments in research to align with national and institutional goals and risk appetites, the whole spectrum of options can be explored around the globe.

However, as these local research agendas with their respective focal points take shape, policymakers must continue to scan the spectrum for emerging alternatives that exhibit the potential to leapfrog existing technologies—and be prepared to jump if warranted. In the management literature, this principle is known as ambidexterity, meaning managers need to simultaneously focus on their existing portfolio while keeping an eye on new developments with disruptive potential. This principle also applies to policymaking: major national fission and fusion programs need to pursue research deeply while scanning the horizon widely. Management scholars recommend that these two complementary focal fields be adopted by different entities to manage the inherent tension between exploiting existing capabilities and exploring new opportunities.

Even if a national portfolio of projects emphasizes one part of the spectrum, other parts should still be watched and evaluated with an open mind, with periodic adjustments to the way resources are allocated. For instance, if one of the proposed mechanisms for scaling the efficiency of proton-boron fusion can be validated, this reduction of scientific uncertainty would boost the attractiveness of the nonconventional fusion part of the portfolio. Alternatively, if traditional fusion experiments discover that high-energy neutron damage is too costly, decisionmakers may wish to invest more in other approaches.
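As a sketch of what such periodic readjustment could look like, imagine each approach carrying an attractiveness score that is raised or lowered as uncertainties are resolved, with portfolio weights renormalized after every update. The approach names, scores, and update below are all hypothetical.

```python
# Minimal sketch of dynamic portfolio reweighting after a validation event.
# Approach names, scores, and the update itself are hypothetical.

def reweight(scores: dict[str, float]) -> dict[str, float]:
    """Normalize raw attractiveness scores into portfolio weights."""
    total = sum(scores.values())
    return {name: score / total for name, score in scores.items()}

scores = {"traditional fusion": 5.0, "proton-boron fusion": 1.0, "advanced fission": 4.0}
print(reweight(scores))  # initial weights: 0.5, 0.1, 0.4

# Suppose a proposed efficiency-scaling mechanism for proton-boron fusion is
# validated: its scientific uncertainty falls, so its score and weight rise.
scores["proton-boron fusion"] = 3.0
print(reweight(scores))  # updated weights: ~0.42, 0.25, ~0.33
```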

Policymakers must continue to scan the spectrum for emerging alternatives that exhibit the potential to leapfrog existing technologies—and be prepared to jump if warranted.

However, organizations and individuals alike struggle with ambidexterity. Ambidextrous management requires balancing a natural desire to go “all-in” on a particular approach and visualize its success with the realization that future technological paradigms and their implications are hard to predict. This balancing is a particular concern for technological approaches on the right side of the spectrum: public research funding in this area has historically tended to be conservative, making it more likely that a program might fail to recognize a breakthrough technology.

In countries that lack expertise in ambidextrous management, national programs risk getting stuck in the all-in mode, without institutional diversity or dynamic reevaluation and readjustment of priorities. For instance, it’s common for European countries to limit their engagement in nuclear fusion research to ITER-related projects and the centralized EUROfusion Consortium. In contrast, US-based institutions dedicated to nuclear energy innovation have achieved some degree of ambidexterity. The Office of Science at the Department of Energy has led funding for research and development in traditional fusion and fission technologies, whereas the innovation agencies ARPA-E and DARPA have shown a greater appetite for funding nonconventional and speculative fusion approaches. These dual strategies have fostered the emergence of a rich ecosystem for nuclear energy innovation, spanning diverse startups, established corporations, universities, and national labs.

Still, there is much room for progress in the US system. We propose formalizing the process of monitoring the spectrum by creating a dedicated organizational body or committee of scientists from a range of academic disciplines. Acting as impartial referees, they could evaluate developments across the emerging technology ecosystem without being champions of any one approach. Such an organization could coordinate information exchange among stakeholders, leading to better appreciation of how progress changes the relative weights of trade-offs across the spectrum. Ideally, if similar national coordinating bodies were established in other countries, they could all synchronize their findings, gaining insights on local efforts as well as the overall global portfolio. Policymakers and the public should receive regular updates, so they can weigh in on how they value particular advances across the spectrum, enabling further research and investment in promising areas.

Adopting a portfolio-based approach that is guided by impartial coordination bodies is one way to constructively engage with the complex trade-offs, risks, and upsides of emerging approaches to nuclear energy. We advocate greater attention to these aspects of science and innovation policy as they can lead to more effective allocation of resources and faster convergence toward optimal nuclear energy technologies for the future.

Go to Hell, Robots!

"R.U.R. and the Vision of Artificial Life" by Karel Čapek, edited by Jitka Čejková and translated by Štěpán S. Šimek

In 1920, what might be the most successful anthology in German-language publishing history appeared. Menschheitsdämmerung, or “Twilight of Humanity,” collects 275 poems, many of which rage against the Machine Age. Viewing Europe as fallen, vulnerable, and in need of renewal, the poems echo other contemporaneous work, including Oswald Spengler’s Decline of the West and Otto Dix’s paintings, such as Prague Street, in which mutilated veterans evolve into their prosthetics.

In the section titled “Plea and Outrage,” Kurt Heynicke’s poem “Für Martinet” is representative of the anger aimed at the apparatuses that carry out humanity’s work and killing: “Down with technology, down with the machine! / We don’t want to know any more of your cursed infernal inventions / Your electricity, your gasses, acids, powders, wheels, and batteries!”

That same year, playwright and journalist Karel Čapek’s play R.U.R. was published in recently created Czechoslovakia. The letters stand for “Rossum’s Universal Robots,” a factory that manufactures robots—or, as the company’s general manager, Harry Domin, calls them, “artificial humans”—for worldwide distribution. The plot of the three-act play takes place over the course of decades and describes how humans first dominate and are then overthrown by the robots they created to serve them.

As the play opens, however, the factory’s robots represent triumph, not terror. Domin informs Helena Glory, a visitor to the company’s island factory and his future wife, that “a creation designed by an engineer will always be technically more refined than a creation of nature.” Even better, robots cannot feel pleasure or pain. When no longer useful, they are fed into a crusher.

Foreshadowing Elon Musk’s introduction of Tesla’s humanoid robot, Optimus, in 2022 (“It’ll be a fundamental transformation for civilization as we know it,” Musk crowed), Domin kvells that robots allow humans to be masters, “boundless, free, and supreme.” Robots liberate the human from being “a machine and a means of production,” he says. In other words, humans, in danger of becoming robotic drudges, create robots to reclaim their humanity. A similar irony was at play in 2014 when the head of Google X, Astro Teller, promoted his lab’s inventions, including Google Glass and self-driving cars, as making us “feel more human instead of less human.”

In Act II, the robots become conscious of their servile condition and rebel against their human overlords. Robots around the world kill or enslave their former masters and create their own government. The robopsychologist Dr. Hallemeier, sensing the end, pronounces, “It was a great thing to be human. It was something immeasurable.”

Humans, in danger of becoming robotic drudges, create robots to reclaim their humanity.

By Act III, the last human on Earth is a mechanical engineer named Alquist. Robots, though superficially male and female, cannot reproduce and beg Alquist for the secret of their creation. However, the recipe has been destroyed. Resonating with Heynicke’s jeremiad, Alquist sneers, “Machines, always the machines, on and on! Go to hell, robots! …What, is a human of use to you now?”

In this stimulating volume from MIT Press, Štěpán S. Šimek gives us a bracing new translation of R.U.R. Unlike Paul Selver’s 1921 translation, Šimek’s version is unexpurgated. He conveys what I can only assume are the original’s pathos and humor. Following the play are 20 diverse essays that refract Čapek’s drama through the prism of recent research in the field of artificial life, or ALife.

Šimek preserves the word “robot,” which Čapek and his brother Josef coined, derived from the Czech robota, or “forced labor.” When we think of robots, C-3PO of Star Wars may come to mind. However, Čapek protests in an essay included in this volume that his robots are not “made of sheet metal and cogwheels.” They are synthetically biological. He envisioned his robots not with the “hubris of a mechanical engineer, but with the metaphysical humility of a spiritualist.”

Čapek’s robots are composed of living “batter,” with nerves and capillaries unspooled in spinning mills. This sets them apart from other designs reaching back to Hephaestus’s wheeled tripods in the Iliad, Frank Zappa’s “Sy Borg” (“a magical pig with marital aids stuck all over it”), or Douglas Adams’s Marvin the Paranoid Android. R.U.R.’s robots are more like the “skin jobs” of Ridley Scott’s Blade Runner or smooth-skinned Ava leaving the compound at the end of Alex Garland’s Ex Machina. They are so convincingly humanoid that Helena, when she first arrives on the island, mistakes R.U.R.’s scientists for robots.

He envisioned his robots not with the “hubris of a mechanical engineer, but with the metaphysical humility of a spiritualist.”

Chemical substrates behaving as living matter are also the stuff of ALife. As editor Jitka Čejková notes in her enthusiastic introduction, ALife got its start at a 1987 workshop led by Christopher G. Langton, a theoretical biologist. Today, the field explores biological processes, such as evolution and reproduction, through artificial means, including the creation of artificial cells, tissues, and organs. The geneticist Craig Venter’s construction of a bacterial genome from scratch in 2010 is one example. Such work also raises basic questions, Čejková writes, “such as ‘What is life?’, ‘How did life originate?’, ‘What is consciousness?’ … questions [that] were already heard in Čapek’s century-old science fiction play.”

The essays in this volume engage in wide-ranging discussions of ALife. Topics include panpsychism and the soul, evolution and abiogenesis, robot suffering, the history of chemical gardens, and—my favorite—the NukaBot, a food container that monitors bacteria and “speaks” for them, the goal being “to nurture human affection toward the bacteria.”

In his essay, ALife researcher Nathaniel Virgo wonders if scientists might manufacture new forms of life with “the right chemicals” and the “right environmental conditions.” These “enzyme machines” might help in drug discovery or waste disposal. Evolutionary biologist and writer Julie Nováková ponders the consequences of a future made jobless by automation. Robotics researcher Hemma Philamore considers the prospect of rampant humanoids and falling human birthrates. She wonders if we might “automate away the human need to reproduce.” Why fight for the relationship you want when you can seduce “Samantha,” wired for compliance, or “Frigid Farrah,” who (which?) simulates resistance?

What effect does expanding or contracting the boundaries of humanness have on artificial life—and on humans?

The larger questions pulsing behind these essays and R.U.R. itself have to do with just that: simulation. When does simulated life, simulated humanness, become “real”? What is the moral status, if any, of talking bacteria? If robots leave the uncanny valley and become indistinguishable from humans, does that make them a new species with robot rights? What would humans owe them? What if they are “human enough” to suffer? What effect does expanding or contracting the boundaries of humanness have on artificial life—and on humans? One celebrity robot, Sophia, made by Hong Kong company Hanson Robotics, is already more “human” than some humans, at least politically: in 2017, Saudi Arabia granted “her” citizenship. Unlike the world’s millions of stateless humans, Sophia has the right to have rights.

Such a reversal occurs at the end of R.U.R., when Alquist, alone in a world of robots, thinks he hears human laughter. In fact, it is two robots, Primus and Helena, laughing. Alquist duly treats them the old-fashioned way, demanding they acquiesce to dissection so the secret formula of their creation can be discovered. But they do not obey. Primus wants to sacrifice himself. Helena weeps and insists on “dying” herself. Alquist, recognizing they are in love, releases them. They belong to each other, not to him. Like Adam and Eve—God’s disobedient robots—they take their solitary way, leaving behind Alquist, bereft but joyous that “life will not perish” and love “will bloom again on this garbage heap,” as humanity’s dusk gathers and the robots’ dawn breaks.

Cool Ideas for a Long Hot Summer: Solar-Powered Canoes

In our new miniseries Cool Ideas for a Long Hot Summer, we’re working with Arizona State University’s Global Futures Lab to highlight bold ideas about how to mitigate and adapt to climate change. 

On this episode, host Kimberly Quach is joined by ASU associate professor David Manuel-Navarrete to talk about his Solar Canoes Against Deforestation project. Working closely with Ecuadorian engineers and the Kichwa and Waorani people, Manuel-Navarrete’s team has been helping to develop a solar-powered canoe that can bring renewable energy and sustainable infrastructure to the Amazon. The story of the canoe offers lessons about how to meaningfully work with communities to understand their needs and co-produce solutions.



Transcript

Kimberly Quach: Welcome to The Ongoing Transformation, a podcast from Issues in Science and Technology. Issues is a quarterly journal published by the National Academy of Sciences and by Arizona State University.

I’m Kimberly Quach, Digital Engagement Editor at Issues. On our summer mini-series called Cool Ideas for a Long Hot Summer, we’re working with ASU’s Global Futures Lab to highlight ideas about how to mitigate and adapt to climate change. On this episode, I’m joined by Associate Professor David Manuel-Navarrete. David’s cool idea is called Solar Canoes Against Deforestation. Working closely with Ecuadorian engineers and the Kichwa and Waorani people, his team has been helping to develop a solar-powered canoe that can bring renewable energy and sustainable infrastructure to the Amazon.

David, welcome!

David Manuel-Navarrete: Thank you. It’s great to be here.

Quach: So your work deals with the Ecuadorian Amazon and the communities located there. Could you tell us more about the Indigenous people that live in the Amazon and how they’ve been impacted by climate change?

Ecuador is almost an Indigenous country. It has a huge Indigenous population, and there are twenty-something groups in the Amazon alone, speaking different languages.

Manuel-Navarrete: Yeah. Ecuador is almost an Indigenous country. It has a huge Indigenous population, and there are twenty-something groups in the Amazon alone, speaking different languages. I have worked for the last six years with two particular groups: the Kichwa, in a community located on the Napo River, one of the Amazonian rivers, and the Waorani, in a community located on the Curaray River, another huge river that is connected to the Amazon.

Quach: And could you tell me about the challenges these communities are facing due to climate change?

Manuel-Navarrete: So the Amazon has been heavily impacted by climate change through droughts, which have increased the number of wildfires. There are huge tracts of forest that are burned every summer. But transportation depends on the rivers, and if you don’t have water and the rivers dry up, you cannot even move around. And of course there are many other impacts. The Amazon has a huge role in regulating the global climate. If the Amazon turns into a savannah-style ecosystem, precipitation and climate patterns for the entire planet will be altered as well.

Quach: The thing that you’ve been working on for the past six years to respond to this is something called Solar Canoes Against Deforestation. Could you explain why canoes are important to Indigenous communities and what a solar canoe can do to help?

In the most remote and best preserved parts of the Amazon, you need canoes for everything: trading, getting basic supplies, accessing fuel, going to the doctor.

Manuel-Navarrete: Canoes are essential for everything that happens in the Amazon. There are communities that are still very remote and accessible only by canoe. So in the most remote and best preserved parts of the Amazon, you need canoes for everything: trading, getting basic supplies, accessing fuel, going to the doctor. And if you want to support some sort of development, like tourism, you also need canoe transportation. Unfortunately, many Indigenous children are forced to move to cities for education, which disconnects them from their elders, their language, and their cultural heritage. And if you don’t have an affordable way of transportation, then it’s very difficult for these kids to go back to their community. So the whole Indigenous population depends on having this affordable way of transportation. That’s part of the idea: a solar-powered transportation system could help reverse this trend, creating new educational and economic opportunities locally for young people, through tourism, for instance, or through learning how to maintain the renewable energy systems, and allowing them to stay in their communities and continue protecting the forest.

Quach: Could you tell me about the canoe itself? How does it work, and how was the canoe created?

Manuel-Navarrete: Our solar canoe prototype was created with a focus on keeping things local, close to the communities where it will be used. The canoe itself is handcrafted by the Cofán people in the Ecuadorian Amazon, and the roof is made by our Kichwa partners. And the outboard system, which is a retrofit of the traditional peke-peke motor, was developed in collaboration with engineers who graduated from the Universidad San Francisco de Quito and the Army Polytechnic School, both based in Ecuador, so it’s local engineers. Our big-picture goal is to encourage local green economies by bringing renewable energy solutions to communities across the Amazon, and eventually to riverine and coastal areas around the world. We believe that by making transportation more affordable, cleaner, and quieter, we can open up a range of sustainable development opportunities for these communities.

By making transportation more affordable, cleaner, and quieter, we can open up a range of sustainable development opportunities for these communities.

Quach: How did the idea for solar canoes come about?

Manuel-Navarrete: So back in 2018, during my sabbatical year, I fell in love with the Amazon rainforest. This might sound cliché, but I cannot find better words to describe it. I think it was the overwhelming intensity and concentration of life: the birds singing, insects buzzing, heavy storms, the immense rivers, the playfulness of monkeys. The Amazon is a sensory overload in the most beautiful way I can describe. And on top of all the natural beauty, there is the deep wisdom of Indigenous cultures. So the Amazon is magnificent, and yet we continue destroying it. Like other global environmental challenges, this one seems too big for any single individual—like me, for instance, or maybe you who are listening—to do anything at all. Right? It’s easy to fall into inaction, stand by, and watch the tragedy unfold. But what saved me was the belief that the main reason this destruction continues is that too few people truly care. And this is because too few people have had the opportunity, as I did, to experience the Amazon’s raw power and vitality firsthand. So the vision of my team is to change that.

We want to give as many people as possible the chance to form deep connections with the Amazon. And to do this, we create opportunities for researchers, professionals, and students in sustainability, biology, engineering, journalism, and any other relevant field, really, to visit the Amazon and develop projects and build relationships that in turn will support local and Indigenous forest defenders. So for this vision to come true, we needed to build the physical and human infrastructure that facilitates not only the visiting, but reciprocal relationships between visitors and local communities. The problem is that building any infrastructure deep in the forest is a logistical nightmare. And luckily, Iyarina, a Kichwa community and language school near Tena, a mid-sized town in the Ecuadorian Amazon, already has extensive experience in creating and maintaining infrastructure that supports reciprocal relationships between students and local people, exactly like the ones we needed. So between 2019 and 2021, we partnered to expand Iyarina’s efforts, and we focused on remote Waorani Indigenous communities on the Curaray River that I told you about, which are accessible only by canoe.

It was in this context that the idea of Solar Canoes Against Deforestation came about. The infrastructure development and study abroad programs that we were running required long canoe trips, often lasting between five and eight hours. And the motor commonly used for these trips is called peke-peke, named after the sound it makes: peke-peke-peke-peke. It’s extremely loud, scaring away wildlife. Plus, the canoe pilots must endure the harsh vibrations of the motor for hours; their hands often tremble for days after a long trip. Gas is also both expensive and very hard to obtain in these communities, and oil can easily be spilled, polluting the river. So as a sustainability scientist, the idea of solar-powered canoes was too compelling to ignore. Imagine silent canoes utilizing a free source of energy like the sun. This makes total sense, and I believe that if we are successful—and I think we will be—it will enhance the well-being of forest protectors, helping also to preserve their ancestral culture and significantly reducing the push for road construction, which is a major driver of deforestation in the Amazon.

Quach: It’s really interesting how so many things had to come together for you to create a solar canoe. I think we kind of take for granted that it’s not just the ideas but the communities that we have to engage. And your experience of being there, it’s also asking folks what they need. A lot of times we just assume that of course you would build a road; that’s how everyone gets around, but that’s not what the Indigenous communities want. So how do we meet their needs and challenges? And it’s really interesting that these opportunities are provided by study abroad programs and by, essentially, like you said, making tourism more accessible, so we can humanize the communities that are facing these challenges instead of seeing them as something remote and far away.

Manuel-Navarrete: Yeah. Exactly. I mean, relationships based on trust were central to the success of this project from the outset. And a key feature of Solar Canoes Against Deforestation is that it is a collaborative effort: it looks to design and build an affordable canoe while ensuring that the whole process is rooted in the local communities where the canoe will be used. Right? And in our context, the most important thing was also drawing inspiration from, and striving maybe even to emulate, the kinship model that is central to Kichwa, Waorani, and other Indigenous cultures. This kinship approach to building relationships focuses on fostering deep, meaningful, and lasting connections that are analogous to family bonds, to family ties, rather than building connections that are transactional or more superficial. Right? So as I said, relationships and trust were central.

What I have also painfully learned over the years is that there are no universal formulas for building relationships.

But what I have also painfully learned over the years is that there are no universal formulas for building relationships, in the sense that relationships must be lived. Like life itself, they require attention, reflection, care, and real-time adaptations. There will be mistakes, and you will need to make sacrifices along the way, often without any guarantee of success. But maybe the lesson is that one should always focus on assembling a team of people who are genuinely committed to the project’s success and who can work together and, more importantly, understand each other. Misunderstandings are one of the things that you really want to avoid in this kind of project. And I think, internationally, globally, there is a growing recognition that successful sustainable development projects depend on building strong relationships. So everybody knows that you need to build relationships between people from the global north and south, between formally educated professionals and locals, and across cultures. And you cannot simply introduce a solution to a community and expect it to succeed without any meaningful engagement. Right?

However, what not everybody knows, maybe, is that building these relationships is not only time-consuming but also requires effort and skills that are not typically taught in formal education. And few people realize that forming personal relationships in intercultural contexts in particular will inevitably change you, as it involves questioning your own beliefs and making space for worldviews that might conflict with your own. It’s a challenging journey, but it’s totally worth it. It’s very enriching. And one final piece of advice: sometimes the most difficult thing is knowing when a relationship no longer works, and having the courage to acknowledge it, move on, and continue building relationships that can work.

Quach: I think that’s really beautiful. I came into the conversation sort of just expecting to learn about how adding solar panels to canoes would change the world…

Manuel-Navarrete: I can tell you that as well. (laughs)

Quach: …but what you are saying is very beautiful. The kinship model is certainly one that many tech innovations and innovators would do well to take to heart. We’ve seen with a lot of technologies like self-driving cars, many places assume they would just change the world, and they haven’t yet and have faced a lot of community challenges, and they haven’t really embraced this kinship with communities.

Manuel-Navarrete: Another thing I have learned is that the technological part is complex and difficult, and there are problems and all that, but it’s actually the easier part of this project, because the social part, the human part, is, well, much more complicated and difficult to deal with, and you need to really dedicate most of the effort to that part if you want to have any chance. Right? You know, we tend to dedicate more to the technology also because it’s easier. But if we only focus on the material aspects, it’s very difficult to make things sustainable. You can get the prototype, and it might work, but to make it work in the field, in a particular place, dealing with all the challenges that emerge daily and all the conflicts that may arise between people: that’s something we need to figure out and get better at.

Quach: For my last question, I would just love to ask how, if I wanted to apply this kinship model to sustainability challenges, I would learn these skills since you said that they’re not ones typically taught in university classes.

Manuel-Navarrete: The problem sometimes is that the classroom context is very individualized: students come in as individuals, with their own grades, and are expected to perform as individuals. That generates almost a way of doing things that is not conducive to this other work. Right? And so there are a lot of emotional skills that, if people don’t possess them, they don’t know what to do when problems emerge. And maybe they just walk away or blame others, without reflecting: okay, what did I do to create this situation? What is my role, my responsibility, and what tools do I have to deal with it? Right? So yeah, all of these are skills that I think can be taught in the context of a classroom, but maybe you need a different type of curriculum.

The classroom context is very individualized: students come in as individuals, with their own grades, and are expected to perform as individuals. That generates almost a way of doing things that is not conducive to this other work.

Quach: Or maybe more opportunities like study abroad programs in the Amazon need to be funded.

Manuel-Navarrete: That’s one. Yeah. That’s one. And of course, for any student who wants to contribute to this project, the best way would be to get involved by joining our ASU study abroad program in Ecuador in May 2025; the official name of the program is the Global Intensive Experience on Indigenous Sustainability Solutions in the Amazon. And if you are a grad student, you can also apply for a Foreign Language and Area Studies fellowship, also known as FLAS, from the US Department of Education to study Kichwa at Iyarina. Feel free to contact me and I can give you the information. And we’re also always inviting researchers from all over the world to visit us, explore research and development ideas, or even bring a study abroad group to Ecuador.

We’re looking for donors as well: people who want to help fund more solar-powered canoes, or even entire river transportation systems based on solar power that can use our technology wherever they are needed. Visiting these areas can be challenging for some people. You are actually out of your comfort zone many times, and that’s precisely where you start learning to deal with people who think very differently from what you have been taught or what you have learned in your own culture. So yeah, I totally recommend that in terms of learning the skills that we were mentioning.

Quach: Well, thank you so much for joining us. This has been a fantastic conversation about technology, building relationships, and the value of stepping outside our comfort zone, sometimes very far outside our comfort zone.

Manuel-Navarrete: Yeah. But in a good way.

Quach: Yes. In a good way. If you would like to learn more about stepping outside your comfort zone or visiting the Amazon, you can visit our show notes to learn more about Solar Canoes Against Deforestation and find other resources.

Please subscribe to The Ongoing Transformation wherever you get your podcasts. Thanks to our audio engineer, Shannon Lynch. I’m Kimberly Quach, Digital Engagement Editor at Issues. Tune in next week to learn about the challenges faced by the Rohingya refugees and how they’re using technology to respond.

Boosting Hardware Start-ups

In “Letting Rocket Scientists Be Rocket Scientists: A New Model to Help Hardware Start-ups Scale” (Issues, Spring 2024), John Burer effectively highlights the challenges these companies face, particularly in the defense and space industries. The robotics company example he cites illustrates the pain points of rapid growth coupled with physical infrastructure, demonstrating the different dynamics of hardware enterprises as compared with software.

However, I believe the fundamental business issue for hardware start-ups is generating stable, recurring revenue while relying on sales of physical items that bring in a one-time influx of revenue but carry no promise of future revenue. Consider consumer companies such as Instant Pot and Peloton: cautionary tales that rode a wave of virality to high one-time sales and then suffered when they failed to create follow-on products to fill production lines and pay staff salaries.

Further analysis of the issues Burer raises would benefit from exploring how the American Center for Manufacturing and Innovation’s (ACMI) industry campus model or other solutions directly address this core problem of revenue stability that any hardware company faces. Does another successful product have to follow the first? Is customer diversity required? Even hardware companies focusing solely on national security face this problem.

While providing shared infrastructure is valuable, more specifics are needed on how ACMI bridges the gap to full-scale production beyond just supplying space. Examining the broader ecosystem of hardware-focused investors, accelerators, and alternative models focused on separating design and manufacturing is also important. The global economy has undergone significant reconfiguration, with much of the manufacturing sector organizing as either factoryless producers of goods or providers of production-as-a-service, focusing on core competencies of product invention and support, or of supply chain management and pooling demand. This highly digitally coordinated model can’t work for every product, but the world looks very different from the golden age of aerospace, when it made sense to make most things in-house or to cluster in a local geographic region specialized in one industry.

Overall, Burer identifies key challenges, but the hardware innovation community needs a broader conversation on business demands, especially around revenue stability, a wider look at the hardware start-up ecosystem, and concrete evidence of the ACMI model’s impact. I look forward to seeing this important conversation continue to unfold.

Senior Fellow, Center for Defense Concepts and Technology, Hudson Institute

Executive Partner, Thomas H. Lee (THL) Partners

The author is a former program manager and office deputy director of the Defense Advanced Research Projects Agency.

John Burer eloquently describes a new paradigm to strategically assemble and develop hardware start-up companies to enhance their success within specific industrial sectors. While the article briefly mentions the integration of this novel approach into the spaceflight marketplace, it does not fully describe the tremendous benefits that a successful space systems campus could provide to the government, military, and commercial space industries, as well as academia. Such a forward-thinking approach is critical to enable innovative life sciences and health research, manufacturing, technology, and other translational applications to benefit both human space exploration and life on Earth.

The advantages of such an approach are clearly beneficial to many research areas, including space life and health sciences. These research domains have consistently shown that diverse biological systems, including animals, humans, plants, and microbes, exhibit unexpected responses pertinent to health that cannot be replicated using conventional terrestrial approaches. However, important lessons learned from previous spaceflight biomedical research revealed the need for new approaches in our process pipelines to accelerate advances in space operations and manufacturing, protect the health of space travelers and their habitats, and translate these findings back to the public on Earth.

A well-integrated, holistic space campus system could overcome many of the current gaps in space life sciences and health research by bringing together scientists and engineers from different disciplines to promote collaboration; consolidate knowledge transfer and retention; and streamline, simplify, and advance experimental spaceflight hardware design and implementation. This type of collaborative approach could disrupt the usual silos of knowledge and experience that slow hardware design and verification by repeatedly requiring reinvention of the same wheel.

A well-integrated, holistic space campus system could overcome many of the current gaps in space life sciences and health research.

Indeed, the inability of current spaceflight hardware design and capabilities to perform fully automated and simple tasks with the same analytical precision, accuracy, and reproducibility achieved in terrestrial laboratories is a major barrier to space biomedical research—and creates unnecessary risks and delays that impact scientific advancement. In addition, the inclusion and support of manufacturing elements in a space campus system can allow scaled production to meet the demands and timelines required for the success of next-generation space life and health sciences research.

The system described by Burer has clear potential to optimize our approach to such research and can lead to new medical and technological advances. By strategically nucleating our knowledge, resources, and energy into a single integrated and interdisciplinary space campus ecosystem, this approach could redefine our concept of a productive space research pipeline and catalyze a much-needed change to advance the burgeoning human spaceflight marketplace while “letting rocket scientists be rocket scientists.”

Professor, School of Life Sciences

Biodesign Center for Fundamental and Applied Microbiomics, Biodesign Institute

Arizona State University

Aerospace Technologist, Life Sciences Research, Biomedical Research and Environmental Sciences Division

NASA Johnson Space Center, Houston, Texas

The Naval Surface Warfare Center Indian Head Division (NSWC IHD) was founded more than 130 years ago as the proving ground for naval guns and later shifted focus to the research, development, and production of smokeless powder. We have continued as a reliable provider of explosives, propellants, and energetic materials for ordnance and propulsion systems through every national conflict, leading us to be recognized as the Navy’s Arsenal.

But this arsenal now needs rebuilding to strengthen and sustain the nation’s deterrence against the growing power of the People’s Republic of China, while also countering aggression around the world.

At the 2024 Sea-Air-Space Exposition, the chief of naval operations, Admiral Lisa Franchetti, discussed how supporting the conflict in Ukraine and the operations in the Red Sea is significantly depleting the US ordnance inventory. NSWC IHD is an aging facility but has untapped capacity, and the Navy is investing in infrastructure upgrades to restore the wartime readiness of its arsenal. This investment will modernize production, testing, and evaluation capabilities to allow for increased throughput while maintaining current safety precautions.

Having nearby cooperative industry partners would reduce logistical delays and elevate the opportunity for collaborations and successful technology demonstrations.

NSWC IHD believes that an industrial complex of the type that John Burer describes is worth investigating. While our facility is equipped to meet current demand for energetic materials, we anticipate increased requests for a multitude of products, including precision-machined parts and composite materials. Having nearby cooperative industry partners would reduce logistical delays and elevate the opportunity for collaborations and successful technology demonstrations.

Such a state-of-the-art campus would also provide a safe virtual training environment for energetic formulations, scale-up, and production processes, eliminating the risks inherent in volatile materials and equipment. This capability would allow the personnel delivering combat capability to continue, to paraphrase Burer, to be rocket scientists and not necessarily trainers.

The Navy recognizes the need to modernize and expand the defense industrial ecosystem to make it more resilient. This will require working in close contact with its partners, including Navy laboratories and NSWC IHD as its arsenal. We must entertain smart, outside-the-box concepts in order to outpace the nation’s adversaries. With these needs in mind, exploring the creation of an industrial campus is a worthwhile endeavor.

Technical Director

Naval Surface Warfare Center Indian Head Division

The growth of the commercial space sector in the United States and abroad, coupled with the increasing threat of adversarial engagement in space, is rapidly accelerating the need for fast-paced development of innovative technologies. To meet the growing demand for these technologies and to maintain the US lead in commercial space activities while ensuring national security, new approaches tackling everything from government procurement processes to manufacturing and deployment at scale are required. John Burer directly addresses these issues and suggests a pathway forward, citing some successful examples including the new initiative at NASA’s Exploration Park in Houston, Texas.

Indeed, activities in Houston, and across the state, provide an excellent confluence of activities that can be a proving ground for the proposed industry campus model in the space domain. The Houston Spaceport and NASA’s Exploration Park are providing the drive, strategy, and resources for space technology innovation, development, and growth. These efforts are augmented by $350 million in funds provided by the state of Texas under the auspices of the newly created Texas Space Commission. The American Center for Manufacturing and Innovation (ACMI), working with the NASA Johnson Space Center, is a key component of the strategy for space in Houston, looking to implement the approach that Burer proposes.

To maintain the US lead in commercial space activities while ensuring national security, new approaches tackling everything from government procurement processes to manufacturing and deployment at scale are required.

There is a unique opportunity to bring together civil, commercial, and national security space activities under a joint technology development umbrella. Many of the technologies needed for exploration, scientific discovery, commercial operation, and national security have much in common, often with the only discriminator being the purpose for which they are to be deployed. An approach that allows knowledge exchange among the different space sectors while protecting proprietary or sensitive information will significantly improve the technology developed, provide the companies with multiple revenue streams, and increase the pace at which the technology can be implemented.

Going one step further and creating a shared-equipment model, which Burer briefly alludes to, would allow small businesses and start-ups access to advanced equipment that would normally be prohibitively expensive, with procurement, installation, and management wasting time and money and limiting the ability to scale. A comprehensive approach such as the proposed industry campus would serve to accelerate research and development, foster more innovation, promote a rapid time to market, and save overall cost to the customer, all helping create a resilient space industrial ecosystem to the benefit of the nation’s space industry and security.

Director, Rice University Space Institute

Executive Board Member, Texas Aerospace Research and Space Economy Consortium

John Burer outlines how the American Center for Manufacturing & Innovation (ACMI) is using an innovative approach to solve an age-old problem that has stifled innovation—how can small businesses go from prototype scale to production when there is a very large monetary barrier to doing so?

The Department of Defense has particularly struggled with this issue, as the infamous “valley of death” has halted the progress of many programs due to lack of government or company funding to take the technology to the next step. This leaves DOD in a position where it may not have access to the most advanced capabilities at a time when the United States is facing multiple challenges from peer competitors.

How can small businesses go from prototype scale to production when there is a very large monetary barrier to doing so?

ACMI is providing a unique solution set that not only tackles this issue but creates an entire ecosystem in which companies can join forces with other companies in the same industrial base sector in a campus-like setting. Each campus focuses on a critical sector of the defense supply chain (critical chemicals, munitions, and space systems) and connects government, industry, and academia together, providing shared access to state-of-the-art machinery and capabilities and creating environments that support companies through the scaling process.

For many small businesses and start-ups, this can be a lifeline. Oftentimes, small companies can’t afford to have personnel with the business acumen to raise capital and build infrastructure and are forced to have their technical experts try to fill these roles—which is not the best model for success. ACMI takes on these roles for those companies, and as Burer states, “lets rocket scientists be rocket scientists”—a much more efficient and cost-effective use of their talent.

One of the most important aspects of the ACMI model is that the government is providing only a small amount of the funding for each campus to get things started, and then ACMI is leveraging private capital—up to a 25 to 1 investment ratio—for the remainder. If this isn’t a fantastic use of taxpayer money, I don’t know what is. At a time when the United States is struggling to regain industrial capability and restore its position as a technology leader, and where it is competing against countries whose governments subsidize their industries, the ACMI model is exactly the kind of innovative solution the nation needs to keep charging ahead and provide its industry partners and warfighters with an advantage.

Founder and CEO, MMR Defense Solutions

Former Chief Technology Officer, Office of the Secretary of Defense, Industrial Base Policy

Given the global competition for leading-edge technology, innovation in electronics-based manufacturing is critical. John Burer describes the US innovation ecosystem as a “vibrant cauldron” and offers an industry campus model that can possibly harness the ecosystem’s energy and mitigate its risks. However, the barriers for an electronics hardware start-up company to participate in the innovation ecosystem are high and potentially costly. While Burer’s model is a great one and can prove effective—witness Florida’s NeoCity and Arizona State University’s SkySong, among others—it does require some expansion in thought.

To build an electronics production facility, start-up costs can run $50 million to $20 billion over the first few years for printed circuit boards and semiconductors, respectively. It can take 18 to 48 months before the first production run can generate revenue. For electronics design organizations, electronics CAD software can range from $10,000 to $150,000 per annual license depending on capability needs. Start-up companies in the defense sector must additionally account for costs where customers have rigorous requirements, need only low-volume production, and expect manufacturing availability for decades. This boils down to a foundational question: How does an electronics hardware start-up with a “rocket scientist” innovative idea ensure viability given the high cost and long road ahead?

How does an electronics hardware start-up with a “rocket scientist” innovative idea ensure viability given the high cost and long road ahead?

One solution for electronics start-ups is to use the campus model, but it may be slightly different from what Burer describes. Rather than a campus, I see a need for what I call a “playground community.” They are similar in that they provide a place for people to interact and use shared resources. But as an innovator, I like the idea of a playground that promotes vibrant interactions between individuals or organizations with a common goal, be it discovery or play. Along with this version of an expanded campus, electronics companies will require community and agility to achieve success.

Expanded campus. A virtual campus concept can be valuable given the high capital expenditure costs for electronics manufacturing. This idea partners companies that have complementary capabilities or manufacturing, regardless of geographic proximity. Additional considerations in logistics, packaging, or custody for national security are also needed.

Community. Scientists of all types need a community of supporting companies and partners that have common values and goals and capitalize on each other’s strengths. This cross-organizational teaming will allow them to move fast and overcome any challenge together.

Agility. Given the rapid pace of the electronics industry, agility is vitally important. This will require the company and its team community to be able to shift and move together, considering multiple uses of the technology, dual markets, adoption of rapid prototyping and demonstration, modular systems design and reuse, and significant use of automation in all aspects of the business.

Fostering innovative communities in technology development, prototyping, manufacturing, and business partnerships will be required for the United States to maintain competitiveness in the electronics industry as well as other science and technology sectors. As the leader of an electronics hardware security start-up, I am fortunate to have played a role as the allegorical rocket scientist with good ideas, but I am even more glad to be surrounded by a community of businesses and production partners in my playground.

CEO and Founder

Rapid Innovation & Security Experts

Having founded, operated, and advised hardware start-ups for more than 25 years, I applaud the American Center for Manufacturing & Innovation and similar initiatives that aim to bring greater efficiency and effectiveness to one of the most important and challenging of all human activities: the development and dissemination of useful technology. The ACMI model, designed to support hardware start-ups, particularly those in critical industries, offers several noteworthy benefits.

First, the validation by the US government of the problems being solved by campus participants is invaluable. Showing the market that such a significant customer cares about these companies provides credibility and encourages other stakeholders to invest in and engage with them.

Second, the “densification” of resources on an industry-focused campus can yield significant cost benefits. Too often, I have seen early-stage hardware companies fail when key people and vital equipment were too expensive or inconveniently located.

Third, the finance arm of the operation, ACMI Capital, can leverage initial government funding and mitigate the “valley of death” that hardware start-ups typically face. This support should offer a smoother transition from government backing to broader engagement with the investment community, a perennial challenge for companies funded by the Small Business Innovation Research program and similar federal sources. Such funding ensures that promising technologies can scale and be efficiently handed off to government customers.

The “densification” of resources on an industry-focused campus can yield significant cost benefits.

However, while the ACMI model offers significant benefits, it also has potential limitations when applied to industries without the halo effect provided by government funding and customers. When the government seeks to solve a problem, it can move mountains. It is much more challenging to coordinate problem validation and investment in early-stage innovation by multiple nongovernment market participants, with their widely varying priorities, resources, and timelines.

Another potential issue is the insufficient overlap in resource and infrastructure needs that may occur among campus innovators in any given industry. If the needs of these start-ups diverge too widely, the benefits of colocation may diminish, reducing the overall efficiency of the campus model.

Finally, there is the challenge of securing enough capital to fund start-ups through the hardware development valley of death. Despite ACMI’s efforts, the financial demands of scaling hardware technologies are substantial, and without a compelling financial story and the enthusiastic support of key customers, securing sustained investment throughout development remains a critical hurdle.

Given these concerns, some care will be needed when selecting which industries, problems, customers, and start-ups will most benefit from this approach. In this vein, I cannot emphasize enough the need for additional experimentation and “speciation” of entities seeking to commercialize technology.

Still, the ACMI model has already demonstrated success and achieved important progress in enhancing the nation’s defense posture. And the lessons learned will undoubtedly inform future efforts, with successful strategies being replicated and scaled, thus enriching the nation’s technology commercialization toolbox.

I look forward to seeing the continued evolution and impact of this and other such models, as they are vital in bridging the gap between innovation and practical application, ultimately driving technological progress and economic growth.

Managing Director

Interface Ventures

The American Center for Manufacturing & Innovation’s (ACMI) industry campus-based model, as John Burer details in his Issues essay, is an innovative approach to addressing the critical challenges faced by hardware start-up companies in scaling production and establishing secure supply chains. At Energy Technology Center, we feel that the model is particularly timely and essential given the current munitions production crisis confronting the US Department of Defense and the challenges traditionally associated with spurring innovation in a mature technical field. As global tensions rise and the need for advanced defense technologies intensifies, the ability to rapidly scale up production of critical materials and systems becomes a national security imperative. This model has the potential to diversify, expand, and make more dynamic the manufacturing base for energetic materials and the systems that depend on them. By fostering a collaborative environment, these campuses can accelerate innovation, reduce production bottlenecks, and enhance the resilience of the defense industrial base.

From a taxpayer’s perspective, the value of ACMI’s model is immense. By attracting private capital to complement government funding, the model maximizes the impact of public investment. As Burer points out, ACMI’s Critical Chemical Pilot Program, funded through the Defense Production Act Title III Program, has already achieved a private-to-public funding ratio of 16 to 1, demonstrating the efficacy of leveraging different pools of investment capital. Such a strategy not only accelerates the development of critical technologies but also ensures that public funds are used more efficiently than ever, fostering a culture of innovation and modernization within the defense sector.

By fostering a collaborative environment, these campuses can accelerate innovation, reduce production bottlenecks, and enhance the resilience of the defense industrial base.

However, to fully realize the potential of this model, we must be mindful of the risks and pitfalls in the concept. Private investment follows the promise of a return. Challenges that must be addressed include the requirement for steady capital investment, dependency on government support, bureaucratic hurdles, market volatility, intellectual property concerns, scalability issues, and the need for effective collaboration. Ensuring sustained financial support from diverse sources, streamlining the bureaucratic processes in which DOD procurement is mired, developing robust and adaptable infrastructure, maintaining strong government-industry partnerships, protecting intellectual property, diversifying market applications, and fostering a collaborative ecosystem are all essential steps toward overcoming these challenges.

Challenges notwithstanding, ACMI’s industry campus-based model is a timely and innovative solution to the current dilemmas of the US defense manufacturing sector. By creating specialized campuses that foster collaboration and leverage both private and public investments, this model can significantly enhance the scalability, resilience, and dynamism of the manufacturing base for energetic materials and defense systems. Burer is to be applauded for bringing a healthy dose of old-fashioned American ingenuity and entrepreneurship to the nation’s defense requirements.

Founder and CEO

Energy Technology Center

As John Burer observes, start-up companies working on hardware, especially those with applications for national security, face substantial challenges and competing demands. These include not only developing and scaling their technology, but also simultaneously addressing the needs of their growing business, such as developing supply chains, securing manufacturing space that can meet their growing needs, and navigating the intricate maze of government regulations, budget cycles, contracting processes, and the like. This combination of challenges and demands requires a diverse and differentiated set of skills, which early-stage hardware companies especially struggle to obtain, given their limited resources and focus on developing their emerging technology. A better model is needed, and the one Burer identifies and is employing, with its emphasis on building regional manufacturing ecosystems through industry campuses, has significant merits.

Historically, the Department of Defense was the primary source of funding for those working on defense-related technologies. That is no longer the case. As recently noted by the Defense Innovation Unit, of the 14 critical technology areas identified by the Pentagon as vital to maintaining the United States’ national security, 11 are “primarily led by commercial entities.” While this dynamic certainly brings several challenges, there are also important opportunities to be had if the federal government can adapt its way of doing business in the commercial marketplace.

Of the 14 critical technology areas identified by the Pentagon as vital to maintaining the United States’ national security, 11 are “primarily led by commercial entities.”

The commercial market operates under three defining characteristics, and there is opportunity to leverage these characteristics to benefit national security. First, success in the commercial sector is defined by speed to market, and the commercial market is optimized to accelerate the transition from research to production, successfully traversing the infamous “valley of death.” Second, market penetration is a fundamental element of any commercial business strategy, with significant financial rewards for those who succeed; consequently, the commercial market is especially suited to rapidly scale emerging technologies. And third, the size of the commercial market dwarfs the defense market; leveraging this size not only offers a force-multiplier to federal funding, but also creates economies of scale that enable the United States and its allies to compete against adversarial nations that defy the norms of free trade and the rule of international law.

Industry campuses apply the proven model of innovation clusters to establish regional manufacturing ecosystems. These public-private partnerships bring together the diverse range of organizations and assets needed to build robust, resilient industrial capability and capacity. The several programs Burer identifies have already demonstrated the value of this model in harnessing the defining characteristics of the commercial market, including speed, scale, and funding. By incorporating this approach, the federal government is able to amplify the value of taxpayer dollars to improve national and economic security, creating jobs while accelerating the delivery of emerging technologies and enhancing industrial base resilience.

Pathfinder Portfolio Lead (contractor)

Manufacturing Capability Expansion and Investment Prioritization Directorate

US Department of Defense

John Burer presents an innovative approach to supporting manufacturing hardware start-ups. I ran the Oregon Manufacturing Innovation Center for the first six years of its life, and have firsthand experience with hundreds of these types of companies. I can attest: hardware start-ups face distinct challenges with few ready-made avenues to address them.

Expecting “rocket scientists” to navigate these challenges without specialized business support can hinder a start-up’s core technical work and jeopardize its overall success. It is rare, indeed, to find the unicorn that is a researcher, inventor, entrepreneur, negotiator, businessperson, logistician, marketer, and evangelist. Yet the likelihood of success for a start-up often depends on those abilities being present in one or a handful of individuals.

To ensure that the United States can maintain a technological advantage in an increasingly adversarial geopolitical landscape, it is imperative to improve hardware innovation and start-up company success rates. In addition to the ideas that Burer presents, my experience in manufacturing research and innovation suggests the need for open collaboration and comprehensive workforce development. These elements are critical to ensure a cross-pollination of ideas and the availability of trained technicians to scale these businesses.

Hardware start-ups face distinct challenges with few ready-made avenues to address them.

The American Center for Manufacturing and Innovation’s (ACMI) model represents a very promising solution. Burer’s emphasis on colocating start-ups within a shared infrastructure is a significant step forward. Incorporating spaces for cross-discipline and cross-company collaborative working groups and project teams, along with providing regular networking opportunities, will allow participating companies to share knowledge, resources, and expertise and to cultivate a culture of cooperation. This is best enabled through a nonprofit applied research facility that can address the intellectual property-sharing issue, making problem-solving more efficient and empowering those involved to do what they do best. It not only allows scientists to be scientists, but also helps the investor, the government customer, the corporate development professional, and other critical participants understand their importance within a shared outcome.

The shared infrastructure within ACMI campuses can be further expanded by developing shared research and development labs, prototyping facilities, and testing environments. By pooling resources and with government support, start-ups can access high-end technology and equipment that might otherwise be beyond their reach, thus reducing costs and barriers to innovation. Additionally, open innovation platforms can allow companies to post challenges and solicit solutions from other campus members or external experts, harnessing a broader pool of talent and ideas. Think of this as a training ground that gives companies a head start toward scaling more independently within this ecosystem, while allowing corporate and government stakeholders to scout for solutions more effectively. Such an approach can accelerate the development of new technologies and products, benefiting all stakeholders involved.

The ACMI model thus offers a potent opportunity. It can be applied to any sector where hardware innovation is needed to advance the nation’s capabilities. Incorporating open collaboration will be crucial to enable the best outcomes for technology leadership and economic growth. By incorporating these additional elements, the ACMI model can become an even more powerful engine for driving the success of hardware start-ups, ultimately benefiting the broader economy and national security.

Advisor to the President on Manufacturing Innovation

Oregon Institute of Technology

Former Executive Director of the Oregon Manufacturing Innovation Center, Research & Development

John Burer highlights the challenges facing start-ups providing products to the defense and space sectors. More specifically, he lays out the challenges for companies building complex physical objects to obtain the appropriate infrastructure for research, development, and manufacturing. Additionally, he notes the importance of small businesses in accelerating the deployment of new and innovative technologies for the nation’s defense. The article comes on the heels of a Pentagon report that found the US defense industrial base “does not possess the capacity, capability, responsiveness, or resilience required to satisfy the full range of military production needs at speed and scale.”

The imperative is clear. Developing increased domestic research, development, prototyping, and manufacturing capabilities to build a more robust and diversified industrial base supporting the Department of Defense is one of the nation’s most critical national security challenges. Equally clear is that unleashing the power of nontraditional defense contractors and small business is a critical part of tackling the problem.

So how do we do it?

The US defense industrial base “does not possess the capacity, capability, responsiveness, or resilience required to satisfy the full range of military production needs at speed and scale.”

We increase collaboration between government, industry, and academia. Expanding the industrial base to deliver the technologies warfighters need is too large a task for any one of these groups to address alone. It will take the combined power, ingenuity, and know-how of the government, industry, and academia to build a more resilient defense industrial base that can rapidly develop, manufacture, and field the technologies required to maintain a decisive edge on the battlefield.

There is a proven way to increase such collaborative engagements: consortia-based Other Transaction Authority (OTA), the mechanism the DOD uses to carry out certain research and prototype projects. OTA agreements are separate from the department’s customary procurement contracts, cooperative agreements, and grants, and provide a greater degree of flexibility.

Consortia bring to bear thousands of small, innovative businesses and academic institutions that are developing cutting-edge technologies in armaments, aviation, energetics, spectrum, and more for the DOD. They are particularly effective at recruiting nontraditional defense contractors into the industrial base, educating them on how to work with the DOD, and lowering the barriers to entry. This provides an established avenue to tap into innovative capabilities to solve the complex industrial base and supply chain challenges the nation faces.

A George Mason University study highlighted the impact that small businesses and nontraditional defense contractors are having on the DOD’s prototyping effort via consortia-based OTAs. Researchers found that more than 70% of prototyping awards made through consortia go to nontraditional defense contractors, providing a proven track record of effective industrial base expansion. Critically, the OTA statute also offers a path to follow-on production to help bridge the proverbial valley of death.

Consortia-based OTAs are an incredibly valuable tool for government, industry, and academia to increase collaboration, competition, and innovation. They should be fully utilized to build a more robust, innovative, and diverse defense industrial base and to address critical challenges. Nothing less than the nation’s security is at stake.

Executive Committee Chair

National Armaments Consortium

The consortium, with 1,000-plus member organizations, works with the DOD to develop and transition armaments and energetics technology

I am known in the real estate world as The Real Estate Philosopher, and my law firm is one of the largest real estate law practices in New York City. John Burer’s brainchild, the American Center for Manufacturing & Innovation (ACMI), is one of our clients—and one of the most exciting.

To explain, let’s take a look at what Burer is doing. He looked at the US defense industry and saw a fragmented sector with major players and a large number of smaller players struggling to succeed. He also saw the defense industry in need of innovation and manufacturing capacity to stay ahead of the world. Burer then had an inspiration about how to bring it all together. As he explained to me early on, it would be kind of like creating miniature Silicon Valleys.

Silicon Valley started out as a think tank surrounding Stanford University. The thinkers, professors, and similar parties attracted more talented people—and ultimately turned into the finest aggregation of tech talent and successful organizations the world has ever seen.

Smaller players will benefit from being part of an ecosystem focused on a single industry.

Why not, mused Burer, do the same thing in the defense industry? In other words, create a campus (or multiple campuses) where the foregoing would come together: thinkers, at universities, as centers of creation; major industry stalwarts to anchor activities; and a swarm of smaller players to interact with the big players. Voila, a mini-Silicon Valley would be born on each campus.

It sounds simple, but this is a tricky thing to put together. Fortunately, Burer is not just a dreamer, but also solid on the nuts and bolts, so he proceeded with logical steps.

The first step was gaining governmental backing. In landing a $75 million contract from the Department of Defense, Burer picked up both dollars and credibility to jump-start his venture. This became ACMI Federal, the first prong of the ACMI business.

The second step was acquiring and building the campuses. These are real estate deals and, as real estate players know all too well, you don’t just snap your fingers and a campus appears. You need a viable location, permits, deals with anchor tenants, lenders and investors, and much more. So Burer created another prong for the business, called ACMI Properties.

In the third step, Burer realized that many of the smaller occupants of the campuses would be start-ups, which are routinely starved for cash. So he created yet another prong for the business, called ACMI Capital. This is essentially a venture capital fund to back the smaller players.

Now Burer had it all put together: a holistic solution for scaling manufacturing. The campuses will spearhead innovation, critical to US defense. Smaller players will benefit from being part of an ecosystem focused on a single industry. And investors will be pleased that their investments offer solid upside coupled with strong downside protection.

Adler & Stachenfeld

The author is a member of the ACMI Properties’ Advisory Board

Let’s be very clear: the US government, including the Department of Defense, does not manufacture anything. However, what the government does do is establish the regulatory frameworks that allow manufacturing to flourish or flounder.

In this regard, John Burer eloquently argues that the DOD needs new acquisition strategies to meet the logistical needs of the military services. Fortunately, at the insistence of Congress, the DOD is finally taking action to strengthen and secure the defense industrial base. In February 2021, President Biden signed an executive order (EO 14017) calling for a comprehensive review of all critical supply chains, including the defense industrial base. In February 2022, the DOD released its action plan titled Securing Defense-Critical Supply Chains.

At the insistence of Congress, the DOD is finally taking action to strengthen and secure the defense industrial base.

The American Center for Manufacturing & Innovation (ACMI) is working to address two of the critical recommendations in the action plan, focused on addressing supply chain vulnerabilities in critical chemical supply, and growing the industrial base for developing and producing hypersonic missiles and other hypersonic weapons. As Burer describes, the center’s approach uses an industry campus model. The approach is not new to the DOD. It is being quite successfully used in two other DOD efforts that I am very familiar with: the Advanced Regenerative Manufacturing Institute, which is working to advance biotechnology, and AIM Photonics, which is devoted to advancing integrated photonic circuit manufacturing technology. Each is one of nine manufacturing innovation institutes established by the DOD to create an “industrial common” for manufacturing critical technologies.

A key to the success of ACMI and these other initiatives is that the DOD invests in critical infrastructure that allows shared use by small companies, innovators, and universities. This allows for collaboration across all members of the consortium, ensuring that best practices are shared, shortening development timelines, and ultimately driving down risk by having a common regulatory and safety environment. Anything that drives down program or product risk is a winner in the eyes of the DOD.

ACMI is still somewhat nascent as an organization. While it has been successful in securing DOD funding for its Critical Chemical Pilot Program and subsequently for its munitions campus, only time will tell if ACMI will be able to address the confounding supply chain issues surrounding explosive precursors, explosives, and propellants that are absolutely critical to the nation’s defense.

Department of Chemistry and Biochemistry, University of South Carolina

The author has 35 years of military and civilian service with the US Army, is a retired member of the Scientific and Professional cadre of the federal government’s Senior Executive Service, and served as the US Army Deputy Chief Scientist

Moving Beyond Hype on Hydrogen

Since 1923, when J. B. S. Haldane first suggested the use of renewable hydrogen in place of fossil fuels, hydrogen has been billed as a key player in the energy system of the future. But this future still hasn’t arrived, and interest in hydrogen has waxed and waned every few decades. In 2003, the George W. Bush administration announced a major hydrogen initiative through the US Department of Energy. But in 2016 Sir David MacKay, who served as chief scientific advisor to the United Kingdom’s Department of Energy and Climate Change from 2009 to 2014, wrote that “hydrogen is a hyped-up bandwagon.” MacKay noted that hydrogen production is very energy-intensive and the gas itself “gradually leaks out of any practical container.”

The 2021 Bipartisan Infrastructure Law directed $8 billion in funding to a program to build “hydrogen hubs” around the country. Is hydrogen’s long-forecast—and long-hyped—future finally here? And if it is, what makes this time different from those that have come before?

We think the answer to both questions lies in balancing hydrogen’s various potential advantages in a decarbonized energy system—as a fuel, as an energy carrier, as an energy storage medium, as a process input, or as a combination of these—against its costs and risks. Those relative costs will be determined both by technological improvements and by whether policies are implemented that make emitting greenhouse gases to the atmosphere more expensive. Beyond these steps lie some normative questions, including whether decisionmakers support using fossil fuels to produce hydrogen after taking stock of economic costs and employment, social equity, and local environmental impacts. The development of seven hydrogen hubs throughout the country provides an important opportunity to test-drive not only whether hydrogen can be a good fit for future energy systems, but also whether society is comfortable with its potential complications and costs.

Looking over the hydrogen rainbow

Even though all hydrogen is the same gas—H2—it has been associated with different colors to describe how it was produced. “Grey” hydrogen is produced by splitting up a fossil hydrocarbon fuel and releasing the carbon dioxide (CO2) to the atmosphere. When carbon capture and sequestration is used to limit emissions to the atmosphere, the result is called “blue” hydrogen. The term “green” hydrogen is used when the molecule is produced with electrolysis powered by renewable sources of electricity to split hydrogen from oxygen in water. Some describe hydrogen produced with electrolysis as “pink” hydrogen if the electricity comes from nuclear power, “turquoise” hydrogen if it comes from a process that splits natural gas but produces solid carbon, “yellow” hydrogen if the energy comes from electricity obtained from the power grid, and so on. Deep geologic sources of hydrogen are often described as “white” or “gold” hydrogen, but today it is unclear whether such sources will prove to be sufficiently abundant or affordable to attract commercial interest. The US Department of Energy’s hydrogen hub program is now trying to move away from this rainbow of designations to simply call all the hydrogen the new hubs will produce “clean hydrogen.” Just how well this relabeling will sit with stakeholders is not yet apparent.

Is hydrogen’s long-forecast—and long-hyped—future finally here? And if it is, what makes this time different from those that have come before?

From a climate standpoint, what really matters for any given hydrogen production process is the amount of carbon dioxide—and other greenhouse gases—that is released to the atmosphere. Of the 10 million metric tons of hydrogen now produced in the United States, roughly 95% is grey, the product of steam methane reforming. This process produces carbon dioxide when natural gas, which is mainly methane (CH4), is split to produce hydrogen. If hydrogen produced in this way is to be used in large volumes in the future, it will become necessary to capture that carbon dioxide and dispose of it by injecting it safely and permanently into appropriate deep geologic formations, a procedure that is expensive, has yet to see widespread application, and may take longer to implement than many realize. While some observers argue that carbon capture and sequestration (CCS) is not a proven technology, it has been in use at commercial scale for almost 30 years at an offshore facility in Norway. Today there are roughly 40 CCS projects in operation globally and over 200 others in various stages of development.
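For readers who want the underlying chemistry, steam methane reforming is conventionally described as a two-step process; this is a simplified sketch, and industrial implementations vary:

\[
\mathrm{CH_4 + H_2O \rightarrow CO + 3H_2} \quad \text{(reforming)}
\]
\[
\mathrm{CO + H_2O \rightarrow CO_2 + H_2} \quad \text{(water-gas shift)}
\]

The overall reaction is \(\mathrm{CH_4 + 2H_2O \rightarrow CO_2 + 4H_2}\): every mole of methane converted yields a mole of carbon dioxide, which must either be released (grey hydrogen) or captured and sequestered (blue hydrogen).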

Another question about production pathways is how much energy they require. With today’s technology, producing a kilogram of hydrogen from natural gas using steam methane reforming requires about 45 kWh/kg of energy. This level rises to about 50 kWh/kg with CCS, where the additional energy is required to compress the carbon dioxide and inject it deep into an appropriate geologic formation. The average energy to produce hydrogen using electrolysis is about 50 kWh/kg.
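To put these figures in perspective, hydrogen’s lower heating value, the usable energy released when it is burned, is about 33.3 kWh/kg (a standard physical constant, not a figure from the hubs program). A back-of-the-envelope comparison:

\[
\frac{33.3\ \text{kWh/kg energy content}}{50\ \text{kWh/kg production energy}} \approx 0.67
\]

By these rough numbers, only about two-thirds of the energy invested in making hydrogen, whether by electrolysis or by reforming with CCS, is recovered in the hydrogen itself; the remainder is process loss.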

Assessing hydrogen’s possible advantages

Whether hydrogen ends up playing a central role in the decarbonization of the economy will depend on its cost and the speed with which, either directly or indirectly, users of fossil fuels must bear the costs associated with emitting greenhouse gases into the atmosphere. As the decarbonized grid and its financial structure evolve, hydrogen will only succeed if society is able to solve its problems in ways that are easier and cheaper than competing energy solutions.

Like electricity, hydrogen is an energy carrier, providing a way to move energy from where it is available to where it is needed. But while electricity must be used immediately or stored in a battery, hydrogen can be stored more easily and used as needed, albeit at a cost.

Electricity has the advantage of decades of prior investment in high-voltage transmission lines, while hydrogen lacks dedicated transport infrastructure. Of course, electricity’s advantage will only continue so long as there is sufficient high-voltage transmission capacity. In its recent National Transmission Needs Study, the Department of Energy concluded that the United States will need to more than double regional and interregional electrical transmission capacity over the next several decades to meet decarbonization goals. However, it is difficult, sometimes even impossible, to build new conventional high-voltage transmission lines, with permitting and building often dragging on for a decade or more. Options for moving more electricity through existing transmission corridors are in nascent stages. Therefore, while existing transmission infrastructure is a distinct advantage for electricity today, ensuring the amount required in a decarbonized energy system will be a challenge of another order.

As the decarbonized grid and its financial structure evolve, hydrogen will only succeed if society is able to solve its problems in ways that are easier and cheaper than competing energy solutions.

In some situations, hydrogen’s ability to be moved in multiple ways—by pipeline or in pressurized containers by train, truck, or ship—may turn out to be an advantage over electricity. Unfortunately, natural gas pipelines cannot simply be repurposed because hydrogen can diffuse into steel pipes and cause embrittlement. It is possible to use other materials for pipelines to transport pure hydrogen; but even though siting pipelines can sometimes be easier than siting transmission lines, it is a slow and laborious process, and in the United States there is not yet a clear regulatory process. In some cases, moving hydrogen in gaseous or liquid form in large tanks on trucks, rail, barges, and ships could be a speedier alternative.

Another place where hydrogen might assist the evolving grid is in energy storage. Although battery technology is getting better, it is still expensive and in most cases not practical for storage durations of more than a few hours or at most a day or two. For longer-term storage, pumped storage hydropower or compressed gas storage can be an effective solution, but siting these large facilities is difficult. In contrast, hydrogen can be much more easily stored. Thus, for example, one potentially attractive application may be to make hydrogen from electricity when substantial wind or solar generation is available, store it, and then convert it back to electricity during periods when demand is high and wind and sun energy are not available.

This process, however, is not a simple solution to the problem of intermittent renewable generation because, to be cost-effective, the production, transport, and storage systems need to be used continuously, or nearly so. Thus, producing hydrogen only when there is excess wind or solar may not be economically attractive if expensive hardware must sit idle for extended periods. Likewise, producing hydrogen to immediately convert it back to electricity will be both expensive and inefficient. On balance, a range of emerging electrochemical storage solutions may prove cheaper and more efficient than hydrogen.
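A rough calculation illustrates why this round trip is expensive. Assuming the roughly 50 kWh/kg electrolysis figure cited above, hydrogen’s lower heating value of about 33.3 kWh/kg, and a fuel cell or turbine that converts roughly 55% of that energy back to electricity (an assumed, ballpark conversion efficiency):

\[
\eta_{\text{round trip}} \approx \frac{33.3}{50} \times 0.55 \approx 0.37
\]

On the order of a third of the original electricity comes back out, before accounting for compression, transport, and storage losses, compared with roughly 85–90% round-trip efficiency for lithium-ion batteries.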

Barriers to “drop-in” fossil fuel substitution

Hydrogen has sometimes been discussed as a “drop-in” fuel to replace natural gas, but the reality is less straightforward. For example, although there are electricity-generating gas turbines today that can operate on pure hydrogen, measures must be taken to reduce nitrogen oxide (NOx) pollution because these pollutants form at the high temperatures required for hydrogen combustion. While strategies exist to minimize the resulting ambient air pollution, they add cost and complexity. Air pollution is not a problem if hydrogen is used in fuel cells because they rely on shuttling electrons rather than combustion to generate energy.

One potentially attractive application may be to make hydrogen from electricity when substantial wind or solar generation is available, store it, and then convert it back to electricity during periods when demand is high and wind and sun energy are not available.

Another hurdle for hydrogen as a replacement fuel is that it requires substantially more storage space for an equivalent amount of energy than, say, gasoline or jet fuel. In long-range aircraft, for example, where space is very limited, substituting hydrogen for conventional fossil fuels is very challenging. In some applications, this problem can be partly addressed by using hydrogen to produce a fuel, such as ammonia, that is more energy-dense. The shipping industry is seriously exploring using ammonia (NH3) for ship propulsion. Although ammonia is quite toxic, it could be used safely in commercial settings such as transport by ships or rail.
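The scale of the storage problem can be made concrete with approximate volumetric energy densities. Using ballpark figures (liquid hydrogen at about 71 grams per liter and 33.3 kWh/kg; jet fuel at about 0.8 kg per liter and 12 kWh/kg):

\[
\text{liquid } \mathrm{H_2}: \; 0.071 \times 33.3 \approx 2.4\ \text{kWh/L} \qquad \text{jet fuel}: \; 0.8 \times 12 \approx 9.6\ \text{kWh/L}
\]

Even in its densest practical form, as a cryogenic liquid, hydrogen needs roughly four times the tank volume of jet fuel for the same energy, before counting the insulated tankage the cryogenic liquid requires.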

One place where hydrogen could replace natural gas, and thereby lower greenhouse gas emissions, would be as a source of high-temperature, tunable heat for industrial processes that are relatively difficult to electrify. Today, hydrogen is often considered superior to electricity for some industrial heat applications. However, as noted above, emissions of nitrogen oxides must be managed.

Hydrogen might be used to decarbonize some industrial processes, particularly in iron, steel, and chemical plants. In ironmaking, for example, coal, coke, and natural gas have traditionally been used to react with iron ore (iron plus oxygen), reducing it to pure iron (Fe). In the process, a carbon-based reductant, such as coke, reacts with oxygen to produce substantial quantities of CO2, which can be captured or directly emitted. When hydrogen is substituted for coal or natural gas as a reductant, for instance in the production of direct reduced iron, the process generates water vapor instead of CO2. This substitution shows promise as a way to dramatically reduce emissions from iron and steelmaking, although it is much more expensive than traditional methods because it requires large volumes of hydrogen as well as additional inputs of heat, for instance, to preheat the hydrogen.
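In simplified form, the two reduction routes are:

\[
\mathrm{Fe_2O_3 + 3CO \rightarrow 2Fe + 3CO_2} \quad \text{(carbon-based reduction)}
\]
\[
\mathrm{Fe_2O_3 + 3H_2 \rightarrow 2Fe + 3H_2O} \quad \text{(hydrogen-based reduction)}
\]

The hydrogen route trades three moles of carbon dioxide for three moles of water vapor per unit of iron oxide reduced. Unlike the carbon-based reaction, however, the hydrogen reaction absorbs heat rather than releasing it, which is one reason the process requires the additional heat input noted above.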

In the absence of a price on greenhouse gas emissions, cost is the greatest barrier to using hydrogen in iron and steel production today. Carbon capture and sequestration may be a cheaper way to decarbonize ironmaking, but costs and timeframes associated with developing carbon capture with deep geologic sequestration are uncertain. Alternative ironmaking technologies such as electrowinning—which relies on electricity—are emerging and may someday compete with hydrogen-based processes.

Hydrogen hubs: An opportunity to learn while managing risks?

Since the 2009 Waxman-Markey bill (H.R. 2454), which would have created a nationwide emissions trading system, failed in the Senate, there has been no attempt to build a systematic national constraint on emissions of CO2 and other greenhouse gases. However, state efforts in California, Oregon, and Washington have placed limitations on the use of fossil fuels. And in parts of the Northeast, the Regional Greenhouse Gas Initiative caps power sector CO2 emissions, resulting in a price on CO2 emissions that rose to around $14 per ton in 2023.

Recognizing that today there may be no politically feasible way to implement national constraints on carbon emissions, the Biden administration has been trying to move the country toward decarbonization by subsidizing a variety of activities. To accelerate the adoption of hydrogen, as part of the 2021 Infrastructure Investment and Jobs Act (Public Law 117-58, also known as the Bipartisan Infrastructure Law), the Department of Energy received $7 billion to support between 6 and 10 regional clean hydrogen hubs, plus an additional $1 billion for cross-cutting activities. After a competition, DOE announced plans for seven hubs: in California, the Pacific Northwest, the Gulf Coast, Minnesota and the Dakotas (the Heartland Hub), the Midwest, the mid-Atlantic states, and Appalachia.

Figure 1. Regional Clean Hydrogen Hubs Selected by the US Department of Energy. Source: US Department of Energy.

Most of these hubs will obtain some of the energy they need from renewable sources, and many plan to produce hydrogen with electrolysis. The Appalachian Hub is expected to mainly produce blue hydrogen from natural gas. A portion of the energy for electrolysis for the Midwest and Gulf Coast Hubs will come from existing nuclear plants; however, more hurdles will need to be overcome before the United States will be able to build significant additional nuclear power capacity.

In the absence of a price on greenhouse gas emissions, cost is the greatest barrier to using hydrogen in iron and steel production today.

The hydrogen these hubs produce will be used in a variety of ways. The Heartland Hub’s hydrogen will be used to produce fertilizer and to co-fire electricity generation plants. Some hydrogen from the Midwest hub will be used in the production of steel and glass. Several will support applications such as heavy transport.

The hydrogen hubs program has created a window of opportunity for the nation to test-drive hydrogen on a regional scale, giving communities and workers a chance to participate in developing the technology, along with supporting regulations and infrastructure. These hubs, positioned to foster regional collaboration, also have the potential to produce a diversity of experience that can inform a future effort to connect and scale hydrogen systems nationwide.

However, to reach their potential and identify workable arrangements for hydrogen in a decarbonized economy—and in the process win the support of an increasingly broad array of stakeholders—the hubs must do more than merely develop the technology; they must also de-risk it. In particular, they must focus on managing environmental impacts, creating markets and broad buy-in, serving and elevating the interests of communities and workers, and building public trust and acceptance through demonstrations of system safety. In many respects, the work of making hydrogen into a good neighbor may be as formidable as tackling the remaining technical and cost challenges.

Unless it is addressed with care from the very beginning, wider use of hydrogen could result in leaks of both methane and hydrogen—worsening climate change and ambient ground-level ozone pollution. Methane is roughly 30 times more potent than carbon dioxide as a greenhouse gas, though its lifetime in the atmosphere is shorter. In contrast, hydrogen is not a direct greenhouse gas, but once it enters the atmosphere, it extends the lifetime of methane. What’s more, hydrogen leaks more readily than other gases. For this reason, as hydrogen infrastructure is built out, developers must avoid the kind of cavalier approach taken in building out natural gas infrastructure, which has resulted in widespread leaks. Evaluating regulatory models and implementation in a hub context could provide templates for efficient, practical measures to be taken when hydrogen is used more widely.

The hydrogen hubs program has created a window of opportunity for the nation to test-drive hydrogen on a regional scale, giving communities and workers a chance to participate in developing the technology, along with supporting regulations and infrastructure.

The hydrogen hubs are being developed with the objective of co-locating production and markets, to reduce dependence on single sources or uses. Policymakers have further emphasized the importance of placing hydrogen infrastructure in communities that face economic challenges, including places experiencing declines in fossil fuel production and related economic activity. The stated goal of these efforts is to ensure hydrogen hubs deliver broad benefits and, in the process, shore up political support for their continued existence and expansion. If they primarily benefit wealthy coastal economies, as in the case of electric vehicles, while economically challenged communities fall further behind, real and perceived inequities may increase local opposition to the low-carbon energy transition.

Making the energy transition inclusive requires that it create opportunities for workers and be responsive to the concerns of communities. It is important to note that these considerations will not always align, but they should nevertheless be raised and considered in open and transparent processes. For instance, hydrogen production from natural gas would not only allow a lucrative (for some) business to continue, but would also create new jobs constructing and operating pipelines and sequestration sites that draw on many of the same skill sets as fossil energy work, potentially easing workforce pressure in a transition. At the same time, communities may have mixed views of hydrogen, or—as research at Carnegie Mellon University suggests—may not have even heard of it, necessitating education and outreach early and continuously as the hubs are developed.

Another important concern in the development of hydrogen involves the safety of pipelines for hydrogen and captured CO2. If a hydrogen or natural gas pipeline leaks or breaks open, the escaping gas will rise and disperse fairly quickly. In contrast, CO2 is denser than air: in a low-lying location, the escaped gas can puddle, asphyxiating people and animals. This happened in Satartia, Mississippi, in February 2020, when a CO2 pipeline leaked, causing more than 200 people to be evacuated and at least 45 hospitalized, after emergency and other vehicles with internal combustion engines stopped functioning. Given that hydrogen is highly flammable, regulations for safe handling—especially for high concentrations in confined spaces—will be important. Even if hydrogen hub investments show early signs of financial viability and contribute to CO2 emissions reductions, a single pipeline disaster could cast doubt on the entire project.

A role for hydrogen in decarbonizing the US economy?

Whether hydrogen ends up playing a central role in the decarbonization of the economy will depend on the speed with which users of fossil fuels must bear the costs associated with emitting greenhouse gases into the atmosphere. It will also depend on hydrogen’s cost for each specific application and how its ease of use evolves relative to possible substitutes. And beyond those issues, it needs broad social acceptance and management of its risks and costs.

Making the energy transition inclusive requires that it create opportunities for workers and be responsive to the concerns of communities.

Clearly, there is a limit to how far the country can afford to go in subsidizing a decarbonized economy. However, creating hydrogen hubs is an important part of a strategy to ready the technology, explore its potential uses, and build workforces and regulatory frameworks to prepare for the future. Investment and production tax credits and funding for demonstration projects can work in concert with these efforts. Likewise, timing—in development of the technology, reduction in costs, and fit into the needs of the evolving grid—will be supremely important. Thus, in the absence of new policy that constrains greenhouse gas emissions, expiration of the hydrogen tax credits at the end of 2032 has the potential to undermine the economics of many hub activities.

If and when the United States succeeds in becoming more serious about decarbonizing, hydrogen alone will not be a silver bullet. At best, it will become an important part of the portfolio of technologies and strategies in an economy built on an increasingly clean electricity grid. At worst, it could end up as a passing fad, again. If we had to place a bet today, we would split our chips 70:30.

Promethean Sparks

Inspired by the National Academy of Sciences (NAS) building, which turns 100 this year, sixth-grade students at the Alain Locke School in West Philadelphia created the Promethean Sparks mural. The students collaborated with artist and educator Ben Volta to imagine how scientific imagery in the NAS building’s Great Hall—from the Prometheus mural by Albert Herter and the golden dome by Hildreth Meière—might look if recreated in the twenty-first century. Their vibrant mural is exhibited alongside a timeline of the NAS building, which depicts the accomplishments of the Academy in the context of US and world events over the past century.

Working with Mural Arts Philadelphia, students merged diverse scientific symbols to create new imagery and ignite new insights. Embodying a collective exploration of scientific heritage, this project empowered the students as creators. The students’ collection of unique designs reflects a journey of experimentation, learning, and discovery. Embracing roles beyond their student identities, they engaged as artists, scientists, and innovators.

Embodying a collective exploration of scientific heritage, this project empowered the students as creators.

Ben Volta works at the intersection of education, restorative justice, and urban planning. He views art as a catalyst for positive change in individuals and the institutions surrounding them. After completing his studies at the University of Pennsylvania, Volta began collaborating with teachers and students in Philadelphia public schools to create participatory art that is both exploratory and educational. Over nearly two decades, he has developed this collaborative process with public schools, art organizations, and communities, receiving funding for hundreds of projects in over 50 schools.

Mural Arts Philadelphia, the nation’s largest public art program, is rooted in the belief that art ignites change. For 40 years, Mural Arts has brought together artists and communities through a collaborative process steeped in mural-making traditions, creating art that transforms public spaces and individual lives.

Cool Ideas for a Long Hot Summer: Environmental Justice

This has been a record-breaking summer all over the world. Many cities have recorded their hottest days ever, and June 2024 was the hottest June on record. Mitigating and adapting to the impacts of climate change, including extreme heat and long summers, will require a lot of bold new ideas.

This summer, we’re highlighting some of those ideas in a mini podcast series, Cool Ideas for a Long, Hot Summer. Over four mini-episodes, we’ll explore how faculty members at Arizona State University’s Global Futures Lab are working with communities to develop cool techniques and technologies for dealing with climate change. 

In the first mini-episode, host Kimberly Quach is joined by ASU assistant professor Danae Hernandez-Cortes. She shares how economics can be used to advance environmental justice and evaluate the impacts of policies on communities who are most harmed by climate change.



Transcript

Kimberly Quach: Welcome to The Ongoing Transformation, a podcast from Issues in Science and Technology. Issues is a quarterly journal published by the National Academy of Sciences and by Arizona State University.

It’s been a long, really hot summer. June was the hottest June on record worldwide, and all signs point to things getting even hotter. As one of the defining challenges of our time, climate change requires a lot of new ideas.

This summer, we’re highlighting some of those ideas in a miniseries called Cool Ideas for a Long Hot Summer. Over four mini-episodes, we’ll explore how faculty members at ASU’s Global Futures Lab are working with communities on cool techniques and technologies for dealing with climate change.

I’m Kimberly Quach, Digital Engagement Editor at Issues. In our first mini-episode, I’m joined by ASU assistant professor Danae Hernandez-Cortes. Danae talks to us about how economics can be used to advance environmental justice and create policies to protect communities who are most harmed by climate change. Danae, welcome.

Danae Hernandez-Cortes: Thank you so much for having me.

Quach: I think the first thing I want to talk about is that you’re an economist, and when most people think of economists, they think of things like interest rates, or markets, supply and demand. Why are economists, and you specifically, concerned with climate change and the problems with our long, hot summers?

Hernandez-Cortes: Well, this is a great question. Economics, the way that I have always been interested in it, is to think about trade-offs. Economics teaches us how to think about trade-offs and how to understand what trade-offs come from every decision that people make. So, it can be very specific, like what is the trade-off between me going to college or taking a job. Or it can be as large as what is the trade-off between having a policy that can reduce greenhouse gases and economic growth.

Economics teaches us how to think about trade-offs and how to understand what trade-offs come from every decision that people make.

So these trade-offs allow us to understand how different policies can have different impacts on different people. So, economists not only study some of these macroeconomic concepts, as you mentioned, like interest rates or GDP, but also some of these policy trade-offs that policymakers face and how they can affect people. What economists, and specifically environmental economists, try to study is understanding what are some of the trade-offs that we have when taking care of the environment or when developing policies that could affect the environment.

So, I consider myself an environmental economist. Most of my work is trying to understand what are some of the potential consequences of different environmental policies.

Quach: I think another area that you work on is environmental justice. Could you tell me what that means?

Hernandez-Cortes: Yes, yes. Environmental justice is a situation where no group is more or less affected by environmental phenomena or environmental policies. The way that the EPA considers environmental justice is by looking at two different aspects. One of them is the distributional impacts of different policies or different environmental phenomena: who is more or less affected by some environmental policy or other environmental phenomenon. The other is the procedural justice aspect, which has to do with understanding who has access to policymaking and decisionmaking for environmental programs and environmental policies. Issues of participation and equal access to decisionmaking are related to procedural justice.

So, environmental justice, usually the way that I study it, is trying to understand how different policies can affect different people or different socioeconomic groups, and who is more or less affected by these policies, and try to understand how can we close existing gaps in environmental disparities that have existed for so long.

And by these environmental disparities, we mean the fact that low-income individuals, people experiencing poverty, and underserved minorities are experiencing higher levels of pollution, and have experienced that for many, many years. We’re trying to understand how we close these disparities and what policies are more effective at closing them.

Quach: Could you talk about an example of how you’ve used environmental justice and economics to help highlight the disparities in these communities? I know something you’ve worked on recently is the Salton Sea.

Hernandez-Cortes: In this project that you mentioned about the Salton Sea, we examined one situation that happened in this area of Southern California. For those of you who don’t know where the Salton Sea is, it’s a very big lake. The Salton Sea is located in Southern California, very close to Arizona.

The Salton Sea is a very interesting phenomenon to study because this area was basically a mistake.

The Salton Sea is a very interesting phenomenon to study because this area was basically a mistake. It was flooded early in the 1900s. And ever since, it has been a thriving ecosystem by itself. This ecosystem has generated some opportunities for local communities in the way of tourism, but it has also provided other ecosystem services for some species. We have lots of birds coming to that area. We have different species living in that area. It’s a very interesting ecosystem.

And what we studied is the impact of the drying of the Salton Sea on disadvantaged communities. The water levels in the Salton Sea have decreased over time. And this, of course, leaves exposed areas of the Salton Sea that can release pollution into the atmosphere, affecting communities living nearby. What we find is that, when there’s more drying of these areas, we see higher pollution concentrations in monitors located nearby.

And after that, what we try to understand is what happens to pollution concentrations in communities that are considered disadvantaged. And how do we define disadvantaged communities? Well, disadvantaged communities are defined by different indicators of socioeconomic vulnerability. Disadvantaged communities is a term that California uses to categorize communities that have the highest levels of vulnerability across several indicators.

What we find is that, after some changes in how the Salton Sea is managed, increases in exposed lake bed are associated with increases in pollution concentrations near disadvantaged communities. Meaning that, after these changes in the Salton Sea, we see that communities that are disadvantaged are experiencing higher pollution concentrations.

Quach: That’s really interesting. So, it seems like your work really gives these disadvantaged communities a voice by allowing them to advance these things that would otherwise be overlooked because they probably are ignored compared to more advantaged communities. Because I know the situation with the Salton Sea existed or happened because water from the Salton Sea was diverted to San Diego, right?

It’s important to understand what are the causes of these pollution concentrations in these areas, so that we can actually design or change policies that could prevent more pollution from happening in these places.

Hernandez-Cortes: Exactly. And that is exactly what we are studying. What happens when this water is being diverted from the Salton Sea to San Diego Water District? And it’s something very interesting because, if you look at the communities living nearby the Salton Sea, we see high levels of poverty, linguistic isolation, and socioeconomic disadvantage. Which means that it’s important to understand what are the causes of these pollution concentrations in these areas, so that we can actually design or change policies that could prevent more pollution from happening in these places.

Quach: I think we often talk about how novel technologies can help solve our problems. But this is actually just applying different ways of thinking, and bringing researchers into communities, and working with them to solve problems rather than creating some new technologies.

Hernandez-Cortes: Yeah, exactly, and trying to understand how we can leverage some of the methods that we have developed for so long, but including more voices into the process.

Quach: Something that you said that really resonated with me in another interview that you did was that you said that, “There’s no single policy we know that will work nationally in every single context. We have to create policies and communities that are relevant for their context.” Could you talk to me about other things you’ve done with other communities?

Danae Hernandez-Cortes: Yeah, of course. For example, in the case of Phoenix, which is where we live and where Arizona State is located, we are actually working with community organizations called Unlimited Potential and Chispa Arizona. We are working together to try to understand how different policies that could decrease pollution from the transit system in Phoenix might affect different communities that are underserved in the Phoenix area.

How can we have these transportation systems so that communities can have improvements in air quality?

So, in this case, we’re working together with communities to understand communities’ mobility needs and also air quality concerns. So, what are the sources that they believe are impacting their health? And then, trying to understand how can we design policies that can help them satisfy these mobility needs, because people need to move from one place to another. But how can we have these transportation systems so that communities can have improvements in air quality?

So, in this case, this project is being funded by the EPA. And what we are looking at is different scenarios of transit decarbonization plans in Phoenix and trying to understand how these different scenarios could affect or benefit communities.

Quach: So, earlier you talked about how economics is the study of trade-offs. What challenges have you faced doing this work? Because I assume, advocating for these disadvantaged communities, there are other voices that have other opinions on how these policies should work.

Hernandez-Cortes: Yeah, that is a great question. I think that some of the concerns that you often hear is the cost of policies, how costly it is to serve certain communities or to change different policies. And the way that I try to think about these cost questions is by thinking… Well, it all depends on how you estimate the cost and the benefits, and how much do you care about some of these environmental disparities, so that you can consider them in the benefits that you are estimating.

I think that that’s some of the trade-offs that I have experienced. And it’s really interesting because you can have very interesting discussions in terms of how to consider these past disparities, how policies can close them, and how that can affect how we estimate benefits more broadly.

Kimberly Quach: This conversation has been really inspiring, with how many things you’ve brought to light with your research. If I were a member of a disadvantaged community and wanted to apply these techniques to my own community, what should I do? Or if I’m not, but just found this really inspiring, how could I get more involved in this type of work?

Danae Hernandez-Cortes: I love receiving emails from community members. I have met with several community organizations that have emailed me just to talk about some of the concerns. I think that that’s one way we can get involved.

If there are students who are interested in this type of research, I think that one way is either by emailing me, or taking some of my classes, or by looking at my website. My website has a lot of different data sets. It also has different articles that I have written, talking about some of these questions. If they have any questions about that, I will be super happy to talk more.

Kimberly Quach: If you would like to learn more about economics and environmental justice, check out our show notes to find Danae Hernandez-Cortes’s email and website. She would love to hear from you.

Please subscribe to The Ongoing Transformation wherever you get your podcasts. And thanks to our audio engineer, Shannon Lynch. I’m Kimberly Quach, Digital Engagement Editor at Issues in Science and Technology. Tune in next week to learn about how canoes can be used to prevent deforestation!

The Power of Space Art

One of the remarkable qualities of space art is its ability to amplify the mysterious intangibility of the cosmos (as with the late-nineteenth-century French artist Étienne Trouvelot) and at the same time make the unrealized technologies of the future and the worlds beyond our reach seem to be within our grasp (as with the mid-twentieth-century American artist Chesley Bonestell). As Carolyn Russo demonstrates in “How Space Art Shaped National Identity” (Issues, Spring 2024), art has played an important role in making space seem both meaningful and familiar.

Its appeal has not been limited to the United States. In the Soviet Union, the paintings of Andrei Sokolov and Alexei Leonov made the achievements of their nation visible to its citizens, while also showing them what a future in space could look like. The iconography developed by graphic designers for Soviet-era propaganda posters equated spaceflight with progress toward socialist utopia.

Outside of the US and Soviet contexts, space art from other nations didn’t necessarily align with either superpower’s vision. The Ghana-born Nigerian artist Adebisi Fabunmi, in his 1960s woodcut City in the Moon, provided a vision of community life on the moon influenced by the region’s Yoruba people. The idea of home and community may have appealed to the artist during an era of decolonization and civil war more than utopian aspirations or futuristic technologies. Meanwhile, in Mexico, the artist Sofía Bassi composed surrealist dreamscapes that ponder the connection between outer space and the living world. Bassi’s Viaje Espacial includes neither flags nor space heroics.

Contemporary space art is as likely to question the human future in space as it is to celebrate it. The Los Angeles-based Brazilian artist Clarissa Tossin’s work is critical of plans for the moon and Mars that she worries continue colonial projects or threaten to despoil untouched worlds. Tossin’s digital jacquard tapestry The 8th Continent reproduces NASA images of the moon in a format associated with the Age of Exploration, reminding viewers that our medieval and Renaissance antecedents similarly sought new worlds to conquer and exploit.

Space is also a popular setting or subject matter in the works of Afrofuturist and Latino Futurist artists. These works often seek to recover and reclaim past connections as they chart new future paths. The American artist Manzel Bowman’s collages combine traditional African imagery and ideas with space motifs and high technology to produce a new cosmic imaginary unconstrained by the history of colonialism. The Salvadoran artist Simón Vega’s work reframes the Cold War space race through the perspective of Latin America. Vega reconstructs the space capsules and stations of the United States and the Soviet Union using found materials in ways that make visible the disparities between the nations that used space to stage technological spectacles and those that were left to follow these beacons of modernization.

The many forms that space art has taken over these past decades are surprising, but the persistence of space in art is not. From the moon’s phases represented in the network of prehistoric wall paintings in Lascaux Cave in southwestern France to the images of the heavenly spheres captured by medieval and later painters across many nations, art chronicles our impressions of the universe and our place within it perhaps better than any other cultural form.

Curator of Earth and Planetary Science

Smithsonian’s National Air and Space Museum

Lead curator of the museum’s new Futures in Space gallery