The Future of Fusion

The Summer 2024 edition of Issues addresses pressing topics in fusion energy development. In “What Can Fusion Energy Learn From Biotechnology?” Andrew W. Lo and Dennis G. Whyte highlight parallels in the evolution of two industries that offer bountiful benefits yet have faced challenges. As head of the Fusion Industry Association, I thank the authors for naming the FIA as the right venue for open, direct, and transparent communication about fusion’s direction. They also make the critical point that the United States needs to foster a robust commercialization ecosystem that includes government research laboratories, universities, and private-sector fusion developers, as well as the companies that make up the supply chains linking these efforts.

We also agree with Michael Ford’s statement in “A Public Path to Building a Star on Earth” that funding for fusion research must increase dramatically to meet the needs of both the scientific program and commercialization. Toward this aim, the FIA has submitted a proposal to both Congress and the US Department of Energy for $3 billion in supplemental funding to accelerate fusion commercialization and build fusion energy infrastructure.

As part of Ford’s proposed path, he calls for a “coordinated plan for public and private funding.” But I would add a caveat. The fusion community has already made more plans than it has acted on; now it is time to execute the plans already agreed upon. The fusion community delivered a comprehensive Long Range Plan in early 2021. The plan, now being updated to reflect advances in fusion technology and ambition since then, acknowledged that without significant increases in funding, DOE would face difficult choices that could reduce plasma physics funding in some areas in order to provide more robust support for more commercially relevant programs such as materials science, fuel cycles, and public-private partnerships. Without strong growth in funding across the board, prioritization is necessary.

We agree that DOE’s role is to support fundamental research and enable the growth of the commercialization ecosystem without skewing the competitive landscape—and that means the national labs and companies should avoid directly competing. It also means that DOE should realign its efforts to appropriately fund both commercially relevant programs and the scientific research and development that is needed to build fusion demonstrations. It is time for DOE to treat fusion as an energy source, not a science project, and so it is appropriate to begin the transition to an applied energy office.

Finally, both articles highlight the importance of building trust to support public acceptance. The fusion industry recognizes that engagement with the public, stakeholders, and the broader scientific community is essential to the successful development and deployment of fusion energy. In line with Lo and Whyte’s recommendations, the FIA aims to ensure that all these groups receive timely, clear, and transparent information. Among other efforts, we will communicate when companies reach milestones in fusion’s progress, providing easily understandable, tangible proof points for policymakers, investors, and the public.

The fusion community is moving forward at speed to be ready for the next phase: focused execution to bring fusion energy to market. The FIA looks forward to collaborating across public and private sectors to ensure that fusion achieves its potential as a clean, limitless energy source.

Chief Executive Officer

Fusion Industry Association

Michael Ford effectively highlights the critical importance of maintaining public funding for fusion research, given the technology’s current stage of development. Indeed, the phrase “building a star” arguably understates the task at hand.

In the wake of Lawrence Livermore National Laboratory’s repeated achievement of “ignition” using inertial fusion technology—that is, the production of more energy from a fusion reaction than needed to create it—an increasingly common refrain holds that commercializing fusion is no longer a physics problem, but an engineering one. This downplays the complexity and difficulty of fusion. As Ford rightly points out, there are still significant unknowns regarding which approaches will prove optimal or even viable. The timeline for achieving commercial fusion energy is uncertain, underscoring the necessity for continued fundamental research and development. This foundational work is essential to unravel the complexities of plasma physics and materials science that underpin fusion technology.

The 2022 Inertial Fusion Energy (IFE) Basic Research Needs effort, organized under the auspices of the Fusion Energy Sciences program at the US Department of Energy Office of Science, laid out the core innovations that must be advanced to make IFE a reality and attempted to evaluate technical readiness levels of the key IFE technologies to guide where investment is needed. Today, none of these have matured to the technical readiness levels necessary for use in a pilot power plant, and physics questions abound as we strive to mature them. Because of this, funding for fundamental R&D must remain paramount in the fusion effort, despite the ambitious timelines set forth by the fusion start-up community.

Similar efforts to identify core science and technology gaps should be undertaken for the broader fusion effort; at this early stage, an all-of-the-above approach is called for. Roadmaps and clear metrics resulting from such efforts should be used to hold the private and public sectors accountable and to strategically choose among possible technological options to sustain the value of public funds.

Fusion is not just a scientific endeavor; it is a strategic asset for US competitiveness and national power. The public sector has a pivotal role in stewarding this technology to ensure it aligns with national interests. Developing public-sector anchor facilities, safeguarding intellectual property, and supporting the supply chain are crucial steps in bolstering the nation’s know-how and economic strength. Public investment in these areas will help secure a leadership position in the global fusion landscape.

While the United States spends less than some other fusion aspirants, including China, the achievement of fusion ignition has put it in pole position. That lead is hard-won, resulting from decades of public investment and innovation. It can be easily lost.

We are convinced that a world powered by fusion energy is achievable. It is not a question of time, but one of resources and political will. Sustained investment in a foundation of science and technology will bring this future into focus.

Lead, Inertial Fusion Energy Institutional Initiative

Lawrence Livermore National Laboratory

Andrew W. Lo and Dennis G. Whyte draw four specific lessons for fusion from the biotechnology industry. The exhortation to “standardize milestones” is particularly important. The authors suggest a consortium for identifying the right milestones, but it remains critical to explore the unique aspects of fusion in contrast with biotechnology and other fields to find a model that will work.

Unlike the Food and Drug Administration, fusion’s US regulator, the Nuclear Regulatory Commission, does not have a mandate to regulate efficacy. Rather, the NRC’s mission relates to safety, common defense, and environmental protection. This sensibly reflects the fact that market forces alone are sufficient to ensure that fusion works (i.e., it generates useful energy economically). This presents an underappreciated opportunity for the fusion industry to take advantage of the benefits of standardized milestones without the expensive and time-consuming formality that the FDA correctly imposes on the biotechnology industry.

Consulting firms that evaluate the claims of fusion companies for investors are appearing as a result of these market forces. Though useful, consultants and their reports don’t provide the structural benefits that standardized milestones could bring to the entire industry in the form of on- and off-ramps for different groups of investors and scales of capital, as Lo and Whyte discuss.

Because of this missing piece, some fusion investors and fusion companies themselves are clamoring for such a set of standardized milestones. Some have emerged organically. The Department of Energy’s Milestone-Based Fusion Development Program issues payments based on the completion of benchmarks proposed by the companies themselves and negotiated between the companies and DOE. Most recently, Bob Mumgaard, CEO of Commonwealth Fusion Systems, published an open letter, titled “Building Trust in Fusion Energy,” that lays out six milestones on the path to fusion energy, many of which are similar to milestones for funding in the ARPA-E Breakthroughs Enabling Thermonuclear-fusion Energy (BETHE) program.

However, these are not the right entities to independently develop and arbitrate standardized milestones for fusion. Although DOE is equipped to judge whether a milestone has been completed, relying on DOE (or NRC) for broader oversight is to give up the advantage that fusion has to manage milestones in a more lightweight and nimble way outside of government. Nor are fusion companies, investors, or the Fusion Industry Association appropriate organizations for this job, for obvious conflict-of-interest reasons. Instead, a nongovernmental, independent rating organization is needed.

There are lessons here from the finance industry. Agencies such as Moody’s and Fitch Ratings play an important role in providing information to investors about the creditworthiness of companies and the likelihood that bonds will be repaid. However, their business models rely on payments from the entities being rated, and the review process is not especially transparent, both of which were factors that led to the 2008 financial crisis. Fusion could do better by developing a different business model that decouples the ratings from payments made by the entities being rated and by emphasizing the importance of publishing data on milestone completion in peer-reviewed journals.

Identifying milestones that are meaningfully applicable to all approaches to fusion energy, an objective arbiter of those milestones, and an appropriate rating system is an important next step in the development of the fusion energy ecosystem. This should be an iterative process involving companies, investors, and academia. Success will require creativity in balancing competing interests, and an evenhanded assessment of the science, engineering, economics, and social-acceptance challenges facing the nascent fusion energy industry.

Fusion Energy Base

The Politics of Recognition

As I was reading Guru Madhavan’s “Living in Viele’s World” (Issues, Summer 2024), my thoughts turned to studies of occupational prestige—in other words, the perception that some types of work are more deserving of admiration and respect. Historians and social scientists who examine occupational prestige pursue lines of inquiry that spread in many directions, including the implications for individual self-worth, differences in salaries, longitudinal trends for the American labor force, and more.

In his eloquent essay, Madhavan demonstrates the importance of seeing the actions of two elites in nineteenth-century New York, Egbert Ludovicus Viele and Frederick Law Olmsted, within a social scientific setting. Although these men brought different technical points of view to the design of crucial elements of New York’s infrastructure, Madhavan’s point is that we will understand their legacies more deeply if we see their work as part of a broader contest for authority and prestige.

Madhavan’s invocation of the politics of recognition—a concept with its own rich scholarly tradition—is a compelling way to think about engineering and society. In particular, it expands our conceptual language for considering the normative consequences of infrastructural decisions, including the ways that these decisions can either facilitate or inhibit equity and human flourishing.

In our 2020 book, The Innovation Delusion, Lee Vinsel and I argued that the trendy preoccupation with innovation, and the resulting elevated prestige of innovators, carries steep societal costs. These costs include the neglect of maintenance (made familiar by the dismal grades regularly registered in the American Society of Civil Engineers’ Infrastructure Report Card) as well as diminished prestige for the people we called maintainers—the essential workers who care for the sick, repair broken machines, and keep the material and social foundations of modern society in good working order. Vinsel and I challenged society to reckon with the caste-like structures that keep janitors, mechanics, plumbers, and nurses subordinate to other professionals. This line of thinking also sharpens our understanding of the stakes for the present and future, namely, that many young people self-select into occupations that are seen as prestigious and forgo career paths that lack glamor or respect.

As a result, there is an oversupply of young people who want to get into “tech,” even as the giants of Silicon Valley continue to lay off workers so that they can keep wages low and stock prices high. At the same time, there is an undersupply of young people who want to work in the skilled trades, where there are national shortages and good careers for people who want to work hard, uplift their communities, and care for the needs of their fellow residents. Closer attention to the politics of recognition in engineering—indeed in all occupations—can help Americans understand how we arrived at our present state, overcome some of our elitist prejudices, and recalibrate the relationship between occupational prestige and the workforce that the nation actually needs.

Provost

SUNY Polytechnic Institute

Guru Madhavan brings to life the efforts by Egbert Ludovicus Viele to improve the urban environs of New York City by working with nature. The article beautifully explains Viele’s thinking and influence on development in the city, which was both groundbreaking and effective, and continues to this day.

However, Madhavan’s main argument is that Viele should be as venerated and celebrated an innovator of urbanization as Frederick Law Olmsted, a contemporaneous landscape architect who clashed with Viele at both Central Park and Prospect Park in New York. The author continues Viele’s own efforts to aggrandize his life’s achievements, evidenced by a 31-foot-tall pyramid tomb that, at the time of Viele’s death, was the largest in the West Point cemetery. Sadly for Viele, historical figures cannot and do not choose themselves. Many people deserving of recognition are long forgotten. Historical figures are raised again only if their contributions and stories are relevant to contemporary times.

But is it right or even necessary to have historic heroes? Neither Viele nor Olmsted worked in isolation. They were connected to a plethora of colleagues, clerks, workers, supporters, and ecologies that helped them achieve their projects. Should we not instead celebrate the eras that allow innovations to be made? And recognize the beliefs, values, economic systems, other people, social systems, and infrastructures that create the circumstances for these changes to how we live our lives?

If we celebrated the circumstances, not just a hero, we would see how people are supported to achieve their goals. We could then understand that it is not just an individual who achieves, but generations of effort and resources that enable these goals to be achieved.

Most people know that Wolfgang Amadeus Mozart was a prodigious musician and composer, and many might know he had a father and older sister who were also elite musicians. People might think that the father must have started training his children at a young age, but do they also think that his father developed the network that allowed him to perform in royal courts? That there were many royal courts to support live music performances? That he was male in an era when women’s musical careers were curtailed? That he was born at a time when musical notation was common, and his music could easily be reproduced by other musicians? Most people do not think of these things, but think only of the hero—Mozart. If they did, they would realize that, rather than a hero, he was the mushroom fruit of a huge mycelial network pulling in resources from countless places.

Perhaps if Viele had spent more of his life acknowledging all the other efforts that enabled the achievement of his projects, he would be better remembered for his contributions. As it stands, the most memorable part of his story is his peculiar solution—a buzzer inside his coffin—in case he was buried alive.

Assistant Professor, Bartlett School of Planning

University College London

Hildreth Meière’s Initial Drawing of “Air”

Art deco artist Hildreth Meière (pronounced me-AIR) created this preliminary study of Air for the National Academy of Sciences (NAS) building’s Great Hall, which opened in 1924. Depicted as an allegorical female figure wearing flowing garments surrounded by birds in flight, Air is one of the four classical elements—earth, air, fire, and water—represented in the Great Hall’s pendentives. Pendentives are supportive architectural elements that transition from the square corners of the floor below to the circular dome above. Meière’s finished work in the Great Hall includes three small medallions for each element, representing inventions related to that natural force. Air is depicted with a bellows, a sailboat, and a windmill.

Meière designed the iconography of the Great Hall dome, arches, and pendentives. This project was her first major commission, launching her forty-year career. Her work at the NAS building celebrates the history and significance of science, blending art nouveau and Greek and Egyptian influences with the art deco approach that became her trademark. Meière was one of the most renowned American muralists of the twentieth century. Working with leading architects of her day, she designed approximately 100 commissions in notable buildings, including Radio City Music Hall, One Wall Street, St. Bartholomew’s Church, Temple Emanu-El, and St. Patrick’s Cathedral in New York City, and the Nebraska State Capitol in Lincoln and the Cathedral Basilica of Saint Louis.

This drawing is included in the One Hundred Years of Art and Architecture exhibition at the NAS, on view through December 31, 2024.

Preparing the Next Generation of Nuclear Engineers

In “Educating Engineers for a New Nuclear Age” (Issues, Summer 2024), Aditi Verma, Katie Snyder, and Shanna Daly present a vision that closely aligns with recent sociotechnical advancements, particularly in the realm of artificial intelligence-powered simulations. Recent research has demonstrated the potential of virtual reality (VR), augmented reality (AR), and other immersive technologies to bridge the gap between technical knowledge and real-world application in engineering education. Studies indicate that VR and AR can significantly enhance spatial understanding and conceptual learning in complex engineering systems.

These technologies allow students to interact with virtual models of nuclear facilities, providing a safe and cost-effective way to gain hands-on experience. The simulations can adapt in real-time to student interactions, offering a more realistic and nuanced understanding of how technical decisions impact social and environmental factors. This fits perfectly with the authors’ goal of preparing engineers to collaborate effectively with communities and consider broader societal implications.

My recent work on modernizing education for the nuclear power industry underscores several key points that complement the authors’ vision. First is the need for rapid technological advancements in training methodologies to keep pace with industry evolution. The nuclear industry is facing a critical juncture where modernizing education and training is essential. The need for cost-effective approaches in training is paramount, especially with a projected increase in the number of nuclear plants and employees. This expansion necessitates scalable and efficient training methods that can accommodate a growing workforce while maintaining high standards of safety and competence.

Second is the importance of addressing emerging demographic shifts and knowledge transfer challenges. As experienced professionals retire, there is an urgent need to transfer knowledge to the next generation of nuclear engineers and technicians. Interactive e-learning environments and mobile accessibility can facilitate this knowledge transfer, making it more engaging and accessible to younger professionals and directly supporting the goal of creating more empathetic and ethically engaged engineers.

Third is the critical need to foster a continuous improvement culture within engineering education. The changing work environment demands adaptable training solutions. The integration of VR and AR technologies in training programs can provide immersive, hands-on experiences even in remote learning settings. This approach enhances the learning experience and improves safety by allowing trainees to practice in risk-free virtual environments.

Even as cutting-edge technologies are reshaping training methodologies, offering a versatile tool kit to optimize effectiveness and stay at the forefront of industry standards, work remains. Key areas to explore include interactive learning approaches and e-learning environments, VR and AR simulations for immersive experiences, AI-powered simulations for realism and adaptability, precision learning technologies for enhanced effectiveness, personalized skill development paths and adaptive learning, gamification for engagement, dynamic learning analytics and predictive analytics for proactive enhancement, and natural language processing to enhance instant support.

By applying the lessons we’ve already learned and the knowledge future studies will certainly bring, and combining these advancements with the authors’ community-centered, ethically driven approach, we can truly prepare the next generation of nuclear engineers. This holistic approach to education and training will enhance the industry’s safety and efficiency and contribute to its long-term sustainability and public acceptance.

Director, ORAU Partnership for Nuclear Energy

Oak Ridge Associated Universities

Nuclear energy has been a “successful failure,” in the words of Vaclav Smil, a distinguished scholar at the University of Manitoba. Even though huge amounts of money and human resources have been invested in nuclear technology, its contribution to global power generation has been very modest and has betrayed expectations. Global warming and heightened energy security awareness after the Russian invasion of Ukraine provide positive reinforcement for using nuclear power in some countries, and the International Energy Agency’s ambitious Net Zero Emissions by 2050 scenario projects nuclear power generation to double by 2050, to 67 exajoules (EJ) from the current 29 EJ. But renewable energy is growing much faster and is expected to play a much larger role in decarbonization, growing over that period to 306 EJ from 41 EJ—a seven-and-a-half-fold increase.

It appears, then, that nuclear power may not be in the mainstream of future energy. Why is this the case? Aditi Verma, Katie Snyder, and Shanna Daly give us an answer.

Current nuclear power is based on large light water reactors. It aims to achieve economies of scale through reactor size, operating as baseload power. Nuclear power plants are usually located in remote areas to supply electricity to distant urban users through high-voltage power grid lines. This represents a large, centralized paradigm of energy supply. NIMBY—not in my backyard—often happens in local communities where residents feel forced to bear more risks than benefits. Disastrous accidents such as occurred at Three Mile Island, Chernobyl, and Fukushima strengthen the arguments against nuclear power. Construction delays have doubled or tripled costs, and the major commercial reactor builders Westinghouse, Toshiba, and Areva have essentially collapsed. Another issue is radioactive waste. Without concrete plans for adequate waste disposal sites, it is irresponsible to pursue nuclear power. Combined, these factors mean the current light water reactor paradigm is not sociopolitically sustainable.

Locals have “rights” to be heard, Verma, Snyder, and Daly argue. They maintain that a hoped-for rise of small modular reactors that are better suited to local needs may provide a chance. Microsoft is considering using SMRs to power its data centers. Dow Chemical is considering a type of SMR for some of its production plants. Canada is considering SMRs for heat for remote mining. In Japan, melted-down radioactive debris left from the Fukushima accident can’t be transported away for treatment, and an innovative type of SMR developed at the Idaho National Laboratory appears capable of handling the material onsite. As with all types of reactors, however, the authors stress that local people and communities must be fully engaged, from design through implementation.

Fusion energy has long been a dream technology. It is expected to produce much less waste, with passive safety and no risk of weaponization. A new, sustainable paradigm may come true with fusion. There are many small-scale commercial fusion reactors under development. The authors’ group of young engineers is working to produce innovative designs with local communities. I sincerely hope they recover “trust, respect, and justice” among locals, and bring a new future to nuclear power.

Executive Director Emeritus, International Energy Agency

Chair of the Steering Committee of the Innovation for Cool Earth Forum

Aditi Verma, Katie Snyder, and Shanna Daly discuss their efforts to modernize the approach to teaching nuclear engineering, specifically moving beyond a singular focus on technical excellence to include engaging communities in participatory design and becoming fluent in ethical, equity-centered communication. Their work is timely as the international interest in deploying a new generation of commercial nuclear products has increased in response to both climate change and energy reliability concerns. Currently, many new commercial ventures look to deploy new nuclear systems in a much broader range of deployment scenarios beyond traditional gigawatt-scale electricity production.

The first commercial deployment of nuclear energy plateaued at supplying 20% of US electricity demand. That plateau was associated with cost increases as well as public pushback against further expanding the use of the technology. To successfully deploy additional nuclear energy in both traditional and new deployment scenarios (e.g., smaller plants closer to population centers, industrial heat uses, or direct off-grid supply to data centers) will require better engagement and input from the communities that will ultimately decide on hosting nuclear technology. Placing that emphasis as part of the basic functions of engineering, as Verma, Snyder, and Daly propose, is a critical step.

Their work is historically significant for the University of Michigan. In 1948, U-M initiated the Michigan Memorial Phoenix Project, a campus-wide initiative that paid tribute to the 579 students and faculty who lost their lives in World War II. The project aimed at understanding the peaceful civilian uses of atomic energy. As former U-M president and dean emeritus of engineering James Duderstadt described: “It is important to recognize just how bold this effort was. At the time, the program’s goals sounded highly idealistic. Atomic energy was under government monopoly, and appeared to be an extremely dangerous force with which to work on a college campus. This was the first university attempt in the world to explore the peaceful uses of atomic energy, at a time when much of the technology was still highly classified.”

Given U-M’s leadership in understanding the first generation of the civilian uses for nuclear technology, it is heartening to see a new generation of young scholars leading national discussions about future uses of the technology and how engineers should approach their field.

Glenn F. and Gladys H. Knoll Department Chair of Nuclear Engineering and Radiological Sciences

Chihiro Kikuchi Collegiate Professor, Nuclear Engineering and Radiological Sciences

University of Michigan

Fierce Planets

Iron snow, helium rain, and diamond icebergs might sound like science fiction, but they are real phenomena occurring within planets due to extreme heat and pressure. In their recent book, What’s Hidden Inside Planets?, planetary scientist Sabine Stanley and science journalist John Wenz guide readers through the enigmatic realms beneath planetary surfaces. They delve into how the interiors of Earth and other planets are intricately linked to the formation and regulation of atmospheres, oceans, earthquakes, and volcanoes. The book and Stanley’s research into these powerful forces inspired the artwork in the traveling exhibition Fierce Planets.

The juried exhibition is a collaboration between Studio Art Quilt Associates and the Johns Hopkins Wavelengths science communication program. Fierce Planets brings together work by artists from around the globe, each interpreting the mysteries of planets and space through fiber art. Their creations range from traditional quilts to fabric assemblages and soft sculptures, all inspired by Stanley’s research. Out of nearly 200 works submitted, 42 were selected to be part of the exhibition.

Fierce Planets isn’t just about aesthetics; it’s about fostering a deeper understanding of and connection to the universe, inviting viewers to explore the beauty and complexity of planets through a unique lens.

Carolina Oneto, Imaginary Places IV, 2023, cotton fabrics, cotton batting, threads for piecing and quilting, 56 x 55 inches.

Carolina Oneto, "Imaginary Places IV," 2023, cotton fabrics, cotton batting, threads for piecing and quilting, 56 x 55 inches.
Anne Bellas, "Soleil et Lunes (Sun and Moons)," 2020, hand dyed, printed, machine pieced, machine quilted, cotton sateen, commercial fabrics, 46 x 36 inches.

Anne Bellas, Soleil et Lunes (Sun and Moons), 2020, hand dyed, printed, machine pieced, machine quilted, cotton sateen, commercial fabrics, 46 x 36 inches.

Claire Passmore, Hot Stuff, 2023, cotton, silk, velvet, felt, tulle, nonwoven fabrics, threads, wool, continuous zip, pvc, fiber filling, dye, acrylic and enamel paint, plastic bottles, gold leaf, mica powder, glue, steel wire, 92.5 x 53 x 49 inches.

Claire Passmore, "Hot Stuff," 2023, cotton, silk, velvet, felt, tulle, nonwoven fabrics, threads, wool, continuous zip, pvc, fiber filling, dye, acrylic and enamel paint, plastic bottles, gold leaf, mica powder, glue, steel wire, 92.5 x 53 x 49 inches.

Hope for Hydrogen

In “Moving Beyond the Hype on Hydrogen” (Issues, Summer 2024), Valerie J. Karplus and M. Granger Morgan provide an excellent assessment of hydrogen’s advantages and significant barriers to market formation. Toyota has more than 30 years of experience with all phases of the hype cycle for hydrogen—innovation, inflated expectations, disillusionment, and enlightenment.

Toyota began developing its hydrogen-powered fuel cell vehicles in 1992, one year after Sony commercialized the lithium-ion battery. Sales of the first hydrogen passenger car, the Mirai, launched in 2014, with the second generation in 2021. Over those 30 years, we saw the initial innovations in fuel cells dramatically improve in unexpected ways. During the same period, we also watched lithium-ion batteries grow into the clear leader in the race to decarbonize passenger cars.

Despite the technical success of the Mirai, the vehicle has struggled in the marketplace due to the difficulties of hydrogen supply and fueling infrastructure. The challenges continue, with high prices for hydrogen at the pump and fuel stations closing. Despite headwinds, at Toyota we find ourselves asking the same question as the article: “Is hydrogen’s long-forecast—and long-hyped—future [as a fuel for transportation] finally here?” There are reasons to be hopeful.

Transportation encompasses more than passenger cars, with about 25% of transportation carbon emissions coming from medium- and heavy-duty commercial transport. While hydrogen will compete with battery electrics in commercial vehicles, both have significant infrastructure challenges. Battery electrics don’t have the same advantages in large vehicles with high mileage as they do for passenger cars. The best choice remains unclear.

Not long ago, the technical barriers for fuel cells in large commercial vehicles seemed insurmountable. But the technology is here today. The key barriers remain in the hydrogen ecosystem: achieving low-cost production, sufficient distribution, and matching of supply and demand. The US hydrogen hubs are an exciting idea for creating a useful hydrogen market, tackling production and multisector consumption in a coordinated way. Initiatives such as the hubs are important to advance the portfolio of hydrogen applications beyond transportation.

The success of hydrogen in commercial transport depends on the answer to the key question the article asks: “Which users of fossil fuels must bear the costs?” Companies that operate commercial vehicles are sensitive to the total cost of ownership. Diesel is a low-cost, energy-dense fuel with an existing infrastructure. While the low cost makes diesel difficult to displace, we must also account for all societal costs. Diesel trucks are large emitters of particulate matter and pollutants, which have severe impacts on health in many communities.

Karplus and Morgan place a 70:30 bet that hydrogen “will become an important part of the portfolio of technologies” for decarbonization. Portfolio is a key word here, and we need to explore all options for commercial transport, including battery electrics, better fuels, and better fuel economy. I don’t know if the 70:30 odds for hydrogen are a good bet or not. But at Toyota, we’re aggressively developing the technologies to try to tilt those odds toward success as strongly as we can.

Vice President of Energy & Materials

Toyota Research Institute

Valerie J. Karplus and M. Granger Morgan clearly describe the potential for hydrogen to decarbonize the US economy and the need for both policy and low-cost ways to safely harness the advantages of hydrogen.

To play its part, the US Department of Energy Hydrogen Program has worked for decades to accelerate technological advances and de-risk industry investments. The program includes the regional clean hydrogen hubs and the Hydrogen Energy Earthshot (aimed at enabling production of 1 kilogram of clean hydrogen for $1 in 1 decade), which are two of the flagship initiatives launched by the Biden-Harris administration. It also includes long-standing efforts across research and development, manufacturing, and financing. Hydrogen is not considered a silver bullet, but is one part of a comprehensive portfolio of solutions to meet the nation’s climate goals. As stated in the US National Clean Hydrogen Strategy and Roadmap, clean hydrogen can cut emissions across sectors such as industry (e.g., iron, steel, and fertilizer production) and heavy-duty transportation, and can enable greater adoption of renewables through long-duration energy storage.

While the authors focus on the hubs, the administration’s investments in clean hydrogen also include several other relevant initiatives. Programs across DOE and the Hydrogen Interagency Taskforce, which encompasses 12 federal agencies, are addressing challenges spanning the entire clean-hydrogen value chain—including siting, permitting, and developing sensors to monitor emissions; ensuring safety; fostering a robust supply chain; establishing fueling stations; and lowering cost across production, delivery, storage, dispensing, and end uses of clean hydrogen. New projects are being launched to share best practices for community engagement to help inform clean hydrogen hubs and other deployments.

Although challenges remain, there has been significant progress resulting from DOE’s decades of leadership and investment in hydrogen technologies. At least 4.5 gigawatts of electrolyzer deployments (not including the hubs) are underway in the United States, up from 0.17 GW in 2021. (One GW is roughly the output of two coal-fired power plants and is enough to power 750,000 homes.) With funding from the Infrastructure Investment and Jobs Act, also known as the Bipartisan Infrastructure Law, adopted in 2021, DOE is enabling 10 GW per year of electrolyzer manufacturing and 14 GW per year of fuel cell manufacturing—an order of magnitude increase over today’s capacity. Thousands of commercial systems in diverse applications such as forklifts, trucks, buses, and stationary power are now operating. DOE funding has led to over 1,080 US patents since 2004 in hydrogen and fuel cell innovations. Over 30 of these DOE-funded technologies have been commercialized and about 65 could be commercial in the next several years.

The Strategy and Roadmap targets 10 million metric tons (MMT) per year of clean hydrogen use by 2030, 20 MMT per year by 2040, and 50 MMT per year by 2050, which will enable up to a 10% reduction in total economy-wide greenhouse gas emissions by 2050. To meet these ambitious goals, it is essential to accelerate deployments and scale up. Recent public announcements add up to 14 MMT per year of planned clean hydrogen production, or over 17 MMT per year including the hubs, pending final investment decisions. While designing and implementing a perfect policy framework is challenging, the current programs and policies in place are having an impact, and the nation must keep up the momentum to realize the full benefits of clean hydrogen for the climate and the economy.

Director, Hydrogen and Fuel Cell Technologies Office

US Department of Energy

A Very Different Voice


Xavier Cortada’s The Underwater is a series of public art installations that reveals the vulnerability of Florida’s coastal communities to rising seas. In the form of murals, crosswalks, concrete monuments, and yard signs, the artworks prominently feature the elevation of the site where they’re located. Through community workshops, apps, and even bus wraps, these works raise awareness, spark conversations about climate, and catalyze civic engagement.

This kind of engagement is vital to me as the chief resilience officer of Broward County, Florida: I’m responsible for leading climate mitigation and adaptation strategies across our 31 municipalities with 1.9 million people. Our land is between 4 and 9 feet above sea level, and we have nowhere to retreat if it floods. A lot of my effort is focused on helping to guide planning and management decisions that support our natural resources as well as our built environment, in addition to working with state, federal, and local agencies on a coordinated strategy to reduce the severity of the impacts of climate change.

In Broward County, we’ve been working on climate initiatives since 2000. But despite all our progress, we’ve really struggled with communications and community engagement. We don’t have the large community-based activist groups that have served as on-the-ground partners in some other places. And we are aware that government isn’t always the best messenger, and that we need a diversity of voices.

I first encountered Xavier’s work at an environmental youth summit with a few thousand students. I was really overwhelmed by the quality of the work and the students’ connection with the history behind it. Xavier explained his own experience as an artist: he had gone to Antarctica and felt the ice melt in his hands and realized how that connected to the communities that he loves. He was using the creative messaging of art to engage and communicate in such a powerful way.


After I met Xavier and discovered his work, we in Broward County began to envision how we might work with him directly. The people who typically work in this arena tend to communicate like I do—I mean, we’re all technical people. No matter how I try to simplify things, I have trouble getting my message across. But Xavier is a very different voice. Talking with him wasn’t like an academic conversation; it was one of really deep connection. What he says has its origin in his heart. We asked Xavier to present to our climate task force, and then we began to plan to work together with schools.

One thing that struck me is that Xavier engages in a different kind of climate conversation. Often people are just overwhelmed and feeling a sense of devastation and loss when it comes to climate change. Xavier is able to talk about providing individuals with agency—that you have an important stake in what’s happening, and you have the capability to be an important messenger. He emphasizes the power of your voice and your participation, that this is about your community.


Another reason we were excited to work with Xavier is that he really understands and cares about young people. He leaves them feeling empowered and capable and invested. When children join a workshop, they learn about sea level rise. They can use an app to see what their local elevation is, and how changes in sea level will affect their school and their neighborhood. They work on making a yard sign with the elevation of their home, and Xavier paints an incredibly beautiful mural in the hallway of their school, with the elevation of the school. Imagine how empowering it is for those 100 students to be the keepers of this knowledge: they’re the experts, they’re able to lead that next level of conversation. I think he leaves all of us feeling like we’re not destined to inherit the problem. We have the capacity to be part of the solution.

We not only hosted workshops at 10 public schools, but we also invited Xavier to speak and hold a workshop at Water Matters Day—our county’s largest environmental event, with 3,000–4,000 attendees annually. Later we organized a community climate conversation with Xavier focused on our central county neighborhoods, and we are installing Xavier’s art on the façade of our county garage. It’s a beautiful tile mural featuring the elevation of the garage in downtown Fort Lauderdale. And just now, actually, I was able to sign off on an additional art installation we will feature right outside our commission chambers.

So much about Xavier’s conversation is trying to get people to pause long enough to ask the question: “What am I looking at?” And to use that art as an opportunity to have conversations that wouldn’t have happened otherwise.


Xavier Cortada, Underwater Elevation Sculpture (Hard Positive), 2023, sustainable concrete.

Cortada is creating a permanent interactive art installation of sustainable concrete elevation sculptures across all the more than 200 parks in Miami-Dade County, Florida. Anyone who scans the sculptures’ QR codes can discover their home’s elevation above sea level and pick up a free Elevation Yard Sign to put in their front yard, joining the countywide installation and raising awareness in their neighborhood.

Xavier Cortada, Underwater Elevation Sculptures (Hard Positive Numbers), 2023, sustainable concrete.

These numbers are used in Cortada’s large sculptures, each depicting a park’s elevation above sea level. The concrete markers are made from seawater, recycled aggregate, and nonmetallic reinforcement, aiming to use building materials that are less energy-intensive and better for the environment.

Xavier Cortada's Antarctic Ice Paintings

In 2007, Cortada created a series of paintings using sea ice, glacier, and sediment samples provided by scientists working in Antarctica, making the continent itself both the subject and the medium of the art. The paintings, serene yet foreboding, are a poignant reflection on the impact of climate change on the artist’s hometown of Miami—and the world. Made in Antarctica, these artworks use the ice that threatens to melt and submerge coastal cities.

In response to his Antarctic Ice Paintings created 18 months earlier, Cortada used Arctic ice to produce a series of paintings aboard a Russian icebreaker returning from the North Pole. He taped pieces of paper to the top deck of the icebreaker and placed North Pole sea ice and paint on them. As the icebreaker journeyed south from 90 degrees north, carving through the sea ice by sliding on top of it and then crushing down through it, Cortada’s ice melted and the pooled water moved as it evaporated, creating the Arctic Ice Paintings. He sought to use the vessel’s motion to capture the existence of sea ice before it vanished, understanding that traversing the Arctic Ocean in the decades to come would not require an icebreaker.

Second-Order Effects of Artificial Intelligence

In “Governing AI With Intelligence” (Issues, Summer 2024), Urs Gasser provides an insightful survey on regulating artificial intelligence during a time of expanding development of a technology that has tremendous upside potential but also downside risk. His article should prove especially valuable for policymakers faced with making critical decisions in a rapidly changing and complex technological landscape. And while it is difficult enough to make decisions based on the direct consequences of AI technologies, we’re now beginning to understand and experience some second-order effects of AI that will need to be considered.

Two examples may prove illustrative. Focusing on generative AI, we’ve witnessed over the past decade or so rapid development and scaling of the transformer architecture and diffusion models that have revolutionized how we generate content—text, images, software, and more. Applications based on these developments (e.g., ChatGPT, Copilot, Midjourney, Stable Diffusion) have become commonplace, used by millions of people every day. Much has been observed about increases in worker productivity as a consequence of using generative AI, and indeed there are now numerous careful empirical studies demonstrating positive effects on productivity in, for example, writing, software development, and customer service. But as worker productivity goes up, will there be reduced need for today’s quantity of workers? Indeed, the investment firm Goldman Sachs has estimated that 300 million jobs could be lost or diminished by AI technology. The company goes on to estimate that 25% of current work tasks could be automated by AI, with particularly high exposures in administrative and legal positions. Still, the company also points out that workforce displacement due to automation has historically been offset by the creation of new jobs following technological innovation, and that those new jobs actually account for employment growth in the long run.

A second example relates to AI energy consumption. As generative AI technologies and applications scale with more and more content being generated, we are learning more about the energy consumed in training the models and in generating new content. From a global energy consumption view, one estimate holds that by 2027 the AI sector could consume as much energy as a small country (e.g., the Netherlands)—potentially representing half a percent of global energy consumption by then. Taking a more granular view, researchers have reported that generating a single image with a powerful AI model uses as much energy as charging an iPhone, and that a single ChatGPT query consumes nearly as much energy as 10 Google searches. Here again there may be some good news, as it may well be possible to use AI to find ways to reduce global energy usage that more than make up for the increased energy needed to power modern AI.

As use of AI expands, these and other second-order (and higher) effects will likely prove increasingly important to consider as we work to develop policies that lead to responsible governance of this critical technology.

Professor Emeritus, Department of Computer Science

Southern Methodist University

There is much wisdom in Urs Gasser’s essay: verbal and visual maps of the emerging variety of governance approaches across the globe and some cross-cutting insights. Especially important is the call for capacity-building to equip more people and sectors of societies to contribute to the governance and development of what seems the most significant technological development since the invention of the book. Apple’s Steve Jobs once said, “Technology alone is not enough. It’s technology married with the liberal arts, married with the humanities, which yields us the results that make our hearts sing.” Ensuring that the “us” here includes people from varied backgrounds, communities, and perspectives is not only a matter of equity and fairness, but also important to the quality and trustworthiness of the tools and their uses.

In mapping governance approaches, Gasser includes, under constraining and enabling norms, efforts that seek to “level the playing field” through AI literacy and workforce training, as well as transparency or disclosure requirements that “seek to bridge gaps in information between tech companies and users and societies at large.” Here the challenge is not simply unequal knowledge and resources, but also altering who is “at the table” where vital decisions about purposes, design choices, risk levels, and even data sources are made.

Individual “users” are not organized, and governments and companies are more likely to be hearing from particularly powerful and informed sectors in designing governance approaches. What would it take to ensure the active involvement of civil society organizations—workers’ groups, Indigenous groups, charitable organizations, faith-based organizations, professional associations, and scholars—in not only governance but also design efforts?

Experiments in drawing in such groups and individuals would be a worthy priority for governance initiatives. Meanwhile, Gasser’s guide offers an effective place for people from many different sectors to identify where to try to take part.

300th Anniversary University Professor

Harvard University

Much talk about governing artificial intelligence is about a problematic balance. On the one hand, there are those who caution that regulation (not even overregulation) will slow innovation. This position rests on two assumptions, each requiring substantiation, not assertion: that regulation retards frontier thinking, and that innovation brings wider social benefit beyond profit for the information economy. On the other hand, there are those who fear the risks of AI, some already apparent and many yet to be realized. Whether or not the balance debate is grounded in more than supposition, it raises fundamental questions about how we value and prioritize AI.

Urs Gasser is confident of a growing consensus around the challenges and benefits attendant on AI, but less so about the “guardrails” necessary to ensure its safe and satisfying application. He holds that there is a galvanizing of norms that might provide form for governing AI intelligently. No doubt, there have been decades of deliberation in formulating an “ethics” to influence the attribution and distribution of responsibility, without coming closer to agreement on what degree of risk we are willing to tolerate for what benefits, no matter the principles applied to either. National, industrial, and global energies directed at governance exhibit a diversity of strategies and languages that in actuality is as much evidence of a failure to achieve a common intelligence for governing AI as of an emerging consensus. This is not surprising when politically, economically, commercially, and scientifically so much hope is invested in an AI-led recovery from degrowth, and in AI answers to impending global crises.

Are we advancing toward a more informed governance future for AI by concentrating on similarities and divergences in systems, means, aims, and purposes across a tapestry of regulatory styles? Even if patterns can be distilled, do they indicate anything beyond Gasser’s “islands of cooperation in the oceans of AI governance”? He is correct in arguing that guardrails forming boundaries of permission, within which a healthy alliance between human decisionmaking and AI probabilities can operate, are essential if even a viable focus for AI governance is to be determined. However, with governance following what he calls “the narrow passage allowed by realpolitik, the dominant political economy, and the influence of particular political and industrial incumbents,” the need for innovation in AI governance is pressing.

So, we have the call to arms. Now, what to do about it in practical policy terms? Recently, when asked what the greatest danger posed by AI was, a renowned data scientist immediately responded “dependency.” Our digital universe has enveloped us in a culture of convenience, making it almost impossible to determine whether the ways we depend on AI-assisted technology are good or bad. Beyond this crucial governance question, it is imperative to reposition how we prioritize intelligence. Why should generative AI predominate over the natural processes of human deliberation? From the Enlightenment to the present age of techno-humanism, scientific managerialism has come to dominate reason and rationality. It is time for governance to show courage in valuing human reasoning when measuring the benefits of scientific rationality.

Distinguished Fellow, British Institute of International and Comparative Law

Honorary Professorial Fellow of the Law School, University of Edinburgh

The Politics of Wastewater Reuse

In “Industrial Terroir Takes on the Yuck Factor” (Issues, Summer 2024), Christy Spackman describes clever attempts to overcome the prevailing challenge of public skepticism toward the prospect of potable water reuse.

The effects of infrastructure have long been recognized by urban historians as profound and path dependent, albeit indeterminate. In the case of water reuse, once the initial water and sewer systems are laid, the accompanying social, economic, and cultural institutions serve to entrench a commitment to waterborne sanitation systems materially, culturally, and politically. Thus, in the United States and elsewhere, the flush toilet and the treatment-based approach to managing water quality result in investment in water purification technologies and, ultimately, in finding beneficial uses for wastewater. In this regard, the “treat, treat, and treat again” industrial terroir supports the reasonableness, acceptability, and inevitability of reusing wastewater for drinking water.

By adapting to existing infrastructure, including political commitment to flush toilets and the removal of pollutants via centralized wastewater treatment, engineers apply new tools and new procedures to move a finite amount of water through higher levels of treatment.

For boosters of potable water reuse, purity and security are key discursive concepts. At the molecular level, treatment processes remove all markers of “place” from water, but as soon as we change our scale, as Spackman does, we understand that urban drinking water is an intimate and embodied experience. Further, water is geopolitical. Water infrastructures are, in essence, social arrangements. The focus on the molecular scale provides little opportunity to consider the inevitable changes in social power that accompany this shift. Who gains, who suffers, and who pays for this change?

By adapting to existing infrastructure, including political commitment to flush toilets and the removal of pollutants via centralized wastewater treatment, engineers apply new tools and new procedures to move a finite amount of water through higher levels of treatment. As a result, highly treated wastewater is seen as a solution to many of the growing challenges of urban water scarcity in many regions. Although purported (by Spackman and others) to be a radical reorganization of water governance, potable water reuse is an approach that minimally disrupts the fundamental infrastructure and inertia of large sociotechnical systems. In this case, innovative new technologies have been designed to retrofit and protect outdated infrastructures in a process the political scientist Langdon Winner described as “reverse adaptation.” This preference to adapt to the established infrastructure has meant that alternative means of managing human bodily wastes have never realistically been considered.

The universal ideal of modern sanitation is not complete, nor is it necessarily stable. Cities across the globe are facing serious water, energy, and transportation challenges. The prospect of potable water reuse offers a unique opportunity to make connections, discover alternatives, and acknowledge that urban transition is inevitable. Water development aimed at providing greater water security with the least social disruption over the short term may be a maladaptation. The question is not solely if the public will accept that potable water reuse can be done safely, but if reuse will lend itself to a sustainable and just transition at the city and regional scale.

Associate Professor, Department of Geography

University of Nevada, Reno

In a seminal lecture in Dallas in 1984, later published as H2O and the Waters of Forgetfulness, the philosopher, priest, and social critic Ivan Illich argued for a separation of water and H2O. The latter, a modernist creation, was “stuff” produced by an industrial society and circulated through pipelines to deodorize and sanitize urban space. Devoid of social and spiritual meaning, H2O was reduced to the odorless and tasteless substance we became familiar with in school textbooks, but perhaps rarely encountered in our everyday lives.

Christy Spackman makes it clear that the struggle between water and H2O continues to animate contemporary concerns around “scarcity” and “reuse.” The scientific and technological labor that transformed water into H2O involved a two-step process. The first required the material reconstruction of water by removing “undesirable” salts, metals, and minerals, and purifying it by adding chlorine (and in many parts of the world “fortifying” it with fluoride). The second step involved reworking the sensorial and social script around H2O and resocializing it into potable water.

The acceptability of direct potable reuse of wastewater has to negotiate this challenge of resocialization. Recycled wastewater has to regain its place in society. It has to shed the history of its recent defilement by illustrating that what is being used to produce beer is not just engineered H2O, but potable water.

Technologies can materially reconstitute H2O in myriad ways and claim it to be “just straight water,” but to users water quality remains a product of history.

Matter constitutes memory in water—where it has been (place), for how long (time). When we add and subtract matter in water, we reconstitute its relation to place and time. One might assume that since modern (and secular) water emerges out of a continual process of addition and subtraction, it should not be difficult to convince users to drink recycled water. The “yuck” factor that Spackman describes contradicts that logic. Technologies can materially reconstitute H2O in myriad ways and claim it to be “just straight water,” but to users water quality remains a product of history. The engineer can erase the material history of water, but the user will remember its past relationships with place and time. This shows up in Spackman’s discussion of the humorous expression “poop beer,” which refers to beer made with recycled water. Resocializing H2O as water, therefore, requires not only reconstituting matter in water but also the users’ memory of that water.

The author’s lively essay illustrates the continued contest of competing imaginations around water in Arizona. I cannot help but wonder how memories will be reconstituted in Flint, Michigan, or Jackson, Mississippi, where water has the color of lead and the odor of racism. As place forcefully asserts its presence in water in these sites, it reminds us that increasing demand for recycling wastewater for potable reuse will soon have to contend not only with matters of taste but also with concerns of justice.

Lecturer, Water Governance

IHE Delft Institute for Water Education

The Netherlands

Combining Tradition and Technology

In “Reform Federal Policies to Enable Native American Regenerative Agriculture” (Issues, Spring 2024), Aude K. Chesnais, Joseph P. Brewer II, Kyle P. Whyte, Raffaele Sindoni Saposhnik, and Michael Kotutwa Johnson provide a useful baseline on the history of regenerative agriculture and its use on tribal lands. Inherent to tribal producers, regenerative agriculture has been practiced since time immemorial using traditional ecological knowledge (TEK), which works with ecosystem function through place-based innovation.

Often, TEK is dismissed despite thousands of years of responsive adaptation. Many methods used by tribal producers yield outcomes equivalent to or better than those of the practices stipulated for reimbursement by the US Department of Agriculture’s Natural Resources Conservation Service, yet they are not always eligible for the same payments because eligibility is based on prescribed methods rather than equivalent outcomes.

Native systems recognize that soil health cannot be siloed from water quality, habitat preservation, or any other element because of the impact of its interconnectedness across all parts of the ecosystem.

TEK is a living adaptive science that uses traditional knowledge fused with current conditions and new technologies to create innovative Indigenous land stewardship. Not only do Native systems of regenerative agriculture assist in carbon sequestration, but they also focus on whole ecosystem function and interaction. This creates a regenerative system that is more sustainable over the long term. Native systems recognize that soil health cannot be siloed from water quality, habitat preservation, or any other element because of the impact of its interconnectedness across all parts of the ecosystem.

The right to data sovereignty, resource protection, and cultural information is integral for the progress of regenerative agriculture on tribal lands. From historical inequities to current complex jurisdictional issues, Native producers face challenges not faced by other producers. Most tribal land is marginalized, contaminated, less productive, and thought to be less desirable. Tribes have experienced historical barriers that have led to problems in accessing or controlling their own data and to misuse of their data by outside entities. Moving in the right direction, tribally-focused data networks will help tribal nations combine tradition and technology for optimal land stewardship.

Natural Resources Director

Intertribal Agriculture Council

The Anthropocene: Gone But Not Forgotten

In “A Fond Farewell to the Anthropocene” (Issues, Spring 2024), Ritwick Ghosh advances an insightful—and often neglected—analysis. The long-running controversies around the geophysical science of the Anthropocene have not only demonstrated the political nature of the scientific enterprise itself. More importantly, as Ghosh attests, they have illustrated how the political questions that define society-nature relations tend to be covered up or suppressed by the very attempt to displace political conflict to the assumedly neutral terrain of science.

Thus, the nonrecognition of the Anthropocene as a geological epoch by the International Union of Geological Sciences is to be truly welcomed. It formally ends the inherently fraudulent attempt to base decisions about the fate and future of Earth and its inhabitants on a “scientific” notion rather than on a proper political basis. Indeed, I would argue that the rejection makes it possible to foreground the political itself as the central terrain on which to debate and act on policies to protect and perhaps even improve the planet. Politicizing such questions does not depend on the inauguration of a geophysical epoch. We already know that some forms of human action have profound terra-transforming impacts, with devastating consequences for all manner of socio-ecological constellations.

The socio-ecological sciences have systematically demonstrated how social practices radically transform ecological processes and produce often radically new socio-physical assemblages. The most cogent example of this is, of course, climate change. The social dimensions of geophysical transformations demonstrate beyond doubt the immense inequalities and social power relations that constitute Earth’s geophysical conditions.

The nonrecognition of the Anthropocene as a geological epoch by the International Union of Geological Sciences is to be truly welcomed.

The very notion of the Anthropocene decidedly obfuscated this uncomfortable truth. Most humans have no or very limited impact on Earth’s ecological dynamics. Rather, a powerful minority presently drives the planet’s future and shapes its decidedly uneven socio-ecological transformation. Humanity, in the sense that the Anthropocene (and many other cognate approaches) implies, does not exist. It has in fact never existed. As social sciences have systematically demonstrated, it is the power of some humans over others that produces the infernal socio-environmental dynamics that may threaten the futures of all.

Abandoning the Anthropocene as a scientific notion opens, therefore, the terrain for a proper politicization of the environment, and for the potential inauguration of new political imaginaries about what kind of future world can be constituted and how we can arrange human-nonhuman entanglements in mutually nurturing manners. And this is a challenge that no scientific definition can answer. It requires the courage of the intellect that abandons any firm ontological grounding in science, nature, or religion and embraces the assumption that only political equality and its politicization can provide a terraforming constellation that would be supportive for all humans and nonhumans alike.

Professor of Geography

University of Manchester, United Kingdom

Ritwick Ghosh closes the door on this newly named period of geological time—without fully understanding the scientific debate. Let me make it very clear, we are in the Anthropocene. Science shows that we are living in a time of unprecedented human transformation of the planet. How these manifold transformations of Earth’s environmental systems and life itself are unfolding is messy, complex, socially contingent, long-term, and heterogeneous. Most certainly, they cannot be reduced to a single thin line in the “mud” dividing Earth’s history into a time of significant human transformation and a time before. This is why geologists have rejected the simplistic approach of defining an Anthropocene Epoch beginning in 1952 in the sediments of Crawford Lake, in Canada.

Science shows that we are living in a time of unprecedented human transformation of the planet.

Geologically, the Anthropocene is better understood as an ongoing planetary geological event, extending through the late Quaternary: a broad general definition that captures the diversity, complexity, and spatial and temporal heterogeneity of human societal impacts. By ending the search for a narrow epoch definition in the Geologic Time Scale and building instead on the more inclusive common ground of the Anthropocene Event, attention can be turned toward more important and urgent issues than a start date.

The Anthropocene has opened up fertile ground for interdisciplinary advances on crucial planetary issues. Understanding the long-term processes of anthropogenic planetary transformation that have resulted in the environmental and climate crises of our times is critical to help guide the societal transformations required to reduce and reverse the damage done—while enhancing the lives of the planet’s 8 billion people. The Anthropocene is as much a commentary on societies, economic theory, and policies as it is a scientific concept. So I say in response to Ritwick Ghosh: welcome to the Anthropocene. The Anthropocene Epoch may be dead, but the Anthropocene Event and multiple other interpretations of the Anthropocene are alive and developing—and continually challenging us to do something about the polycrisis facing humanity.

Professor of Earth System Science, Department of Geography

University College London

Global Diplomacy for the Arctic

The Arctic was long known as a region where the West and Russia were able to find meaningful ways to collaborate, motivated by shared interests in environmental protection and sustainable development. The phenomenon even had a name: Arctic exceptionalism. Most of that was lost when Russia invaded Ukraine in February 2022.

There has been much hand-wringing about the extent to which the severing of political and economic ties should apply to scientific collaboration. In the Arctic, science has often persevered even when other forms of engagement were cut off. The International Polar Year 1957–58, the 1973 Agreement on the Conservation of Polar Bears, and the 1991 Arctic Environmental Protection Strategy are prominent examples.

A culture of Arctic scientific collaboration has also defined the work of the Arctic Council, the region’s main intergovernmental forum. Incremental efforts to resume collaboration have focused on allowing the council’s Working Groups—populated largely by experts and researchers, and focused on scientific projects—to resume their work, albeit virtually rather than in person. There have been many discussions on whether the Arctic Council should continue without Russia; the conventional wisdom is that climate change and other issues are so important that we can’t afford to cut ties completely.

In the Arctic, science has often persevered even when other forms of engagement were cut off.

Academics and scientists were often encouraged to collaborate with Russians on Arctic research between 1991 and 2021. Now it is becoming taboo. As Nataliya Shok and Katherine Ginsbach point out in “Channels for Arctic Diplomacy” (Issues, Spring 2024), “The invasion prompted many Western countries to impose a range of scientific sanctions on Russia … The number of research collaborations between Russian scientists and those from the United States and European countries has fallen.” In fact, a colleague of mine was terminated from the University of Lapland for attending a conference in Russia where he spoke about climate change cooperation.

Shok and Ginsbach do an admirable job of framing this context. But they go beyond that, reminding us of the importance of scientific collaboration on human health in the Arctic region. Some of us may recall, and the authors recount, when a heat wave in 2016 resurfaced anthrax bacteria long buried in permafrost in Russia’s Arctic Yamal Peninsula. The outbreak killed thousands of reindeer and affected nearly a hundred local residents. It had us asking, what else will a warmer Arctic bring back into play? A study conducted by a team of German, French, and Russian scholars before the invasion of Ukraine sought to help answer this, identifying 13 new viruses revived from ancient permafrost.

This type of research is now under threat. There’s a case to be made that regional collaboration on infectious disease is even more urgent than that on melting permafrost or other consequences of climate change. It’s not a competition, but in general, better understanding Arctic sea ice melt or Arctic greening won’t prevent climate change from happening. Understanding the emergence of new Arctic infectious diseases, by contrast, can be used to prevent outbreaks. Shok and Ginsbach recommend, at a minimum, that we establish monitoring stations in the high-latitude Arctic to swiftly identify pathogens in hot spots of microbial diversity, such as mass bird-nesting sites.

There is no easy answer to the question of whether or how to continue scientific collaboration with Russia in the wake of the illegal invasion of Ukraine. But it is undoubtedly a subject that needs contemplation and debate. Shok and Ginsbach provide a good start at that.

Managing Editor, Arctic Yearbook

Director of Energy, Natural Resources and Environment, Macdonald-Laurier Institute, Ottawa, Ontario

The COVID-19 pandemic powerfully demonstrated the importance of international scientific cooperation in addressing a serious threat to human health and the existence of modern society as we know it. Diplomacy in the field of science witnessed a surge in the race to find a cure for the SARS-CoV-2 virus. The thawing Arctic region is at risk of giving rise to a new virus pandemic, and scientific collaboration among democratic and authoritarian regimes in this vast geographical area should always be made possible.

International science collaboration and science diplomacy, however altruistic, risk being run over by the global big-power rivalry between players such as Russia, China, and the United States, with the European Union and the BRIC countries acting as players in between. In its recent report From Reluctance to Greater Alignment, the German Marshall Fund argues that Russia’s scientific interests in the Arctic, beyond security considerations, are mostly economic, with a focus on hydrocarbon extraction and development of the Northern Sea Route, trumping any environmental or health considerations.

The thawing Arctic region is at risk of giving rise to a new virus pandemic, and scientific collaboration among democratic and authoritarian regimes in this vast geographical area should always be made possible.

Russian and Chinese scientific cooperation in the Arctic has increased significantly since their first joint Arctic expedition in 2016. China was Russia’s top partner for research papers in 2023, and scientists from both countries have downplayed the military implications of their scientific collaborations in the Arctic, emphasizing their focus on economic development. However, many aspects of this collaboration, such as the Arctic Blue Economy Research Center, include military or dual-use applications in space and deep-sea exploration, and have proven links to the Chinese defense sector.

Given the scientific isolation of Russia after its invasion of Ukraine in 2022, scientific collaboration on health and environmental concerns under the auspices of the Arctic Council and other international organizations seems to be among the last benign avenues for Russian scientific collaboration with other Arctic powers and the West at large. Russia’s pairing up with China on seabed and mineral exploration in the Arctic does not, however, strengthen trust and confidence that Russian efforts on health and environmental issues are free of military and security policy considerations.

The last frontier of scientific cooperation for the benefit of health and environmental stability in the Arctic region stands to be overrun by global power politics, with science diplomacy being weaponized as one security policy tool among others. This is a sad reality acknowledged by the seven countries—Canada, Denmark (Greenland), Finland, Iceland, Norway, Sweden, and the United States—that, along with Russia, exercise sovereignty over lands within the Arctic Circle. (The seven suspended cooperation with Russia in the Arctic Council after its invasion of Ukraine.) It is therefore worth considering whether Russia’s Arctic research interests in the fields of health and environment would benefit from a decoupling from China and any other obvious military or dual-use application.

Senior Fellow, Transatlantic Defense and Security

Center for European Policy Analysis, Washington DC

Boosting Hardware Start-ups

In “Letting Rocket Scientists Be Rocket Scientists: A New Model to Help Hardware Start-ups Scale” (Issues, Spring 2024), John Burer effectively highlights the challenges these companies face, particularly in the defense and space industries. The robotics company example he cites illustrates the pain points of rapid growth coupled with physical infrastructure, demonstrating the different dynamics of hardware enterprises as compared with software.

However, I believe the fundamental business issue for hardware start-ups is generating stable, recurring revenue when relying on sales of physical items that bring in a one-time influx of revenue but bear no promise of future revenue. Consider consumer companies such as Instant Pot and Peloton, cautionary tales that rode a wave of virality to high one-time sales and then suffered when they failed to create follow-on products to fill production lines and pay staff salaries.

Further analysis of the issues Burer raises would benefit from exploring how the American Center for Manufacturing and Innovation’s (ACMI) industry campus model or other solutions directly address this core problem of revenue stability that any hardware company faces. Does another successful product have to follow the first? Is customer diversity required? Even hardware companies focusing solely on national security face this problem.

While providing shared infrastructure is valuable, more specifics are needed on how ACMI bridges the gap to full-scale production beyond just supplying space. Examining the broader ecosystem of hardware-focused investors, accelerators, and alternative models focused on separating design and manufacturing is also important. The global economy has undergone significant reconfiguration, with much of the manufacturing sector organizing as either factoryless producers of goods, focusing on the core competencies of product invention and support, or providers of production-as-a-service, focusing on supply chain management and pooling demand. This highly digitally coordinated model can’t work for every product, but the world looks very different from the golden age of aerospace, when it made sense to make most things in-house or cluster around a local geographic sector specialized in one industry.

Overall, Burer identifies key challenges, but the hardware innovation community needs a broader conversation on business demands, especially around revenue stability, a wider look at the hardware start-up ecosystem, and concrete evidence of the ACMI model’s impact. I look forward to seeing this important conversation continue to unfold.

Senior Fellow, Center for Defense Concepts and Technology, Hudson Institute

Executive Partner, Thomas H. Lee (THL) Partners

The author is a former program manager and office deputy director of the Defense Advanced Research Projects Agency

John Burer eloquently describes a new paradigm to strategically assemble and develop hardware start-up companies to enhance their success within specific industrial sectors. While the article briefly mentions the integration of this novel approach into the spaceflight marketplace, it does not fully describe the tremendous benefits that a successful space systems campus could provide to the government, military, and commercial space industries, as well as academia. Such a forward-thinking approach is critical to enable innovative life sciences and health research, manufacturing, technology, and other translational applications to benefit both human space exploration and life on Earth.

The advantages of such an approach are clearly beneficial to many research areas, including space life and health sciences. These research domains have consistently shown that diverse biological systems, including animals, humans, plants, and microbes, exhibit unexpected responses pertinent to health that cannot be replicated using conventional terrestrial approaches. However, important lessons learned from previous spaceflight biomedical research revealed the need for new approaches in our process pipelines to accelerate advances in space operations and manufacturing, protect the health of space travelers and their habitats, and translate these findings back to the public on Earth.

A well-integrated, holistic space campus system could overcome many of the current gaps in space life sciences and health research by bringing together scientists and engineers from different disciplines to promote collaboration; consolidate knowledge transfer and retention; and streamline, simplify, and advance experimental spaceflight hardware design and implementation. This type of collaborative approach could disrupt the usual silos of knowledge and experience that slow hardware design and verification by repeatedly requiring reinvention of the same wheel.

A well-integrated, holistic space campus system could overcome many of the current gaps in space life sciences and health research.

Indeed, the inability of current spaceflight hardware design and capabilities to perform fully automated and simple tasks with the same analytical precision, accuracy, and reproducibility achieved in terrestrial laboratories is a major barrier to space biomedical research—and creates unnecessary risks and delays that impact scientific advancement. In addition, the inclusion and support of manufacturing elements in a space campus system can allow scaled production to meet the demands and timelines required for the success of next-generation space life and health sciences research.

The system described by Burer has clear potential to optimize our approach to such research and can lead to new medical and technological advances. By strategically nucleating our knowledge, resources, and energy into a single integrated and interdisciplinary space campus ecosystem, this approach could redefine our concept of a productive space research pipeline and catalyze a much-needed change to advance the burgeoning human spaceflight marketplace while “letting rocket scientists be rocket scientists.”

Professor, School of Life Sciences

Biodesign Center for Fundamental and Applied Microbiomics, Biodesign Institute

Arizona State University

Aerospace Technologist, Life Sciences Research, Biomedical Research and Environmental Sciences Division

NASA Johnson Space Center, Houston, Texas

The Naval Surface Warfare Center Indian Head Division (NSWC IHD) was founded more than 130 years ago as the proving ground for naval guns, and later shifted focus to the research, development, and production of smokeless powder. We continue as a reliable provider of explosives, propellants, and energetic materials for ordnance and propulsion systems for every national conflict, leading us to be recognized as the Navy’s Arsenal.

But this arsenal now needs rebuilding to strengthen and sustain the nation’s deterrence against the growing power of the People’s Republic of China, while also countering aggression around the world.

At the 2024 Sea-Air-Space Exposition, the chief of naval operations, Admiral Lisa Franchetti, discussed how supporting the conflict in Ukraine and the operations in the Red Sea is significantly depleting the US ordnance inventory. NSWC IHD is an aging facility but has untapped capacity, and the Navy is investing in infrastructure upgrades to restore wartime readiness of its arsenal. This investment will modernize production, testing, and evaluation capabilities to allow for increased throughput while maintaining current safety precautions.

Having nearby cooperative industry partners would reduce logistical delays and elevate the opportunity for collaborations and successful technology demonstrations.

NSWC IHD believes that an industrial complex of the type that John Burer describes is worth investigating. While our facility is equipped to meet current demand for energetic materials, we anticipate increased requests for a multitude of products, including precision-machined parts and composite materials. Having nearby cooperative industry partners would reduce logistical delays and elevate the opportunity for collaborations and successful technology demonstrations.

Such a state-of-the-art campus would also provide a safe virtual training environment for energetic formulations, scale-up, and production processes, eliminating the risks inherent with volatile materials and equipment. This capability would allow the personnel delivering combat capability, to paraphrase Burer, to continue to be rocket scientists and not necessarily trainers.

The Navy recognizes the need to modernize and expand the defense industrial ecosystem to make it more resilient. This will require working in close contact with its partners, including Navy laboratories and NSWC IHD as its arsenal. We must entertain smart, outside-the-box concepts in order to outpace the nation’s adversaries. With these needs in mind, exploring the creation of an industrial campus is a worthwhile endeavor.

Technical Director

Naval Surface Warfare Center Indian Head Division

The growth of the commercial space sector in the United States and abroad, coupled with the increasing threat of adversarial engagement in space, is rapidly accelerating the need for fast-paced development of innovative technologies. To meet the growing demand for these technologies and to maintain the US lead in commercial space activities while ensuring national security, new approaches tackling everything from government procurement processes to manufacturing and deployment at scale are required. John Burer directly addresses these issues and suggests a pathway forward, citing some successful examples including the new initiative at NASA’s Exploration Park in Houston, Texas.

Indeed, activities in Houston, and across the state, provide an excellent confluence of activities that can be a proving ground for the proposed industry campus model in the space domain. The Houston Spaceport and NASA’s Exploration Park are providing the drive, strategy, and resources for space technology innovation, development, and growth. These efforts are augmented by $350 million in funds provided by the state of Texas under the auspices of the newly created Texas Space Commission. The American Center for Manufacturing and Innovation (ACMI), working with the NASA Johnson Space Center, is a key component of the strategy for space in Houston, looking to implement the approach that Burer proposes.

To maintain the US lead in commercial space activities while ensuring national security, new approaches tackling everything from government procurement processes to manufacturing and deployment at scale are required.

There is a unique opportunity to bring together civil, commercial, and national security space activities under a joint technology development umbrella. Many of the technologies needed for exploration, scientific discovery, commercial operation, and national security have much in common, often with the only discriminator being the purpose for which they are to be deployed. An approach that allows knowledge exchange among the different space sectors while protecting proprietary or sensitive information will significantly improve the technology developed, provide the companies with multiple revenue streams, and increase the pace at which the technology can be implemented.

Going one step further and creating a shared-equipment model, which Burer briefly alludes to, would allow small businesses and start-ups access to advanced equipment that would normally be prohibitively expensive, with procurement, installation, and management wasting time and money and limiting the ability to scale. A comprehensive approach such as the proposed industry campus would serve to accelerate research and development, foster more innovation, promote a rapid time to market, and save overall cost to the customer, all helping create a resilient space industrial ecosystem to the benefit of the nation’s space industry and security.

Director, Rice University Space Institute

Executive Board Member, Texas Aerospace Research and Space Economy Consortium

John Burer outlines how the American Center for Manufacturing & Innovation (ACMI) is using an innovative approach to solve an age-old problem that has stifled innovation—how can small businesses go from prototype scale to production when there is a very large monetary barrier to doing so?

The Department of Defense has particularly struggled with this issue, as the infamous “valley of death” has halted the progress of many programs due to lack of government or company funding to take the technology to the next step. This leaves DOD in a position where it may not have access to the most advanced capabilities at a time when the United States is facing multiple challenges from peer competitors.

How can small businesses go from prototype scale to production when there is a very large monetary barrier to doing so?

ACMI is providing a unique solution set that not only tackles this issue but creates an entire ecosystem in which companies can join forces with other companies in the same industrial base sector in a campus-like setting. Each campus focuses on a critical sector of the defense supply chain (critical chemicals, munitions, and space systems) and connects government, industry, and academia together, providing shared access to state-of-the-art machinery and capabilities and creating environments that support companies through the scaling process.

For many small businesses and start-ups, this can be a lifeline. Oftentimes, small companies can’t afford to have personnel with the business acumen to raise capital and build infrastructure and are forced to have their technical experts try to fill these roles—which is not the best model for success. ACMI takes on these roles for those companies, and as Burer states, “lets rocket scientists be rocket scientists”—a much more efficient and cost-effective use of their talent.

One of the most important aspects of the ACMI model is that the government is providing only a small amount of the funding for each campus to get things started, and then ACMI is leveraging private capital—up to a 25 to 1 investment ratio—for the remainder. If this isn’t a fantastic use of taxpayer money, I don’t know what is. At a time when the United States is struggling to regain industrial capability and restore its position as a technology leader, and where it is competing against countries whose governments subsidize their industries, the ACMI model is exactly the kind of innovative solution the nation needs to keep charging ahead and provide its industry partners and warfighters with an advantage.

Founder and CEO, MMR Defense Solutions

Former Chief Technology Officer, Office of the Secretary of Defense, Industrial Base Policy

Given the global competition for leading-edge technology, innovation in electronics-based manufacturing is critical. John Burer describes the US innovation ecosystem as a “vibrant cauldron” and offers an industry campus model that can possibly harness the ecosystem’s energy and mitigate its risks. However, the barriers for an electronics hardware start-up company to participate in the innovation ecosystem are high and potentially costly. While Burer’s model is a great one and can prove effective—witness Florida’s NeoCity and Arizona State University’s SkySong, among others—it does require some expansion in thought.

To build an electronics production facility, start-up costs can run $50 million to $20 billion over the first few years for printed circuit boards and semiconductors, respectively. It can take 18 to 48 months before the first production run can generate revenue. For electronics design organizations, electronics CAD software can range from $10,000 to $150,000 per annual license depending on capability needs. Start-up companies in the defense sector must additionally account for costs where customers have rigorous requirements, need only low-volume production, and expect manufacturing availability for decades. This boils down to a foundational question: How does an electronics hardware start-up with a “rocket scientist” innovative idea ensure viability given the high cost and long road ahead?

How does an electronics hardware start-up with a “rocket scientist” innovative idea ensure viability given the high cost and long road ahead?

One solution for electronics start-ups is to use the campus model, but it may be slightly different from what Burer describes. Rather than a campus, I see a need for what I call a “playground community.” They are similar in that they provide a place for people to interact and use shared resources. But as an innovator, I like the idea of a playground that promotes vibrant interactions between individuals or organizations with a common goal, be it discovery or play. Along with this version of an expanded campus, electronics companies will require community and agility to achieve success.

Expanded campus. A virtual campus concept can be valuable given the high capital expenditure costs for electronics manufacturing. This idea partners companies that have complementary capabilities or manufacturing, regardless of geolocation proximity. Additional considerations in logistics, packaging, or custody for national security are also needed.

Community. Scientists of all types need a community of supporting companies and partners that have common values and goals and capitalize on each other’s strengths. This cross-organizational teaming will allow them to move fast and overcome any challenge together.

Agility. Given the rapid pace of the electronics industry, agility is vitally important. This will require the company and its team community to be able to shift and move together, considering multiple uses of the technology, dual markets, adoption of rapid prototyping and demonstration, modular systems design and reuse, and significant use of automation in all aspects of the business.

Fostering innovative communities in technology development, prototyping, manufacturing, and business partnerships will be required for the United States to maintain competitiveness in the electronics industry as well as other science and technology sectors. As the leader of an electronics hardware security start-up, I am fortunate to have played a role as the allegorical rocket scientist with good ideas, but I am even more glad to be surrounded by a community of businesses and production partners in my playground.

CEO and Founder

Rapid Innovation & Security Experts

Having founded, operated, and advised hardware start-ups for more than 25 years, I applaud the American Center for Manufacturing & Innovation and similar initiatives that aim to bring greater efficiency and effectiveness to one of the most important and challenging of all human activities: the development and dissemination of useful technology. The ACMI model, designed to support hardware start-ups, particularly those in critical industries, offers several noteworthy benefits.

First, the validation by the US government of the problems being solved by campus participants is invaluable. Showing the market that such a significant customer cares about these companies provides credibility and encourages other stakeholders to invest in and engage with them.

Second, the “densification” of resources on an industry-focused campus can yield significant cost benefits. Too often, I have seen early-stage hardware companies fail when key people and vital equipment were too expensive or inconveniently located.

Third, the finance arm of the operation, ACMI Capital, can leverage initial government funding and mitigate the “valley of death” that hardware start-ups typically face. This support should offer a smoother transition from government backing to broader engagement with the investment community, a perennial challenge for companies funded by the Small Business Innovation Research program and similar federal sources. Such funding ensures that promising technologies can scale and be efficiently handed off to government customers.

The “densification” of resources on an industry-focused campus can yield significant cost benefits.

However, while the ACMI model offers significant benefits, it also has potential limitations when applied to industries without the halo effect provided by government funding and customers. When the government seeks to solve a problem, it can move mountains. It is much more challenging to coordinate problem validation and investment in early-stage innovation by multiple nongovernment market participants, with their widely varying priorities, resources, and timelines.

Another potential issue is the insufficient overlap in resource and infrastructure needs that may occur among campus innovators in any given industry. If the needs of these start-ups diverge too widely, the benefits of colocation may diminish, reducing the overall efficiency of the campus model.

Finally, there is the challenge of securing enough capital to fund start-ups through the hardware development valley of death. Despite ACMI’s efforts, the financial demands of scaling hardware technologies are substantial, and without a compelling financial story and the enthusiastic support of key customers, securing sustained investment throughout development remains a critical hurdle.

Given these concerns, some care will be needed when selecting which industries, problems, customers, and start-ups will most benefit from this approach. In this vein, I cannot emphasize enough the need for additional experimentation and “speciation” of entities seeking to commercialize technology.

Still, the ACMI model has already demonstrated success and achieved important progress in enhancing the nation’s defense posture. And the lessons learned will undoubtedly inform future efforts, with successful strategies being replicated and scaled, thus enriching the nation’s technology commercialization toolbox.

I look forward to seeing the continued evolution and impact of this and other such models, as they are vital in bridging the gap between innovation and practical application, ultimately driving technological progress and economic growth.

Managing Director

Interface Ventures

The American Center for Manufacturing & Innovation’s (ACMI) industry campus-based model, as John Burer details in his Issues essay, is an innovative approach to addressing the critical challenges faced by hardware start-up companies in scaling production and establishing secure supply chains. At Energy Technology Center, we feel that the model is particularly timely and essential given the current munitions production crisis confronting the US Department of Defense and the challenges traditionally associated with spurring innovation in a mature technical field. As global tensions rise and the need for advanced defense technologies intensifies, the ability to rapidly scale up production of critical materials and systems becomes a national security imperative. This model has the potential to diversify, expand, and make more dynamic the manufacturing base for energetic materials and the systems that depend on them. By fostering a collaborative environment, these campuses can accelerate innovation, reduce production bottlenecks, and enhance the resilience of the defense industrial base.

From a taxpayer’s perspective, the value of ACMI’s model is immense. By attracting private capital to complement government funding, the model maximizes the impact of public investment. As Burer points out, ACMI’s Critical Chemical Pilot Program, funded through the Defense Production Act Title III Program, has already achieved a private-to-public funding ratio of 16 to 1, demonstrating the efficacy of leveraging different pools of investment capital. Such a strategy not only accelerates the development of critical technologies but also ensures that public funds are used more efficiently than ever, fostering a culture of innovation and modernization within the defense sector.

By fostering a collaborative environment, these campuses can accelerate innovation, reduce production bottlenecks, and enhance the resilience of the defense industrial base.

However, to fully realize the potential of this model, we must be mindful of the risks and pitfalls in the concept. Private investment follows the promise of a return. Challenges that must be addressed include the requirement for steady capital investment, dependency on government support, bureaucratic hurdles, market volatility, intellectual property concerns, scalability issues, and the need for effective collaboration. Ensuring sustained financial support from diverse sources, streamlining the bureaucratic processes in which DOD procurement is mired, developing robust and adaptable infrastructure, maintaining strong government-industry partnerships, protecting intellectual property, diversifying market applications, and fostering a collaborative ecosystem are all essential steps toward overcoming these challenges.

Challenges notwithstanding, ACMI’s industry campus-based model is a timely and innovative solution to the current dilemmas of the US defense manufacturing sector. By creating specialized campuses that foster collaboration and leverage both private and public investments, this model can significantly enhance the scalability, resilience, and dynamism of the manufacturing base for energetic materials and defense systems. Burer is to be applauded for bringing a healthy dose of old-fashioned American ingenuity and entrepreneurship to the nation’s defense requirements.

Founder and CEO

Energy Technology Center

As John Burer observes, start-up companies working on hardware, especially those with applications for national security, face substantial challenges and competing demands. These include not only developing and scaling their technology, but also simultaneously addressing the needs of their growing business, such as developing supply chains, securing manufacturing space that can meet their growing needs, and navigating the intricate maze of government regulations, budget cycles, contracting processes, and the like. This combination of challenges and demands requires a diverse and differentiated set of skills, which early-stage hardware companies especially struggle to obtain, given their limited resources and focus on developing their emerging technology. A better model is needed, and the one Burer identifies and is employing, with its emphasis on building regional manufacturing ecosystems through industry campuses, has significant merits.

Historically, the Department of Defense was the primary source of funding for those working on defense-related technologies. That is no longer the case. As recently noted by the Defense Innovation Unit, of the 14 critical technology areas identified by the Pentagon as vital to maintaining the United States’ national security, 11 are “primarily led by commercial entities.” While this dynamic certainly brings several challenges, there are also important opportunities to be had if the federal government can adapt its way of doing business in the commercial marketplace.

Of the 14 critical technology areas identified by the Pentagon as vital to maintaining the United States’ national security, 11 are “primarily led by commercial entities.”

The commercial market operates under three defining characteristics, and there is opportunity to leverage these characteristics to benefit national security. First, success in the commercial sector is defined by speed to market, and the commercial market is optimized to accelerate the transition from research to production, successfully traversing the infamous “valley of death.” Second, market penetration is a fundamental element of any commercial business strategy, with significant financial rewards for those who succeed; consequently, the commercial market is especially suited to rapidly scale emerging technologies. And third, the size of the commercial market dwarfs the defense market; leveraging this size not only offers a force-multiplier to federal funding, but also creates economies of scale that enable the United States and its allies to compete against adversarial nations that defy the norms of free trade and the rule of international law.

Industry campuses apply the proven model of innovation clusters to establish regional manufacturing ecosystems. These public-private partnerships bring together the diverse range of organizations and assets needed to build robust, resilient industrial capability and capacity. The several programs Burer identifies have already demonstrated the value of this model in harnessing the defining characteristics of the commercial market, including speed, scale, and funding. By incorporating this approach, the federal government is able to amplify the value of taxpayer dollars to improve national and economic security, creating jobs while accelerating the delivery of emerging technologies and enhancing industrial base resilience.

Pathfinder Portfolio Lead (contractor)

Manufacturing Capability Expansion and Investment Prioritization Directorate

US Department of Defense

John Burer presents an innovative approach to supporting manufacturing hardware start-ups. I ran the Oregon Manufacturing Innovation Center for the first six years of its life, and have firsthand experience with hundreds of these types of companies. I can attest: hardware start-ups face distinct challenges with few ready-made avenues to address them.

Expecting “rocket scientists” to navigate these challenges without specialized business support can hinder a start-up’s core technical work and jeopardize its overall success. It is rare, indeed, to find the unicorn that is a researcher, inventor, entrepreneur, negotiator, businessperson, logistician, marketer, and evangelist. Yet the likelihood of success for a start-up often depends on those abilities being present in one or a handful of individuals.

To ensure that the United States can maintain a technological advantage in an increasingly adversarial geopolitical landscape, it is imperative to improve hardware innovation and start-up company success rates. In addition to the ideas that Burer presents, my experience in manufacturing research and innovation suggests the need for open collaboration and comprehensive workforce development. These elements are critical to ensure a cross-pollination of ideas and the availability of trained technicians to scale these businesses.

Hardware start-ups face distinct challenges with few ready-made avenues to address them.

The American Center for Manufacturing and Innovation’s (ACMI) model represents a very promising solution. Burer’s emphasis on colocating start-ups within a shared infrastructure is a significant step forward. Incorporating spaces for cross-discipline and cross-company collaborative working groups and project teams, along with providing regular networking opportunities, will allow them to share knowledge, resources, and expertise and to cultivate a culture of cooperation. This is best enabled through a nonprofit applied research facility that can address the intellectual property-sharing issue, making problem-solving more efficient and empowering those involved to do what they do best. It not only allows scientists to be scientists, but also helps the investor, the government customer, the corporate development professional, and other critical participants understand their importance within a shared outcome.

The shared infrastructure within ACMI campuses can be further expanded by developing shared research and development labs, prototyping facilities, and testing environments. By pooling resources and with government support, start-ups can access high-end technology and equipment that might otherwise be beyond their reach, thus reducing costs and barriers to innovation. Additionally, open innovation platforms can allow companies to post challenges and solicit solutions from other campus members or external experts, harnessing a broader pool of talent and ideas. Think of this as a training ground to head-start companies that would scale more independently within this ecosystem, while allowing corporate and government stakeholders to more effectively scout for solutions. Such an approach can accelerate the development of new technologies and products, benefiting all stakeholders involved.

The ACMI model thus offers a potent opportunity. It can be applied to any sector where hardware innovation is needed to advance the nation’s capabilities. Incorporating open collaboration will be crucial to enable the best outcomes for technology leadership and economic growth. By incorporating these additional elements, the ACMI model can become an even more powerful engine for driving the success of hardware start-ups, ultimately benefiting the broader economy and national security.

Advisor to the President on Manufacturing Innovation

Oregon Institute of Technology

Former Executive Director of the Oregon Manufacturing Innovation Center, Research & Development

John Burer highlights the challenges facing start-ups providing products to the defense and space sectors. More specifically, he lays out the challenges for companies building complex physical objects to obtain the appropriate infrastructure for research, development, and manufacturing. Additionally, he notes the importance of small businesses in accelerating the deployment of new and innovative technologies for the nation’s defense. The article comes on the heels of a Pentagon report that found the US defense industrial base “does not possess the capacity, capability, responsiveness, or resilience required to satisfy the full range of military production needs at speed and scale.”

The imperative is clear. Developing increased domestic research, development, prototyping, and manufacturing capabilities to build a more robust and diversified industrial base supporting the Department of Defense is one of the nation’s most critical national security challenges. Equally clear is that unleashing the power of nontraditional defense contractors and small business is a critical part of tackling the problem.

So how do we do it?

The US defense industrial base “does not possess the capacity, capability, responsiveness, or resilience required to satisfy the full range of military production needs at speed and scale.”

We increase collaboration between government, industry, and academia. Expanding the industrial base to deliver the technologies warfighters need is too large a task for any one of these groups to address alone. It will take the combined power, ingenuity, and know-how of the government, industry, and academia to build a more resilient defense industrial base that can rapidly develop, manufacture, and field the technologies required to maintain a decisive edge on the battlefield.

There is a proven way to increase such collaborative engagement: consortia-based Other Transaction Authority (OTA), the mechanism the DOD uses to carry out certain research and prototype projects. OTAs sit apart from the department’s customary procurement contracts, cooperative agreements, and grants, and provide a greater degree of flexibility.

Consortia bring to bear thousands of small, innovative businesses and academic institutions that are developing cutting-edge technologies in armaments, aviation, energetics, spectrum, and more for the DOD. They are particularly effective at recruiting nontraditional defense contractors into the industrial base, educating them on how to work with the DOD, and lowering the barriers to entry. This provides an established avenue to tap into innovative capabilities to solve the complex industrial base and supply chain challenges the nation faces.

A George Mason University study highlighted the impact that small businesses and nontraditional defense contractors are having on the DOD’s prototyping effort via consortia-based OTAs. Researchers found that more than 70% of prototyping awards made through consortia go to nontraditional defense contractors, providing a proven track record of effective industrial base expansion. Critically, the OTA statute also offers a path to follow-on production to help bridge the proverbial valley of death.

Consortia-based OTAs are an incredibly valuable tool for government, industry, and academia to increase collaboration, competition, and innovation. They should be fully utilized to build a more robust, innovative, and diverse defense industrial base and to address critical challenges. Nothing less than the nation’s security is at stake.

Executive Committee Chair

National Armaments Consortium

The consortium, with 1,000-plus member organizations, works with the DOD to develop and transition armaments and energetics technology

I am known in the real estate world as The Real Estate Philosopher, and my law firm is one of the largest real estate law practices in New York City. John Burer’s brainchild, the American Center for Manufacturing & Innovation (ACMI), is one of our clients—and one of the most exciting.

To explain, let’s take a look at what Burer is doing. He looked at the US defense industry and saw a fragmented sector with major players and a large number of smaller players struggling to succeed. He also saw the defense industry in need of innovation and manufacturing capacity to stay ahead of the world. Burer then had an inspiration about how to bring it all together. As he explained to me early on, it would be kind of like creating miniature Silicon Valleys.

Silicon Valley started out as a think tank surrounding Stanford University. The thinkers, professors, and similar parties attracted more talented people—and ultimately turned into the finest aggregation of tech talent and successful organizations the world has ever seen.

Smaller players will benefit from being part of an ecosystem focused on a single industry.

Why not, mused Burer, do the same thing in the defense industry? In other words, create a campus (or multiple campuses) where the foregoing would come together: thinkers, at universities, as centers of creation; major industry stalwarts to anchor activities; and a swarm of smaller players to interact with the big players. Voila, a mini-Silicon Valley would be born on each campus.

It sounds simple, but this is a tricky thing to put together. Fortunately, Burer is not just a dreamer, but also solid on the nuts and bolts, so he proceeded with logical steps.

The first step was gaining governmental backing. In landing a $75 million contract from the Department of Defense, Burer picked up both dollars and credibility to jump-start his venture. This became ACMI Federal, the first prong of the ACMI business.

The second step was acquiring and building the campuses. These are real estate deals and, as real estate players know all too well, you don’t just snap your fingers and a campus appears. You need a viable location, permits, deals with anchor tenants, lenders and investors, and much more. So Burer created another prong for the business, called ACMI Properties.

In the third step, Burer realized that many of the smaller occupants of the campuses would be start-ups, which are routinely starved for cash. So he created yet another prong for the business, called ACMI Capital. This is essentially a venture capital fund to back the smaller players.

Now Burer had it all put together: a holistic solution for scaling manufacturing. The campuses will spearhead innovation, critical to US defense. Smaller players will benefit from being part of an ecosystem focused on a single industry. And investors will be pleased that their investments offer solid upside coupled with strong downside protection.

Adler & Stachenfeld

The author is a member of the ACMI Properties’ Advisory Board

Let’s be very clear: the US government, including the Department of Defense, does not manufacture anything. However, what the government does do is establish the regulatory frameworks that allow manufacturing to flourish or flounder.

In this regard, John Burer eloquently argues that the DOD needs new acquisition strategies to meet the logistical needs of the military services. Fortunately, at the insistence of Congress, the DOD is finally taking action to strengthen and secure the defense industrial base. In February 2021, President Biden signed an executive order (EO 14017) calling for a comprehensive review of all critical supply chains, including the defense industrial base. In February 2022, the DOD released its action plan titled Securing Defense-Critical Supply Chains.

At the insistence of Congress, the DOD is finally taking action to strengthen and secure the defense industrial base.

The American Center for Manufacturing & Innovation (ACMI) is working to address two of the critical recommendations in the action plan: one focused on supply chain vulnerabilities in critical chemicals, the other on growing the industrial base for developing and producing hypersonic missiles and other hypersonic weapons. As Burer describes, the center’s approach uses an industry campus model. The approach is not new to the DOD. It is being quite successfully used in two other DOD efforts that I am very familiar with: the Advanced Regenerative Manufacturing Institute, which is working to advance biotechnology, and AIM Photonics, which is devoted to advancing integrated photonic circuit manufacturing technology. Each is one of nine manufacturing innovation institutes established by the DOD to create an “industrial common” for manufacturing critical technologies.

A key to the success of ACMI and these other initiatives is that the DOD invests in critical infrastructure that allows shared use by small companies, innovators, and universities. This allows for collaboration across all members of the consortium, ensuring that best practices are shared, shortening development timelines, and ultimately driving down risk by having a common regulatory and safety environment. Anything that drives down program or product risk is a winner in the eyes of the DOD.

ACMI is still somewhat nascent as an organization. While it has been successful in securing DOD funding for its Critical Chemical Pilot Program and subsequently for its munitions campus, only time will tell if ACMI will be able to address the confounding supply chain issues surrounding explosive precursors, explosives, and propellants that are absolutely critical to the nation’s defense.

Department of Chemistry and Biochemistry, University of South Carolina

The author has 35 years of military and civilian service with the US Army, is a retired member of the Scientific and Professional cadre of the federal government’s Senior Executive Service, and served as the US Army Deputy Chief Scientist

Promethean Sparks

Inspired by the National Academy of Sciences (NAS) building, which turns 100 this year, sixth-grade students at the Alain Locke School in West Philadelphia created the Promethean Sparks mural. The students collaborated with artist and educator Ben Volta to imagine how scientific imagery in the NAS building’s Great Hall—from the Prometheus mural by Albert Herter and the golden dome by Hildreth Meière—might look if recreated in the twenty-first century. Their vibrant mural is exhibited alongside a timeline of the NAS building, which depicts the accomplishments of the Academy in the context of US and world events over the past century.

Working with Mural Arts Philadelphia, students merged diverse scientific symbols to create new imagery and ignite new insights. Embodying a collective exploration of scientific heritage, this project empowered the students as creators. The students’ collection of unique designs reflects a journey of experimentation, learning, and discovery. Embracing roles beyond their student identities, they engaged as artists, scientists, and innovators.

Embodying a collective exploration of scientific heritage, this project empowered the students as creators.

Ben Volta works at the intersection of education, restorative justice, and urban planning. He views art as a catalyst for positive change in individuals and the institutions surrounding them. After completing his studies at the University of Pennsylvania, Volta began collaborating with teachers and students in Philadelphia public schools to create participatory art that is both exploratory and educational. Over nearly two decades, he has developed this collaborative process with public schools, art organizations, and communities, receiving funding for hundreds of projects in over 50 schools.

Mural Arts Philadelphia, the nation’s largest public art program, is rooted in the belief that art ignites change. For 40 years, Mural Arts has brought together artists and communities through a collaborative process steeped in mural-making traditions, creating art that transforms public spaces and individual lives.

The Power of Space Art

One of the remarkable qualities of space art is its ability to amplify the mysterious intangibility of the cosmos (as with the late-nineteenth-century French artist Étienne Trouvelot) and at the same time make the unrealized technologies of the future and the worlds beyond our reach seem to be within our grasp (as with the mid-twentieth-century American artist Chesley Bonestell). As Carolyn Russo demonstrates in “How Space Art Shaped National Identity” (Issues, Spring 2024), art has played an important role in making space seem both meaningful and familiar.

Its appeal has not been limited to the United States. In the Soviet Union, the paintings of Andrei Sokolov and Alexei Leonov made the achievements of their nation visible to its citizens, while also showing them what a future in space could look like. The iconography developed by graphic designers for Soviet-era propaganda posters equated spaceflight with progress toward socialist utopia.

Outside of the US and Soviet contexts, space art from other nations didn’t necessarily align with either superpower’s vision. The Ghana-born Nigerian artist Adebisi Fabunmi, in his 1960s woodcut City in the Moon, provided a vision of community life on the moon influenced by the region’s Yoruba people. The idea of home and community may have appealed to the artist during an era of decolonization and civil war more than utopian aspirations or futuristic technologies. Meanwhile, in Mexico, the artist Sofía Bassi composed surrealist dreamscapes that ponder the connection between outer space and the living world. Bassi’s Viaje Espacial includes neither flags nor space heroics.

Contemporary space art is as likely to question the human future in space as it is to celebrate it. The Los Angeles-based Brazilian artist Clarissa Tossin’s work is critical of plans for the moon and Mars that she worries continue colonial projects or threaten to despoil untouched worlds. Tossin’s digital jacquard tapestry The 8th Continent reproduces NASA images of the moon in a format associated with the Age of Exploration, reminding viewers that our medieval and Renaissance antecedents similarly sought new worlds to conquer and exploit.

Space is also a popular setting or subject matter in the works of Afrofuturist and Latino Futurist artists. These works often seek to recover and reclaim past connections as they chart new future paths. The American artist Manzel Bowman’s collages combine traditional African imagery and ideas with space motifs and high technology to produce a new cosmic imaginary unconstrained by the history of colonialism. The Salvadoran artist Simón Vega’s work reframes the Cold War space race via the perspective of Latin America. Vega reconstructs the space capsules and stations of the United States and the Soviet Union using found materials in ways that make visible the disparities between the nations who used space to stage technological spectacles and those who were left to follow these beacons of modernization.

The many forms that space art has taken over these past decades are surprising, but the persistence of space in art is not. From the moon’s phases represented in the network of prehistoric wall paintings in Lascaux Cave in southwestern France to the images of the heavenly spheres captured by medieval and later painters across many nations, art chronicles our impressions of the universe and our place within it perhaps better than any other cultural form.

Curator of Earth and Planetary Science

Smithsonian’s National Air and Space Museum

Lead curator of the museum’s new Futures in Space gallery

A Space Future Both Visionary and Grounded

In “Taking Aristotle to the Moon and Beyond” (Issues, Spring 2024), G. Ryan Faith argues that space exploration needs a philosophical foundation to reach its full potential and inspire humanity. He calls for NASA to embrace deeper questions of purpose, values, and meaning to guide its long-term strategy.

Some observers would argue that NASA, as a technically focused agency, already grapples with questions of purpose and meaning through its scientific pursuits and public outreach. Imposing a formal “philosopher corps” could be seen as redundant or even counterproductive, diverting scarce resources from more pressing needs. Additionally, if philosophical approaches become too academic or esoteric, they risk alienating key stakeholders and the broader public. There are also valid concerns about the potential for philosophical frameworks to be misused to justify unethical decisions or to shield space activities from public scrutiny.

Yet despite these challenges, there is a compelling case for why a more robust philosophical approach could benefit space exploration in the long run. By articulating a clear and inspiring vision, grounded in shared values and long-term thinking, space organizations can build a sturdier foundation for weathering political and economic vicissitudes. Philosophy can provide a moral compass for navigating thorny issues such as planetary protection, extraterrestrial resource utilization, and settling other celestial bodies. And it may not be a big lift if small steps are taken. For example, NASA could create an external advisory committee on the ethics of space and fund collaborative research grants—NASA’s Office of Technology Policy and Strategy is already examining ethical issues in the Artemis moon exploration program, and the office could serve as one place within NASA to take point. In addition, NASA could bring university-based scholars and philosophers to the agency on a rotating basis, expand public outreach to include philosophical discussions, and host international workshops and conferences on space ethics and philosophy.

By articulating a clear and inspiring vision, grounded in shared values and long-term thinking, space organizations can build a sturdier foundation for weathering political and economic vicissitudes.

Ultimately, the key is to strike a judicious balance between philosophical reflection and practical action. Space agencies should create space for pondering big-picture questions, while remaining laser-focused on scientific, technological, and operational imperatives. Philosophical thinking should be deployed strategically to inform and guide, not to dictate or obstruct. This means fostering a culture of openness, humility, and pragmatism, where philosophical insights are continually tested against real-world constraints and updated in light of new evidence.

As the United States approaches its return to the moon, we have a rare opportunity to shape a future that is both visionary and grounded. By thoughtfully harnessing the power of philosophy while staying anchored in practical realities, we can chart a wiser course for humanity’s journey into the cosmos. It will require striking a delicate balance, but the potential rewards are immense—not just for space exploration, but for our enduring quest to understand our place in the grand sweep of existence. The universe beckons us to ponder big questions, and to act with boldness and resolve.

Former Associate Administrator for Technology Policy and Strategy

Former (Acting) Chief Technologist

National Aeronautics and Space Administration

G. Ryan Faith’s emphasis on ethics in space exploration is well-taken given contemporary concerns regarding artificial intelligence and the recent NASA report on ethics in the Artemis program. As we know from decades of study, the very technologies we hope will be emancipatory more often carry our biases with them into the world. We should expect this to be the case in lunar and interplanetary exploration too. Without clear guidelines and mechanisms for ensuring adherence to an ethical polestar, humans will certainly reproduce the problems we had hoped to escape off-world.

Yet, as a social scientist, I find it strange to assume that embracing a single goal, or “telos,” might supersede political considerations, especially when it comes to funding mechanisms. NASA is a federal agency. The notion of exploration “for all humankind” certainly illuminates and inspires, but ultimately NASA’s mandate is more mundane: to further the United States’ civilian interests in space. The democratic process as practiced by Congress requires annual submission of budgets and priorities to be approved or denied by committee, invoking the classic time inconsistency problem. In such a context, telic and atelic virtues alike are destined to become embroiled and contested in the brouhaha of domestic politics. Until we agree to lower democratic barriers to long-term planning, the philosophers will not carry the day.

The notion of exploration “for all humankind” certainly illuminates and inspires, but ultimately NASA’s mandate is more mundane: to further the United States’ civilian interests in space.

Better grounding for a philosophy of space exploration, then, might arise from an ethical approach to political virtues, such as autonomy, voice, and the form of harmony that arises from good governance (what Aristotle calls eudaimonia). In my own work with spacecraft teams and among the planetary science community, I have witnessed many grounded debates as moments of statecraft, some better handled than others. All are replete with the recognizable tensions of democracy: from fights for the inclusion of minority constituents, to pushback against oligarchy, to the challenge of appropriately managing dissenting opinions. It is possible, then, to see these contestations at NASA over its ambitions not as compulsion “to act as philosophers on the spot,” in Faith’s words, but as examples of virtues playing out in the democratic polis. In this case, we should not leapfrog these essential debates, but ensure they give appropriate voice to their constituents to produce the greatest good for the greatest number.

Additionally, there is no need to assume an Aristotelian frame when there are so many philosophies to choose from. The dichotomies that animate Western philosophies are anathema to adherents of several classical, Indigenous, and contemporary philosophies, who find ready binaries far too reductive. We might instead imagine a philosophy of space exploration that enhances our responsibility to entanglements and interconnectivities: between Earth and moon, human and robotic explorers, environments terrestrial and beyond. Not only would this guiding philosophy be open to more people, cultures, and nations, and stand a better chance of escaping “terrestrial biases” by rejecting a ready distinction between Earth and space; it would also hold NASA accountable for maintaining an ethical approach to Earth-space relations throughout its exploration activities, regardless of the inevitable shifts in domestic politics.

Associate Professor of Sociology

Princeton University

G. Ryan Faith succinctly puts his finger on exactly what ails NASA’s human spaceflight program—a lack of telos, the Greek word for purpose. In this concept, you are either working toward a telos, or your efforts are atelic. In the case of the Apollo program, NASA had a very specific teleological goal: to land a man on the moon and return him safely to Earth (my personal favorite part of President Kennedy’s vision) by the end of 1969.

This marked a specific goal, or “final cause.” The Hubble Space Telescope, on the other hand, is very much atelic. That is, there is no defined endpoint; you could literally study the universe forever.

This philosophical concept is all well and good for the Ivory Tower, but it also has a very practical application at the US space agency.

NASA has gone through several iterations of its human moon exploration program since it was reincarnated during the George W. Bush administration as Project Constellation. I cannot tell you how many times someone has asked me, “Now, why are we going to the moon again? Haven’t we been there? Don’t we have enough problems here on Earth? And don’t we have a huge deficit?”

Why yes, we do have a huge deficit. And the world does feel fraught with peril these days, given the situations in Russia, China, and the Middle East. If NASA is to continue to receive significant federal funding for its relatively expensive human exploration program, it needs to have a crisp answer for why exactly we should borrow money to send people to the moon (again).

Ryan brings up an interesting paradox of the Apollo program’s success, namely that “going to the moon eliminated the reason for going to the moon.” And he reminds us that “failing to deliberately engage philosophical debates about values and vision … risks foundering.”

If NASA is to continue to receive significant federal funding for its relatively expensive human exploration program, it needs to have a crisp answer for why exactly we should borrow money to send people to the moon (again).

There are certainly many technical issues the agency needs to grapple with. Do we build a single base on the moon or land in various locations? Do we continue with the Space Launch System rocket, built by Boeing, or switch to the Starship rocket or the much cheaper Falcon Heavy rocket, both built by SpaceX?

But the most important question NASA has to answer is why: why send humans to the moon, risking their lives? Should it be to “land the first woman and first person of color” on the moon, as NASA continuously promotes? Why not explore with robots that are much cheaper and don’t complain nearly as much as astronauts do?

I believe there are compelling answers to these questions. Humans can do things that robots cannot, and sending humans to space is in fact very inspirational. The moon can serve as an important testing ground for flying even deeper into the solar system. But first, the problematic question why demands an answer.

The author would say that JFK’s reasoning was compelling: “We choose to go to the moon and do the other things…not because they are easy, but because they are hard.” A great answer in the 1960s. But in the twenty-first century, NASA’s leadership would be well-served to consider Ryan’s article and unleash, in the words of Tom Wolfe, the “power of clarity and vision.”

Senior Fellow, National Center for Energy Analytics

Colonel, USAF (retired)

Former F-16 pilot, test pilot, and NASA astronaut

G. Ryan Faith provides a thoughtful examination of the philosophical foundations for human space exploration—or rather the lack of such foundations. Human space exploration is where this lack is most acute. Commercial, scientific, and military missions have reasons grounded in economic, research, and national security imperatives. They are grounded in particular communities with shared values and discourse. Supporters of human space exploration are found in diffuse communities with many different motivations, interests, and philosophical approaches.

The end of the Apollo program was a shock to many advocates of human space exploration as they assumed, wrongly, that going to the moon was the beginning of a long-term visionary enterprise. It may yet be seen that way by history, but the Apollo landings resulted from US geopolitical needs during the Cold War. They were a means to a political end, not an end in themselves.

Former NASA administrator Mike Griffin gave an insightful speech in 2007 in which he described real reasons and acceptable reasons for exploring space. Real reasons are individual, matters of the heart and spirit. Acceptable reasons typically involve matters of state, geopolitics, diplomacy, and national power, among other more practical areas. Acceptable reasons are not a façade, but critical to large-scale collective action and the use of public resources. They are the common ground upon which diverse individuals come together to create something bigger than themselves.

Real reasons are individual, matters of the heart and spirit. Acceptable reasons typically involve matters of state, geopolitics, diplomacy, and national power, among other more practical areas.

We send more than our machines and even our astronauts into space; we send our values as well. As Faith’s article makes clear, there is value in thinking about philosophy as part of sustainable support for human space exploration. At the same time, the desire for a singular answer can be a temptation to tell others what to do or what to believe. The challenge in space is similar to that of the Founders of the United States: how to have a system of ordered liberty that allows for common purposes while preserving individual freedoms.

As humanity expands into space, I hope the philosophical foundations of that expansion include the values of the Enlightenment that inspired the Founders. In this vein, the National Space Council issued a report in 2020 titled A New Era for Deep Space Exploration and Development that concluded: “At the frontiers of exploration, the United States will continue to lead, as it has always done, in space. If humanity does have a future in space, it should be one in which space is the home of free people.”

Director, Space Policy Institute, Elliott School of International Affairs

George Washington University

Catalyzing Renewables

In “Harvesting Minnesota’s Wind Twice” (Issues, Spring 2024), Ariel Kagan and Mike Reese discuss their efforts targeting green ammonia production using water, air, and renewable electricity to highlight the role of community-led efforts in realizing a just energy transition. The effort showcases an innovative approach to spur research and demonstrations for low-carbon ammonia production and its use as a fertilizer or for other energy-intensive applications such as fuel for grain drying. Several themes stand out: the impact that novel technologies can have on business practices, communities, and, most importantly, the environment; and the critical policies needed to drive change.

The market penetration of renewables in the United States is anticipated to double by 2050, to 42% from 21% in 2020, according to the US Energy Information Administration. However, a report by the Lawrence Berkeley National Laboratory finds that rapid deployment of renewables has been severely impeded in recent years because it takes, on average, close to four years for new projects to connect to the grid. Therefore, technologies such as low-carbon ammonia production catalyze the deployment of renewables by creating value from “islanded” sources—that is, those that are not grid-connected. They also reduce the energy and carbon intensity of the agriculture sector since ammonia production is responsible for 1% of both the world’s energy consumption and greenhouse gas emissions.

US Department of Energy programs such as ARPA-E REFUEL and REFUEL+IT have been instrumental in developing and showcasing next-generation green ammonia production and utilization technologies. Pilot-scale demonstrations, such as the one developed by Kagan and Reese, significantly derisk new technology to help convince early adopters and end users to pursue commercial demonstration and deployment. These programs have also created public-private partnerships to ensure that new technologies have a rapid path to market. Other DOE programs have been driving performance enhancements of enabling technologies such as water electrolyzers to reduce the cost of zero-carbon hydrogen production and further expanding end uses to include sustainable aviation fuels and low-carbon chemicals.

Pilot-scale demonstrations, such as the one developed by Kagan and Reese, significantly derisk new technology to help convince early adopters and end users to pursue commercial demonstration and deployment.

The leap from a new technology demonstration to deployment and adoption is often driven by policy. In their case, the authors cite a tax credit that provides up to $3 per kilogram of clean hydrogen produced. But uncertainties remain: the US government has not provided full guidance on how this and other credits will be applied. Moreover, the production tax credit expires after 10 years, shorter than the typical amortization periods of capital-intensive projects. Our primary research with stakeholders suggests that long-term power purchase agreements between the renewable energy producer and an ammonia (or other product) producer could help overcome barriers to market entry.

Although their article focuses on the United States, the lessons that Kagan and Reese are gaining might also prove deeply impactful worldwide. In sub-Saharan African countries such as Kenya and Ethiopia, crop productivity can be directly correlated with fertilizer application rates that are lower than global averages. However, these countries have abundant renewable resources (geothermal, hydropower, wind, and solar) and favorable policy environments to encourage green hydrogen production and use. Capitalizing on the technology being demonstrated in Minnesota, as well as in DOE’s Regional Clean Hydrogen Hubs program, could enable domestic manufacturing, increase self-reliance, and improve food security in these regions and beyond.

Director, Renewable Energy

Technology Advancement and Commercialization

RTI International

How to Procure AI Systems That Respect Rights

In 2002, my colleague Steve Schooner published a seminal paper that enumerated the numerous goals and constraints underpinning government procurement systems: competition, integrity, transparency, efficiency, customer satisfaction, best value, wealth distribution, risk avoidance, and uniformity. Despite evolving nomenclature, much of the list remains relevant and reflects foundational principles for understanding government procurement systems.

Procurement specialists periodically discuss revising this list in light of evolving procurement systems and a changing global landscape. For example, many of us might agree that sustainability should be deemed a fundamental goal of a procurement system to reflect the increasing role of global government purchasing decisions in mitigating the harms of climate change.

In reading “Don’t Let Governments Buy AI Systems That Ignore Human Rights” by Merve Hickok and Evanna Hu (Issues, Spring 2024), I sense that they are basically advocating for the same kind of inclusion—to make human rights a foundational principle in modern government procurement systems. Taxpayer dollars should promote human rights and be used to make purchases with an eye toward processes and vendors that are transparent, ethical, unbiased, and fair. In theory, this sounds wonderful. But in practice … it’s not so simple.

Hickok and Hu offer a framework, including a series of requirements, designed to ensure human rights are considered in the purchase of AI. Unsurprisingly, much of the responsibility for implementing these requirements falls to contracting officers—a dwindling group, long overworked and under-resourced yet subject to ever-increasing requirements and compliance obligations that complicate procurement decisionmaking. A framework that imposes additional burdens on these individuals is doomed to fail, despite the best intentions.

The authors’ suggestions also would inadvertently erect substantial barriers to entry, dissuading new, innovative, and small companies from engaging in the federal marketplace. The industrial base has been shrinking for decades, and burdensome requirements not only cause existing contractors to forego opportunities, but deter new entrants from seeking to do business with the federal government.

A framework that imposes additional burdens on these individuals is doomed to fail, despite the best intentions.

Hickok and Hu brush aside these concerns without citing data to bolster their assumptions. Experience cautions against this cavalier approach. These concerns are real and present significant challenges to the authors’ aspirations.

Still, I sympathize with the authors, who are clearly and understandably frustrated with the apparent ossification of practices and the glacial pace of innovation. Which leads me to a simple, effective, yet oft-ignored, suggestion: rather than railing against the existing procurement regime, talk to the procurement community about your concerns. Publish articles in industry publications. Attend and speak at the leading government procurement conferences. Develop a community of practice. Meet with procurement professionals and policymakers to help them understand the downstream consequences of buying AI without fully understanding its potential to undermine human rights. Most importantly, explain how their extensive knowledge and experience can transform not only which AI systems they procure, but how they buy them.

This small, modest step may not immediately generate the same buzz as calls for sweeping regulatory reform. But engaging with the primary stakeholders is the most effective way to create sustainable, long-term gains.

Associate Dean for Government Procurement Law Studies

The George Washington University Law School

Merve Hickok and Evanna Hu stage several important interventions in artificial intelligence antidiscrimination law and policy. Chiefly, they pose the question of whether and how it might be possible to enforce AI human rights through government procurement protocols. Through their careful research and analysis, they recommend a human rights-centered process for procurement. They conclude that the Office of Management and Budget (OMB) guidance on the federal government’s procurement and use of AI can effectively reflect these types of oversight principles to help combat discrimination in AI systems.

The authors invite a critical conversation in AI and the law: the line between hard law (e.g., statutory frameworks with enforceable consequences) and soft law (e.g., policies, rules, procedures, and other executive and agency actions that can be structured within the administrative state). Federal agencies, as the authors note, are now investigating how best to comply with the Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence (E.O. 14110), released by the Biden administration on October 30, 2023. Following the order’s directives, the OMB Policy to Advance Governance, Innovation, and Risk Management in Federal Agencies’ Use of Artificial Intelligence, published on March 28, 2024, directs federal agencies to focus on balancing AI risk mitigation with AI innovation and economic growth goals.

Both E.O. 14110 and the OMB Policy reflect soft law approaches to AI governance. What is hard law and soft law in the field of AI and the law are moving targets. First, there is a distinction between human rights law and human rights as reflected in fundamental fairness principles. Similarly, there is a distinction between civil rights law and what is broadly understood to be the government’s pursuit of antidiscrimination objectives. The thesis that Hickok and Hu advance involves the latter in both instances: the need for the government to commit to fairness principles and antidiscrimination objectives under a rights-based framework.

What is hard law and soft law in the field of AI and the law are moving targets.

AI human rights can be viewed as encompassing or intersecting with AI civil rights. The call to address antidiscrimination goals with government procurement protocols is critical. Past lessons on how to approach this are instructive. The Office of Federal Contract Compliance Programs (OFCCP) offers a historical perspective on how a federal agency can shape civil rights outcomes through federal procurement and contracting policies. OFCCP enforces several authorities to ensure equal employment opportunities, one of the cornerstones of the Civil Rights Act of 1964. OFCCP’s enforcement jurisdiction includes Executive Order 11246; the Rehabilitation Act of 1973, Section 503; and the Vietnam Era Veterans’ Readjustment Assistance Act of 1974. OFCCP, in other words, enforces a combination of soft and hard laws to execute civil rights goals through procurement. OFCCP is now engaged in multiple efforts to shape procurement guidance to mitigate AI discriminatory harms.

Finally, Senators Gary Peters (D-MI) and Thom Tillis (R-NC) recently introduced a bipartisan proposal to provide greater oversight of potential AI harms through the procurement process. The proposed Promoting Responsible Evaluation and Procurement to Advance Readiness for Enterprise-wide Deployment (PREPARED) for AI Act mandates several evaluative protocols prior to the federal government’s procuring and deploying of AI systems, underscoring the need to test AI premises, one of the key recommendations advanced by Hickok and Hu. Preempting AI discrimination through federal government procurement protocols demands both soft law, such as E.O. 14110 and the OMB Policy, as well as hard law, such as the bipartisan bill proposed by Senators Peters and Tillis.

Professor of Law

Director, Digital Democracy Lab

William & Mary Law School

Merve Hickok and Evanna Hu propose a partial regulatory patch for some artificial intelligence applications via government procurement policies and procedures. The reforms may be effective in the short term in specific environments. But a broader perspective, which the AI regulatory wave generally lacks, raises some questions about widespread application.

This is not to be wondered at, for AI raises considerations that make it especially difficult for society to respond effectively. Eight problems in particular stand out:

  1. The definition problem. Critical concepts such as “intelligence,” “agency,” “free will,” “cognition,” “consciousness,” and even “artificial intelligence” are not well understood, involve different technologies from neural networks to rule-based expert systems, and have no clear and accepted definitions.
  2. The cognitive technology problem. AI is part of a cognitive ecosystem that increasingly replicates, enhances, and integrates human cognition and psychology into metacognitive structures at scales from the relatively simple (e.g., Tesla and Google Maps) to the highly complex (e.g., weaponized narratives and China’s social credit system). It is thus uniquely challenging in its implications for everything from education to artistic creation to crime to warfare to geopolitical power.
  3. The cycle time problem. Today’s regulatory and legal frameworks lack any capability to match the rate at which AI is evolving. In this regard, Hickok and Hu’s suggestion to add additional layers of process onto bureaucratic systems that are already sclerotic, such as public procurement, would only exacerbate the decoupling of regulatory and technological cycle times.
  4. The knowledge problem. No one today has any idea of the myriad ways in which AI technologies are currently being used across global societies. Major innovators, including private firms, military and security institutions, and criminal enterprises, are not visible to regulators. Moreover, widely available tool sets have democratized AI in ways that simply couldn’t happen with older technologies.
  5. The scope of effective regulation problem. Potent technologies such as AI are most rapidly adapted by fringe elements of the global economy, especially the pornography industry and criminal enterprises. Such entities pay no attention to regulation anyway.
  6. The inertia problem. Laws and regulations once in place are difficult to modify or sunset. They are thus particularly inappropriate when the subject of their action is in its very early stages of evolution, and changing rapidly and unpredictably.
  7. The cyberspace governance problem. International agreements are unlikely because major players manage AI differently. For example, the United States relies primarily on private firms, China on the People’s Liberation Army, and Russia on criminal networks.
  8. The existential competition problem. AI is truly a transformative technology. Both governments and industry know they are in a “build it before your competitors, or die” environment—and thus will not be limited by heavy-handed regulation.

AI raises considerations that make it especially difficult for society to respond effectively.

This does not mean that society is powerless. What is required is not more regulation on an already failing base, but rather new mechanisms to gather and update information on AI use across all domains; enhance adaptability and agility of institutions rather than creating new procedural hurdles (for example, eschewing regulations in favor of “soft law” alternatives); and encourage creativity in responding to AI opportunities and challenges.

More specifically, two steps can be taken even in this chaotic environment. First, a broad informal network of AI observers should be tasked with monitoring the global AI landscape in near real time and reporting on a regular basis without any responsibility to recommend policies or actions. Second, even if broad regulatory initiatives are dysfunctional, there will undoubtedly be specific issues and abuses that can be addressed. Even here, however, care should be taken to remember the unique challenges posed by AI technologies, and to try to develop and retain agility, flexibility, and adaptability whenever possible.

President’s Professor of Engineering

Lincoln Professor of Engineering and Ethics

Arizona State University

Decolonize the Sciences!

In “Embracing the Social in Social Science” (Issues, Spring 2024), Rayvon Fouché covers the full range of racialized phenomena in science, from criminal use of Black bodies as experimental subjects to the renaissance he maps out for new anti-racism networks, programs, and fellowships. His call for “baking in” the social critique, rather than adding it as mere diversity sprinkles on top, could not be clearer and more compelling.

Yet I know from my experience on National Science Foundation review boards, at science and engineering conferences, and in conversations with all sorts of scientific professionals that this depth is almost always mistranslated, misidentified, and misunderstood. Fouché is calling for a transformation, but most organizations and individuals hear only a call to eliminate bias. What is the difference?

The distinction is perhaps most obvious in my own field of computing. For example, loan algorithms tend to assign higher interest rates to Black home buyers. Ethnicity is not an input variable: attributes that merely correlate with “being Black” can be inferred computationally, even without human directives to do so. That makes the practice difficult to oppose through the legal system, but tempting to treat as an algorithm problem.
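To make the mechanism concrete, here is a minimal, purely hypothetical sketch in Python; the feature names and numbers are invented for illustration and are not drawn from any real lending system. A protected attribute is withheld from training, yet a correlated proxy feature lets even a simple model reproduce the disparity.

```python
# Hypothetical illustration of proxy inference: the protected attribute is
# never given to the model, yet a correlated feature carries its signal.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n = 10_000

group = rng.integers(0, 2, n)            # protected attribute (withheld from training)
proxy = group + rng.normal(0, 0.3, n)    # e.g., a neighborhood feature correlated with group
income = rng.normal(60, 15, n)           # a legitimate underwriting feature

# Synthetic "historical" interest rates that encode past bias against group == 1
rate = 5.0 - 0.02 * income + 0.8 * group + rng.normal(0, 0.2, n)

X = np.column_stack([proxy, income])     # note: `group` itself is excluded
model = LinearRegression().fit(X, rate)
pred = model.predict(X)

print("mean predicted rate, group 0:", round(pred[group == 0].mean(), 2))
print("mean predicted rate, group 1:", round(pred[group == 1].mean(), 2))
# The gap persists: dropping the protected variable does not remove the bias
# when a proxy for it remains in the data.
```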

As important as the elimination of bias truly is, it creates the illusion that if we could only eliminate bias, the problem would be solved. Eliminating bias does not address the more significant problem: in this case, that homes and loans are extremely expensive to begin with. The costs of loans and dangers of defaulting have destroyed working-class communities of every color; and “too big to fail” means that our algorithmic banking system turns risk for the entire nation’s economy into profits of the banks’ own making. And that is not just the case for banking. In health, industry, agriculture, and science and technology in its many forms, eliminating bias merely creates equal exploitation for all, equally unsustainable lives, and forms of wealth inequality that “see no color.”

As important as the elimination of bias truly is, it creates the illusion that if we could only eliminate bias, the problem would be solved.

My colleagues will often conclude at this point that I am pointing toward capitalism, but I have spent my career trying to point out that communist nations generally show the same trends: wealth inequality, pollution, failure to support civil rights. And that is, from my point of view, largely because they use the same science and engineering, formulated around the principles of optimization for extracting value. Langdon Winner, the scholar known for his “artifacts have politics” thesis, was wrong, but only in that the destructive effects of technological artifacts occur no matter what the “politics” is. Communists extract value to the state, and capitalists extract value to corporations, but both alienate it from the cycles of regeneration that Indigenous societies were famously dedicated to. If we want a just and sustainable future, a good place to start is to decolonize our social sciences, not just critique science for failing to embrace them, and perhaps develop that as mutual inquiries across the divide.

What would it take to create a science and technology dedicated not to extracting value, but rather to nurturing its circulation in unalienated forms? Funding from NSF, the OpenAI Foundation, and others has kindly allowed our research network to explore these possibilities. We invite you to examine what regenerative forms of technoscience might look like at https://generativejustice.org.

Professor, School of Information

University of Michigan

Rayvon Fouché argues that the social sciences, especially those branches that study inequity, must become integral to the practice of science if we want to both address and avoid the egregious harms of our past and present. Indeed, methodologies and expertise from the social sciences are rarely included in the shaping and practice of scientific research, and when they are, they amount to only what Fouché likens to “sprinkles on a cupcake.”

Metaphors are essential to describing abstract processes, and every gifted science teacher I ever had excelled at creating them to help students understand how invisible forces can create visible effects and govern the behavior of the things that we can feel and see. As the noted physicist Alan Lightman has observed, “metaphor is critical to science.” The metaphors that we use matter, perhaps especially in regard to scientific understanding and education, and I find the metaphor of “science” as a cupcake and the social sciences as sprinkles very useful.

The many disasters and broken promises that have destroyed Black and other marginalized peoples’ trust in the medical establishment might have been averted had experts from other fields been empowered to produce persuasive arguments against such uses beforehand. To move to another metaphor, Fouché describes the long-standing effects of “scientific inequity” as practiced upon Black populations as a “residue,” a sticky trace that persists across historical time. The image vividly explains why it is that some people of colors’ mistrust of science and unwillingness to engage with it as a career can be understood as an empirically informed and rational decision.

Some people of colors’ mistrust of science and unwillingness to engage with it as a career can be understood as an empirically informed and rational decision.

As Fouché shows, science becomes unethical and uncreative when it excludes considerations of the social and the very real residues of abuse and disregard that produce disaffection and disengagement. At the same time, the “social” has itself been the object of mistrust and cynicism, with some observers asserting that governments ought not be responsible for taking care of people, but rather that individuals and families need to rely upon themselves. Such ideas helped fuel the systematic defunding of public higher education and other “social” services. STEM fields and occupational fields such as business became more popular because they were seen as the best routes for students to pay off the sometimes life-long debt of a necessary college education. Correspondingly, the social sciences and the humanities have become luxury goods. The state’s unwillingness to support training in these fields is one reason that nonscientific expertise is viewed as a sprinkle, sometimes even to those of us who practice it and teach it to others.

At the same time, this expertise has never been needed more: the fascination, excitement, and horror of artificial intelligence’s breakneck and generally unregulated and unreflective adoption suggests that we greatly need experts in metaphor, meaning, inference, history, aesthetics, and style to “tune” these technologies and make them usable, or even to convincingly advocate for their abandonment. In a bit of good news, recent research on “algorithmic abandonment” demonstrates that users and institutions will stop using applications once they learn that they consistently produce harmful effects. At the same time, it’s hard to “embrace the social” when there is less of it to get our arms around. The scientific community still needs what Fouché calls a “moral gut check,” akin to Martin Luther King Jr.’s 1967 encouragement to “support the sustenance of human life.” For to care about the social is to care for each other rather than just for ourselves.

Gwendolyn Calvert Baker Collegiate Professor, Department of American Culture and Digital Studies Institute

University of Michigan, Ann Arbor

Rayvon Fouché’s call to “lean into the social” and to reckon with science’s “residues of inequity” must be answered if scientists are to help create more equitable and just societies. Achieving this goal, he admits, will require the difficult work of transforming the “traditions, beliefs, and institutions” of science. I concur.

Yet I want to clarify that held within this science that requires transformation are the social sciences themselves. After World War II, the natural, physical, and social sciences all were reconstructed from the same conceptual cloth, one that assumed that truth and justice depended upon the separation of science from society.

For Fouché, this separation must end. His reasons are compelling: without fundamental knowledge of and engagement with the communities and societies out of which sciences arise, scientists operate in moral and social vacuums that too often lead to harm, when what is meant is good. Yet the idea that science should exist in a “pure” space apart from society is deeply baked into today’s scientific institutions.

It could have been otherwise. After the US bombing of Hiroshima and Nagasaki, some prominent politicians and scientists called for an end to the purposeful seclusion of science from society that the Manhattan Project embodied. However, a countervailing force emerged in a troubling form: pseudoscience. At the same time the United States funded physicists to create atom bombs, Germany and the Soviet Union—building on efforts begun in the United States—bluntly directed genetics into policies of racial purification. In response, most geneticists argued that the murderous force of resulting racial hygiene laws lay not in their science, but rather in its perversion by political power. As a result, many geneticists retreated from their political activism of the 1920s and ’30s.

The idea that science should exist in a “pure” space apart from society is deeply baked into today’s scientific institutions.

For their part, prominent social scientists, including the pioneering sociologist Robert K. Merton, argued that science was a wellspring of the ethos that democracies needed, and that to ensure this ethos survived, science should exist in an autonomous space. Just like the markets of classical political economy, science ought to be left alone. This argument expanded to become a central tenet of the West during the Cold War.

In a matter of a few short years, then, science transformed from a terrifying destructive force that needed to be held in check by democratic institutions to one that would itself protect democracies. The natural, physical, and social sciences all embraced this idea of being inherently good and democratic—and thus to be protected from abuse by the unjust concentration of government power. This historically and institutionally entrenched illusion has left contemporary sciences, including the social sciences, poorly equipped to recognize and respond to the many and consequential ways in which their questions inextricably entangle with questions of power.

I agree with Fouché that more trustworthy sciences require addressing these entanglements. The critical question is how. My colleagues and I are currently seeking answers through the Leadership in the Equitable and Ethical Design (LEED) of Science, Technology, Engineering, Mathematics, and Medicine initiative. After decades of building institutional practices and protocols designed to separate science from society, this task will not be easy. Those of us involved with LEED of STEMM look forward to working with Fouché and other visionary scientific leaders to rebuild scientific institutions not around Cold War visions of security and separation, but rather around contemporary critical needs to forge the common grounds of collaboration.

Professor of Sociology

Founding Director, Science and Justice Research Center

University of California, Santa Cruz

A Look at Differential Tuition

In “Tools That Would Make STEM Degrees More Affordable Remain Unexamined” (Issues, Spring 2024), Dominique J. Baker makes important points regarding the state of college affordability for students pursuing STEM majors. As a fellow scholar of higher education finance, I wish to elaborate on the importance of disaggregating data within the broad fields of STEM due to differences in tuition charges and operating costs based on individual majors.

First, Baker notes that differential tuition is prevalent at public research universities, citing data indicating that just over half of all institutions charged differential tuition for at least one field of study in the 2015–16 academic year. I collected data on differential tuition policies across all public universities for 20 years and found that 56% of research universities and 27% of non-research universities charged differential tuition in engineering in the 2022–23 academic year, up from 23% and 7%, respectively, in 2003–04.

Differential tuition policies primarily affect programs located within engineering departments or colleges, with computer science programs also being frequently subject to differential tuition. There are two likely reasons why these programs most often charge higher tuition. The first is that student demand for these majors is strong and the market will bear higher charges. This is often why business schools choose to adopt differential tuition, and likely contributes to decisions to charge differential tuition in engineering and computer science.

Differential tuition policies primarily affect programs located within engineering departments or colleges, with computer science programs also being frequently subject to differential tuition.

The other reason is that engineering is the field with the highest instructional costs per student credit hour, based on research by Steven W. Hemelt and colleagues. They have estimated that the costs for electrical engineering are approximately twice those for mathematics and approximately 50% more than for STEM fields such as biology and computer science. Add in high operating expenses for research equipment and facilities, and it is not surprising that engineering programs often operate at a loss even with differential tuition.

The higher education community has become accustomed to detailed data on the debt and earnings of graduates by field of study, which has shown substantial variations in student outcomes within the broad umbrella of STEM fields. Yet there is also substantial variation by major in both the prices that students pay and the costs that universities face to educate students. Both of these areas deserve further attention from policymakers and researchers alike.

Professor and Head, Department of Educational Leadership and Policy Studies

University of Tennessee, Knoxville

What Can Artificial Intelligence Learn From Nature?

Refik Anadol Studio, "Living Archive: Nature"
Living Archive: Nature showcases the output from the Large Nature Model (LNM) by Refik Anadol Studio.

Refik Anadol Studio in Los Angeles maintains a research practice centered on discovering and developing novel approaches to data narratives and machine intelligence. Since 2014, the studio has been working at the intersection of art, science, and technology to advance creativity and imagination using big data while also investigating the architecture of space and perception.

To explore how the merging of human intuition and machine precision can help reimagine and even restore environments, the studio’s generative artificial intelligence project, the Large Nature Model (LNM), gathered more than a half billion data points about the fauna, flora, fungi, and landscapes of the world’s rainforests. These data are ethically sourced from publicly available archives in collaboration with the Smithsonian Institution, National Geographic Society, and London’s Natural History Museum.

In addition to working with existing image and sound archives in public domains and collaborating with renowned institutions, studio director Refik Anadol and his team ventured into rainforests in Amazonia, Indonesia, and Australia. They employed technologies such as LiDAR and photogrammetry, and captured ambisonic audio and high-resolution visuals to represent the essence of these ecosystems. With support from Google Cloud and NVIDIA, the team is processing this vast amount of data and plans to visit thirteen more locations around the world, developing a new understanding of the natural world through the lens of artificial intelligence. 

The team envisions generative reality as a complete fusion of technology and art, where AI is used to create immersive environments that integrate real-world elements with digital data. “Our vision for the Large Nature Model goes beyond being a repository or a creative research initiative,” says Anadol. “It is a tool for insight, education, and advocacy for the shared environment of humanity.” The LNM seeks to promote awareness about environmental concerns and stimulate inventive solutions by blending art, technology, and nature. The team trains the models to produce realistic artificial sounds and images, and showcases these outputs in art installations, educational programs, and interactive experiences.

Anadol sees the LNM’s potential to enrich society’s understanding and appreciation of nature as well as to supplement existing art therapy methods. Making the calming effects of nature available to people, even when they are unable to access natural environments directly, can be particularly beneficial in urban settings or for people with limited mobility.

In the future, the intersection of technology, art, and nature will become increasingly vital. Projects like the LNM exemplify how artificial intelligence might serve as a powerful tool for environmental advocacy, education, and creative expression. As the integration of sensory experiences and generative realities continues to push the boundaries of what is possible, the studio hopes to inspire collective action and a deeper appreciation for the environment.

Refik Anadol Studio, "Living Archive: Nature"
The LNM transforms more than 100 million images of the Earth’s diverse flora, fauna, and fungi into breathtaking visuals.
Refik Anadol Studio, "Living Archive: Nature"
Processing extensive datasets from rainforest ecosystems, the LNM enables the creation of hyperrealistic environmental experiences.
Refik Anadol Studio, "Living Archive: Nature"
The development of the LNM is grounded in extensive interdisciplinary research and collaboration.
Refik Anadol Studio, "Living Archive: Nature"
Generative AI sets a new benchmark for how technology can be used to promote a deeper engagement with the planet’s ecosystems.

Enhancing Regional STEM Alliances

A 2011 report from the National Research Council, Successful K–12 STEM Education, identified characteristics of highly successful schools and programs. Key elements of effective STEM instruction included a rigorous and coherent curriculum, qualified and knowledgeable teachers, sufficient instructional time, assessment that supports instruction, and equal access to learning opportunities. What that report (which I led) did not say, however, was how to create highly effective schools and programs. A decade later, the National Academies’ 2021 Call to Action for Science Education: Building Opportunity for the Future helped answer that challenge.

In “Boost Opportunities for Science Learning With Regional Alliances” (Issues, Spring 2024), Susan Singer, Heidi Schweingruber, and Kerry Brenner elaborate on one of the key strategies for creating effective STEM learning opportunities. Regional STEM alliances—what the authors call “Alliances for STEM Opportunity”—can enhance learning conditions by increasing coordination among the different sectors with interests in STEM education, including K–12 and postsecondary schools, informal education, business and workforce development, research, and philanthropy.

Coordination is valuable because of the alignment it promotes. For example, aligning school experiences with workforce opportunities creates a better fit between schooling and jobs; aligning K–12 with postsecondary learning, including through dual enrollment, gives students a boost toward productive futures; and aligning research with practice means that research may actually make a difference for what happens in classrooms.

Working together on mutual aims helps us find common ground instead of highlighting divisions.

In calling for regional alliances, the authors are building on the recent expansion of education research-practice partnerships (RPPs), which are “long-term, mutually beneficial collaborations that promote the production and use of rigorous research about problems of practice.” In RPPs, research helps to strengthen practice because the investigations pursued are jointly determined and the findings are interpreted with a collaborative lens. The National Network of Education Research-Practice Partnerships now includes over 50 partnerships across the country. The Issues authors have expanded the partnership notion by embedding it in the full education ecosystem, including educational institutions, communities, and the workforce.

In these polarized times, alliances that surround STEM education are particularly important. Working together on mutual aims helps us find common ground instead of highlighting divisions. Allied activities help to build social capital, that is, relations of trust and shared expectations that serve as a resource to foster success. Regional alliances can help create both “bridging social capital,” in which members of different constituencies forge ties based on interdependent interests, and “bonding social capital,” in which connections among individuals within the same organizations are strengthened as they work together with outside groups. In these ways, regional alliances can help defuse the tensions that surround education so that educators can focus on the core work of teaching and learning.

While workforce development is a strong rationale for regional alliances, Singer, Schweingruber, and Brenner note that this is not their only goal. Effective STEM education is essential for all students, whatever their future trajectories. Once again reflecting the times we live in, young people need scientific literacy to understand the challenges and opportunities of daily life, whether in technology, health, nutrition, or the environment. Alliances for STEM Opportunity can promote a pathway to better living as much as an avenue to productive work.

President

William T. Grant Foundation

Building on the many salient points that Susan Singer, Heidi Schweingruber, and Kerry Brenner raise, I would like to emphasize the unique potential of community colleges to respond to the challenge of creating a robust twenty-first-century STEM workforce and science-literate citizenry. The authors rightfully point out how regional alliances can boost dual enrollment and improve the alignment of community college programs. And I applaud their mention of Valencia College in Orlando, Florida, an Aspen Institute College Excellence Prize-winning institution that many others could continue to learn from.

I would add that embracing the “community” dimensions of community colleges would accelerate the nation on the path to the authors’ goals. A growing set of regional collective impact initiatives ask colleges to be community-serving partners in efforts to build thriving places to live for young people and their families. An emphasis on alleviating student barriers, exacerbated by the COVID-19 pandemic, has put pressure on these institutions to build out basic needs services (e.g., food supports, counseling, benefit navigation) for students and community members. Incidentally, I hope we don’t soon forget the thousands of lifesaving COVID shots delivered at these schools.

Many community colleges have mission statements that are community-oriented, such as Central Community College in Nebraska, whose mission is to maximize student and community success. Moreover, because students of color disproportionately enroll in community colleges, these institutions often play an outsize role in advancing racial equity, offering paths to upward mobility that must overcome longstanding structural barriers.

Despite these many roles, community colleges are judged—and funded—primarily based on enrollment and the academic success of their students. These measures miss key benefits that these colleges provide to communities and don’t encourage colleges to focus their efforts on community well-being, including the cultivation of science literacy.

Underneath this misalignment lies the opportunity. While open access schools typically can’t compete on traditional completion, earnings, and selectivity metrics that four-year colleges are often judged on, they can compete much better on community measures because their primary audience and dollars stay more local. By highlighting how valuable they truly are locally through regional alliances, these schools could secure more sustained public investment and support more students and community members in a virtuous cycle.

While open access schools typically can’t compete on traditional completion, earnings, and selectivity metrics that four-year colleges are often judged on, they can compete much better on community measures because their primary audience and dollars stay more local.

Additionally, emerging leaders of community colleges who have risen through the ranks during the student success movement of the past 20 years are eager for "next level" success measures to drive their institutions forward. Instead of prioritizing only enrollment and completion rates, institutional leaders could set goals with regional alliance partners for scaling science learning pathways from kindergarten through college, then work together to address unmet basic needs through partnerships with local community-based organizations, ultimately helping more BIPOC (Black, Indigenous, and People of Color) students obtain meaningful and family-sustaining careers—in STEM and other high-demand fields.

If we truly aspire to have a STEM workforce that is more representative of the country and equity in STEM education more broadly, regional alliances must intentionally engage and support the institutions where students of color are enrolling—and for many, that is community colleges.

Director, Building America’s Workforce

Urban Institute

It has long been observed that collaborations, alliances, and strategic partnerships are able to accomplish greater systemic change related to science, technology, engineering, and mathematics (STEM) education and research. It is imperative for the nation's competitiveness that we cultivate and harness the talent of individuals with a breadth of knowledge, backgrounds, and expertise.

The American Association for the Advancement of Science has spearheaded the development of a national strategy referred to as the STEMM Opportunity Alliance—the extra M refers to medicine—to increase access and enhance the inclusion of all the nation's talent to accelerate scientific and medical innovations and discoveries. AAAS collaborates with the Doris Duke Foundation and the White House Office of Science and Technology Policy in this effort. The alliance's stated goal, set for 2050, is to "bring together cross-sector partners in a strategic effort to achieve equity and excellence in STEMM."

It is imperative for the nation's competitiveness that we cultivate and harness the talent of individuals with a breadth of knowledge, backgrounds, and expertise.

Susan Singer, Heidi Schweingruber, and Kerry Brenner offer a similar approach. What is compelling about their essay is not only the delineation of the positive impact of different cross-sector collaborations across the nation on outcomes for science teaching and learning, but also the focus on the local community or region. The authors advocate for “Alliances for STEM Opportunity” along with a coordinating hub to ensure strong connections, a clear (consistent) understanding of regional and local priorities, and a collaborative action plan for addressing the needs of the community through effective and integrated science education.

This recommendation is reminiscent of the National Science Foundation's Math and Science Partnerships program, started in 2002 but now discontinued. One of its focal areas, "Community Enterprise for STEM Learning," was designed to expand partnerships "in order to provide and integrate necessary supports for students." Singer, Schweingruber, and Brenner make a strong case and provide evidence for why regional alliances could lead (and have led) to improvements, which include enhanced teacher preparation, increased scores on standardized tests, a more knowledgeable workforce with relevant skills for industry, and a stronger STEM infrastructure in the region. Not only does this approach make sense; it has also been shown to be effective. I know firsthand the significant benefits of alliances and partnerships from my former role as an NSF program officer, where I served as the co-lead of the Louis Stokes Alliances for Minority Participation Program and a member of the inaugural group of program officers that implemented the INCLUDES program, a comprehensive effort to enhance US leadership in STEM discovery and innovation.

As a member of the executive committee for the National Academies of Sciences, Engineering, and Medicine's Roundtable on Systemic Change in Undergraduate STEM Education, I have engaged in wide-ranging discussions about the various factors that have been shown to contribute to the transformation of the STEM education ecosystem for the benefit of the students we are preparing to be STEM professionals, researchers, innovators, and leaders. Systemic change does not occur in silos; it occurs through intentional collaborations and a commitment from all stakeholders to transform infrastructure and culture.

Vice Provost for Research

Spelman College

It is a delight to see Alliances for STEM Opportunity highlighted by Susan Singer, Heidi Schweingruber, and Kerry Brenner. Over the past three years, serving as the executive director of one of the nation’s first STEM Learning Ecosystems (a term coined by the Teaching Institute for Excellence in STEM), in Tulsa, Oklahoma, I’ve witnessed the Tulsa Regional STEM Alliance address enduring challenges in STEM education—issues that surpass local reforms and political shifts.

The authors rightly highlight that alliances are uniquely positioned to address persistent problems, even as reforms, politics, and priorities fluctuate. Improving learning pathways, reducing teacher shortages, increasing access to teacher resources and evidence-based teaching, promoting internal accountability, and supporting continuous improvement are all issues that might be partially resolved at the local level. However, these solutions require an infrastructure that allows for their dissemination and scaling to achieve systemic equity.

This vision represents a shift from workforce-centric thinking toward holistic youth development thinking.

At the Tulsa Regional STEM Alliance—our iteration of the Alliances for STEM Opportunity—we agree that articulating a shared vision is the first step. Ours has evolved over the past decade, and we have found great alignment around our stated quest to “inspire and prepare all youth for their STEM-enabled future.” This vision represents a shift from workforce-centric thinking toward holistic youth development thinking.

To reach our goal, we collaborate with 300 partners to ensure all youth have access to excellent STEM experiences in school, out of school, and in professional settings. This entails numerous collaborations; funding and resourcing educators and partners; leading or hosting professional learning; supporting program planning and evaluation; and creating youth, family, and community events that ensure all stakeholders understand and truly feel connected to our motto: “STEM is Everywhere. STEM is Everyone. All are Welcome.”

By continually defining our shared work around excellent experiences and how they feed into our shared vision, we raise awareness and support an ambitious view of STEM education that advances learning in its individual and integrated disciplines. This enables us to advocate more effectively for funding, development, implementation, and improvement efforts from a principled and consistent position—both of which are increasingly needed in education.

With clarity on the value of STEM as a vehicle for ensuring foundational disciplinary understandings, we can carefully align stakeholders around a simple idea: STEM aims to address the issue of too few students graduating with competence in the STEM disciplines, confidence in themselves, and a pathway to the STEM workforce. STEM cannot meet this demand if the experiences in which we invest our time, talent, and resources do not advance our excellent experiences (shared work) and move us closer to inspired and prepared youth (our shared vision).

I echo the authors’ call for expanded funding and research into this evolving infrastructure and encourage others to connect with their local alliances by visiting https://stemecosystems.org/ecosystems.

Executive Director

Tulsa Regional STEM Alliance

Susan Singer, Heidi Schweingruber, and Kerry Brenner describe the importance of local collaborations among schools, postsecondary institutions, informal education, businesses, philanthropies, and community groups for improving science education from kindergarten through postsecondary education. Regional alliances bring together diverse stakeholders to improve science education in a local context, which is a powerful strategy for achieving both workforce development and goals for science literacy. As the authors also note, regional alliances contribute to the development of a better civic society. These alliances provide a venue for people to find common ground so that progress does not get lost to political polarization.

Opening pathways to STEM careers through alliances has broad societal benefits beyond just creating more scientists—it makes science more accessible and relevant to students’ lives, which is crucial for individual and societal well-being and effective participation in democracy. Science education emphasizes the importance of critical thinking, questioning assumptions, and evidence-based conclusions. These skills are essential for effective civic participation, as they enable individuals to evaluate claims, consider multiple perspectives, and engage in constructive dialogue.

Regional alliances can contribute to the development of a better civic society that fosters informed, engaged, and socially responsible citizens.

Regional alliances can promote the integration of these skills throughout a school’s science curriculum and in community-based learning experiences. They can engage students in authentic, community-based science projects that address local issues, such as environmental conservation, public health, or sustainable development. By participating in these projects, students can develop a sense of agency, empathy, and social responsibility, as well as practical skills in problem-solving, collaboration, and communication. I want to highlight three ways regional alliances can contribute to the development of a better civic society that fosters informed, engaged, and socially responsible citizens.

First, regional alliances can bring together schools, businesses, government agencies, and community organizations to collaborate on science-based initiatives that enhance community resilience. For example, alliances can work on projects related to disaster preparedness, climate change adaptation, or public health emergencies. These partnerships can strengthen social capital, trust, and collective problem-solving capacity, which are essential for a thriving civic society.

Second, regional alliances can demonstrate ways to engage in respectful, evidence-based dialogue around controversial issues. This can include providing professional learning for teachers on facilitating difficult conversations, hosting community forums that model constructive discourse, and encouraging students to practice active listening and perspective-taking.

Third, regional alliances can create opportunities for students to take on leadership roles, express their ideas, and advocate for change in their communities. For example, alliances can support student-led science communication campaigns, development of policy recommendations, or community service projects. By empowering youth to be active participants in shaping their communities, alliances can contribute to the development of a more vibrant and participatory civic society.

Regional alliances focused on all levels of science education can play a vital role in building a better civic society by fostering scientific literacy, critical thinking, community engagement, and lifelong learning. By preparing students to be informed, engaged, and socially responsible citizens, these alliances can contribute to a more resilient, inclusive, and democratic society.

Program Director, Education

Carnegie Corporation of New York

Susan Singer, Heidi Schweingruber, and Kerry Brenner's essay and theory regarding regional alliances resonate within the funder community. In 2014, several STEM funders helped launch the STEM Learning Ecosystems Community of Practice (SLECoP). These leaders recognized the value of collective impact and the tenets of a regional model. Fast forward to today: philanthropic commitments to regionalized initiatives continue. Individually, funders cannot support all aspects of a regional alliance. However, hybrid investment portfolios or philanthropic collaboratives can illuminate the interdependencies throughout the continuum from kindergarten through career and collectively support various aspects of a centralized regional model.

The authors' assessment offers a compelling response to the National Center for Science and Engineering Statistics 2019 data that illustrated the status of the education-to-labor market pipelines throughout the country. The state-specific labor force data reflect exemplars and chasms in the continuum. The data indicate that 24 states lack a high concentration of STEM workers relative to the total employment within their respective states. Concentration is measured by those in the skilled technical workforce or those in the STEM workforce with a bachelor's degree or above. The data also reveal that only 13 states have workforces in which 11.2% to 15% of participants have STEM bachelor's degrees. Such regional inequalities threaten the nation's capacity to close education, opportunity, and poverty gaps; meet the demands of a technology-driven economy; ensure national security; and maintain preeminence in scientific research and technological innovation.

Regional inequalities threaten the nation’s capacity to close education, opportunity, and poverty gaps; meet the demands of a technology-driven economy; ensure national security; and maintain preeminence in scientific research and technological innovation.

Many socioeconomically disadvantaged communities lie within the lowest educational and workforce STEM concentrations. Implementing regionalized STEM pathway models would help close opportunity gaps. The labor force needed by 2030 dictates the need for collective impact, thought partners, and strategic alliances. Regional alliances would enable an inversion of the current STEM pathway status. Regional partnerships that begin with early education; ensure STEM teacher growth, support, and retention; guarantee equitable access; and end with industry engagement will ensure that the nation's workforce supply outpaces its workforce demand.

As strategic partners, corporate and private philanthropy can help meet structural needs and build capacity for regional alliances. If the authors' recommendations hold, consistent philanthropy can guarantee the sustainability of the principles of a regional model. I appreciate the authors' emphasis on regional engagement. National centralization is always valued, but regional implementation has a greater propensity for viable execution. Regional activation allows local partners to tailor solutions and address the specific STEM workforce needs in their geography. Localized assessments will yield the best and wisest practices.

However, the key to bringing the authors' recommendations to fruition is mutual interest and motivation among the constituents within a region to do so. Similar to regionalized interests and constituencies, most philanthropic investments are also regionalized. Regional funding partners can provide the impetus for synergizing their STEM ecosystem allies. Therefore, as we consider the fate of the nation, I hope regional leaders and philanthropists will continue to take stock of the value and promise of the authors' justified theory.

Executive Director

STEM Funders Network

Susan Singer, Heidi Schweingruber, and Kerry Brenner provide current examples and evidence to support and advance the central theme of the National Academies’ 2021 report Call to Action for Science Education: Building Opportunity for the Future. In reading the essay, the familiar saying that “all politics is local” came to mind as I thought about how broad national priorities—such as the report’s push for “better, more equitable science education”—can be used in the development of systems, practices, and supports that are focused regionally and locally. It also made me think about classroom connections and some of the recent instructional changes that foreground locality.

Imagine how empowering it is to begin to answer questions that have personal and communal relevance and resonance.

Over the past few years, the science education community has continued to make shifts in teaching and learning to center students' ideas, communities, and culture as means to reach that Call to Action goal. Many of the educational resources published lately offer students and teachers the opportunity to consider a phenomenon, an observable event or problem, to begin the science learning experience. Students are provided with current data and information in videos and articles, then given the opportunity to ask questions that can be investigated. In the process of answering the students' questions, the science ideas that underlie the phenomenon being considered are developed and explained. Imagine how empowering it is to begin to answer questions that have personal and communal relevance and resonance. This type of science teaching and learning connects with the types of partnerships and experiences essential in the local and regional alliances, and serves to enrich and enliven the relevance and relatability of science as a career opportunity and civic necessity.

Additionally, it would be great to find ways to connect these local and regional alliances to make them even stronger and more common, by identifying ways to scale and sustain efforts, celebrate accomplishments, and share resources. One possibility might be some type of national convening that would provide the time and space where representatives from local and regional alliances could discuss what is working, seek support to solve challenges, and create other types of alliances through cooperation and collaboration. Science Alliance Opportunity Maps could be created to ensure that all students and their communities are being served and supported. The only competition would be the numerous and varied ways to make equitable science education a reality for every student, from kindergarten through the undergraduate years, in every region and locale of the nation. This would be a major step toward achieving Singer, Schweingruber, and Brenner’s hope for “not just a competitive workforce, but also a better civic society.”

Associate Director for Program Impact

Senior Science Educator

BSCS Science Learning

Marie Curie Visits the National Academy of Sciences Building

A photograph captures a historic moment on the back steps of the National Academy of Sciences building: Marie Curie, codiscoverer of radium and polonium, stands alongside President Herbert Hoover in the fall of 1929. The president had presented her with a gift of $50,000, earmarked for purchasing a gram of radium for her oncology institute in Warsaw, Poland. The gift was the result of a fundraising campaign led by American journalist Marie Meloney, after her article in The Delineator, a popular women’s magazine, reported that Curie could not continue her groundbreaking research without more of the expensive element.

Curie, a Polish-born physicist and chemist, is renowned for her work on radioactivity. Not only was she the first woman to win a Nobel Prize, but she was also the first person to win Nobel Prizes in two scientific fields—physics in 1903 and chemistry in 1911. Her research led to the development of nuclear energy and radiotherapy for cancer treatment. Five years after her visit to the National Academy of Sciences, Curie died from leukemia, likely the direct result of her prolonged radiation exposure. Her life, marked by tragic irony, and her unwavering dedication to science continue to inspire generations.

Principles for Fostering Health Data Integrity

Almost every generation is confronted with the effects of its past and must adapt. In his 1962 "We choose to go to the Moon" speech, President Kennedy juxtaposed the challenges of his postwar era—intelligence vs. ignorance, good vs. evil, leadership vs. fear-fueled passivity—and harnessed the national will to achieve a lunar landing.

Today, our challenge categories are similar. We are confronted with the effects and portents of concurrent changes in medicine, science, and technology, which in turn change how we educate scientists, manage the implementation of new technology, and respond to the effects, both planned and unforeseen, of the application of our discoveries.

Computational and data science technologies, some rooted in JFK’s ’60s, have entered all facets of life at breakneck speed. Our understanding of the societal effects of emerging technologies is lagging. When data, data transfer, and artificial intelligence meet medicine, game-changing implementation effects—positive or negative—are imminent.

In “How Health Data Integrity Can Earn Trust and Advance Health” (Issues, Winter 2024), Jochen Lennerz, Nick Schneider, and Karl Lauterbach tackle this complex landscape and identify pivotal decisions needed to create a system that equitably benefits all stakeholders. They highlight a requisite culture shift: an international ethos of probity for everyone involved with health data at any level. They propose, in effect, a modern-day Hippocratic Oath for health data creation, utilization, and sharing—a framework that would simultaneously allow advancement in science and population health while adhering to moral and ethical standards that respect individuals, their privacy, and their medical needs.

Without this health data integrity framework, the promise of medical discovery through big data will be truncated.

When data, data transfer, and artificial intelligence meet medicine, game-changing implementation effects—positive or negative—are imminent.

Within this framework, we open new horizons for medical advancement, and we augment the safety of data and of tools such as artificial intelligence. AI is a misnomer: it is neither artificial nor intelligent. AI determinations derive from real data scrutinized algorithmically and, at least currently, they appear intelligent only because the data evaluation is iterative and cumulative—temporally updated evaluations of compounding data sets—a heretofore quasi-definition of intelligence. These data serve all of us—patients, health care providers, researchers, epidemiologists, industry, developers, and regulators. With greater harmonization and data integrity, data utilization becomes globalized. Wider use of data sets can lead to more discoveries and reduce testing redundancies. Global data sharing can limit the biases of small numbers and identify low-prevalence populations (e.g., people with rare diseases), allowing the creation of larger, global cohorts.

Pathologists, like the article’s coauthor Jochen Lennerz, are physician specialists trained to understand data; we are responsible for the generation of roughly 70% of all medical data. Pathologists, along with ethicists, data scientists, data security specialists, and various other professionals, must be at the table when a health data integrity framework is being created.

Within this framework, we will benefit from a system of trust that recognizes and respects the rights of patients; understands and supports medical research; and ensures the safe, ethical transfer and sharing of interoperable, harmonized medical data.

We must ensure the steps we take with health data are not just for a few “men,” to borrow again from the lunar-landing lexicon. Rather, we must create a health data ecosystem of integrity—a giant step for humankind.

Vice President for Medical Affairs, Sysmex America

Governor, College of American Pathologists (CAP)

Chair, CAP Council on Informatics and Pathology Innovation

Jochen Lennerz, Nick Schneider, and Karl Lauterbach report how efforts to share health data across national borders snag on legal and regulatory barriers and suggest that data integrity will help advance health.

In today's digital transformation age, with our genomes fully sequenced and widely deployed electronic health record systems, addressing collaborative digital health data use presents a variety of challenges. There is, of course, the need to ensure data integrity, which will demand addressing such issues as the relative lack of well-defined data standards, poor implementation and adherence, and the asymmetry of digital knowledge and innovation adoption in our society. But a more complex challenge arises from the propensity of humans to push major inventions beyond their benefits—and into the abyss. Therefore, we must engage together for human integrity in collaborative health data use.

Yet another challenge—one that the authors cite and I agree with—arises from deep-rooted conflicts of interest among all stakeholders (patients, health care professionals, the health management industry, payors, and governments) in health care. There also are generational differences between tech-savvy younger health care professionals, who are generally more open to structured data collection and documentation, and more senior ones, who struggle with technology and contribute health data that is more difficult to process.

There is, though, overall agreement among health care professionals that their foremost task is to serve as their patients’ advocate and go above and beyond to help them overcome or manage their medical problems using every available resource, which today would clearly include taking full advantage of digital health innovations, health data, and associated technologies such as artificial intelligence.

A more complex challenge arises from the propensity of humans to push major inventions beyond their benefits—and into the abyss. Therefore, we must engage together for human integrity in collaborative health data use.

However, since medicine has become such a complex profession, health professionals often practice in large care facilities embedded in organizations operated by corporations that seek profits, and where payors strictly regulate access to and extent of utilization of care on behalf of governments that struggle with expenditures. Unsurprisingly, the goals of nonpatients, administrators, and others outside of health care might not be what health professionals would view as ethical and responsible in terms of health data collection and use.

Among still other obstacles to the protection of health care data, cybercrime is a major threat, with hackers attacking our increasingly digital world either for personal gain or on behalf of third parties. And then there is the important matter of people's individual freedom, which at least in most Western democracies includes the right to informational self-determination and privacy. Ensuring these rights needs to be balanced with the societal goal of fostering increasingly data-driven medical and scientific progress and health care delivery.

Once all stakeholders in medicine, health care, and biomedical research realize that our traditional approach to diagnosis, prognosis, and treatment can no longer process and transform the enormous volume of information into therapeutic success, innovative discovery, and health economic performance, we can join forces to unite for precision health. For details, I’ve laid out a vision for collaborative health data use and artificial intelligence development in the Nature Portfolio journal Digital Medicine.

Put briefly, precision health is the right treatment, for the right person, at the right time, in the right place. It is enabled through a learning health system in which medicine and multidisciplinary science, economic viability, diverse culture, and empowered patients' preferences are digitally integrated and conceptually aligned for continuous improvement and maintenance of health, well-being, and equity.

Professor of Medicine and Adjunct Professor of Computing Science

University of Alberta

Director, Collaborative Research and Training Experience "From Data to Decision"

Natural Sciences and Engineering Research Council of Canada

Needed: A Vision and Strategy for Biotech Education

It is consistently true that as new career fields and business centers arrive, a portion of the population is left on the sidelines. This holds especially true for the biotechnology, medical technology, genomics, and synthetic biology investments we see today. Urban centers, which often have a high concentration of university graduates, are primed for success in the emerging bioeconomy. But even there, career and educational opportunities are often out of reach for young women and people of color. In rural communities and in regions that have traditionally supported fishing, forestry, farming, and mining, all residents are less likely to track into careers in science, technology, engineering, or mathematics.

In “A Great Bioeconomy for the Great Lakes” (Issues, Winter 2024), Devin Camenares, Sakti Subramanian, and Eric Petersen report on some targeted and hyperlocal interventions that stimulated a bioinnovation community in the Midwest and Great Lakes areas. They found that connecting students in regional high schools and local colleges with experts in industry and community labs increased the students’ appetites for further involvement. What a boon for the educators and young innovators who successfully discovered this opportunity.

In our work through the BioBuilder Educational Foundation, we can attest to the need for deliberate actions to overcome specific regional obstacles. Since 2019, BioBuilder has been engaged with high schools in East Tennessee. After several years laying a foundation in this rural region, BioBuilder is now integrated every year into biology classes in secondary schools spanning several counties. It is also integrated into some of the region’s BioSTEM pathways that Tennessee uses to bring early-college access and relevant work experience into career and technical education classrooms statewide. BioBuilder has built partnerships with local and federal funders to expand this work, and the success has spurred a much larger set of activities in the region, including post-secondary tracks at East Tennessee State University and local business opportunities such as the development of the Valleybrook Research Campus.

It must be recognized, however, that such hyperlocal approaches to building bioeconomies are not an ultimate solution. Regional approaches must be complemented with systemic educational change if the nation is to achieve the "holistic, decentralized, and integrated bioeconomy" that Camenares, Subramanian, and Petersen aim for.

Regional approaches must be complemented with systemic educational change if the nation is to achieve the “holistic, decentralized, and integrated bioeconomy” that Camenares, Subramanian, and Petersen aim for.

The K–12 public school system in the United States is an underutilized lever of change in this regard. With over 3 million students graduating each year, the nation is failing our children and our collective future by not offering an on-ramp to sophisticated job sectors that does not require higher education. Public schools fulfilled the nation's workforce needs in the past, diversifying the talent pool with an equitable geographic and racial distribution. Public schools fully reflect the nation's diversity, and high school is the last formal education received by between one-third and one-half of all residents. Public schools operate in every state and so provide an established infrastructure for engaging every community.

With respect to the emerging bioeconomy, a vision and strategy for public education is needed. And it could be simple: provide easy-to-implement content that modernizes the teaching of life science so that millions of young people can graduate from high school with enough content knowledge and skills to join the workforce, spurring development of the bioeconomy everywhere.

Founder and Executive Director

BioBuilder Educational Foundation

National Program Coordinator

BioBuilder Educational Foundation

The “one-size-fits-all” curriculum common in many regions of the United States may fall short of capitalizing on local differences when building a successful bioeconomy, argue Devin Camenares, Sakti Subramanian, and Eric Petersen. The authors highlight the extent of programmatic structure that may or may not be helpful in seeding locally specialized educational initiatives. In this model, the authors propose that the uniqueness of a region is the key to unlocking local bioeconomic growth, turning current challenges into future opportunities.

This approach has proven fruitful in the Great Lakes region and beyond. For example, Beth Conerty at the University of Illinois Integrated Bioprocessing Research Laboratory takes advantage of its Midwest location to offer bioprocessing scale-up opportunities. Similar to the approach the authors propose, the facility couples science with educational opportunities for its students. Also, Scott Hamilton-Brehm of Southern Illinois University Carbondale founded a program called Research, Engagement, and Preparation for Students, which promotes accessibility, outreach, and communication in science, technology, engineering, and mathematics. The program’s strong student engagement grew into a company called Thermaquatica that converts biomass to value-added products including biostimulants and biofuels.

The uniqueness of a region is the key to unlocking local bioeconomic growth, turning current challenges into future opportunities.

Elsewhere, Ernesto Camilo Zuleta Suárez led several outreach and educational programs to prepare leaders for the future bioeconomy through the Consortium for Advanced Bioeconomy Leadership Education, based at Ohio State University. In Tennessee, the Oak Ridge Site Specific Advisory Board serves as a more policy-focused example, wherein student board members are strategically invited to take part in maintaining the local environment of the Oak Ridge Reservation, which still faces challenges from legacy wastes. Additionally, the Bredesen Center at the University of Tennessee established a strong program to teach students to incorporate outreach and public engagement into their scientific career.

Once established, these locally cultivated STEM programs can gain traction through science communication, which is an integral component in the field of synthetic biology (SynBio) and a determinative step of the scientific method. To highlight some examples, we have the International Genetically Engineered Machine (iGEM) and BioBuilder podcasts by Zeeshan Siddiqui and his team, the Mastering Science Communication course led by Larissa Markus, and the iGEM Digest authored by Hassnain Qasim Bokhari and Marissa Sumathipala. More recently, Tae Seok Moon has launched the SynBYSS: SynBio Young Speaker Series. And the Science for Georgia nonprofit hosts free science communication workshops and offers opportunities to share science with the community. Science communication not only educates the current generation but also transfers knowledge to future generations, thereby ensuring the sustainability of science.

Perhaps most important, these efforts are built on a student-centered approach designed to offer increasingly accessible means for students to participate in STEM education and related activities. The Global Open Genetic Engineering Competition and BioBuilder are already expanding that access. Spurring interest and engagement in STEM, even at the middle or high school levels, can accelerate the development of career interests, especially in a field as interdisciplinary as synthetic biology. Such experiences may even spark interests beyond typical STEM careers and help catalyze a scientifically literate society. This educational proposition invites a people-focused approach as opposed to a project-focused one—and the people-focused approach is the key ingredient that will make the difference.

Mentor

iGEM

Innovative, Opportunistic, Faster

It is safe to say that research into the production, distribution, and use of energy in the United States has emphasized the technological over the social. Let’s be clear: this focus has had its successes. We see physical improvements today in our homes and offices and in the growth of renewable sources in large part due to research and development investments begun in the 1970s. In some cases, these efforts were paired with inquiries into the economic, demographic, and behavioral contexts surrounding the technology in question. But this kind of comprehensive, multidisciplinary approach to our energy system has been rare—at least until recently.

As Evan Michelson and Isabella Gee demonstrate by example in "Lessons From a Decade of Philanthropy for Interdisciplinary Energy Research" (Issues, Winter 2024), the questions that social scientists, policymakers, the media, and consumers might have about the energy system extend far beyond resistors and wires. These questions are unwieldy. They are also challenging for researchers accustomed to working in their silos. For example, many energy scholars are unfamiliar with our complex housing, property, utility, and household practices and their regulatory history. Likewise, social scientists have been sidelined not just due to their disciplinary silos and inability to engage with the engineers and scientists but also because of the historical underinvestment in their methods.

Unfamiliarity has practical implications, such as not knowing which data are available, how to collect them, and whether indicators represented by these data are the most valid and aligned to the underlying concept in question. Put simply, humans—or more specifically, our understanding of humans and their energy use—are a missing link in energy research.

The questions that social scientists, policymakers, the media, and consumers might have about the energy system extend far beyond resistors and wires.

Enter philanthropy. Michelson and Gee rightfully point out the critical role of philanthropic funders based on their universal mission to improve social conditions. But they also note how philanthropy offers a unique vehicle compared with the public sector’s statutory restrictiveness and private sector’s profit motivation. Philanthropy can be innovative (funding risky propositions with potentially large societal benefit), opportunistic (targeting questions and researchers that have been excluded from methods and institutions), and, quite frankly, faster and nimbler, along with being more altruistic.

But philanthropy, and in turn its reach, is limited. In the broad and still-murky field of energy and its socioeconomic soup, there are few philanthropic energy R&D funders, often with very limited budgets in competition with foundations' other pressing social program allocations. Federal funding's crowding out of foundation contributions might convince some funders to simply stay out of the business altogether.

For the few funders that stay in the race, there can be real rewards. The subject matter and researcher pools supported by the two largest federal energy research funders—the National Science Foundation and the US Department of Energy—have expanded. In some cases, this has been made explicit through interdisciplinary research calls as well as stated research questions that require collaboration across silos. Anecdotally, every energy conference I have attended in the last five years has consciously discussed the integration of social sciences as a fundamental component of energy research. While each philanthropic entity rightfully evaluates its impact—and, in the Alfred P. Sloan Foundation's case, tracks quantitative indicators of those effects—we can see that these efforts have already had a massive qualitative effect.

Director of Remodeling Futures

Harvard Joint Center for Housing Studies

Drowning in a Mechanical Chorus

In her thoughtful essay, "How Generative AI Endangers Cultural Narratives" (Issues, Winter 2024), Jill Walker Rettberg writes about the potential loss of a beloved Norwegian children's story alongside several "misaligned" search engine results. The examples are striking. They also point to even more significant challenges implicit in the framing of the discussion.

The fact that search results in English overwhelm those in Norwegian, which has far fewer global speakers, reflects the economic dominance of the American technology sector. Millions of people, from Moldova to Mumbai, study English in the hope of furthering their careers. English, despite, and perhaps because of, its willingness to borrow from other cultures, including the Norse, has become the de facto lingua franca in many fields, including software engineering, medicine, and science. The bias toward English in the search therefore reflects the socioeconomic realities of the world.

Search engines of the future will undoubtedly do a better job in localizing the query results. And the improvement might come exactly from the kind of tightly curated machine learning datasets that Rettberg encourages us to consider. A large language model “trained” on local Norwegian texts, including folk tales and children’s stories, will serve more relevant answers to a Norwegian-speaking audience. (In brief, large language models are trained, using massive textual datasets consisting of trillions of words, to recognize, translate, predict, or generate text or other content.) But—and here’s the crucial point—no amount of engineering can make a model more fair or more equitable than the world it is meant to represent. To improve it, we must improve ourselves. Technology encodes global politics (and economics) as they are, not as they should be. And we humans tend to be a quarrelsome bunch, rarely converging on the same shared vision of a better future.

No amount of engineering can make a model more fair or more equitable than the world it is meant to represent. To improve it, we must improve ourselves.

The author’s conclusions suggest we consider a further, more troubling, aspect of generative AI. In addition to the growing dominance of the English language, we have yet to contend with the increasing mass of machine-generated text. If the early large language models were trained on human input, we are likely soon to reach the point where generated output far exceeds any original input. That means the large language models of the future will be trained primarily on machine-generated inputs. In technical terms, this results in overfitting, where the model follows too closely in its own footsteps, unable to respond to novel contexts. It is a difficult problem to solve, first because we can’t really tell human and machine-generated texts apart, and second, because any novel human contribution is likely to be overwhelmed by the zombie horde of machine outputs. The voices of any future George R. R. Martins or Toni Morrisons may simply drown in a mechanical chorus.
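
To see why this feedback loop worries technologists, here is a deliberately simple, hypothetical simulation of my own (not drawn from Rettberg's essay or any production system). A one-dimensional statistical model stands in for a language model: each "generation" is fitted to the previous generation's output and then samples new data while mildly favoring its most typical outputs, as conservative sampling strategies do. The diversity of the corpus, measured by its standard deviation, shrinks generation after generation.

    # Hypothetical toy model: recursive training on machine-generated output.
    import random
    import statistics

    def fit(samples):
        # "Training": estimate the mean and spread of the current corpus.
        return statistics.mean(samples), statistics.stdev(samples)

    def generate(mean, stdev, n, rng, cutoff=1.5):
        # "Generation": sample from the fitted model, keeping only outputs
        # within `cutoff` standard deviations of the mean (the typical ones).
        out = []
        while len(out) < n:
            x = rng.gauss(mean, stdev)
            if abs(x - mean) <= cutoff * stdev:
                out.append(x)
        return out

    rng = random.Random(42)
    corpus = [rng.gauss(0.0, 1.0) for _ in range(2000)]  # original, human-made data

    for generation in range(7):
        mean, stdev = fit(corpus)
        print(f"generation {generation}: diversity (stdev) = {stdev:.2f}")
        corpus = generate(mean, stdev, 2000, rng)  # next model learns from machine output

Each pass trains on what the previous pass produced, and the measured diversity drops steadily; novel, atypical contributions become ever rarer in the training pool—the mechanical chorus in miniature.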

Will human creativity survive the onslaught? I have no doubt. The game of chess, for example, became more vibrant, not less, with the early advent of artificial intelligence. The same, I suspect, will hold true in other domains, including the literary—where humans and technology have long conspired to bring us, at worst, countless hours of formulaic entertainment, and, at their collaborative best, the incredible powers of near-instantaneous translation, grammar checking, and sentence completion—all scary and satisfying in any language.

Associate Professor of English and Comparative Literature

Columbia University

How to Build Less Biased Algorithms

In “Ground Truths Are Human Constructions” (Issues, Winter 2024), Florian Jaton succinctly captures the crucial importance of the often-overlooked aspects of human interventions in the process of building new machine learning algorithms through operations of ground-truthing. His observations summarize and expand his previous systematic work on ground-truthing practices. They are fully aligned with the views I have developed while researching the development of diagnostic artificial intelligence algorithms for Alzheimer’s disease and other, more contested illnesses, such as functional neurological disorder.

Much of the current critical discourse on machine learning focuses on training data and their inherent biases. Jaton, however, fittingly foregrounds the significance of how new algorithms, both supervised and unsupervised, are evaluated by their human creators during the process of ground-truthing. As he explains, this is done by using ground-truth output targets to quantify the algorithms' ability to perform the tasks for which they were developed with sufficient accuracy. Consequently, the accuracy thus assessed is not an objective measure of the algorithms' performance in real-world conditions but a relational and contingent product of tailor-made ground-truthing informed by human choices.

Even more importantly, shifting the focus to how computer scientists perform ground-truthing operations enables us to critically examine the process of data-driven evaluation as a context-specific sociocultural practice. In other words, to understand how the algorithms that are increasingly incorporated across various domains of daily life operate, we need to unpack not only how their specific underlying ground truths have been constructed but also how such ground truths have been operationally deployed from case to case.

We need to unpack not only how their specific underlying ground truths have been constructed but also how such ground truths have been operationally deployed from case to case.

I laud in particular Jaton’s idea that we humanities scholars and social scientists should not stop at analyzing the work of computer scientists who develop new AI algorithms but should instead actively build new transdisciplinary collaborations. Based on my research, I have concluded that many of computer scientists’ decisions on how to build and deploy ground-truth datasets are driven primarily by the pragmatic goal of solving computational problems and are often informed by tacit assumptions. The broader sociocultural and ethical consequences of such decisions remain largely overlooked and unexplored in these settings.

In future transdisciplinary collaborations, the role of humanities scholars could be to systematically examine and draw attention to the otherwise overlooked sociocultural and ethical implications of various stages of the ground-truthing process before their potentially deleterious consequences become implicitly built into new algorithms. Such collaborative practices require additional time investments and the willingness to work synergistically across disciplinary divides—and are not without their challenges. Yet my experience as a visual studies scholar integrated into a transdisciplinary team that explores how future medical applications of AI could be harnessed for knowledge production shows that such collaborations are possible. In fact, transdisciplinary collaborations may indeed be not just desirable but necessary if, as Jaton suggests, we want to build less biased and more accountable algorithms.

Postdoctoral Researcher, Institute for Implementation Science in Health Care, Faculty of Medicine, University of Zurich

Visiting Researcher, Department of Social Studies of Science and Technology, Institute of Philosophy, History of Literature, Science, and Technology, Technical University Berlin

Inviting Civil Society Into the AI Conversation

Karine Gentelet’s proposals for fostering citizen contributions to the development of artificial intelligence, outlined in her essay “Get Citizens’ Input on AI Deployments” (Issues, Winter 2024), are relevant to discussions on the legal framework for AI and deserve to be examined. For my part, I’d like to broaden the discussion to ways of encouraging the contributions of civil society groups to the development of AI.

The amplification of existing social inequalities, or the emergence of new ones, is one of the fears of those calling for more effective oversight of AI. How can we prevent AI from having a negative impact on inequalities, and why not encourage a positive one instead?

Civil society groups, notably community organizations that work with impoverished, discriminated-against, or vulnerable populations, are currently only marginally involved in consultations or deliberations about AI and its governance, at least in Quebec. The same holds true for the involvement of individuals within these populations. But civil society groups, just like people, can be affected by AI—and as drivers of social innovation, they can also make positive contributions to the evolution of AI.

More concretely, the expertise of civil society groups can be called upon at various stages in the development of AI systems. This may occur, for example, in analyzing development targets and possible biases in algorithm training data, in testing technological applications against the realities of marginalized populations, and in identifying priorities to help ensure that AI systems benefit society. In short, civil society expertise can help identify issues that those guiding AI development at present fail to raise because they are far too remote from the realities of marginalized populations.

The expertise of civil society groups can be called upon at various stages in the development of AI systems.

Legal or ethical frameworks can certainly make more room for civil society expertise. But for civil society groups to play their full role, they must have the financial resources to develop their expertise and dedicate time to studying particular applications. Yet very often, these groups are asked to offer in-kind contributions before being allowed to participate in a research project!

And beyond financial challenges, some civil society groups remain out of the AI conversation. For example, the national charitable organization Imagine Canada found that 61% of respondents to a survey of charities indicated that they didn’t understand the potential applications of AI in their sector. The respondents also highlighted the importance of and need for training in AI.

Legislation and regulation are often necessary to provide a framework for working in or advancing an industry or sector. However, other mechanisms—including recourse to the courts, research, journalistic investigations, and collective action by social movements or whistleblowers—can also contribute significantly to the evolution of practices and to respect for the social consensus that emerges from deliberative exercises. Where AI is concerned, such efforts remain few and fragmentary.

Executive Director

Observatoire Québécois des Inégalités

Montréal, Québec, Canada

Existing approaches to governance of artificial intelligence in the United States and beyond often fail to offer practical ways for the public to seek justice for AI and algorithmic harms. Karine Gentelet correctly observes that policymakers have prioritized developing “guardrails for anticipated threats” over redressing existing harms, especially those emanating from public-sector abuse of AI and algorithmic systems.

This dynamic plays out every day in the United States, where law enforcement agencies use AI-powered surveillance technologies to perpetuate social inequality and structural disadvantage for Black, brown, and Indigenous communities.

Police departments routinely use historically marginalized communities as testing grounds to experiment with controversial AI and big data surveillance technologies such as facial recognition, drone surveillance, and predictive policing. For example, reporters at WIRED magazine found that nearly 12 million Americans live in neighborhoods where police have installed AI audio sensors to detect gunshots and collect data on public conversations. They estimate that 70% of the people living in those surveilled neighborhoods are either Black or Hispanic.

As Gentelet notes, existing AI policy frameworks in the United States have largely failed to create accountability mechanisms that address real-world harms such as mass surveillance. In fact, recent federal AI policies, including Executive Order 14110, have actually encouraged law enforcement agencies “to advance the presence of relevant technical experts and expertise [such] as machine learning engineers, software and infrastructure engineering, data privacy experts [and] data scientists.” Rather than redress existing harms, federal policymakers are laying the groundwork for future injustice.

Police departments routinely use historically marginalized communities as testing grounds to experiment with controversial AI and big data surveillance technologies.

Without AI accountability mechanisms, advocates have turned to courts and other traditional forums for redress. For example, community leaders in Baltimore brought a successful federal lawsuit to end a controversial police drone surveillance program that recorded the movements of nearly 90% of the city’s 585,000 residents—a majority of whom identify as Black. Similarly, a coalition of advocates working in Pasco County, Florida, successfully petitioned the US Department of Justice to terminate federal grant funding for a local predictive policing program while holding school leaders accountable for sharing sensitive student data with police.

While both efforts successfully disrupted harmful algorithmic practices, they failed to achieve what Gentelet describes as “rightful reparations.” Existing law fails to provide the structural redress necessary for AI-scaled harms. Scholars such as Rashida Richardson of the Northeastern University School of Law have outlined what more expansive approaches could look like, including transformative justice and holistic restitution that address social and historical conditions.

The United States’ approach to AI governance desperately needs a reset that prioritizes existing harms rather than chasing speculative ones. Directly impacted communities have insights essential to crafting just AI legal and policy frameworks. The wisdom of the civil rights icon Ella Baker remains steadfast in the age of AI: “Oppressed people, whatever their level of formal education, have the ability to understand and interpret the world around them, to see the world for what it is, and move to transform it.”

Senior Policy Counsel & Just Tech Fellow

Center for Law and Social Policy

Celebrating the Centennial of the National Academy of Sciences Building

This is a special year for the National Academy of Sciences (NAS) as its beautiful headquarters at 2101 Constitution Avenue, NW, in Washington, DC, turns 100 years old. Dedicated by President Calvin Coolidge in April 1924 and designed by architect Bertram Grosvenor Goodhue, the building synthesizes classical elements with Goodhue’s preference for “irregular” forms. Its architecture harmoniously weaves together Hellenic, Byzantine, and Egyptian influences with hints of Art Deco, giving the building a modern aspect—which is consistent with Goodhue’s assertion that it was meant to be a “modern and scientific building, built with modern and scientific materials, by modern and scientific methods for a modern and scientific set of clients.”

Goodhue, celebrated for his Gothic Revival and Spanish Colonial Revival designs, developed a late-career interest in Egyptian Revival architecture around the time that King Tutankhamun’s tomb was discovered. The NAS building references ancient Egypt with its battered, or inwardly sloping, façade, which lends the building an air of monumentality. Its decoration includes depictions of the Egyptian god Imhotep, the Great Pyramid of Giza, the Museum of Alexandria, and the ancient lighthouse on the island of Pharos, along with hieroglyphic motifs. The structure reflects Goodhue’s distinctive aesthetic, and it also harmonizes with the nearby neoclassical Lincoln Memorial, which was under construction when the NAS building was planned.