In “Fixing Academia’s Childcare Problem” (Issues, Winter 2023), Zeeshan Habeeb makes what social scientists call the “business case” for providing subsidized childcare to graduate students and postdoctoral fellows. The author notes that the poorly paid, protracted training period for establishing an independent faculty career overlaps with women’s fertility. Habeeb argues that this life course pattern plus the lack of affordable childcare on campus pushes out talented academics in science, technology, engineering, and mathematics—the STEM fields—and decreases the innovation, competitiveness, and profitability of the United States. Losing these highly trained early-career scientists is a poor return on the nation’s investment in education and academic research.
However, the business case for work-family accommodations mostly persuades those who are already sympathetic. Others are likely to critique the statistics and to maintain that their institution is different from those held up as case studies.
Let’s ask why the current arrangements—which require academics to work for low wages and without adequate family accommodations until their thirties or forties—are still so taken for granted in American universities. To understand this, we need to address four moral and emotional dimensions of academic STEM culture, as Erin A. Cech and I find in our book, Misconceiving Merit (University of Chicago Press, 2022).
First, academic science is not seen by STEM faculty as a business. Rather, it is understood to be a vocation devoted to fundamental research and largely unpolluted by profit or politics.
Second, our research shows that across genders and family statuses, STEM faculty largely embrace the “schema of work devotion,” which mandates undivided allegiance to the scientific calling. Research faculty love their work; it is a big part of their identity. STEM faculty celebrate their independence and inspiration, charting their own course to intellectual discovery.
Third, seen through a work devotion lens, the lengthy training period is an appropriate novitiate, in which novices prove their dedication and their worthiness to be appointed as professors.
Fourth, the underbelly of work devotion is the stigma faced by caregivers, who are seen as violating their vocation. This translates into a devaluation of women and of non-gender-normative men, who often take on more of the household’s caregiving responsibilities.
I encourage disciplinary and multidisciplinary associations and federal funders to address this stigma head-on. They should demand a moral reckoning, which would redefine STEM academics as deserving of the time and resources to have or adopt children, if they choose, while maintaining their respected status in the vocation. At a later life course stage, STEM academics also deserve the time to care for elderly or fragile parents and other loved ones, while still maintaining full respect for their scientific contributions.
Academic science is understood to be a vocation. To preserve the inclusion of early-career scientists who are creative and procreative in all senses of these words, let’s stop expecting it to be a monastic one.
Mary Blair-Loy
Professor, Department of Sociology
University of California, San Diego
Zeeshan Habeeb offers compelling reasons and concrete, evidence-based solutions for improving workforce productivity in research in science, technology, engineering, and mathematics—but US academic institutions already know how across-the-board policy changes can make a huge difference for caregivers, especially women.
Academic institutions are not lacking models and solutions for equitable childcare; rather, they function on the exploitative, invisible labor of caregiving that underlies all US workplaces. Addressing the childcare crisis in academia requires reflecting on our value system as a society and where we place our priorities. In 2021, women took on an extra 173 hours of childcare, and men took on an average of 59 extra hours. The United States spends the least of any “developed” country on early childhood care, and families are left to fill this gap in infrastructure. The gap is particularly glaring within academia because the system is traditionally set up for a man of means who has someone at home full time to cook, clean, and raise his children. The academic structure was never intended for a faculty member to do research, teaching, and service at the university and then go home to do cooking, cleaning, and bath time. We need to examine our cultural values around what childcare is worth and why it is not considered a valid expense for something like federal grants.
These are the questions we should be asking institutional program officers, because the COVID-19 pandemic has shown that even “it’s the policy” can be changed when necessary. If we can create budget lines for the travel required to do research, why do we dismiss the additional labor of caregiving required for parents to be in the field doing that research? The childcare crisis is particularly insidious given how academia as an industry requires graduate students, researchers, and faculty to move to wherever we can find jobs—often separated from family and other networks of support, and heavily reliant on paid care. Even if you have managed to secure regular childcare, university events and conferences do not line up with typical childcare arrangements. Weekend conferences and evening lectures require scrambling for additional care, and missing out on networking and the unsaid necessities between the lines of your CV can be detrimental to promotion and tenure evaluations.
As the poet and civil rights activist Maya Angelou said, “When someone shows you who they are, believe them the first time.” US policies on caregiving, parental leave, and bodily autonomy continue to show us repeatedly where our value systems reside. Yet there are still reasons for hope and the possibility of better working conditions in US academic institutions. If we were able to reconfigure the entirety of academic life during the early days of the COVID-19 pandemic, we should be able to build on those pandemic workarounds and re-evaluate cultural norms around carework, especially childcare. Academic institutions should not be left to fix the problems of US working culture alone, but we should consider how industry-specific norms such as evening talks and weekend conferences can be managed to produce family-friendly practices and child-friendly cultures.
Alexandrina Agloro
Assistant Professor, School for the Future of Innovation in Society
Arizona State University
Zeeshan Habeeb discusses the urgent need for better childcare in academia, where options are currently few and costs are high. I wholeheartedly agree, and can think of no more significant life event than parenting—but it also certainly tips the work-life balance. As a faculty member, I believe that this balance or imbalance should be acknowledged when faculty are up for merit and promotion.
There is a precedent for it. During the first years of the coronavirus pandemic, across institutions many faculty up for evaluation were allowed to submit a supplementary COVID-19 impact statement along with their files for consideration. The purpose was to illustrate for reviewers the impact of the pandemic on their academic productivity. While this was a step in the right direction, a system that wholly measures and values individuals by academic productivity, and evaluates everyone on the same playing field, in COVID times or otherwise, is flawed.
While many institutions discuss, and highlight their support for, work-life balance, that support is often not the reality faculty experience. As many faculty who have gone through the review process know, the area that often matters most is research, and the unrealistic expectation of applying for grants every cycle has led to the current situation of overwork, burnout, and productivity trumping all else on the way up the academic ladder. The most privileged climb that ladder most easily and are least likely to be criticized by similarly privileged committee members (e.g., white, male, straight, cisgender, and healthy, with educated parents, adult children, the ability to travel for talks and conferences, fewer significant disabilities, and basic needs met). This reality does not support building a diverse academic community and is inherently exclusionary.
In this light, I offer a modest proposal. Just as faculty report on accomplishments in research, teaching, and service to achieve merit and promotion, they should also be allowed to report on their self-care activities and significant life experiences outside academia. This additional reporting would help address faculty retention, promote positive mental health, and acknowledge reality. Instead of one-offs such as the COVID-19 impact statement, with another to follow at the next emergency, work-life balance must become part of the norm in review.
My vision is not to deny tenure to those who don’t achieve their self-care goals or to people who prefer to overwork. But merely having such a work-life balance section would bring attention to its importance, provide a venue for people to reflect on their complete journey (which includes academia), and allow those who decide on merit and promotion files to understand context and to value work-life balance rather than the unhealthy academic norm of life taking a back seat to academic productivity. So maybe a particular assistant professor didn’t reach the established departmental productivity norms in one scholastic area, but look at that person’s experience outside academia and all they are doing for self-care. Having taken time for themselves may make them a better researcher, teacher, collaborator, and human being who will stay in academia. Shouldn’t that be the goal?
Brandon Brown
Professor of Medicine
Department of Social Medicine, Population, and Public Health
University of California, Riverside School of Medicine
Coordinating Against Disease
I support Anh Loan Diep’s call in “Finding My Future Beyond the Bench” (Issues, Fall 2022) for greater communication between researchers and the public. Diep began their undergraduate research studies in my laboratory, which studies the interactions between the parasite Toxoplasma gondii and the human immune system. For their PhD research, Diep worked on a relatively understudied but increasingly significant fungal infection that occurs in the southwestern United States: Valley fever.
Of all the major classes of infectious agents—viral, bacterial, parasitic, fungal—we know the least about how our immune system contends with and ultimately clears fungal pathogens. Certainly, there is great need for more research and public awareness regarding Valley fever. As Diep points out, the disease is underdiagnosed. In the clinic, it is also often mistaken for other respiratory diseases, delaying treatment and exacerbating symptoms for those with Valley fever. The need for collaboration among clinicians treating patients, scientists studying the molecular mechanisms of the disease, and policymakers could not be greater.
I second Diep’s suggestion that leading scientific conferences in the field of Valley fever devote a section to fostering interactions among scientists, clinicians, and those in public health policy; doing so would have a lasting impact on disease alleviation. As the author reasons, scientific conferences are something of a missed opportunity to build mutual trust among relevant stakeholders, and to build the collaboration and momentum needed to tackle this insidious disease—a vision that many of us support.
Kirk D. C. Jensen
Associate Professor
Department of Molecular and Cell Biology
University of California, Merced
Episode 28: Finding Collective Advantage in Shared Knowledge
The CHIPS and Science Act aims to secure American competitiveness and innovation by investing $280 billion in domestic semiconductor manufacturing, scientific innovation, and regional development. But if past government investments in science and technology are any guide, this will affect American life in unexpected and profound ways—well beyond manufacturing and scientific laboratories.
On this episode, Michael Crow, president of Arizona State University, talks to host Lisa Margonelli about the CHIPS and Science Act in the context of previous American security investments. Investments in food security and agriculture in the 1860s and nuclear security in the 1940s and ’50s created shared knowledge that benefitted all Americans. Early agricultural programs, for example, turned farmers into innovators, resulting in an agricultural sector that can feed many people with very little labor. In similar ways, today’s quest for digital security could make the country more secure, while also changing how individuals live and work with information.
Lisa Margonelli: Welcome to The Ongoing Transformation, a podcast from Issues in Science and Technology. Issues is a quarterly journal published by the National Academies of Sciences, Engineering, and Medicine and Arizona State University. In 2022, Congress passed an extraordinarily bipartisan initiative called the CHIPS and Science Act. The act is meant to make the US a leader in industries of the future. It has $52 billion for semiconductor chip development, $200 billion for science, and $10 billion for regional hubs. It’s a lot of money. In today’s dollars, it’s twice the cost of the Manhattan Project for the chips element alone. How could these investments transform American life?
I’m Lisa Margonelli, editor-in-chief of Issues. On this episode, we’re talking to Dr. Michael Crow, president of Arizona State University, about previous government initiatives around science and security, and what they suggest about the chips initiative and our possible future. Michael, welcome.
Michael Crow: Hey, Lisa. Thank you. Glad to be here.
Lisa Margonelli: There’s a lot of talk about how CHIPS and Science is unprecedented, but how does it fit into the history of government investments in science and security?
Crow: You know, what’s funny—and I don’t think a lot of Americans remember this or have thought about it—but the American government from its design and its outset has always been scientifically driven. President Jefferson in 1804 formed the Corps of Discovery after the purchase of the Louisiana territory from France, and then had Lewis and Clark, as the captains of the Corps of Discovery, scientifically explore from the Mississippi River at St. Louis all the way to the coast of Oregon at the mouth of the Columbia River—an unbelievable scientific exploration. Then many times in the history of the United States, with the Coast and Geodetic Survey and all kinds of other things along the way, the country just became very, very science driven; very, very knowledge core driven.
Three times prior to the CHIPS and Science Act, the US government stepped up and decided to ensure national security around something that they felt was absolutely essential. The first was our moves in the nineteenth century, in the 1860s, with both the establishment of the Department of Agriculture and the land-grant universities, to make certain that food security would always be maintained in the United States. And now we’ve become the most agriculturally abundant, most agriculturally creative, most scientifically driven, food-secure place that’s ever existed. That was sort of case number one.
Case number two was following the Manhattan Project during World War II, nuclear security became a thing, where we had developed this scientific thing: atomic fission. We had done this during World War II; we’d built all of these labs, and now we knew we had this tiger by the tail that would have both civilian applications and weapons applications—which we needed to basically be the best at, forever, so that we could maintain the advantage that we’d gained. And so the Atomic Energy Commission was formed in 1946, later the ERDA, the Energy Research and Development Administration, in the early 1970s. And this really became a core thing.
A third thing kind of on the side, was that we decided after the launch of Sputnik in October of 1957, that we were going to be the masters of space technology. President Kennedy announced going to the moon. NASA was created from the previous agency that had existed since World War I. All kinds of things happened in that space. And in those three areas, food, nuclear, and space, the United States is able to protect all of its interests and to advance its knowledge-seeking requirements in those spaces to our advantage. And finally, now, just recently with the CHIPS and Science Act, we’ve decided that all things digital are so important to the future of the country, like food in the 1860s, that all things digital are so essential that we have to maintain technological—not superiority, but constant technological innovation, constant manufacturing capability, constant ability to be the best at all things digital. So the CHIPS and Science Act is like the agricultural project, the nuclear project, and the space project. They’re decisions by the country to maintain national security around a certain area of technology.
Margonelli: That’s really interesting. And I think what’s in the story of twentieth-century science—we’re pretty familiar with the Manhattan Project and the space program, but we’re a little bit less familiar with what happened in the 1860s. So I want to kind of dive down into that. There was the formation of the agriculture department, and there was also the formation of the land-grant universities. And these things had huge and long-lasting, transformative effects. So let’s talk a little bit about that.
Crow: So imagine it’s 1860. The country’s deeply divided. There’s three people running for president. A person is elected president with around 40% of the vote—that would be Abraham Lincoln. Several states secede from the union; the country’s in crisis. There’s about 30 million people living in the United States at the time, but it’s expanding wildly and quickly, particularly into the West. Food security becomes a question. And then also the notion of inequitable social outcomes becomes a question, as well as our agricultural productivity.
So with Congress realigned, with fewer states present in Congress, two things could be created. One was a national initiative in agriculture, agricultural science, agricultural trade oversight, agricultural ideas and thinking, and so forth: agricultural innovation. So that’s the Department of Agriculture. And then along the way, a guy named Justin Morrill, who was a congressman from Vermont at the time, had thought for some time that each state should sell some of the land given to the states by the federal government to build a college for agricultural and mechanical arts, open to the sons and daughters of farmers and mechanics (which was 90% of the population at the time).
And so that got passed in July of 1862. The states set up land-grant schools like the University of California, the University of Illinois, Michigan State, Purdue, Cornell, MIT. In each of those states and many others—Iowa State, where I went to undergraduate school, was one of those schools—those universities then became, and the history shows this, unbelievable, transformative elements on two dimensions relative to the United States. First, we moved into unbelievable agricultural security and agricultural productivity, and never had the food insecurity that then existed in Europe, existed in Asia, has existed in other places around the world. And then food has just been taken for granted in the United States because it’s been such a perfect area of national security. In addition to that, the innovation created out of these schools then became the driving force for the post-Civil War industrial success of the United States. A lot of the literature has looked at the role of the land-grants.
It’s really quite remarkable. Those land grants, several of them became among the first research universities at scale. The United States accelerated its economic evolution, its social evolution, all these things were driven by basically stabilization of agriculture, movement of agriculture into a powerful economic driver. And then all the engineering solutions and special training and special people that came out of these schools were really, really powerful to the late-nineteenth-century transformation of the American economy.
Margonelli: That’s really interesting because reading what you had written about this sort of sent me back to Hunter Dupree’s book on the history of science and the federal government. And two things came out of that that struck me. One thing is that that transformation of the US science and knowledge enterprise was not really anticipated when they started. When the agriculture department started, it was run by a milkman, I think, and it didn’t know how to generate knowledge. It didn’t know how to solve problems. The Texas fever among cattle got completely out of hand. They had all the wrong ideas, and they gradually moved towards this very unified way of looking at problems and solving problems. And they also kind of transformed, on a very intimate level, farmers all across the country into scientists.
Crow: Yes, you’re absolutely right with that history. And so what we learned was that there was collective advantage to shared knowledge; there was collective advantage to shared training and shared experience. So over time, county extension offices were built in every one of the 3,000-plus counties in the United States. There were agricultural extension specialists that were helping individual farmers to accelerate their innovation. Hybrid corn varieties, ways to take care of pests and insects and weeds, all kinds of things, all enhancing productivity and also enhancing farmer success. So throughout European history and other parts of the world, farm collapse, agricultural collapse, economic collapse, bread riots, food riots, starvation, all these things were avoided here because we found a way to turn every individual farmer into a state-of-the-art agriculturalist. They could use their own ingenuity, but then they could draw from the collective knowledge of the country.
And yes, the Department of Agriculture started the same way that the Department of State [did]—I mean, I think the first patent agents and spies for the United States in terms of acquiring other technology reported directly to Hamilton and Jefferson when they were both cabinet members in the first administration. And so all these departments started out as small, unorganized things. But what happened was then the value of connection and collective knowledge and core scientific knowledge and core technological knowledge became really, really important to the success of the country.
Margonelli: Yeah, it’s really a fascinating transformation. I think one of the other things that came up, another parallel to CHIPS and Science, which has been discussed as industrial policy or the government getting out of its lane and getting involved in working directly with industry, was that when these agricultural acts started, they essentially transformed the role of government into working on the general welfare and generating knowledge. And we have something sort of similar happening here.
Crow: Well you know, what’s weird about that is it’s always funny to me when people talk about interference of the government. In fact, they’ve forgotten to go back and read the founding documents or the debates that occurred in the summer of 1787. A lot of things got left on the cutting room floor in Philadelphia that summer, proposals that were never brought into the Constitution. But the “general Welfare” remains in there. And people just forget, what does that mean? Well, how about food security? How about nuclear security? How about making certain that we never have to live without the essential digital devices that we’re going to need for every aspect of our life, our drinking water, our clean air, our cars, our electric vehicles, our computational tools, our learning assistants, our everything—all these things require these digital assets.
If you go back, it’s kind of weird, all these people who are against earmarks. So Samuel Morse’s funding for the first telegraph was an earmark from Congress. The wind tunnel that ultimately became the Jet Propulsion Laboratory was an earmark. So this notion that somehow you can’t have politics involved in building national capability, I don’t get that. And then there’s just this weird thing about, “Well, the government shouldn’t be involved in this.” Well, it’s not the government that’s involved in this. The government is facilitating collective knowledge. It’s facilitating base knowledge from which everyone can benefit.
If you look at somebody like George Washington Carver and what he was able to do in organizing knowledge about the peanut and the growth of the peanut, helping after Reconstruction Black farmers in the South to gain wealth and move forward with things. I mean, no individual farmer could do that by themselves.
Every individual farmer could be a better farmer because of the collective knowledge. And then from that, the industries that were developed from that base in the United States are unbelievable. It’s almost 20% of the economy, if you look at all things that agriculture touches just in that particular area.
Margonelli: I think this is a good time to move on to the second major initiative, which is after World War II, when you had the science and security initiative that had three elements to it. It created an infrastructure, it mobilized talent, and it had critical supply chains attached to it. Can you talk a little bit about how that transformed?
Crow: Well, what was interesting is that President Franklin Roosevelt, in the summer of 1940, was already speculating that the United States was highly likely to become involved in World War II. We had not yet been attacked by the Japanese, and in general, the public did not want to get into the war. But the president’s job is also to be prepared for war. So he called on a person named Vannevar Bush, who at the time was the president of the Carnegie Institution of Washington. He had been the vice president for research at MIT in the 1930s, and he was one of the founders, after earning his PhD in electrical engineering in the 19-teens, of a company called Raytheon. So he was sort of a polymathic computer person, a design-minded electrical engineer. He could do all these things.
Margonelli: He was an amazing writer too.
Crow: Yeah, he was, absolutely. So he was called upon by President Roosevelt to create a thing that was ultimately called the Office of Scientific Research and Development, OSRD. And that became the mechanism by which President Roosevelt said, “I want you to bring all of the talent of American universities and American science and American technology to bear so that when we enter this war, we can have as few casualties as possible and we can end this war as quickly as possible,” which is a fantastic objective. And then when we entered the war in December of ’41, he accelerated unbelievably the scientific capabilities of the United States, particularly at the universities, building the Manhattan Project, launching other initiatives. So as you said, he brought talent to bear, he brought ideas to bear. He brought structures and mechanisms and, in a sense, transformed the way that we thought about science as a mechanism to protect democracy—and science as a mechanism to advance our economic and health success.
So much so that by the end of the war, just prior to President Roosevelt’s death in April of ’45, Bush had been asked to put together a report on what to do with all this science capability. And he wrote the famous report Science, the Endless Frontier, which came out in July of 1945. President Truman accepted it. And from that point forward, you see that we got out of that the Atomic Energy Commission, we got the National Science Foundation, we got the expansion of the National Institutes of Health. The United States became the most significant scientific place in human history in terms of discoveries and technologies and moving things forward. And research universities began growing up all over the place, and the economy began doubling and doubling and doubling. And so what happened was we secured for ourselves, in a sense, nuclear defense, which has proven to be complicated but positive. But we also designed out of that an unbelievable creative enterprise engaging the entire country.
Margonelli: It’s also interesting because it also kind of remodeled the relationship between government, research universities, and industry. It’s been called sort of the golden triangle or “a new kind of post-war science” that blurred the traditional distinctions. And that has proven to be an incredibly powerful engine of change in innovation through the development of GPS as it migrated out of military applications and into our cars and our phones—and right now, as we talk, smartphones, AI, jet engines, all of this sort of stuff moved from the military and security sphere out into our lives.
Crow: Well, what happened was that these research universities began being built in the 1870s with Johns Hopkins, and in the 1890s with Stanford and the University of Chicago; then a bunch of the public universities and the land-grant universities came in and became research universities. But even by 1939, they weren’t heavily funded by the government. They were doing their own research, funded by some foundations and some private entities. And then, when they were asked to rise to the national challenge of carrying out a global conflict to advance the United States to victory on two massive war fronts at the same time, technology played an unbelievably important role in all of that, from proximity fuses, to other kinds of devices, to code breakers, to atomic weapons designers and torpedo developers—everything that you can imagine. That quickly brought the war to an end; the main combatants, Germany and Japan, were transformed forever into functional democracies with significant economic outcomes.
This was perceived at the moment as an unbelievable transformation in the role of universities, and it just has never stopped. So what began in ’41 and ’42 accelerated in the fifties, accelerated in the sixties, and has continued to accelerate, which has then fueled, as you said, the internet advanced technologies. It fueled us becoming the unbelievable developers of these advanced semiconductors and microchips, advanced materials research, advanced computation research, medical research. All these things got going, and now it is a core part of who we are—and in fact has been emulated by others, which is making others nervous now that other places are “catching up” or passing us or whatever, because they’ve decided to take on the same model, build research universities, fuel these research universities and become competitive with the unbelievably successful United States.
Margonelli: Yes, and that actually brings me to my next question, which is: you’ve called failing to secure digital security a strategic error. What do you mean there?
Crow: So what I mean by that: we developed the fundamental materials science, the fundamental engineering, the fundamental designs, the breakthroughs in the first semiconductors, the breakthrough that was the first transistor, all the things that came—the transistor was ’47, and in the fifties and sixties these semiconductor materials were being built on. We then built the most advanced chips, the most advanced microchips, the most advanced systems. Then, because the costs of manufacturing were potentially lower in other parts of the world, manufacturing got offshored and development got offshored—so much so that by the time we get to the late teens and the 2020s, we find ourselves with a small manufacturing base, a significant research base, and an interruptible supply chain. So the strategic error was to not see these as a national asset in the way that we see nuclear and the way that we see food, both of which are inseparable from our existence. In this case, we thought that this was only a commercial thing. It’s not only a commercial thing. These chips have become as essential as water to our success going forward.
Margonelli: That’s interesting too, because food, it has national implications, but it also has sort of personal implications, as we’re seeing with this talk of taking TikTok off of our phones and things like that.
Crow: Well, the technological applications using these technologies are slightly ahead of our social thinking right now and our ability to understand these things. So we’ve got all kinds of technology manifestations that are causing social disruption and social upset, and we have potential for security threats, we have potential for cultural threats. We’ve got all these things that are going on. All those things are transitory and will be addressed. What’s not transitory is the fact that our species is now enabled by these microchips, which are basically enhancing every single individual. All of us carry, or most of us carry, an iPhone or something like an iPhone, or an Android phone or something like this. Well, that’s a supercomputer attached to your body, connected to all the other supercomputers that are out there. And with ChatGPT and other things coming along, those will become, over time, powerful assistants to every person, every organization.
And so what’s going to happen here is that our species, for the first time, has now created a foundational tool—a computational device in the form of a semiconductor, which is an electronic system—that advanced science keeps reducing to ever-smaller scales. I mean, I think the most advanced chip that IBM has puts 50 billion transistors on a single microchip. My phone has, I think, only 12 billion transistors on its microchip. So everything will change. Medicine will change, business will change, computation will change, learning will change. Everything will continue to evolve. And so, like food and like nuclear, digital will be that kind of thing. And we’ve just come to that realization, and the CHIPS and Science Act is that realization.
Margonelli: Yeah. It’s so interesting when you’ve really put it in a larger context of how far this may take us and how it may change and transform our lives and our fundamental relationships. I think the question here is, what can we learn from the past about how CHIPS and Science can have the same transformative potential?
Crow: Well, one thing we need to learn from what we learned in agriculture is that you’ve got to work at the level of the people. You’ve got to think sociologically about the outcome of these kinds of technologies. You’ve got to do technology assessment. You’ve got to understand what these technologies might do. You’ve got to think about how to educate the people to then fully take advantage of the technology and become, as we have in agriculture, a force spurring development across the entire economy, not just in concentrated corporations. That will most fully fuel “Schumpeter’s forces of creative destruction,” the term that he used, Schumpeter being the Austrian economist who thought about what innovation is and how you drive it. So innovation can’t be just these big chip manufacturers or the big tool manufacturers only. They have to be then spurred by whole new ways of thinking about chips and using chips and using technology.
So that’s a lesson from the past. Another lesson from the past is to basically not take our foot off the gas. This can’t be on again, off again, on again, off again. It has to be continuous innovation, continuous forward movement.
The other thing is that competition is real. We can’t stop competition from other parts of the world. We can only win. And so if you try to stop something, you don’t win. If you try to block something, you will lose. And so you need to understand global competition.
And then I think the other thing that we need to think about in terms of a lesson from the past coming out of nuclear is that we were clueless as to all of the ultimate implications of nuclear weapons technologies, certainly. And so now we have unmitigated nuclear proliferation, which hasn’t been thought through, hasn’t been managed. And so how do we manage the negative outcomes of some of these technologies more carefully? That’s certainly a lesson from the past.
And I think another lesson from the past is that sometimes we don’t think about what it all means. So for instance, through agricultural technology development, we eliminated the agricultural workforce. OK, well, that happened kind of gradually, and we adjusted, but we had a deep cultural impact on the country because much of the country was agriculturally based. And so these digital technologies will also have huge workforce implications, and we should think about them in front of these changes as opposed to during or after these changes. And so those are lessons from the past.
Margonelli: Yeah, recalling what happened with the agricultural act of transforming people’s ability to be scientists in their own lives and have that contribute to their own satisfaction and ability to feed themselves and their families has some interesting parallels for this.
Crow: Yes. And so one parallel is—it certainly is the case. In fact, on a project that I’m working on as a part of the National Advisory Committee on Innovation, which I’m a member of, I’ve been arguing that we need to make certain that we can have, down to the level of communities, incubators for the uses of chips in ideas that teenagers and others are coming up with. And how do you help people to build new kinds of chips and new kinds of activities and so forth. You got to look at these things as not just the realm of the massive global corporation, but the realm of any tinkerer, any innovator. If you read [Walter] Isaacson’s book, The Innovators, it’s a fabulous story about how some of these innovations in digital technologies emerged. They were not the product of just the big corporations. They were the product of all kinds of people and the big corporations.
And so what we need is both, and then more. So how do you facilitate all of that? And also, how do we get more even economic benefit across the country from these kinds of technologies? What could be developed using these kinds of technologies in new applications to help manage, I don’t know, the Mississippi River, or grow rice better in the delta regions of Arkansas and Louisiana along the Mississippi River and Tennessee? How do we do all those things? We need as much of this to be, like the agriculture department, as localized as possible.
Margonelli: That brings up two other interesting parallels to the agricultural act. One was the realization that manufacturing this knowledge could help raise everybody’s boat. And that is kind of called into question a little bit with CHIPS and Science: are we going to try to raise the knowledge only in the US and raise everybody’s boat in the US, or are we still a global knowledge producer? And that seems like something that’s going to have to be negotiated.
Crow: Well, I mean, yes, it’s complicated because some of these technologies can be particularly handy in weapons systems. And so what one wants to think about is how do we float all boats to drive up all economic activity? Rounding up, the global economy is a hundred trillion dollars. Well, there’s no reason that it couldn’t be a thousand trillion dollars, be environmentally clean, drive up per capita income across the entire planet, drive us into all kinds of new things. Well, we’re not going to do that if we hold onto these digital technologies in a way that everyone doesn’t benefit. We just have to find a way to make certain that we reduce the probability of kinetic combat. There may be ways where these technologies can be very helpful to us in that also. We just have to think it through. We’re not thinking it through enough.
Right now we are heavily concerned about the rise of new major competition in China, new major competition in other parts of the world. I’m all for competition. Competition makes you perform better, harder, cheaper. There’s all kinds of ways that you solve things. We just have to make sure that what we get out of this is global evolution and fair competition.
Margonelli: We are at a very interesting point in history because we are at the beginning of this sort of arc of another 80 years. As you mentioned, we’ve had 80 years of transformation from the initial sort of nuclear security work. And we’ve had 150 years of evolution from the agricultural work. And as we start down that path, history shows us that we don’t actually know where we’re going, but we have to actually keep our eyes on what things are important as we go forward.
Crow: Well, what’s interesting, no one in 1940 would’ve predicted where we are now with either nuclear weapons, nuclear power, the emergence of fusion power, the Perseverance rover on the surface of Mars being nuclear powered, all these things that are happening—no one would’ve thought about any of that. We will have nuclear-powered spaceships, we’ll have all these things going on. All these things that are happening, no one would’ve predicted any of that. And then in agriculture, no one would’ve predicted that only 2% of the American population would be involved in production agriculture, feeding 340 million people in the United States and probably another 300 million people around the world, something like that, all from 2% of the American population. No one would’ve predicted that. No one would’ve thought about sustainable agriculture or whole new ways to build plant-based meats and all these other kinds of things that are going on. Not a single person could have thought of that.
And here we are now in 2023, thinking about what will happen between now and 2100 when in fact these technologies, these digitally based technologies will be more impactful than either nuclear or food. No one can predict where it’s all going, which means, therefore, that we need more technology assessment capabilities, more predictive analytics, a deeper understanding of what these things might do, and just more thoughtfulness. Not to predict, because we’ll never get the predictions correct, but to understand and to adjust as we go along the way.
Margonelli: You have been really active in CHIPS and Science in Arizona, and as you think forward for how Arizona’s life, and not just the whole state, but the individual life could potentially be transformed, what are the things that you hope to steer towards and what are the things that you worry about?
Crow: One of the things I think that will happen for certain is that Arizona already is a huge manufacturing center for semiconductors and will become even more than that: it’ll become the most concentrated semiconductor manufacturing place on the planet. And then all of the supply chain related to that, which then also connects to the battery companies that are here and the electric vehicle companies that are expanding here. So empowerment of all kinds of renewable energy systems, renewable tools, renewable devices, all those kinds of things. All of that will be advanced here. And then I think, beyond that, what happens in all of that is, how does one find a way in Arizona to become the place where the best renewable energy-based, best sustainability-based economy can be built using every electronic computational tool imaginable?
So you can better manage water with more data, more data, more data, more data, more data. You can better manage all complex systems, like adjustments to all of the complexities of global management, with more computational outcomes. We don’t have the computational capabilities to manage the complex interfaces that we have with the environment. So if we want to better manage our relationship with the environment, we need more intensive tools to do that, and we need companies building those tools. And so I’m hopeful that Arizona will be a place where a lot of those things grow.
Now, the downside here is there’s some chance of uneven economic opportunity for the population because of educational differences. And we’re working very heavily to address that at ASU by giving pathways to everyone to have a chance to participate. There are unresolved issues of the waste streams from these advanced digital technologies, which have to be very seriously thought about because the chemicals are particularly hazardous in many cases.
And then I’d say that there is a huge worker transformation that we have to worry about. So as these computational tools become—you know, the reason that autonomous vehicles don’t work as well as we would like them to work is that we don’t have computational tools that are good enough. You get a computational tool that’s 20 times better than a chip today, and you can now calculate almost anything, any error function. And then all of a sudden half the drivers don’t have jobs, half the servers don’t have jobs in restaurants, the grocery stores, as you’ve already seen if you’ve been to one lately. There’s nobody that works there. You just check out yourself. And so what that means then is that I think the downside that we have to think about is how do we build an economy that is robust for everyone with these technological breakthroughs driven by these digital technologies? And this happened in agriculture; it’s going to be more complicated with digital. And so we’re going to have to really, really worry about this significantly.
Margonelli: To learn more about previous science initiatives mentioned in this conversation, visit the podcast page at issues.org. You can email us at [email protected] with any comments or suggestions. And you can subscribe to The Ongoing Transformation wherever you get your podcasts. Thanks to our podcast producer, Kimberly Quach, and audio engineer, Shannon Lynch. I’m Lisa Margonelli, editor-in-chief of Issues in Science and Technology. Thank you for joining us!
Revisiting the Wireless Body
Though the notion of a “wireless body” is often presented as a recent concern in the age of electronic medical records, or pandemic-era telemedicine, Jeremy Greene shows us that this fantasy has been a structuring concern in medical care for the past 70 years or so. To many people, this alone might be a stunning finding—but it stuns only if we are without our history.
The notion that we might be able to quantify continuously the inner workings of the body (and then surveil and disseminate that data), and therefore prevent excess death, increase wellness, and render transparent the black box of the body may seem either still speculative or like a moral good or both. While the fantasy is simple and longstanding, its implications are less so.
The media of medicine have long promised not merely to reveal the body, but to democratize medical care in the United States. These media are often articulated as transcending barriers to care: if we only connect the right devices to the right people in the right system, we might reach those who traditionally have been excluded from the scene of care. For, so the story goes, the scene of care would go to them.
In “The Wireless Body” (Issues, Fall 2022), Greene says, not so fast. In turning the patient’s inner workings into streams of data, we complicate the ethics of medicine by blending it with the ethics of data science. And we do so haphazardly. Even the question of what counts as medical data is a tricky and deeply concerning one. What might one system of patient protection, say the Health Insurance Portability and Accountability Act (HIPAA), protect, and what might be designed to fall outside its remit? Greene shows us that when patients become users, and we turn our companion technologies into medical devices, we might open up the black box of the body, but we also hand over our most sensitive medical data and, as the author points out briefly, the implications of that data can be volatile, as in the case of consumer apps that track menstruation and fertility in light of the US Supreme Court’s recent Dobbs decision on abortion. Markets shape the medium of care much more than practitioners and patients, and so do politics.
Greene shows us that always-on medical devices, from the wearable Holter monitor that records heartbeats to the tools that might be in your pocket right now, are not some neutral or obvious good. Instead, if we read them closely, we can use them to reveal the contradictions of American medicine and its competing impulses: to democratize and to scale it, to make it universal and to make it intimate with individuals, to provide patients freedom while indoctrinating them into bodily surveillance, to make them transparent objects of study while privatizing care and linking that care to proprietary devices and the biases and interests they carry.
Hannah Zeavin
Assistant Professor of Informatics
Indiana University
Can CHIPS and Science Achieve Its Potential?
Amid rising concerns over the United States’ capacity to innovate and address large-scale societal challenges, the CHIPS and Science Act represents a positive and well-timed achievement for legislators and their staff. As multiple authors point out in a special section of the Fall 2022 Issues that explores how to help the act deliver on its promises, the 400-page law seeks to address goals in a variety of areas: semiconductor production; skills development in science, technology, engineering, and mathematics; regional innovation; and discovery science, among others.
In “An Inflection Point for Technological Leadership?” Steven C. Currall and Venkatesh Narayanamurti raise a particularly salient and subtle point: the attempt in CHIPS to nudge parallel investments in discovery science and technological invention, primarily through reforms to the National Science Foundation. The intimate interplay between discovery and invention has yielded breakthroughs in the past, and such linked investments may offer potential going forward. Even before CHIPS, the NSF boasted an appealing mix of discovery science grant programs, multidisciplinary research centers, and industrial partnerships. The new law creates an opportunity to deepen these networks and expand the funding palette to accelerate discovery, invention, and commercialization together. Whatever the outcomes of CHIPS semiconductor funding, the other areas of science reform flowing from the legislation may yield real long-term benefit for US innovation.
But a throughline of the CHIPS coverage is the idea of “potential.” If the long-run goal is to enhance US science and innovation leadership and industrial capacity, then the bipartisan vote for the CHIPS and Science Act really represents the end of the beginning, as agencies move on to implementation and Congress performs oversight.
What comes next? The most crucial and immediate need is robust appropriations, as Currall and Narayanamurti correctly identify, including for newly created programs on regional technology hubs, nuclear research instrumentation, and microelectronics research and development, among other areas. As of mid-December 2022, Congress is already well past the fiscal deadline and has yet to reach a deal on overall spending that would facilitate CHIPS investments. There’s a chance that appropriations may stretch into 2023, which would represent a failure in Congress’s first test to put the science provisions into practice. The multiyear time horizon for CHIPS means future Congresses—and possibly a new administration in 2025—will also be on the funding hook for fulfilling the CHIPS and Science vision.
Beyond appropriations, the federal government should think about honing its strategic acumen. CHIPS directs agencies to invest in critical technology areas and mandates a broad science and technology strategy from the White House Office of Science and Technology Policy (OSTP). In light of these directives, government should expand its capacity to analyze and understand data, trends, and national performance in emerging and critical technology. This would raise its tech IQ, improve federal ability to spot critical supply chain trouble, and ensure smart investment decisions with an eye to long-term competitiveness. Congress can help lead in this area, and should encourage OSTP to take its role as strategist and quarterback seriously.
The new Congress will also have an opportunity to focus on research and industrial capacity in a sector that didn’t get much attention in CHIPS: space. Congress and the agencies could work together to address space junk, expand technology investments in in-space manufacturing and advanced propulsion, modernize federal space technology acquisition, or reform global treaties to facilitate the peaceful development of space. These and other moves could do for space what the CHIPS and Science Act sought to do for microelectronics, in a similar spirit of enhancing US competitiveness in the long run.
Matt Hourihan
Associate Director for R&D and Advanced Industry
Federation of American Scientists
The Urgent Need for Carbon Dioxide Removal
Carbon dioxide removal, or CDR, has emerged as a key climate mitigation activity to limit global temperature increase to well below 2 degrees Celsius. In 2022, the rise in global temperature reached 1.1 degrees, resulting in more intense and frequent droughts, floods, and fires, as well as rising sea levels. Without immediate emission reductions, carbon dioxide removal, or both, society is on course to exhaust the remaining carbon budget for staying below a 1.5-degree rise within around 10 years at current levels of emissions from fossil-fuel combustion.
CDR technologies provide a critical opportunity to implement negative-emissions technologies to support the global transition toward renewable energy portfolios needed to reach net-zero carbon emissions. In “What Is the Big Picture in Carbon Removal Research?” (Issues, Fall 2022), Gyami Shrestha reviews the history and contemporary landscape of CDR research and observations supported by the US government. CDR emerges as an active area of interagency research, technological development, and implementation, now mapped by an extensive survey carried out by the Interagency CDR Research Coordination Group.
Several thematic areas emerge from the survey that link the federal activities across areas of technology development, implementation, and monitoring. Technology areas include direct air capture and storage, soil carbon sequestration, reforestation and afforestation, enhanced mineralization, ocean capture and storage, and biomass removal and storage. Technology transfers to the private sector allow for scaling of activities such as direct air capture or mobilization of resources for climate-smart agriculture and forest management. And monitoring includes enabling new multiscale measurements of carbon dioxide from tower networks, aircraft, and spacecraft missions. Combined, the investments situate CDR as an important component of a climate mitigation strategy.
Several challenges remain. These include evaluating the full effects of CDR technologies on other ecological services that society depends on; implementing CDR while taking into account issues related to environmental justice; and addressing uncertainties and responsibilities for monitoring, reporting, and verification. These are multidisciplinary challenges that require expertise in Earth system science, social science, policy, land management, and engineering, and that need to be facilitated with continued coordination and transparency. Establishing unique partnerships between the public and private sectors can provide novel opportunities to rise to the challenge, as demonstrated recently through new satellite constellations successfully detecting point-source emissions of methane from oil, gas, agriculture, and landfills.
Benjamin Poulter
Research Scientist
Earth Sciences Division
NASA Goddard Space Flight Center
Gyami Shrestha reflects on her experiences during her tenure as director of the US Carbon Cycle Science Program, especially with respect to coordination of federal research on carbon dioxide removal (CDR) approaches and technologies. She describes how the program, along with the Interagency Carbon Dioxide Removal Research Coordination Group (I-CDR-C), produced a compendium of federal research activities in this arena. This compendium, as she notes, helps to “identify gaps and establish fruitful new collaborations and partnerships” among participating agencies, a crucial element for ensuring the success of federal CDR efforts.
While the breadth of the compendium is exciting, it is an incomplete snapshot of the full scope of federal CDR research efforts. Contributions to the compendium, and in fact participation in the I-CDR-C itself, are voluntary efforts by participants who have the time, energy, and commitment to engage with their colleagues on this topic. There are federal departments, agencies, and programs that fund or conduct relevant research but are not currently involved with I-CDR-C work, whether due to lack of awareness or limitations on capacity. Without a mandate for all relevant agencies and programs to report and coordinate, there will be missed opportunities and perhaps unforeseen consequences in this rapidly evolving arena.
As reported, the I-CDR-C found that some of the activities cited in the compendium were not explicitly focused on CDR, but instead were carbon cycle science research efforts that are foundational to the design and implementation of CDR work. As Shrestha notes, there are “fundamental questions [about the carbon cycle] that have yet to be sufficiently answered.” Inclusion of these research activities in the compendium demonstrates an understanding of the importance of this underlying research to CDR, but there needs to be clearer articulation of pathways to support and enhance the connections of research to operations and back again. How can programs that are typically considered more “basic” science ensure that their insights inform applications like CDR? How can CDR activities potentially contribute to our fundamental knowledge of the carbon cycle and other Earth and social sciences?
Ongoing interagency collaboration through the I-CDR-C can help facilitate these matters, as can engagement with the broader research community. Programs such as the North American Carbon Program and the US Ocean Carbon and Biogeochemistry Program provide space for the ongoing development and support of communities of practice around carbon dioxide removal that span sectors and domains, and both programs have held trainings and workshops on the topic that have generated significant enthusiasm. The urgency and complexity of climate change threats require collaborative, iterative, adaptive approaches to research and applications that not only cross traditional disciplinary boundaries but will likely also require new organizational and institutional frameworks in the way we approach science altogether.
Libby Larson
Coordinator
North American Carbon Program
Missouri Lawmakers and Science
In “How Missouri’s Legislators Got Their ‘Science Notes’” (Issues, Fall 2022), Brittany N. Whitley and Rachel K. Owen examine the value and challenges of bringing scientific information into the vast array of decisions that state legislators must make each year. Of the more than 2,000 bills considered by the Missouri General Assembly annually, many impact the daily lives of people across the state, changing how they connect, work, learn, and live. And eventually, many of these seemingly “local” decisions add up to national impacts.
We often hear concerns that people do not care about science or do not trust experts. However, experts often do not show up in ways that center the user’s actual problem, respect the context and complexity of the decision being made, and provide the scientific information in a way that can be valued and understood across backgrounds and party lines. Whitley and Owen provide a compelling approach for addressing these challenges.
There is an additional underlying challenge I would like to bring to this discussion, one whose solution funders do not like to support or sustain. Few groups have the breadth of expertise to understand the landscape of emerging scientific information, aggregate that learning into knowledge, and translate that knowledge into useful inputs for decisions.
There is a fundamental need for trusted translators in our society.
If we want the funding the United States pours into science—over $40 billion of federal funding for basic research annually—to lead to real-world solutions and to impact how policy leaders make decisions, it will be necessary to provide those leaders with information in ways they can hear, understand, and trust.
The scale of the challenge is often glossed over. Looking at just the Scopus database, curated by independent subject matter experts, we see that the overall scientific literature as characterized by the National Science Foundation is growing at almost 4% each year, with approximately 1.8 million publications in 2008 and growing to 2.6 million in 2018. This gives us roughly 20 million articles in just a 10-year period from Scopus alone, providing new information that is supposed to be building on itself. But the articles are often laden with jargon, highly academic, and focused on other experts in the field. They also tend to highlight novelty instead of a road map for how to actually use the information and aggregate it into knowledge.
Who is supposed to aggregate knowledge from those tens of millions of papers over time? This is a daunting task for an expert; it is an unreasonable if not impossible expectation for legislative staff with little support. That funders of all types—federal, state, philanthropic, and industrial—will pour billions of dollars into science, but far fewer dollars into sustained efforts to make science results useful, is a fundamental flaw in our system.
I applaud the progress being made by the Missouri Science & Technology (MOST) Policy Initiative, as the authors document, along with the achievements of other state-level science programs in New Jersey, California, and Idaho, among others. But if we believe that evidence-based decisions are critical to solving society’s most pressing problems, we must rethink how we support and sustain those organizations actually doing the work.
Melissa Flagg
Flagg Consulting
Former Deputy Assistant Secretary of Defense for Research
As a Missourian, I applaud the efforts of the visionary founders and the savvy of the current leadership of the Missouri Science & Technology (MOST) Policy Initiative and their contributions to sound science-based policy and legislation in the state. As Brittany N. Whitley and Rachel K. Owen describe, in a truly short time and in the context of a part-time legislature (never mind the restrictions of a global pandemic), the MOST fellows have already demonstrated remarkable effectiveness and have garnered the respect and appreciation of our state lawmakers.
I have always hoped that the highly successful model of the American Association for the Advancement of Science’s Science and Technology Fellows program, which has helped integrate scientific knowledge into the work of the federal government, would be, in the true spirit of federalism, translated into state houses and local governments. MOST is among a small number of state-based programs demonstrating the effectiveness of such translation. Whitley and Owen provide an excellent historical record of the importance of having sources of solid, nonpartisan scientific knowledge informing the many decisions state governments make that can impact, for example, the health of their citizens, the future of their environmental resources, the strength of their economies, and the quality of their educational systems.
It is well known that private funders typically see themselves as the source of “start-up” funding and not a source of long-term sustaining support.
The piloting of state initiatives would not have been possible without the support of private funders. Investment by the Gordon and Betty Moore Foundation was integral to the launching of several state programs, and in Missouri, the James S. McDonnell Foundation contributed significantly to launching MOST and has recently renewed its support. So, in what is otherwise a positive article, I was surprised to read the negative slant of the discussion concerning the role of private funders.
I am sympathetic to the plight of programs, such as MOST, whose fellows are engaging in novel undertakings requiring both the building of trust relationships and the time to develop a record of accomplishment. However, it is well known that private funders typically see themselves as the source of “start-up” funding and not a source of long-term sustaining support. Philanthropy is a source of social venture capital—with the return on investment measured in terms of contribution to the common good. For philanthropy to continue as a source of venture funds, it has to carefully evaluate the duration of the commitment it can make to any one beneficiary. While I am certain MOST will continually garner support from private funders, it is likely that the identities of the donors will change as the organization’s maturation changes its needs. It is also likely that the leadership of MOST will continue to grow more skilled at communicating its accomplishments and requesting the necessary resources to continue its essential work.
I fervently hope that MOST will garner long-term and sustainable support from the constituency it serves so well, and that the state of Missouri and its citizens will recognize the value that MOST fellows provide to informed governance by appropriating funds to the program.
Susan M. Fitzpatrick
President (retired)
James S. McDonnell Foundation
The Vital Humanities
The humanities bring a range of important perspectives to bear on the scientific and technological issues with which institutions such as Georgia Tech wrestle, as Kaye Husbands Fealing, Aubrey Deveny Incorvaia, and Richard Utz note in “Humanizing Science and Engineering for the Twenty-First Century” (Issues, Fall 2022). I want to start from that general argument in favor of interdisciplinarity and make a slightly different case, however.
First, it’s important to note that the humanities are not the arts. Humanists are trained in analysis and interpretation; they are not trained in aesthetic production. Thus, when the article cites case studies from a Georgia Tech publication called Humanistic Perspectives in a Technological World that turn out to be about technical writing, music composition, and theater production, I worry. None of those fields center analysis—they all focus on production.
Collapsing the arts into the heading “humanities” is not uncommon. When I went to the Washington, DC, launch of the report cited in this article, The Integration of the Humanities and Arts with Sciences, Engineering, and Medicine in Higher Education: Branches from the Same Tree, I was struck by the poster presentations, which included projects that featured dance, music, art, and poetry. But not a single one of them included the humanities. Nothing analytical, critical, or interpretive. When “arts and humanities” is the framework being used, the humanities tend to disappear. It’s easier to talk about the arts: everyone knows what music or poetry is. It’s harder to be concrete in talking about philosophy or literary criticism. But what philosophers and literary critics do is just as essential as what musicians or poets do: they enable us to interpret the world around us and to posit a better one.
What philosophers and literary critics do is just as essential as what musicians or poets do: they enable us to interpret the world around us and to posit a better one.
So what I want to point out is the specific value of integrating humanities into science and engineering by recognizing the expertise of humanities practitioners. That expertise is in visual analysis (art history), ethics and problem-solving (philosophy), close reading and analysis (literary criticism), and interpretation of the past (history). The doctor who likes reading novels is probably not the right person to be teaching the narrative medicine course when you have experts in narrative theory on your campus. The article notes that Florida International University’s Herbert Wertheim College of Medicine “uses art analysis during museum tours as a practice analogous to detailed patient diagnosis.” I hope the art analysis is done by trained art historians.
Interpretation and analysis are important skills for practitioners in science, technology, engineering, and mathematics (STEM) to learn, certainly. But the humanities are valuable for more than “equipping STEM practitioners with a humanistic lens.” STEM researchers achieve the best interdisciplinary work not when they apply a humanistic lens themselves but when they partner with those trained in humanities disciplines. I think of Jay Clayton, for example, whose team of humanists at Vanderbilt University’s Center for Genetic Privacy and Identity in Community Settings, or GetPreCiSe, analyzes cultural treatments of the topic of genetics. How do novels, films, stories, and other cultural expressions address the moral and ethical consequences of developments in genetics, and what do those cultural texts tell us about our society’s changing sense of itself? How do such texts shape social attitudes? These are humanities questions calling for humanities methodologies and humanities expertise.
Paula M. Krebs
Executive Director
Modern Language Association
As an educator and researcher concerned with equity, I’m tasked with looking for and identifying useful connections between science, technology, engineering, and mathematics (the STEM fields) and the arts, a collective span otherwise known as STEAM. My work amplifies the contributions of artists and cultural practitioners who are often left out of the discourse in STEM areas. For example, popular comics and movies give us Shuri in Black Panther, who uses her knowledge of science, culture, and the natural resources around her to design and build things. As an artist who uses artificial intelligence, I combine my knowledge of color theory, culture, literature, creative writing, and image editing to create unique art that captures the spirit of the present moment.
While reading the essay by Kaye Husbands Fealing, Aubrey Deveny Incorvaia, and Richard Utz, I thought about Albert Einstein, who used thought experiments as a way to understand and illustrate physics concepts. Einstein considered creative practice as essential to problem-solving. He took music breaks and engaged in combinatory play, which involved taking seemingly unrelated things outside the realms of science (e.g., music and art) and combining them to come up with new ideas. These interludes helped him “connect the dots” of his experiments at opportune moments when he played the violin. Einstein’s ideas influenced musicians such as John Coltrane, who used theoretical physics to inform his understanding of jazz composition.
As an artist who uses artificial intelligence, I combine my knowledge of color theory, culture, literature, creative writing, and image editing to create unique art that captures the spirit of the present moment.
Scientists who embrace the arts use cognitive tools that the biologist and historian of science Robert Root-Bernstein identifies as observing, imaging, recognizing patterns, modeling, playing, and more to provide “a clever, detailed, and demanding fitness program for the creative mind” across scientific and artistic disciplines. A study led by Root-Bernstein considered the value of the arts for scientists. He and his collaborators found that scientists often commented on the usefulness of artistic practices in their work. They suggested that their findings could have important implications in public policy and education. Conducted over a decade ago, this research has not yet led to a marked shift in science policies and the development of STEM and STEAM curricula. Science, the arts, and the humanities are still siloed in most US institutions.
Many scientists and musicians never realize the links between physics and the polyrhythmic structures in music. K–12+ physics teachers don’t teach their students about the connections between theoretical physics and jazz. Music students never consider physics when learning to play Coltrane’s “Giant Steps,” and I think this is a missed opportunity for interdisciplinary learning. Scholars such as the multidisciplinary Ethan Zuckerman argue for the combining of technical and creative innovation through the use of artificial intelligence, which has a potential for composing music, visualizing ideas, and understanding literature. The gaps or frictions in the sciences, the arts, and the humanities belie the fact that all these disciplines or fields are charged with investigating what it means to be human and how we might improve our states of wellness and well-being. To create a more inclusive future inside and across disciplines, it’s up to all of us to make these connections more apparent, and our engagements with inclusion more intentional.
Nettrice Gaskins
Assistant Director, Lesley STEAM Learning Lab
Lesley University
Kaye Husbands Fealing, Aubrey Deveny Incorvaia, and Richard Utz trace the historical development of the argument that science, technology, engineering, and mathematics (STEM) and the humanities, arts, and social sciences (HASS) should be integrated. In the first century BCE—far earlier than Vannevar Bush’s landmark report or the Branches from the Same Tree and The Fundamental Role of Arts and Humanities in Medical Education (FRAHME) reports that Husbands Fealing et al. foreground—the Roman architect and engineer Vitruvius wrote that architecture should meet three criteria: firmness, fitness, and delight. Buildings should exhibit structural integrity but also meet the needs of their occupants and be pleasant spaces for them. Or in an example from today’s world, smartphones should not only work; they should also seamlessly fit into their owners’ daily lives and contribute to their self-identity.
Translated into Vitruvian terms, an overemphasis on STEM addresses firmness but neglects fitness and delight, which are where HASS can help. Critical analysis from the humanities and social scientific investigations play an essential role in assessing, predicting, and designing for fitness. And delight is the domain of the arts. (“Delight” is meant in a broad sense of aesthetic engagement, and may include provocative discomfiture if that is what is intended.)
By treating complex problems as STEM problems at their core, with HASS contributions as mere “add-ons,” we risk solving the wrong problems or only the narrow subproblems that are tractable using STEM methods.
Framed in this way, the authors’ point is clear: considering fitness and delight too late is bound to lead to a narrow conception of whatever problem is being addressed. By treating complex problems as STEM problems at their core, with HASS contributions as mere “add-ons,” we risk solving the wrong problems or only the narrow subproblems that are tractable using STEM methods. Computing professionals in my experience often consider user interfaces to be window dressing—something to be considered after the core functionality is nailed down. In contrast, the Human-Centered Computing PhD program at Georgia Tech—the authors’ own institution and formerly my own—takes the perspective that a human-centered computing solution to any problem has an intertwined Vitruvian structure. Students learn to conceive of technology as part of a broader social web. But they do more than study technology from the sidelines; they design and advocate, much as the FRAHME report’s “Prism Model” encourages medical students to do.
So far, so good, but there’s an elephant in the room. At STEM-focused universities, STEM and HASS are not just separate: they have different statuses. It is no accident that at Stanford University, the STEM and HASS students label themselves as “techies” and “fuzzies,” respectively. STEM professionals may magnanimously acknowledge that HASS contributions can help them, but that is not the same as treating STEM and HASS as co-equal contributors to sociotechnical challenges. My experience of the computational media program that Husbands Fealing et al. cite as an example of successful integration included conversations with colleagues and students who referred to it as “Computer Science Lite.” The valorization of technical rigor—or the mislabeling of epistemic rigidity as rigor—is a badge of masculinized self-identity in many STEM environments, and overcoming that will be a HASS problem in its own right.
Colin Potts
Provost and Executive Vice Chancellor for Academic Affairs
Professor, Computer Science
Missouri University of Science and Technology
While science will benefit from greater integration with the humanities, arts, and social sciences (HASS), so too will the HASS fields benefit from greater knowledge of science, technology, engineering, and mathematics (STEM). What better place to do this than in undergraduate bachelor’s programs across the country, as demonstrated by the many colleges and universities that offer a liberal education requiring all majors to take courses across the disciplines. Many institutions moved away from such requirements in the 1960s and ’70s, and it is time to bring them back, for the reasons that Kaye Husbands Fealing, Aubrey Deveny Incorvaia, and Richard Utz articulate.
While science will benefit from greater integration with the humanities, arts, and social sciences, so too will these fields benefit from greater knowledge of science, technology, engineering, and mathematics.
In 2010, four Vassar College students produced a film, Still Here, for a class on documentary film. The film chronicled the life of Randy Baron, from New York City, who lived with HIV for 30 years. He survived for decades because of a rare genetic mutation, while he watched almost all his friends die from the disease. The film and the four students who produced it demonstrate the value of a liberal education and exposure to a variety of disciplines. The film was powerful because it integrated the history of the 1980s; the public’s response to the HIV epidemic and the politics involved, including the Reagan administration’s slow response; the psychology and trauma of grief; and of course the science of the disease. Without that integration of history, politics, science, and art, the film wouldn’t have won the prize it did at Cannes for best student documentary.
The student director, Alex Camilleri, commented that to be an effective filmmaker, one had to put film second in one’s life, behind first being human. This applies to all STEM-HASS majors. While expertise and depth of knowledge are of course important to all disciplines, bringing understanding of the context of your work in the world, as well as the variety of methodologies used by various disciplines, allows for greater creativity and the advancement of knowledge important to improving the human condition. The slow response to the AIDS epidemic and the development of antivirals in the 1980s and the ’90s has clearly informed the world’s response to COVID-19, saving lives across the globe. It is to be hoped that historians, scientists, politicians, and storytellers of all types will study the COVID-19 episode to improve future responses to pandemics.
Some observers worry that requiring students to take courses that they wouldn’t otherwise choose will not be productive. But the point is in fact to expose students to ideas and disciplines that they are not familiar with and wouldn’t necessarily choose to pursue. If STEM and HASS faculty believe in the benefits of more interdisciplinary exposure for their majors, it can easily become part of the culture and understood to be an important part of the education that will benefit their work in the future, not take away from it.
Catharine B. Hill
Managing Director, Ithaka S+R
President Emerita, Vassar College
Kaye Husbands Fealing, Aubrey Deveny Incorvaia, and Richard Utz effectively address the critical importance of “intercultural” dialogue between science, technology, engineering, and mathematics (the STEM fields) and the humanities, arts, and social sciences to restore a holistic educational model that leverages diverse perspectives. It is indeed time to come together across sectors and disciplines to organize a cohesive response to the intractable problems that plague humankind, such as novel viruses, racism, radicalization, hunger, and damage to the earth.
The network I direct, a2ru, is devoted to not only integrating the arts in STEM and other disciplines, but also providing demonstrable examples of integrated research to help communicate the promise of these collaborations to policymakers and the public. A2ru’s Ground Works, a peer-reviewed platform of arts-integrated projects, features many examples of research that integrates art and the humanities in STEM.
It is indeed time to come together across sectors and disciplines to organize a cohesive response to the intractable problems that plague humankind, such as novel viruses, racism, radicalization, hunger, and damage to the earth.
For example, “Just-in-Time Ecology of Interdisciplinarity: Working with Viral Imaginations in Pandemic Times” is a fascinating example of where and how this transdisciplinarity can exist in the research culture—in this case, calibrated as a coordinated response to a real-time, real-world problem. Other projects demonstrate close and innovative collaborations. “Greenlight SONATA,” a collaboration between a composer, ethnomusicologist, and civil engineer, tested the hypothesis that translating simulated traffic information into music could lead to musical resolution of persistent traffic congestion. “Unfolding the Genome,” a collaboration between scientists and artists, explored how the human genome, the DNA contained in every cell of the body, folds in 3D. A recent special collection on the platform, “Vibrant Ecologies of Research,” looks beyond the project level to the systems level. As its guest editor, Aaron D. Knochel, explains, “The project work and commentaries explore vibrant ecologies of research deepening our understanding of the institutional, social, and epistemological systems that effectively weave arts-based inquiry into the scholarly fabric of research.”
The a2ru network is seeing a subtle yet consistent and accelerating shift in STEM programs finding pathways for their students to benefit from arts and humanities studies, not only to better prepare them for a changing workforce but also to improve their individual wellness and ability to contribute to society within their chosen profession. There is a critical need to bridge a siloed disciplinary culture in higher education (and other sectors). We need networks such as a2ru in place to effectively build those bridges.
As the authors note, Branches from the Same Tree, a 2018 report by the National Academies of Sciences, Engineering, and Medicine, focused on integrating curricula within these various fields. A second phase of this work is surfacing examples of integrated research not only in higher education but in the industry and civic spheres as well. Allied networks such as a2ru, designed and determined to work across differences, can lead us there.
Maryrose Flanigan
Executive Director
a2ru
Technology-Based Economic Development
In “Manufacturing and Workforce” (Issues, Fall 2022), Sujai Shivakumar provides a timely and important review of the CHIPS and Science Act. This landmark legislation aims at strengthening domestic semiconductor research, development, design, and manufacturing, and advancing technology transfer in such fields as quantum computing, artificial intelligence, clean energy, and nanotechnology. It also establishes new regional high-tech hubs and looks to foster a larger and more inclusive workforce in science, technology, engineering, and mathematics—the STEM fields. In a recent article in Annals of Science and Technology Policy, I noted that the act focuses tightly on general-purpose technologies, emanating from technology transfer at universities and federal laboratories. Shivakumar correctly notes that public/private investment in technology-based economic development (TBED) in manufacturing must be accompanied by workforce development to match the human capital needs of producers and suppliers.
I have two recommendations relating to workforce development, in the context of technology transfer. The first is based on evidence presented in a 2021 report by the National Academies of Sciences, Engineering, and Medicine titled Advancing Commercialization of Digital Products from Federal Laboratories. (In full disclosure, I cochaired that committee with Ruth Okediji of Harvard University.) The report concluded that accelerating commercialization of research requires that we achieve a better understanding of workplace and managerial practices relating to technology transfer, including individual and organizational factors that may inhibit or enhance the ability of scientists to engage in commercialization of their research. These factors include the role of pecuniary and nonpecuniary incentives, organizational justice (i.e., workplace fairness and equity), championing, leadership, work-life balance, equity, diversity and inclusion, and organizational culture. Understanding such issues will help identify and eliminate roadblocks encountered by scientists at federal labs as well as universities who wish to pursue technology transfer. It would also allow us to assess how “better performance” in technology transfer is achieved.
Accelerating commercialization of research requires that we achieve a better understanding of workplace and managerial practices relating to technology transfer.
A second recommendation concerns a major gap that needs to be filled, in terms of developing a more inclusive STEM workforce to implement these technologies. This gap centers on tribal communities, which are largely ignored in TBED initiatives and technology transfer. Unfortunately, economic development efforts for tribal communities have predominantly focused on building and managing casinos and developing tourism. Results have been mixed, with limited prospects for steady employment and career advancement.
Opportunities for TBED strategies to aid tribal communities might include the development of new investment instruments, the strategic use of incentives to attract production facilities in such locations, and the promotion of entrepreneurship to build out supply chains. This would require adapting tools for TBED to be better suited to the needs and values of the communities. That means developing a TBED/technology transfer strategy that simultaneously protects unique, Indigenous cultures and is responsive to community needs.
In sum, I agree with Shivakumar that workforce development is key to the success of the CHIPS and Science Act. Two complementary factors that will help achieve its laudable goals are improving our understanding of how to better manage technology transfer at universities, federal labs, and corporations, and involving tribal communities in technology-based economic development initiatives and technology transfer.
Donald Siegel
Foundation Professor of Public Policy and Management
Co-Executive Director, Global Center for Technology Transfer
Arizona State University
Sujai Shivakumar stresses the importance of building regional innovation capacity to bolster manufacturing innovation in the United States. He rightly notes that this needs to be a long-term cooperative effort, one requiring sustained funding and ongoing policy attention.
One of Shivakumar’s key points is the necessity of complementary state and local initiatives to leverage federal and private investments through the use of public-private partnerships. As he notes, the success of the nano cluster centered in Albany, New York, was initially based on collaboration with IBM and the state, especially through the College of Nanoscale Science and Engineering, but it reflected a 20-year commitment by a succession of governors, both Republican and Democratic, to developing a regional semiconductor industry.
We need to capitalize on current centers of excellence, even as we seek to create new ones in a spoke-and-hub model, using their proven strengths to reinforce this ambitious national undertaking.
Thomas R. Howell and I have documented the long-term nature of this effort in a recent study titled Regional Renaissance: How the New York Capital Region Became a Nanotechnology Powerhouse. We describe in some detail (because the details matter) the role of regional organizations, such as the Center for Economic Growth and the Saratoga Development Commission, steadily supported by leaders of the state assembly to find a site, obtain the necessary permits, build out infrastructure, and win public support. The state also encouraged training programs in semiconductor manufacturing by institutions such as Hudson Valley Community College. Moreover, the semiconductor company AMD (now GlobalFoundries) was attracted by the resources and proximity of the College of Nanoscale Science and Engineering, building and expanding a semiconductor fabrication plant—or “fab,” in the field’s parlance—that has led to many thousands of well-paying jobs.
This model is especially relevant to meeting the needs of the growing number of semiconductor fabs encouraged under the CHIPS and Science Act. Indeed, in addressing the need for applied research, student training, and collaborative research among semiconductor companies, the Albany facility stands out, not least for its proven track record and its exceptional capabilities. Its commercial-scale fab is ideal for testing equipment and designs—and unusual for a university.
This facility can and should be a central node in the semiconductor ecosystem that the nation seeks to strengthen. If we are to avoid an excessive dispersal of funds and the long lead times of new construction and staffing, we will need to draw on existing facilities that are already operational and can be reinforced by the resources of the CHIPS and Science Act.
In short, we need to capitalize on current centers of excellence, even as we seek to create new ones in a spoke-and-hub model, using their proven strengths to reinforce this ambitious national undertaking. Time is not our friend; existing assets are.
Charles Wessner
Adjunct Professor
Global Innovation Policy
Science, Technology, and International Affairs
School of Foreign Service
Georgetown University
Semiconductors and Environmental Justice
In “Sustainability for Semiconductors” (Issues, Fall 2022), Elise Harrington and colleagues persuasively argue that the CHIPS and Science Act of 2022 offers the United States a unique chance to advance national interests while decoupling the semiconductor industry from supply chain vulnerabilities, public health problems, and environmental hazards. The path the authors suggest involves improving industry sustainability by, among other actions, circularizing supply chains—that is, by designing with a focus on material reuse and regeneration.
But any industrial effort to circularize the supply chain will face an uphill battle without addressing competition policy concerns. Today’s large companies and investors seem to assume it is natural to seek illegal monopoly power, regardless of its toxic effects on society. Some companies may claim consolidation as a cost of US technological sovereignty, but consolidation is actually a threat to national security. Achieving a circular supply chain will require innovative policies for competition and coordination of pre-competitive interests across use, repair, reuse, refurbishment, or recycling of semiconductors and destination devices. Securing downstream product interoperability, rights to repair, and waste and recycling requirements would be a promising start.
Further, building a strong and sustainable semiconductor industry should not come at the expense of public and environmental health. Attention to environmental justice must be front and center. The European Commission is advancing a novel approach to regulating economic activities for sustainability through the principle of “do no significant harm.” However, the Commission, as well as Harrington et al., fixates on negotiating quantitative, absolute environmental targets to arbitrate the harm of an industrial activity. Harm and its significance are subjective and contingent on the parties involved (and the stage of the industrial lifecycle). Too often, research, development, and industrial policies end up simply becoming governments doling out permission to harm individuals and communities for the benefit of a few for-profit companies. Silicon Valley, with 23 Superfund sites, the highest concentration in the country, has a lot to answer for on this front.
Building a strong and sustainable semiconductor industry should not come at the expense of public and environmental health. Attention to environmental justice must be front and center.
Finally, the United States should avoid a “race to the bottom” of state and local governments undercutting each other to secure regional technology hubs. Too often, relocation handouts siphon tax dollars from communities and schools to attract corporations that prove quick to loot the public purse and move on to the next doting location. For example, the Mexican American environmental justice movement has noted how major semiconductor industries in New Mexico and Arizona regularly secured state subsidies yet provided few quality jobs and little community reinvestment, burdened communities with environmental wastes, and drained scarce water resources. To center environmental justice in semiconductor sustainability efforts, much can be learned from such highly effective good neighbor agreement efforts. Respecting small and medium-size industries, centering environmental justice, and fairly distributing the benefits of a semiconductor renaissance around the country would be not only good policy but also good politics, as shown in other industrial policy efforts. A sustainable semiconductor industry considering these strategies would be more likely to win political and public support and stand a better chance of genuinely benefiting the nation and its people.
Michael J. Bernstein
Center for Innovation Systems & Policy
AIT Austrian Institute of Technology
R&D for Local Needs
In “Place-Based Economic Development” (Issues, Fall 2022), Maryann Feldman observes that the CHIPS and Science Act marks “an abrupt pivot in the nation’s innovation policy” away from the laissez-faire system of the past and toward a policy focused on addressing regional economic development. Central to this new course is the act’s directive for the National Science Foundation (NSF) to “support use-inspired and translational research” through its new Technology, Innovation, and Partnerships (TIP) directorate.
Yet nowhere within the statute are these terms defined or described. The phrase “use-inspired research” was coined in 1997 by the political scientist Donald Stokes in his seminal work, Pasteur’s Quadrant, in which he sought to break down the artificial distinctions between scientific understanding and wider use while rejecting overly limiting terms such as basic and applied research. For Stokes, research focused on real-world societal problems—such as French chemist Louis Pasteur’s work on anthrax, cholera, and rabies—can spark both new fundamental knowledge and applied breakthroughs.
But what potential uses will inspire the next generation of innovators? The text of the CHIPS and Science Act outlines 10 technology focus areas and five areas of societal need to guide use-inspired research overseen by the TIP directorate. Beyond these lists, however, the legislative language strongly implies another source of inspiration: regional societal and economic needs—specifically, the needs of the places where scientists live and work.
While this observation may sound simple, implementation is not. Indeed, researchers at the University of Maine previously described in Issues the intricate challenges of crafting a regional use-inspired research agenda, creating community partnerships, engaging stakeholders, and breaking through institutional and cultural barriers that transcend publish-or-perish incentives to help produce real-world solutions.
The CHIPS and Science Act has launched such an endeavor on a national scale with NSF as the driver. It is a new place-based research enterprise that finds inspiration from the needs of diverse geographic regions across the United States. The statute is an important step, though many bumps in the road lie ahead, including securing the necessary appropriations. However, by focusing more on the needs of geographic regions through use-inspired research, NSF can better meet the mandate of CHIPS and Science to address societal, national, and geostrategic challenges for the benefit of all Americans.
Tim Clancy
President, Arch Street
Former professional staff member, US House Committee on Science, Space, and Technology
The recent CHIPS and Science Act ensures that the invisible hand in the capitalist US economy is now far from invisible. With nearly $280 billion in planned federal investments in domestic research, development, and manufacturing of semiconductors, officials are hoping this support will lead to a technological paradigm shift for future generations. Officials contend this sizeable investment will decrease the nation’s dependence on countries such as South Korea, Taiwan, and China, which have dominated the semiconductor industry for the past two decades. Through this dominance, these countries effectively control the US supply chain and thus threaten the nation’s current and future national security. Credit is due to federal elected officials for realizing the need for such critical investment in the US economy and semiconductor industry. How that funding and support are distributed, however, remains a critical component of the legislation.
In her essay, Maryann Feldman argues that “the United States needs a bold strategic effort to create prosperity.” One could argue that the CHIPS and Science Act is the bold public policy needed to ensure the nation’s technological independence and innovation. From an economic and public policy perspective, however, I would argue that the act’s most critical components will be contract award processes directly connected to defined requirements, strong agency oversight, engagement throughout the award timeline, early definition of and commitment to commercialization pathways, award support for research personnel in flyover states, and a commitment to assess program results against the requirements that generated each award.
Additionally, based on contemporary research from TechLink, a US Department of Defense partnership intermediary, identifying the most effective research personnel in flyover states—those individuals who can carry their technology innovations past the well-known “valley of death” to successful commercialization—will be critical for place-based economic impacts and outcomes.
The CHIPS and Science Act needs to ensure that when technology decisions and appropriations occur at the practical level, the funds flow to a diverse set of independent entrepreneurs, innovators, nonprofit organizations, universities, small businesses, local government partners, and educated citizens in research parks—those whose sole purpose is developing the most advanced technology conceivable. COVID-19 changed how researchers can coordinate remotely with leading technology experts in the field, regardless of their physical location. Technological innovation and commercialization must take precedence over quid-pro-quo politics in Washington, DC, or the nation’s attempt to become the global leader in the semiconductor industry will have started and ended with federal public policy officials and bureaucrats. If the recommendations that Feldman makes regarding place-based economics, public policy implementation, and economic development are implemented, I’m confident the United States will surpass China, Taiwan, and South Korea in a semiconductor paradigm shift that will last for decades to come.
Michael P. Wallner
Department Head, Economic Impacts
TechLink
Maryann Feldman in her excellent article makes a strong case that one of the most important strategic requirements for future growth in high-income jobs is expanding what the regional economic growth policy arena calls “innovation hubs.”
Feldman states that “place-based policy recognizes that when firms conducting related activities are located near each other, this proximity to suppliers and customers and access to workers and ideas yield significant dynamic efficiency gains. These clusters may become self-reinforcing, leading to greater productivity and enhanced innovation.”
Many economic rationales and policy implications are packed into this summary statement. From an economist’s perspective, innovation hubs are an essential industrial structure for future economic development, first and foremost because they enable the realization of “economies of scope.” This distinguishes the current era from the Industrial Revolution, in which “economies of scale” dominated.
More specifically, scale is the dominant driver when product technology is relatively simple and product differentiation is therefore limited. In such cases, the emphasis is largely on reducing unit cost; that is, price is the basis for competing. In contrast, modern technology platforms offer a much wider “scope” of potential product applications, which requires more sophisticated process technologies in terms of both quality and attribute flexibility. The increasingly differentiated needs of modern high-tech supply chains mean that economies of scope with respect to emerging technology platforms are now the major policy driver.
More technically demanding product and process technology development and use require higher and diversified labor skills. As the complex set of labor inputs changes continuously with the evolution of technology development, responsive educational institutions are essential to update and refocus workers’ skills. The resulting diverse local (and hence mobile) labor pool is essential to meeting rapidly changing skill requirements across firms in a regional innovation cluster.
Further, the potential for economies of scope provides many opportunities for small firms to form and pursue niche markets. But doing so requires the availability of a local start-up infrastructure embodying such institutional entities as “accelerators” and “incubators” to facilitate the evolution of optimal industry structures.
The extremely dynamic character of technology-based competition, determined to a significant extent by the economies of scope inherent in modern technology platforms, means considerable shifting of skilled labor among competing firms as new application areas are developed and grow. Co-location of a large skilled labor pool and a supporting educational infrastructure is therefore essential. Similarly, the extreme dynamics of the high-tech economy that afford opportunities for new firms to form and prosper work well only if a significant venture capital infrastructure is present.
These factors—facilitation of economies of scope in research and development; a diverse and skilled local labor pool; start-up firm formation; risk financing; and technical infrastructure—collectively promote the innovation hub concept. As Feldman states, “For too long, the conventional policy approach has been for government to invest in projects and training rather than in places.”
In summary, the complexity of modern technology development and commercialization demands a four-element growth model: technology, fixed capital (hardware and software), and skilled labor, all of which depend on a fourth, complex supporting element: technical, educational, and business infrastructure. All four assets must be co-located to achieve economies of scope and hence broad-based technology development and commercialization.
Gregory Tassey
Research Fellow, Economic Policy Research Center
University of Washington
The Problem With Subsidies
The United States government is waging a “chip” war with China. The war is fought on two fronts: one is preventing China from accessing the latest artificial intelligence chips and manufacturing tools, and the other is subsidizing large firms to bring manufacturing back to the United States. But as Yu Zhou points out in “Competing with China” (Issues, Fall 2022), “whether [the CHIPS and Science Act] will improve US global competitiveness and prevent the rise of China is uncertain.”
A race to subsidies as America’s solution is uncertain and problematic because it is based on a misunderstanding of how innovative Chinese firms advanced their technologies. Contrary to the popular belief that China built its technology industry through massive subsidies, China’s record of state-sponsored technology investment is often spotty. The prime example is the semiconductor industry: despite billions of dollars invested by the state over the last three decades, the industry’s advancement has consistently fallen short of government targets. The real secret of the Chinese high-tech industry is indigenous innovation—that is, innovative Chinese firms sense unfulfilled domestic demands, innovate to generate localized products at lower cost, build on access to the vast Chinese market to scale up, and eventually accumulate the problem-solving capabilities to approach the technological frontier. Ironically, the US government’s chip war is creating space for indigenous innovation by Chinese semiconductor companies, space that was absent when China relied on American chips.
Taking the wrong lessons from China could have unintended consequences for US industry. Since leading American high-tech firms have spent some $633 billion on stock buybacks over the past decade, it can hardly be assumed that their lack of enthusiasm for investing in semiconductor manufacturing stems from a lack of cash. But showering money on business, as China’s experience shows, would certainly lead to unhealthy state-business relations. Already, the CHIPS and Science Act has created perverse incentives to lobby for more and more subsidies.
Instead of competing on subsidies, the United States should compete with China in areas where it excels, namely innovation in emerging technologies. Historically, the United States has had few successes in reviving mature industries through subsidies, whether it was steel, automobiles, or memory chips. But it has consistently led the world in new technological revolutions over the past century. In a world facing a climate crisis, the United States should compete with China to bring innovations that solve the nation’s existential problems and win the ultimate prize of technological leadership that benefits all humankind. After all, a subsidy war benefits little beyond corporate bottom lines.
Yin Li
Assistant Professor of Innovation Policy
School of International Relations and Public Affairs
In “Public Access to Advance Equity” (Issues, Fall 2022), Alondra Nelson, Christopher Marcum, and Jedidah Isler touch on the many reasons why open access to federal research is critical and highlight some of the challenges to come. We wholeheartedly agree with their sentiment—“A research ecosystem where everyone can participate and contribute their skills and expertise must be built”—and we applaud both the Biden administration and the White House Office of Science and Technology Policy (OSTP), where the authors work, in their commitment to make federally funded research open to the public.
In particular, as graduate, professional, and medical students, we have been shaped by the relics of an inequitable publishing model that was created before the age of the internet. Our everyday work—from designing and running experiments to diagnosing and treating patients—relies on the results of taxpayer-funded research. Having these resources freely available will help to accelerate innovation and level the playing field for smaller and less well-funded research groups and institutions. With this goal of creating an equitable research ecosystem in mind, we want to highlight the importance of creating one that is equitable in whole.
In the same way that open access to reading publications is important to keep the public and institutions informed, open access to publishing is equally important, as it allows institutions to make their work known. With free access to federally funded research, this effect will be even greater. It is critical that access to publishing be open, so that institutions can both learn from public knowledge and contribute to it.
But today, the incentives for institutions do not align with goals of equity, nor do incentives within institutions always align with these goals; change will be necessary to support a more equitable system. This is especially true for early-career researchers, who might struggle to comply with new open-access guidelines if they must pay a high article publishing fee to make their research open in a journal that is valued by their institutions’ promotion and tenure guidelines.
To these ends, it is imperative that the process for communicating research results to the public and other researchers does not shift from a “pay-to-read” model to a “pay-to-publish” model. That is, we should not use taxpayer dollars to pay publishers to make research available, nor should we simply pass these costs on to researchers. This approach would be unsustainable long-term and would go against the equity goals of the new OSTP policy. Instead, we hope that funders, professional societies, and institutions will come along with us in imagining and supporting innovative ways for communicating science that are more equitable and better for research.
As the research community works to implement the new OSTP policy intended to make scientific results publicly accessible, it will be critical for the next generation of researchers that the federally funded research system be made open in a way that is equitable and inclusive of those early in their careers.
Thaddeus Potter
President, National Association of Graduate-Professional Students
Michael Walls
National President, American Medical Student Association
A New Role for Policy Analysts
In “Government and the Evolving Research Profession” (Issues, Fall 2022), Candice Wright highlights the increasing pressure on researchers to make fundamental advancements in their fields, enable technology transfer to help solve pressing problems, contribute insights to policy, and appropriately manage national security risks and research security. Navigating such challenges requires a mix of skills that are hard to find in any single person. Instead, these challenges call for collaboratively produced technical and policy-relevant analysis that can then be applied in both public and private spheres.
It is imperative to consider how scientific and technical experts can best contribute to a productive, evidence-based scientific and policymaking process. How researchers answer this call can have a profound impact on their career, their professional standing, and occasionally their public reputation. The problem is that for professional academics and researchers with an educational and knowledge-generating mission, such policy, tech transfer, and national security work is difficult and often requires time and resources that can be hard to justify within existing incentive systems. How do you best retain your honest-broker status amid the risk of entering the political fray or distracting from your research agenda with a multitude of nonresearch engagements and travel? Policy analysts who are well-trained and make a career at the intersection of policy, national security, and emerging technologies can help fill this gap with specialized skills that augment those of researchers.
Policy analysts (including those at the Government Accountability Office) can help translate the scientific and technical content for nontechnical decisionmakers. This is a viable career for both technically and policy-trained individuals. Working with good analysts who have a strong contextual understanding of policy and enough scientific technical expertise to understand the core issues in play is transformative. Their ability to communicate and translate information from technical experts will help bench scientists increasingly understand the analysts’ value and will open up new ways to work together.
Finding people who can work both sides of the technical and policy equation is difficult. There is a history of training policy fellows (e.g., at the American Association for the Advancement of Science and the National Academies), and some new fellowships are now drawing attention to this role (TechCongress, Open Philanthropy, Federation of American Scientists). This is heartening, but the total number of highly skilled emerging-technology policy analysts is still relatively small, and their long-term career viability is still uncertain. The scientific research and academic communities need to create ramps and pathways from traditional fields to policy analysis roles, with formal training options in these hybrid areas. Technical experts need to be encouraged to take these paths and to find home organizations where they can develop and excel.
Those who choose to stay within research careers can cultivate alliances with colleagues at policy analysis institutions, and I offer the one I lead, the Center for Security and Emerging Technology, as an example. As this becomes more common, universities may choose to create more independent centers devoted to policy analysis or incentivize sabbaticals for those who can coproduce relevant policy analysis. Scientists and policy analysts are natural partners and have a vested interest in each operating at the top of their game, which can help fill the gap that Wright keenly observed.
Dewey Murdick
Director
Center for Security and Emerging Technology
Georgetown University
The US research environment is at an inflection point. While the Vannevar Bush model expressed in Science, the Endless Frontier has evolved somewhat over time, it has served the nation well for decades. Yet in the face of formidable competitive challenges, there are growing concerns that this model will not enable the country to maintain its international leadership position in the future. As an example, the Special Competitive Studies Project recently wrote, “China is creating spheres of influence without any clear limits, underpinned by physical and technological infrastructure, cemented with commercial ties and technology platform dependence, deepened by authoritarian affinities, and enforced by growing military capabilities.”
The current US model for scientific advancement is enabled by federally sponsored basic research, the results of which are leveraged by the private sector to produce new capabilities. There has been little strategically planned connectivity across sectors or through the innovation process, however, and the government’s ability to drive activities has diminished over time. Yet this approach is now competing with China’s holistic strategy for international leadership in science and technology (S&T) arenas important to national and economic security. To better compete, MITRE has called for a
new federal effort to build innovation-fostering partnerships: a voluntary coordination of government, industry, and academic activities to holistically address our nation’s most-critical S&T priorities. It must integrate such diverse players into a collaborative network to share information about opportunities and solutions, and to coordinate shared, complementary efforts across sectors, institutions, and disciplines … to help catalyze solutions to the biggest technology-related challenges.
The Special Competitive Studies Project is now working to develop such a new model that could drive collaboration of America’s “five power centers” on critically important S&T topics. It is against this background and toward this future model that researchers will likely have to work, and about which Candice Wright provides useful insights.
Wright tackles three key issues: collaboration, balancing idea exchange with security, and rigor and transparency. All are important. Research collaboration (among government, industry, academia, and international partners) will soon become as important as cross-discipline collaboration has been, but while it is easy to extol the virtues of open research environments, such openness can be difficult to implement due to security and intellectual property concerns. Scientific integrity is also paramount, and Wright recommends actions to help at project initiation.
Additional components that must also be considered and incorporated within the future research paradigm include determining the roles, requirements, and collaboration approaches of different innovation sectors (including national laboratories and federally funded research and development centers) so that each succeeds and the nation’s capabilities advance; enhancing federal government S&T coordination; communicating to nonscientists; and ensuring that needed technical advancements occur while maintaining important national ideals such as privacy, equity, and free enterprise.
Our research future will be collaboration-based and strategically driven. Let’s begin now, together and with a consistent vision.
In “Making Gene Therapy Accessible” (Issues, Fall 2022), Kevin Doxzen and Amy Lockwood highlight contentious issues around gene therapy, even as the treatment shows good results and has raised hopes for many people with incurable diseases. The authors rightly point out that unless leaders carve out appropriate policies to develop gene therapy through a collaborative process, this novel therapy will be accessed by less than 1% of the world’s population.
Treatment that is unavailable to patients in need has no value at all. Policies will need to focus on research promotion, clinical and regulatory infrastructure development, capacity-building, training, and the development of approval pathways and community adoption to ensure success and sustainable affordability. And as the authors suggest, rather than concentrating on single diseases, efforts should focus on establishing a platform applicable to multiple diseases. This approach could help researchers working to develop therapies not only for infectious diseases such as HIV and hemoglobinopathies such as sickle cell disease and thalassemia, but also for “rare” diseases that may in fact be common in low- and middle-income countries (LMICs).
In our opinion, one of the biggest roadblocks in this regard is intellectual property rights. The successful application of patent pools in the development of antiretroviral drugs in LMICs provides a tried and tested strategy for bringing down the cost of gene therapy by sharing these rights. Moreover, lack of competition affects the cost of gene therapies, as only a very small number of companies are developing such therapies. Bringing in more players may bring down the costs markedly.
Apart from encouraging research and development, the authors also rightly underline the significance of regulatory guidelines and laws to ensure the execution of safe and ethical research. To begin, global clinical trials need to be encouraged and facilitated with the participation of patient populations from countries with high disease burdens. Proper guidance documents are needed for the development of indigenous platforms—utilizing the current capabilities and intellectual property of researchers and clinicians in various countries—to establish self-reliant assets for LMICs. The same is needed for local gene therapy methods and products developed through technology transfer. To encourage best practices globally, there should be twinning programs to provide hands-on training on platforms established elsewhere and to generate a well-trained workforce for these resource-intensive and innovative technologies. Data sharing across the globe should also be encouraged for drafting evidence-based recommendations for treating diseases with these modalities, so that stakeholders may learn from each other’s experiences.
Also important are organizations such as the Global Gene Therapy Initiative, which, as the authors highlight, play a pivotal role in the development of gene therapies in LMICs. With the participation of multidisciplinary experts, such initiatives can go a long way toward preparing LMICs to maximize the impact of gene therapy.
Finally, policymakers and other authorities in government need to develop funding mechanisms and policies to prioritize long-term success and stronger health systems with respect to gene therapy, realizing the transformative potential of these technologies in improving millions of lives. Also, as emphasized above, regulatory convergence needs to be aimed for, to solve the existing bottlenecks and build the ecosystem for gene therapy methods and products in LMICs.
Geeta Jotwani
Varsha Dalal
Indian Council of Medical Research Headquarters
New Delhi, India
Development, acceptance, and sustainability of successful health interventions require both good planning and sound policies, as well as partnerships among many stakeholders. This is particularly the case when dealing with complex and sensitive interventions such as the introduction of and equitable access to gene therapy in low- and middle-income countries (LMICs), as Kevin Doxzen and Amy Lockwood highlight. The authors point out the familiar long delay between when new interventions become routine in high-income countries and when they become accessible in LMICs. This must change.
To accelerate this change, Doxzen and Lockwood advocate for intersectoral, cross-cutting programs rather than single-disease vertical programs. Beyond the examples the authors cite, this approach has proven very effective in other programs, such as the European and Developing Countries Clinical Trials Partnership (EDCTP) and World Health Organization-TDR, which support interventions against diseases of the poor, especially in LMICs, and have broader mandates that include capacity development, networking, and fostering co-ownership of their programs.
Capacity development, including building environments for conducting quality health research and health service delivery, has far-reaching outcomes beyond the intended primary focus, as the authors cite in the case of the President’s Emergency Plan for AIDS Relief, or PEPFAR, and its contribution to the COVID-19 response. The same can be said about the EDCTP and the World Health Organization-TDR programs, which have shown that it is most cost-effective to support cross-cutting issues that can be used generally in different settings.
However, as Doxzen and Lockwood point out, for ambitious programs such as equitable global accessibility of gene therapy, it is paramount to have in place good policies, sound strategic delivery plans, and coordination of activities. The strategy should consider inputs from all major stakeholders, including health authorities, regulatory agencies, civil society, international health organizations, the scientific community, development partners, industry, and the affected communities.
Of particular importance should be the involvement of health authorities in LMICs right from the start to inculcate a sense of co-ownership of the program. This will foster acceptance, active participation, self-determination, and program sustainability. Failure to do so may lead to public resistance, as evidenced, for example, in the campaign to eradicate polio in Africa and in vaccination efforts against COVID-19 and Ebola. Polio vaccination programs were falsely accused of imposing birth control in Nigeria, and COVID-19 immunization programs using mRNA vaccines were widely associated with misinformation about interference with human genes. Such involvement is particularly important in dealing with a sensitive issue such as gene therapy, which is prone to misinterpretation and misinformation. The involvement of local authorities and communities will also reveal areas of weakness and capacity that need strengthening, including regulatory, laboratory, and clinical services.
Since it requires many years for such services to be readily available and accessible, this planning should take place now. There is no time to waste.
Charles Stephen Mgone
Retired Vice Chancellor
Hubert Kairuki Memorial University
Dar es Salaam, Tanzania
The Future of Global Science Relations
China’s scientific rise provokes strong global reactions. Research collaboration by North American and European partners with counterparts in China is now increasingly viewed in terms of securitization, asymmetrical dependencies, predatory practices, and general rivalry. In the middle of these heated debates, Joy Y. Zhang, Sonia Ben Ouagrham-Gormley, and Kathleen M. Vogel are doing something remarkable: holding trilateral experimental online meetings that focus on ethical and regulatory issues in the biosciences, and reporting on them. In “Creating Common Ground With Chinese Researchers” (Issues, Summer 2022), the authors not only describe fascinating, often-overlooked nuances of the Chinese science system; they are also admirably honest about the problems they encountered in establishing the dialogue and about the learning processes and necessary adjustments. This sort of open, flexible, and daring approach has become increasingly rare as moral arguments and calls for taking a stance in and on bilateral relations grow louder.
Of course, there are heavy challenges involved, and the authors describe many of them. For instance, they address the often-questioned individuality and agency of Chinese scientists and argue that the question is too simple (and sometimes racist) and creates unproductive boundaries. Being sensitive to the increasing political pressure on and control over Chinese scientists could, however, be an important resource in the dialogue. After all, it could serve as a starting point for mutually discussing experiences of societal demands on scientific work, and more.
Yet the most important message the authors convey is that the decision to enter such a dialogue, and how it would work, is above all our Chinese colleagues’ choice. We in the West cannot and should not try to force that decision on them, or unilaterally exclude them from participating in the dialogue, no matter our reasons and however well-meaning we consider them. Moreover, unlike big platforms that blend science, politics, and science policy together, intimate and more specific dialogues between scientists and scholars are usually harder to instrumentalize politically. And should that still happen, it can be called out or discussed there, or contested on the basis of intellectual arguments.
Another matter is that COVID-19 still largely halts travel to China and thus hinders personal encounters, making it difficult to add new people to the conversation. Current online forums favor long-established and heavily trusted ties. Fortunately, expanding the conversation by snowballing beyond these ties seems to still work. My personal experience with online exchanges confirms that more than ever there is a tangible interest, including among colleagues in China, in keeping up the conversation, continuing to cooperate and learn, and trying not to let external pressure and interference blur our views of each other. Giving up on this opportunity now would be detrimental. But the online dialogues can go only so far. It will be a task for different scientific associations and academies to put them on a regular and broader (and offline) basis in the future.
Finally, while intense exchanges about meta-topics such as research ethics and work structures are extremely valuable, in the end what will count most is that researchers actually work together (again) in postpandemic times. Especially in my field, the social sciences, research and collaboration opportunities were already significantly limited for several years before the pandemic, due to the increasing levels of control in China over access, data, and publications. In addition, we must now address the heightened sensitivities in North American and European societies about the legitimacy and value of our collaborations. We scientists and scholars on all three continents must find honest and compelling ways to fight for the future of science relations.
Anna L. Ahlers
Head, Lise Meitner Research Group: China in the Global System of Science
Max Planck Institute for the History of Science
Berlin, Germany
Investing in European Innovation
It’s not (just) the economy, stupid!
When Daniel Spichtinger writes, we pay attention. But after reading “From Strength to Strength?” (Issues, Summer 2022), we wondered whether his analysis of the problem was right. The article’s premise is that more money for research will ensure that Europe remains a science research superpower, which is why we chose to rewrite the famous political slogan from the Clinton era as our opening sentence. We think it is more complicated than just money. People are the most important resource in research, and as long as universities struggle to manage that resource responsibly, they are not getting any more of our hard-earned tax money.
Governments should be investing in well-functioning systems, but is academia such a system? Most people in academia who are not white, able-bodied, heterosexual men wouldn’t think so. Spichtinger refers to the renewal of the European Research Area (ERA). It’s interesting to note that this new ERA places an increased focus on both diversity and culture. Why? Because research institutions are losing talent as they struggle to create inclusive research cultures. When people of color, LGBTQ+ people, and people with disabilities experience fewer career opportunities and more harassment, and often leave academia as a result, the damage goes beyond universities’ reputations: research institutions that don’t attract and retain the best talent don’t produce the best research. The responsibility for creating a research culture that is attractive to a diverse array of people lies with the universities and doesn’t require more money. The ERA plan just shows that too many universities don’t take action until pushed. Academia has known about this problem for years, and nothing has been done. The sector’s habit of pretending all is well until being forced to change is immature. This is not a responsible actor that should be sent more money, no matter how hard it lobbies.
And Spichtinger suggests more lobbying for more money for actors in research. But lobbying requires a coherent policy position that goes beyond “send more money.” It requires a well-defined place in society, where one takes responsibility, but Spichtinger’s article shows how large parts of academia appear to exist in a vacuum. Due to the legal aftermath of Brexit, the United Kingdom is not part of Horizon Europe, the European Union’s key funding program for research and innovation. Neither is Switzerland, due to a wider diplomatic fallout with the EU. Despite these severe legal disagreements, many universities in Europe have supported a campaign to include the research environments of the two countries in Horizon Europe. Why is research so precious that in this specific area there can be no consequences for Switzerland and the United Kingdom? Should nations be able to pick and choose which rules to follow?
We want to agree with Spichtinger. Europe should invest more in research. We just think that academia should do some serious soul-searching and develop mature and coherent policy positions internally and toward the world that it is part of. Scientific excellence begins with thriving researchers, and science is never conducted in a vacuum. Act accordingly, and we would be happy to see the research community receive the investment of 3% of national gross domestic product in research that it seeks.
Daniel Spichtinger touches upon a number of important issues in scrutinizing the European Union’s position of strength. Amid the overall geopolitical disruptions, the EU is facing two “fronts” in becoming a research and innovation powerhouse: one relates to the internal challenge of closing the research and innovation gap within the EU, while the other relates to its global role as a promoter of research cooperation.
Internally, the uneven capacities and investment intensities of its member states pose a risk to accelerated development and to the attractiveness of the EU as a science location. A special report from the European Court of Auditors confirms the general suitability of the widening measures implemented in Horizon 2020, the EU’s major research and innovation program. Challenges remain, however, such as the timing of complementary funding, the sustainability of financing, the recruitment of international staff, the exploitation of results, and disparities among the EU’s 13 newest member states.
Within Horizon Europe (the successor to Horizon 2020), actions taken under the widening participation rubric include the development of new instruments such as Excellence Hubs and the Hop-On Facility, along with placing a stronger focus on synergies with the Cohesion Funds (e.g., via the Seal of Excellence and the transfer of funds from the European Regional Development Fund to Horizon Europe or the European Partnerships). Although these instruments are in place, it is the main responsibility of the EU member states to deliver on the proclaimed consensus to make research an investment priority, especially with the new inflation challenge and strain on national budgets already looming. The value ascribed to science is contested, and communication measures to strengthen trust in science may be as important as fiscal measures.
For the international research cooperation perspective, a certain ambiguity seems evident. Europe’s expressed strategy for a global approach to research and innovation tries to strike a balance between reaffirming openness while stressing the importance of a level playing field, reciprocity, technological sovereignty, and respect for fundamental values in research and innovation such as academic freedom, ethics, integrity, and open science.
These values and principles are currently being discussed in a multilateral dialogue with international partner countries to foster a common understanding and their promotion in future cooperation settings. However, the consequences of being a “like-minded country” respecting those values and principles are not yet obvious, nor is the opposite. Currently, 14 of the 16 associated countries in Horizon 2020 have already been associated to Horizon Europe; further negotiations with Morocco, Canada, and New Zealand will take place in fall 2022, as will exploratory talks with South Korea and Japan. So far, the values and principles have played only a marginal role at best in these negotiations. The enormous pressure geopolitical developments can put on international research and innovation cooperation has, however, become clear in light of recent events. Here, Spichtinger rightly raises the question of whether it is helpful to “use science as a stick.” And indeed, one has to assess if all issues the EU faces with different countries—from Russia and Belarus to China, the United Kingdom, and Switzerland—are of the same nature and warrant the current measures taken.
If the European Union wants to become a respected promoter of international cooperation in research and innovation, it is vital that partner countries and their research, higher education, and innovation organizations perceive the EU as being ambitious, fair, and impartial in advocating mutual benefits in jointly facing global challenges. Noncooperation could otherwise prove quite costly for the EU research and innovation community.
Martina Hartl
Adrian Korhummel
Department of International Cooperation and Science Diplomacy
Austrian Federal Ministry of Education, Science and Research
Remembering the Harrisons
Helen and Newton Harrison, You Can See that Here the Confluence is Pretty… From the Fourth Lagoon, The Lagoon Cycle, 1974–1984. Paper on canvas, acrylic gouache, collage, photographic print with ink, print, and pencil.
We are saddened by the news that pioneering eco-artist Newton Harrison passed away on September 4, 2022. Born in 1932, Newton graduated from Yale in 1965 with both a bachelor’s and master’s degree in fine art. He secured his first faculty position as assistant professor at the University of New Mexico (UNM), before moving to La Jolla, California, in 1967 to cofound the Visual Arts Department at the University of California, San Diego (UCSD). Helen Mayer Harrison (1927–2018), who was known for her activism and research-based work in literature at UNM, chose to dedicate herself to the Harrison collaboration when they made a map of endangered species around the world for the Fur and Feathers exhibit at the Museum of Crafts in New York City in 1969. The Harrisons, as they became known, then collectively made the decision to do no work that did not benefit ecosystems. Their collaboration lasted nearly fifty years and led to the first husband-and-wife shared professorship at UCSD.
As part of the Getty Foundation’s Pacific Standard Time: 2024 initiative, the La Jolla Historical Society in San Diego, in collaboration with three other venues, will present Helen and Newton Harrison: California Work, a groundbreaking four-part exhibition about this pioneering couple, offering a critical reappraisal of their California-based works. The exhibition will highlight the Harrisons’ extraordinary art and science collaboration, which ignited the field of ecological art and fostered it for decades. Many artists will continue to be inspired by them, as artist and environmentalist Lillian Ball affirms: “They were the forces of nature whose ongoing influence will be felt throughout generations.”
—Tatiana Sizonenko, art historian and curator
Episode 21: To Solve Societal Problems, Unite the Humanities With Science
How can music composition help students learn how to code? How can creative writing help medical practitioners improve care for their patients? Science and engineering have long been siloed from the humanities, arts, and social sciences, but uniting these disciplines could help leaders better understand and address problems like educational disparities, socioeconomic inequity, and decreasing national wellbeing.
On this episode, host Josh Trapani speaks to Kaye Husbands Fealing, dean of the Ivan Allen College of Liberal Arts at Georgia Tech, about her efforts to integrate humanities and social sciences with science and engineering. They also discuss her pivotal role in establishing the National Science Foundation’s Science of Science and Innovation Policy program, and why an integrative approach is crucial to solving societal problems.
Look at the National Academies 2014 summary of the Science of Science and Innovation Policy (SciSIP) principal investigators’ conference
View the webpage for the SciSIP program (renamed Science of Science: Discovery, Communication, and Impact) at the National Science Foundation
Transcript
Josh Trapani: Welcome to The Ongoing Transformation, a podcast from Issues in Science and Technology. Issues is a quarterly journal published by the National Academies of Sciences, Engineering, and Medicine and Arizona State University. I’m Josh Trapani, senior editor at Issues. I’m truly excited to be joined by Kaye Husbands Fealing, who is something of a living legend in the science policy community. Kaye is the dean of the Ivan Allen College of Liberal Arts at Georgia Tech. She previously taught for 20 years at Williams College and served in several positions at the National Science Foundation, including playing a pivotal role in creating the Science of Science and Innovation Policy, or SciSIP, program. On this episode, I’ll talk with Kaye about her work at Georgia Tech on integrating science and technology with humanities, arts, and social sciences, referred to as HASS. We’ll also talk about her career, and of course, I cannot pass up the opportunity to get her insights on the science of science policy. Kaye, thank you so much for being here.
Kaye Husbands Fealing: Thank you, Josh. It’s really great to be here with you today.
Trapani: I’m really delighted to have a chance to speak with you, because even though our paths first crossed directly only recently, I’ve heard your name numerous times in virtually every position I’ve held in Washington, DC, over the last 17 years. And your work, particularly on the science of science policy, as well as on science and innovation indicators, looms large over my career and those of many people who work in science and technology policy. And I’d like to ask you about some of that work. But let’s start with the piece that you and co-authors Aubrey DeVeny Incorvaia and Richard Utz have just published in Issues. In the piece, you argue that science and technology education must be better integrated with the humanities and social sciences, and describe some of the work you’ve been doing to make this happen. One thing you mentioned that really struck me is that more than 75 years ago, Vannevar Bush, in Science, the Endless Frontier, warned against this separation. And we listened to Bush on so many things, but not on this. Why do you think this challenge has been so longstanding, and what is science missing by not doing it better?
Husbands Fealing: Great question, Josh. And I wanted to take that question in two parts. First, talk about the challenge that has been so longstanding that Aubrey and Richard and I have been working on, and then I’d like to turn it and talk a little bit about what’s missing or what we can do better. So along the lines of the challenges, the premise of our article is that there are creative possibilities lying at the intersection of science, engineering, arts, humanities, and social sciences, and that investment has not been pulled together in those areas the way it could be for a terrific return. So Vannevar Bush wrote that to set up a program under which research in the natural sciences and medicine was expanded at the cost of the social sciences, humanities, and other studies that are so essential to national wellbeing, that to set up programs that way, we would be missing something.
He also said science cannot live by and unto itself. So I just want to expand on that a little bit, because that was really what drew me into thinking about writing about this issue regarding science policy. Richard is a humanist, and Aubrey is a terrific social scientist. So we wanted to combine those areas to really explore this idea of humanities, arts, and social sciences integrated with STEM, science, technology, engineering, and mathematics. So for example, if you think about, and you go back and look at science advisors, go back, let’s just not go back that far. Let’s just go back to Holdren and look at the priorities that were written by him for OSTP, fiscal years 2010 to 2017. Here is what you see. He calls out these priorities: needs of the poor, clean water and integrity of the oceans, healthy lives, a clean energy future while protecting the environment, a safe and secure America and a weapon-free world, economic growth, and jobs. Added to that, in the same set of priorities: STEM education, high-performance computing, advanced manufacturing, and neuroscience.
So you see the difference. Some are big topics, big global issues, where clearly HASS and STEM coming together can really address issues of the human condition. So go forward to Lander and Nelson. The most recent priority memo was written by Alondra Nelson. And there we see pandemic readiness, Cancer Moonshot, climate change, security, economic resilience, STEM education, but also innovation for equity, open science and community engaged R&D. So then you see that scale back to something that is a larger context where the humanities and social sciences and even the arts come together with STEM and R&D to try to move us forward as a country.
So my observation is that there could be an increasing laser focus on competitiveness, and there’s nothing wrong with that. But with that, you see the increased focus on very specific areas in science and engineering. But these big topics, needs of the poor, clean water, safety, security, economic growth, jobs, those certainly do require this kinship between HASS and STEM.
So for me, that sort of disciplinary fragmentation is the challenge and something that we can actually try to work through better as a group of science agencies. So let me address the what is missing part. What is missing by not doing it better was your question. And as we wrote in the paper, STEM and HASS domains intersect in the challenges and threats that people face every day. So we’re trying to get back to those issues of the human condition where the humanistic lens is needed to elucidate problems, imagine solutions, and craft interventions. And we also think of it as these lenses allow us to think not only downstream about communicating science, communicating to senators, congressmen, the populace, or international leaders. We’re not just talking about the communication part of it, but we’re thinking upstream about also trying better to have that understanding of what the problem is, the discovery process.
And we think that it is important to have this discovery, design, solutions and communication process integrated into this combination at the intersection of HASS and STEM. Now, let me say just one more thing, and that is, it sounds as though we’re saying that this is easy, it’s not. It sounds as though we’re saying without it that we’re failing. We’re not, we don’t want to give that impression. In fact, accolades for our scientific progress surely are very well founded. So we’re not saying that that’s not the case or that arts and humanities need the sciences to buttress them. We’re not saying that either. What we’re saying is that there is a possible adaptivity that can accelerate progress in STEM, in science, technology, engineering, mathematics, and also in the arts and humanities and social sciences if we could work together. And the other issue is that it’s also not easy because we have to develop a common lexicon. We have to develop trust across the sciences and the humanities to allow the benefits that we foresee to come about.
So we need a way of creating learning pathways, experimental pathways to see this happen, to see this take off. And I think it’s worth our attention to see how we could get about many of the discoveries and then solutions to issues that continue to plague us.
Trapani: Well, thank you so much for that great answer. That was really, really clear. And it just shows the importance of taking the holistic approach. The end of your answer actually teed up my next question perfectly, because I was wondering why it’s been so hard to develop and scale up integrative approaches to building these things together in education network. And also because you’ve been leading the way at Georgia Tech, what are some of the things that you and your colleagues have been doing to bridge the gaps?
Husbands Fealing: I want to answer your question by talking through a few things that we’re doing here at Tech and then really address this issue of the difficulties of developing these and also the scalability. So some examples of what we’re doing at Georgia Tech. For one, a two-semester junior capstone sequence that is co-taught between computer science and technical writing faculty. So what’s interesting about this, this is an arrangement that not only sharpens students’ communication skills, but it also inspires them to situate their scientific work in a larger context. For example, by considering how it will be received in a field rife with gender or racial bias. And so having the writing experts and scholars working directly with folks in computing, and then that allows both to advance, right? Because, also these writing scholars are technical writers, which we all know we need at NSF or at the National Academies or places like that.
So having that flow between the two, HASS and STEM, STEM and HASS, that’s an example. There’s another example of EarSketch, which is now used by more than a million students worldwide. And EarSketch integrates coding education with music composition. So using music as a pathway to get students to learn to code worldwide. And so it’s really fantastic here to see that interdisciplinarity between the College of Liberal Arts and the College of Design, together with the College of Computing, and more than a million students worldwide are using something that’s in the arts, music, to learn to code. So it’s really important that students are trained to think across a range of disciplines to leverage their exposure to diverse methodologies, to better understand and tackle complex problems. So why is this so difficult, and how can it be scaled? I think the difficulty goes back to something I said a little earlier, which is, we do need to develop a common lexicon and we do need to develop a sense of trust across these disciplines.
Even if you’re working just within HASS, the social scientists, economists, sociologists, political scientists are not all coming from the same place, and they now are working with computer engineers or working with biomedical engineers or working across different avenues. Another area of difficulty, which we can work on, and there are ways of dealing with this, is assessment. How do you assess the return on investment of this complex combination of humanities and science, or arts and engineering? How do you figure out what the return on investment is from those? And typically we’re looking at number of papers, number of patents, number of grants, how much are you funded in those grants? But those are not necessarily the ways in which we should be assessing the breakthroughs that come at this intersection. And there are ways of quantitatively but also qualitatively measuring those breakthroughs. And I will put on the table that one important product is this talent pool, amazing talent pool.
And it’s not just the first job that they get and then we measure, think about, well, how much did they earn. But it’s really five years, 10 years down the road, sometimes even longer, where you see the amazing results of resilience and agility of the students that are coming out of these programs.
The second part of your question here about scale up, I think that we miss opportunities by focusing only on the private sector in terms of the outputs of R&D and that there are many ways in which innovation benefits the nonprofit sector and the government sector. There is innovation in government administration and there are ways of using some of these outcomes and some of these products to really have innovation in sectors other than just the private sector. Although the private sector, obviously, industry is really one of the main recipients of our investments in R&D, and it should be. There’s no reason to argue that. But I’m just trying to say here that we could expand on that a little bit.
A second part is that, I’m an economist, so I have to say, when I think of scale, I think of economies of scale and economies of scope. And it’s one thing to say scale up the same, and it’s another thing to say, well, look for the different use cases, things that are combined, how can they be used in the environmental area, or in the health area, or in the security, all the things that we talked about at the beginning, including getting to zero poverty, things that are really primarily top of mind to the ordinary citizen. And so thinking of not only how these combinations can be used to advance science and also to advance the social sciences, the arts and humanities, but also what are those use cases? Those are the things that are salient, those are the things that sing. Those are the things that really make sense to the ordinary citizen, and therefore that support for these investments, I hope, can be better articulated when we’re able to do those types of combinations and actually do that kind of communication.
Trapani: Your background as an economist came through loud and clear in that answer, and I wanted to turn to that next. So beyond your distinguished academic career, you’ve also played important roles outside the academy, including some key ones in science policy. In particular, you played a seminal role in developing and leading the National Science Foundation’s Science of Science and Innovation Policy program, as well as leading the Science of Science Policy Interagency Task Group. Now, before I came to Issues, I also served briefly as an NSF program director, and I can say based on my experience that most program directors don’t get to start new programs, lead interagency groups, and work directly with the director of the White House Office of Science and Technology Policy, or OSTP, as you did with Jack Marburger. So I’ve been curious to talk to you for a long time and to ask you if you could talk a bit about that time and how you first got involved, and what you and others who were working on it were hoping to achieve.
Husbands Fealing: Thank you. That was a great time. I got to NSF, National Science Foundation, in 2005, was a program director in the economics program, one of three program directors. In my 11 months, so fast forward to 2006, I was asked by David Lightfoot, he was an associate director of the Social, Behavioral and Economic Sciences directorate, and he said Jack Marburger gave a talk at AAAS in 2005 where he called out the social sciences and said, “You need to stand up and be really part of this process of trying to get the evidentiary basis for funding science,” and that we needed to stand up and take that responsibility to do so. David Lightfoot, Mark Weiss, Wanda Ward, they were all in SBE at the time, and they said, we have social, behavioral, and economic scientists, so behavioral sciences also are part of this.
And we also have an arm. At the time it was SRS, now it’s the National Center for Science and Engineering Statistics. So we also had this quantitative part of us as a directorate. So they said, well, what can you do to draft something that would give us the platform to start something called science metrics. That’s what they called it, science metrics. But yet they want the sociologists and the behavioral scientists and others to be part of it. So it couldn’t just be metrics. So we knew it had to be science of science, which was something that existed, which means basically, what are the frameworks, the models, the tools, the data sets that are needed to make good decisions on funding science, or to make good decisions on how teams should be assembled to do science and so on. That’s the scientific foundation for science. So Science of Science and Innovation policy made sense, because at the end of the day, we want to have the evidentiary basis for policymaking. And that’s precisely what Dr. Marburger said he wanted.
So by the fall of 2006, I had finished writing with a lot of input from a lot of folks that were, at the time, in 2006, at NSF and finished this prospectus and showed it to Dr. Marburger. Obviously, David Lightfoot did that, I was a program director and came back, and they said, it’s a go. So we wrote the solicitation that fall. We were on continuing resolution. February, the continuing resolution lifted in 2007. Solicitation went out by that summer. We ran the panel, funded a number of proposals, and we had our first wrap. So from the summer of 2006 to the summer of 2007, prospectus, solicitation, proposals came in, proposals vetted, funded. It was a quick clock. I won’t give you all the details, but here are the categories that we funded in the first round: human capital development and the collaborative enterprise related to its science, technology and innovation outcomes.
So we did a lot there, including some work on the US, Mexico and Brazil. Biomedical, nano, hydrology, it’s all that foundational work behind funding those types of sciences. Another was returns to international knowledge flows, and one test case was biofuels. A third, creativity and innovation. This is really interesting. This came out of the behavioral sciences, cognitive models of scientific discovery and innovation. Chris Schunn from Pitt was doing work where he would observe how engineers did work in labs and what were the cognitive processes that were going on so that we can understand ingenuity. So not just the commercialization, but all the way back to the ingenuity and that process. We funded that project. Another set of projects, knowledge production systems, and looking at big systems, risk and rewards, low-carbon energy technologies and things like that. And the last category was science policy implications.
And at the end of the day, everyone always wanted to know. Not only did you find the evidence behind how to fund or arrange activities in science better, but how did it affect science policy? And I’d say that we had the foundations of that even in the first round in 2007 in the SciSIP program. Very pleased about that. Dr. Marburger was very pleased about that. And fast forward: Julia Lane, who took over after I left as program director, Stephanie Shipp, Dr. Marburger, and I, the four of us, wrote the preface of a book and then had many collaborators give contributions to the Science of Science handbook. And we finished that, I think it was published in 2012, but it was fun working on that with Dr. Marburger. So that gives a little bit of background on the Science of Science and Innovation policy. Dr. Marburger really did give the charge for this, but it was fun. And yes, program directors at NSF get to do a lot of other things. So it was good for us.
Trapani: Well, that’s really remarkable. Thanks for telling that story. I don’t know that I’d ever heard it quite so succinctly and concisely, the very early days. So I guess it’s been 10 years or more though since then and I was wondering from your perspective, how has the landscape for the Science of Science policy evolved since those days, and how far do you think we’ve come in meeting some of these challenges and what remains to be done?
Husbands Fealing: I think that the advances that have been made, we have better models, I think, and frameworks that integrate across economics and sociology especially. I think the original setup of this program envisioned having more domain scientists working with the social scientists especially, and behavioral scientists, and I think we’re making advances there. We’ve made many advances on the data side. I think the part where we could do more, we could do more in the behavioral space. I don’t think that we pulled in as much of the behavioral piece in SciSIP, which has now been renamed Science of Science, as we wanted to at the beginning. In the prospectus, there was a real emphasis on creating a community of practice, and that would not only be academics, but it would also be individuals who are in the variety of agencies. The Interagency Task Group had representatives from 17 agencies that were part of the NSTC in the subgroup on social, behavioral, and economic sciences.
And the idea was to try to get more of the agencies to take on this Science of Science approach, but it would need funding, it would need to be a priority, it would need leadership. So I think that that’s something that’s still ongoing. I think the biggest question we get often is, well, how has this affected policy? And I don’t think that we’ve done the work to show that mapping distinctly between the science of science and policy changes. It’s hard to do. But I think that that is something where we still have a way to go. And the last thing I would add, Kellina Craig-Henderson is now the AD for Social Behavioral and Economic Sciences. She and I rotated to NSF the same year, 2005, and she’s been there for a long time. And back then she really was working hard and diligently on the science of broadening participation in STEM.
And it is something that to this day we’re still thinking about and talking about. Dr. Panchanathan, the current director of NSF, is very focused on this. The NSB is very focused on the missing millions. And they just created a new program called GRANTED to get R-2s and other universities the infrastructure so that they can apply effectively to NSF and get the grants to perform science and engineering activities at their institutions. And so I think the Science of Science, or SciSIP, depending on what you want to term it, I think we have an area to contribute on the science of broadening participation. And this is the time, because this is something that Pancha’ is talking about all the time, along with the National Science Board and others. And innovation for equity is in the priorities from OSTP. So I think we have an opportunity to keep moving along this line of Science of Science, or Science of Science and Innovation policy, especially at this time.
Trapani: Well, while I was there, I briefly ran the Science of Science program. We put out a special call that we called BP Innovate, and it was about building an understanding of the science behind what leads people to enter entrepreneurial activities or not, and the sort of incentives and disincentives that are there and how that varies across people’s race and gender and geographic background. And that was a one-time thing, but I think it’s something that they are planning to repeat. I’ll just add that you mentioned a lot of names and places that I know. And I would just mention that Julia Lane just had a piece in Issues in Science and Technology that lays out a vision for the Evidence-Based Policymaking Act.
And one more Issues plug before I move on—you mentioned the National Science Board, and last year we had a piece by the chair and co-chair of the National Science Board, and it was partly focused on the need to broaden participation. So this is very much the conversation that’s going on today. My sense is that this field, the Science of Science policy, or Science of Science, is multidisciplinary, as you described, led by the quantitative social sciences. But to get back to your piece, you called for more than just that. You wanted more disciplines, including the humanities, built into science policy. And I wonder if you could speak a bit to what that would look like and what benefits you think it would bring.
Husbands Fealing: I’d like to see, for example, use cases where we can actually see the advancement of science using these activities. And also, I’d love to be able to see, and this I got from talking to our executive vice president for research, who read the article and came back and said, but we also need more science in art. And so consider that. Consider that. I want the listeners to think about what that means. So the art in science: there are ways of visualizing, there’s something called medical humanities. So there are ways of using those activities within the arts and music to improve not only outcomes in medicine for individuals, but also hopefully to really get at the kernel of issues in an interesting way using maybe art or visualization techniques that come from the humanities, arts and social science side. But the other challenge here was also, well, what if someone who is using materials or different types of paint understood the chemical processes or the composition in a way that actually enhances the product on the art side?
So that’s the vision, it may be out there a bit. It’s off the beaten path, but one of the things we’re trying to do here at Georgia Tech is create an area called Art Square. Now, Art Square, on a continuum, on campus is on somewhat of the periphery, but they’re building Sciences Square not far away. So imagine if we could really collaborate across those. And we’re also not far from the new Lewis Center, so the DEI aspect or DEIA aspect of this could come into play as well. So it could be a fulcrum, it could be a hub, it could be an area where we can really see advances that we hadn’t thought about by doing this. And for me, it’s an experiment. It’s something that’s worth investing in. We are doing much of it here at Tech. And so I don’t want to make it sound as though we’re not doing this, but with so much more that we can do to see this type of integration.
Another area is that we want to be able to understand how this intersection of HASS and STEM will improve policy. So it goes back to your previous question about the Science of Science and Innovation policy, that foundational element behind policy. Well, could it be crisper, more nuanced, more connected to communities if it includes the humanities part of it? And there is the National Academies report Branches from the Same Tree. What is important for us to remember is that this cleaving, this disproportionate investment over these many decades, maybe didn’t need to happen, and we can do something that corrects that split and see better integration, and investment in that integration. So that’s the vision.
Trapani: This has been such a wide-ranging conversation. I really appreciate your time and your insights, but I have one more question before we go. I was wondering if you have any advice or perhaps lessons learned from your experience for younger people who are interested in or getting started with careers related to science and technology policy who want to have a positive impact?
Husbands Fealing: Sure. I like that question very much because I’ve been in the business, so to speak, for more than 33 years. So I’ve been a professor for a long time and students are a top priority and it’s really important for us to have some takeaways that students can dig into.
I have three things I want to put on the table. Broaden your networks. And we’re not just talking HASS and STEM now. We’re talking the networks that students can really utilize, not just to gain access to economic and social mobility, but also to find career pathways, and those networks will really allow that to happen. The second piece is the focus on humanities or social sciences. If you’re an engineer, it’s not a distraction. It’s actually an enhancement to your area of expertise in the sciences and engineering and computing. So I’d like to just put that on the table, that oftentimes you may be chided: “Well, why are you doing that? Just spend 10 more hours in the lab and it’ll be better.” I want to say no. There’s a lot of benefit from having these other lenses to really do the exploratory work.
So humanities or social sciences is not a distraction; it could really be additive. And the third thing I’d like to say, because I had to really think about it as you asked the question: what else would I want to put here? And I have to say, right, I have a math degree and an econ degree, and I was not a writer. I was not a person who did a lot of writing. I crunched equations. I loved QED at the end, especially when I knew I was right. And when I was a math major, the task was to solve the proof in as few steps as possible. I love that. But I will tell you, good writing, great communication, telling the story: there’s nothing more salient than that to put all of that hard work into people’s minds so that they understand what you’re talking about.
It’s even important if you want to be an entrepreneur, it’s important if you want to set policy, it’s important if you want to let other students understand what you’re working on in terms of these peer effects that I talked about before. So please write, figure that out. It’s not always that easy, but it’s so incredibly important.
Trapani: As an Issues editor, I’m going to transcribe the part of your answer about writing. We’re going to put it on our homepage and I’m probably going to put it on a t-shirt too and wear it everywhere. Thank you. Kaye, it’s been delightful to talk with you. Thank you so much for being here.
Husbands Fealing: Thank you. This was a pleasure.
Trapani: This has been a wonderful conversation. And thank you to our listeners for joining us. As Kaye notes in her piece, Yo-Yo Ma once said, “Culture turns the other into us.” Science and technology have for so long seen the humanities and arts as other, and it’s time we turn them into us.
To learn more about how we can achieve that, read Kaye Husbands Fealing, Aubrey DeVeny Incorvaia, and Richard Utz’s piece in Issues, entitled “Humanizing Science and Engineering for the Twenty-First Century.” Find a link to this piece and the others we mentioned in our show notes. Subscribe to The Ongoing Transformation wherever you get your podcasts. You can email us at [email protected] with any comments or suggestions. Thanks to our podcast producer, Kimberly Quach, and audio engineer, Shannon Lynch. I’m Josh Trapani, senior editor of Issues in Science and Technology. See you next time.
Is Open Source a Closed Network?
In “Architectures of Participation” (Issues, Summer 2022), Gerald Berk and Annalee Saxenian present a compelling question: “Given the complexity and divergent trajectories of today’s innovation systems, how should public policy foster innovation and openness, and support the process of making data more accessible?” In the 2000s, the authors note, open-source ecosystems, or “networks of networks,” propelled Silicon Valley’s first wave of technological advancements in internet innovations. Since the 2010s, however, as computation capabilities and software systems became larger and more complex, a handful of dominant platforms—Amazon, Google, Microsoft—appear to have abandoned that openness of the previous era.
These giants have done so in a number of ways, including by restricting access to their application programming interfaces (or APIs, which simplify software development and innovation by enabling third parties to exchange data and functionality easily and securely), by acquiring start-ups, and by developing their own proprietary (closed) systems. For example, in order to maintain and enhance their market share, these firms’ cloud platforms impose “anti-forking” requirements that restrict access to data and block software developers from building out new applications on their platforms. Also, as Berk and Saxenian write, critics say Amazon Web Services has improperly “copied the open-source code for a pioneering search engine named Elasticsearch and integrated it into its proprietary cloud services offerings,” thus making it harder for smaller companies to use the search engine to market their products. In response, the authors note, “at least eight open-source database companies, including Elastic, modified their licenses, making them so restrictive that they are no longer considered open-source by the community.”
Berk and Saxenian offer several prescriptions for policymakers, regulators, and jurists to consider. Antitrust law, or competition policy, offers one tool to restore and foster an open-source collaborative ecosystem. In opposition to today’s centralizing tendencies, the authors argue that interoperability and “the democratization of the use of data” are key to high-quality, fast-paced innovation. As they highlight, Google Cloud already collaborates with the Linux Foundation and shares revenue with its smaller partners. But, of course, the worry—at least in my mind—is that Google will not maintain the partnership when it sees an opportunity to build out proprietary products or services, particularly in areas complementary to its existing dominant position in internet search or mobile phone operating systems. As a Department of Justice complaint against Google argues, the firm appears to have used restrictive licensing and distribution agreements with hardware producers for its Android operating system, requiring producers to place Google Search along with a bundle of other Google applications in prominent places on the screen. These cannot be deleted by users, and these contracts contained anti-forking requirements as well.
The DOJ has a strong case against Google, but what remedy should the agency seek? Here, Berk and Saxenian cut to the heart of the matter. Neither breaking up Google nor simply prohibiting the distribution agreements will restore the dynamic, innovative engine of open-source collaboration. We need to think beyond the dichotomy of market competition versus monopolistic or oligopolistic firms in order to identify creative, forward-looking solutions. We might draw from recent examples, such as the 2001 Microsoft consent decree that required interoperability for competing browsers on Microsoft’s operating system. Moving forward, courts, regulators, and legislators should consider the procompetitive benefits of interoperability and information pooling, which may answer these authors’ call for both fostering collaborative competition and maintaining economies of scale.
Laura Phillips-Sawyer
Associate Professor
University of Georgia School of Law
Gerald Berk and Annalee Saxenian focus on the information technology industry in Silicon Valley and the relationship between open-source and proprietary-platform companies. They base their article on interviews with a range of software developers and managers, but significantly all from inside the industry. They argue that the open-source segment of the industry is especially innovative, that the proprietary segment can and does work with open-source companies, a partnership that further strengthens the innovative capacity of the industry as a whole, and for that reason ought to be promoted by public policy. They focus on antitrust policy, but the logic would apply to other domains of policy, most particularly the immigration of high-skilled workers from abroad.
I find their argument unpersuasive and ultimately disturbing. My main concern is the way it hinges on “innovation” as if that were the singular goal of public policy and could be pursued independently of other policy issues and debates. In effect, the authors seem to be arguing that society cannot have too much innovation.
Two specific issues at the forefront of current policy debates call into question the value of unlimited innovation. One is the debate about the impact of artificial intelligence and robotics on jobs, and the fear that workers are being displaced more rapidly than they can be absorbed into other parts of the economy. The other centers on the distribution of income and social mobility; in particular, Silicon Valley is a bifurcated economy, with one sector of highly paid software developers and other professionals and another of low-paid service workers catering to the professional class. I would be much more persuaded of the value of open source if the authors could show that it was not only more open to ideas but also more open to a demographically diverse workforce.
The failure to recognize these issues reflects in part the limits of the analytical lens through which the authors are working. That lens is, as the authors recognize, the network structure of the industry. Network theory in the social sciences has basically been concerned with the interaction of members within a defined network. It has not been especially concerned with the boundaries of the network, or with how its members are recruited, trained, and socialized to the norms and standards that govern their relationships. Indeed, it does not generally recognize that social networks are typically closed to outsiders.
A limited ethnographic literature looking at what software developers actually do on the job suggests that a better lens through which to view the industry is probably craft work, where skill and proficiency are acquired through experience on the job and close interactions with more senior developers. However, because experienced workers in such a system have to work closely with “apprentices” in order to train them, they tend to resist admission of new members from backgrounds very different from their own. The classic example is the skilled trades in the construction industry, which are notorious for resisting government pressure to admit workers from underrepresented demographic groups, particularly women and ethnic and racial minorities.
An important difference between the authors’ focus on innovation and the craft analogy emerges in the debate about expanding visas for highly skilled foreign workers. In the authors’ perspective, expansion is promoted as an impetus to innovation. In a perspective that recognizes other social goals, such expansion is also a way of avoiding pressures for the upward mobility of low-skilled immigrants and their children.
Michael J. Piore
David W. Skinner Professor of Political Economy, Emeritus
Department of Economics
Massachusetts Institute of Technology
Authoritarian Environmentalism
In “China Planet: Ecological Civilization and Global Climate Governance” (Issues, Summer 2022), Yifei Li and Judith Shapiro seek to explain not only whether China can uphold its climate promises, but also whether the costs of achieving these promises would be worth it for the democratic world. They point out that answering those questions would “require transparency, accountability, and social equality—all of which are in short supply in the Middle Kingdom.”
The article makes a significant contribution by challenging and deconstructing China’s green image in the era of President Xi Jinping’s “Ecological Civilization.” Since 2012, China’s leadership has been using a green “China Dream” discourse that connects domestic environmental actions to global leadership on climate change and the “glorious revival” of the Chinese nation. This discourse often speaks of green policies in glowing terms, such as “green mountains are in fact gold mountains, silver mountains.” However, whether and at what cost the Chinese leader’s lofty rhetoric has been translated into environmental outcomes in practice has become a major question that is much harder to answer.
One key point the authors emphasize is the many nonenvironmental consequences of the coercive, state-led “authoritarian environmentalism” over the course of China’s making and remaking of international climate politics. They believe that instead of serving to achieve sustainability, China’s proclaimed emphasis on ecological civilization is actually a means to strengthen the Communist Party within and outside China. Therefore, a more accurate term than authoritarian environmentalism would be “environmental authoritarianism,” which resonates with the comparative environmental politics literature.
The rise of environmental authoritarianism reflects the long debate about the relations between China’s regime type and its government’s environmental performance. Considering the climate challenges that liberal democratic systems have faced, critics have questioned the performance of liberal democracies and especially their capability in leading global climate change governance. The concept of environmental authoritarianism brings together these doubts about democracy as a favorable and capable model for environmental decisionmaking and governance, and China is widely regarded as a prime example. Supporters of environmental authoritarianism assume that a centralized undemocratic state may prove essential for mounting major responses to the growing, complex, and global environmental challenges.
Li and Shapiro deeply engage with the ongoing debate and provide an insightful answer. In their opinion, “Although China has seen some success in realizing its ambitious climate goals, the country’s achievements have come at a social and political cost that few democracies could—or should—tolerate.”
Their observations might inspire anyone who is curious to critically explore the following two questions:
First, why did the Chinese government intentionally select the ecological civilization green discourse as the “clothing” of authoritarianism? Could it be a double-edged sword for China’s governing party to maintain its legitimacy? In the formerly communist Eastern European countries of Ukraine and Poland, environmental crises led to national social movements that presented significant challenges to their governments’ political legitimacy. Do the political elites in China have the same concern that using environmentalism as a cover might in the end turn out to be “lifting a stone only to drop it on your own feet,” as an old Chinese proverb predicted?
Second, why does the western liberal world still want to cooperate with China on climate change governance if China is not genuinely interested in green values and is using environmentalism only to maximize its power in the international community and increase control of its own institutions and citizens?
Ran Ran
Associate Professor of Environmental Politics
Renmin University of China
To tackle the climate crisis, it is necessary for China, the world’s largest emitter of greenhouse gases, to take decisive actions to cut emissions. Almost paradoxically, China at the same time dominates the supply chains of technologies that are needed for the world to transition to renewable energy.
Meanwhile, in democracies, the inaction and the messy fights among government bodies and interest groups have left many people frustrated with the democratic process’s ability to address climate change. This and the fact that the climate fight hinges so much on China have created a willingness by some governments to overlook the problematic ways China approaches climate change.
Yifei Li and Judith Shapiro, in their essay as well as in their 2020 book, China Goes Green, caution about the perils of China’s top-down, numbers-based, and tech-driven environmental governance model. As a human rights researcher who over the years has witnessed and documented the tremendous human rights cost in the Chinese government’s pursuit of grand development goals, I am relieved to see scholars in the environment field sound the alarm about the “China model.”
As Li and Shapiro discuss, the burden of making the 2022 Beijing Winter Olympics green “fell primarily on China’s most vulnerable and politically disenfranchised.” Similarly, to reduce coal consumption, some local authorities have banned the burning of coal, including for home heating, without consulting the affected communities, forcing people who couldn’t afford alternative energy sources to freeze in the winter, and fining those who secretly burned coal.
China’s supply chains for renewable energy are also ridden with human rights violations. Almost half of the world’s supply of polysilicon, a key component of solar panels, is produced in Xinjiang, a region where government abuses against the 13 million minority Uyghur Muslims “may constitute … crimes against humanity,” according to a recent United Nations report. In Guinea, to mine bauxite, a primary source of aluminum, which is a key component of electric vehicles, Human Rights Watch documented a joint venture linked to a Chinese company that pushed farmers off their ancestral land and destroyed their water sources. The dust produced by the mining also caused respiratory illnesses in villagers.
Assessments of the Chinese government as a climate model should take into account that it bans independent media, stringently controls the internet, and routinely jails government critics. The human rights abuses that are publicly known are only the tip of the iceberg.
Li and Shapiro also call into question the sustainability of the Chinese government’s rights-trampling climate fixes, arguing that they “cause people to become confused, angry, and even hostile to the climate cause,” and that “better outcomes are achieved when grassroots, citizen-driven environmental initiatives and projects become trusted partners with the state.” This corresponds with Human Rights Watch research on climate and human rights globally. Robust and rights-respecting climate action requires the full and meaningful participation of all stakeholders, including governments, activists, civil society groups, and populations most vulnerable to the harm of climate change. Doing away with human rights to address the climate crisis is not only ethically unacceptable, but also fundamentally ineffective.
Yaqiu Wang
Senior China Researcher
Human Rights Watch
Yifei Li and Judith Shapiro raise many good points about China’s dominance in global infrastructure construction and development.
To add to this discussion, I’d note that it is one thing to build the infrastructure, but quite another to manage it reliably. China’s safety cultures may be far less exportable—despite the country’s expansive outreach through its Belt and Road Initiative. Indeed, we need to first know considerably more about China’s track records in high reliability management of infrastructures. I have in mind particularly the real-time management of the nation’s high-speed rail system and coal-fueled power plants, and of the backbone transmission and distribution of electricity and water supplies in large metropolitan areas.
We know that infrastructure data are in short supply from China, but it is important, I think, that data gaps be differentiated going forward by both types of infrastructure and their management cultures. How to fill these gaps? I know of no real substitute for Chinese scholars willing, even if currently unable, to analyze and research these major topics further.
Emery Roe
Senior Research Associate
Center for Catastrophic Risk Management
University of California, Berkeley
Imagining a Better Internet
The future of technology is too often imagined and engineered by corporations and venture capitalists, foreclosing more radical possibilities. Today, it is Meta’s iteration of the Metaverse that dominates headlines, a riff on an old theme: the monetization of networked social life. Apparently the future includes stilted, legless avatars in VR versions of Microsoft Teams meetings. After the launch in 2016 of Facebook Live, Facebook founder Mark Zuckerberg called for the formation of a twenty-first century “global community” through technology, harking back to Marshall McLuhan’s “global village” of the 1960s. But who and what is a community for? As critics such as Safiya Noble, Virginia Eubanks, Ruha Benjamin, Sarah T. Roberts, and Siva Vaidhyanathan have long argued, the democratic ideal of “everybody” connecting to the internet through social media platforms is undermined by the narrow visions of elite technologists.
It doesn’t have to be this way. Kevin Driscoll’s “A Prehistory of Social Media” (Issues, Summer 2022) helps us reimagine internet futures by looking to the many nets of the past. Rather than drawing on a singular narrative—the straight line from ARPANET to the World Wide Web and, eventually, platform supremacy—Driscoll emphasizes the grassroots, locally situated networks that emerged from the growth of the personal computer. Individual enthusiasts started bulletin board systems, and rather than relying on opaque terms of service and precarious content moderators to manage people’s relationships to the network, you could contact the volunteer owner directly or perhaps even meet in her living room.
Web 2.0-era social media platforms are a departure from early community networks, a diverse ecology with subcultures that matched their location and participants: Amsterdam’s DDS, the Boulder Community Network, Montana’s Big Sky Telegraph, the Blacksburg Electronic Village, the Seattle Community Network, and Berkeley’s Community Memory project. Such community networks were tied to specific places, not anonymous cyberculture. Even for electronic communities that were mostly associated with online interactions, there were some in-person encounters. Members of the Whole Earth ’Lectronic Link, known more popularly as The WELL, for example, met at potluck dinners around the Bay Area, even if the early virtual community was open to anyone regardless of location.
As Driscoll notes, while a plethora of networks for queer and trans people, Black people, and others from marginalized communities flourished, even grassroots networks are plagued by the ills of society. From the earliest days of cyberculture, critics pointed out that race, gender, sexuality, and embodiment cannot be left behind in cyberspace. Rather, racism and sexism structure people’s experiences in virtual environments as they do IRL.
While the past wasn’t perfect, disenchantment with digital advertising and surveillance models has catalyzed nostalgia for earlier internets. GeoCities, founded in 1994 as “Beverly Hills Internet,” fostered community through web-based neighborhoods and user-generated content. GeoCities closed in 2009. Neocities, launched in 2013, is an unrelated homage website that calls for bringing back the independent, creative web. Similarly, SpaceHey, a reimagined version of MySpace, is intended to revive the original site’s ability to teach coding skills to young people. Folk histories of the internet provide an entry point for using many pasts to envision a multiplicity of futures beyond Big Tech.
Tamara Kneese
Director of Developer Engagement
Intel
The article in Kevin Driscoll’s title is its most important part: a prehistory, not the. Some arguments narrow to closure, arriving at “the point.” In the course of convincing us, Driscoll’s argument instead opens out, welcoming us into the big country—literally and figuratively. He makes the case for modem culture and bulletin board systems as salient antecedents of contemporary online culture, and uses that point to bust right through the simple, received narrative of how “the internet” came about. In opening up the past, he opens up the future too: the history of networking computers together is a reservoir of alternatives. His history offers other technologies, other communities, other applications, other ways of being online—many of them better, for various values of better, than what we’ve got.
The engineers in Cambridge and Palo Alto created much of the fundamental infrastructure, but the way it is used can be better understood by starting with the modem on the kitchen table or the garage workshop in Baltimore or Grand Rapids.
Other places too: the big country. Notice the geography of Driscoll’s prehistory, rattling off place names like a Johnny Cash song. Chicago, Atlanta, Northern Virginia, “Alaska to Bermuda, Puerto Rico to Saskatchewan.” Notice how much of it happens in people’s homes in cities and towns across the North American continent. You can count the locations of the popular narrative of the internet on one hand: the research powerhouses SRI International, the Massachusetts Institute of Technology, and the University of California Los Angeles, and in the corporate world maybe Bolt Beranek & Newman (now BBN Technologies) and Xerox. It does not detract from that history to point out that it was only one of many ways that people were networking computers together—and a highly specialized, idiosyncratic one at that, reflective of the agendas of big science, Cold War R&D, and the nascent tech industry. Driscoll reveals how people outside this domain were connecting their computers for their own purposes in ways that prefigure the internet’s broad adoption much more accurately than electrical engineers with security clearance. The engineers in Cambridge and Palo Alto created much of the fundamental infrastructure, but the way it is used can be better understood by starting with the modem on the kitchen table or the garage workshop in Baltimore or Grand Rapids: the laboratory of digital social media that came before the internet.
Driscoll’s story reminds us that internet is a verb as well as a noun: internetworking the many and varied networks of computers together. Networking and internetworking comprise the labor of making BBSs, launching AOL, getting onto Usenet, rolling out ATMs in banks and convenience stores, tying satellites and radio telescopes into a computation grid, or setting up servers. How the networks work is an expression of agendas, ideologies, and expectations, and the “modem world” clarifies how different such agendas could be from our era of platform dominance and vertical integration of industry. There is no “history of the internet,” in other words, only histories of all the ways computers were and are networked and internetworked. Histories, and possible futures such as the one Driscoll gives us here: social media that are local, personal, inventive, messy, communal, and do-it-yourself.
Finn Brunton
Professor
Science and Technology Studies
University of California, Davis
Kevin Driscoll offers a fascinating account of the rise and fall of the bulletin board system (BBS), in the process highlighting an important path not taken in the history of digital communication and the internet. Unlike government-funded networks of the era, BBSs were spaces for experimentation and innovation driven not by funders’ goals but users’ own idiosyncratic interests and desires. In the process, they found new ways to work within the limitations of the existing phone infrastructure to create truly international connections.
As Driscoll recounts, whatever their reason for joining the “modem world,” users of a local BBS were able to make social connections and build community with other users well beyond their immediate social circles. For some users, pseudonymity allowed them to explore aspects of their identities, such as sexuality and gender identity, that they didn’t feel safe discussing anywhere else. In their governance and design, individual BBSs and associated software reflected their system operators’—or sysops’—own personal and political investments. The time that Tom Jennings spent in queer punk spaces shaped his focus on repurposing existing infrastructure to develop DIY solutions—a foundational aspect of his “Fido” software that, as Driscoll notes, became an open standard for exchanging files and messages between BBSs. Sister Mary Elizabeth Clark, who founded AEGIS to carry information about living with HIV and AIDS, brought what she’d learned about information dissemination during her years as a transgender advocate and activist to her work at AEGIS.
Unlike government-funded networks of the era, BBSs were spaces for experimentation and innovation driven not by funders’ goals but users’ own idiosyncratic interests and desires.
Beyond how it shifts our focus away from ARPANET and other state-sponsored internet infrastructure, Driscoll’s essay also provides fertile new ground for reimagining what the internet could be. As he shows, most BBSs operated on a far smaller scale than current platform monopolies. With that smaller scale came a distinctly different sense of sociality. For many people, socializing online via a BBS became a conduit for building community offline. Unlike on current platforms, community moderation disputes were settled not by a faceless corporate entity, but by an identifiable member of the community invested in its continued success. As she recounts in her book Cyberville, Stacy Horn, the sysop of EchoNYC, encouraged users, at different points, to be active participants in her decisionmaking process regarding board governance and moderation.
This smaller scale and sense of community investment can be particularly potent for individuals who are often the targets of harassment and abuse online. What would communities created by and specifically for these individuals look like? How could their design use key features of the BBS, like pseudonymity, locality, and accountable governance, in ways that not only meet these users’ specific needs, but also ensure that they feel comfortable communicating online? Moreover, the history of the BBS includes a variety of models for monetary support not based on harvesting and reselling user data. Focusing on sustainable models of maintenance, as opposed to growth at all costs, opens up room for the kinds of experimentation and play needed to imagine a more equitable future online.
Lecturer, Women’s and Gender Studies, Gonzaga University
As social media becomes increasingly embedded in day-to-day life, many contemporary thinkers and critics have declared the internet broken. When Twitter and Facebook posts fuel widespread misinformation campaigns or inspire tumultuous market conditions, it might be difficult to recall the deeply intimate and personal roots of internet technologies.
Kevin Driscoll paints a vivid picture of those electric early days of networked computing, when a modem was a luxurious add-on and PC enthusiasts convened in homebrew clubs to discuss the latest microprocessor. Driscoll explains how the advent of the internet was really a collage of computer networking efforts rather than one seminal development by Department of Defense military researchers or the standardization of the internet protocol suite commonly known as TCP/IP. Most important to Driscoll’s internet history is the BBS, the bulletin board systems that facilitated widespread communication between tech-hobbyists and amateurs alike. It was this fervor for BBSs, Driscoll explains, that allowed the “modem world” to flourish.
Overlooking the history of the BBS presents the false notion that the reins of the internet have always been out of reach for the average computer-owner, better left to the Zuckerbergs and Bezoses of the world.
The 1970s and ’80s saw increasing adoption of personal computers, and a shift from business to pleasure. What was previously found only in university research labs or government buildings was now available for citizen-consumers. Off-the-shelf computer kits made it easier than ever before to build an impressive piece of computing technology right in a person’s sitting room. Add to that the dropping price of modems and the increased exposure to computer networking through timeshare programs or hyperlocal BBS terminals, and the novelty of electronic communication became commonplace for those with the inclination and the means to pay for it. Driscoll shows how these developments snowballed through the late ’70s and early ’80s, paving the way for the BBS.
BBSs were an important, and distinct, precursor to the commercial internet and World Wide Web of the 1990s. Calling one computer to another had the added draw of being a one-to-one connection, an intimate sensation of dialing right into someone’s home. Even multiline systems, reliant on multiple phone lines, lent the cozy feeling of a cocktail party. In most narratives about the development of the internet, there’s a neat line from the ARPANET (the Advanced Research Projects Agency Network developed by the Defense Department) to the World Wide Web. These stories neglect the individual roots, and the personal touches, of a communally built public network like the modem world. Overlooking the history of the BBS presents the false notion that the reins of the internet have always been out of reach for the average computer-owner, better left to the Zuckerbergs and Bezoses of the world. In reality, computer networking technologies are historically a people’s technology.
Today, ubiquitous internet access is simply expected. Conditions brought on by COVID-19 pandemic lockdowns highlighted the need for high-speed at-home connections to facilitate schooling, work, and community connection. At the same time, the lockdowns brought inequities in the digital divide to the fore. As the internet morphs into increasingly partitioned spaces, funneling users between the same five mega-websites, it has become more urgent than ever to reexamine the stakes of internet ownership. When these extant structures seem inevitable, it’s helpful to remember how things got started—for people, by people.
Kat Brewster
PhD Candidate, University of California, Irvine
Kevin Driscoll insightfully notes that to reenvision the possibilities of the internet today, we need to recast its past. For Driscoll, that involves looking not to the mythologized narrative of ARPANET as a Cold War-era, US-built communications infrastructure for surviving nuclear war, one built by eccentric computer “wizards.” Rather, we might look to the networked links of bulletin board systems (BBSs) connected by telephone lines that were at their peak popularity from the 1970s through the 1990s.
“Why haven’t our histories kept up?” Driscoll asks. It’s an important question. As a scholar curious about the narratives told about technology, I might phrase it differently: What are those mythical tales of the internet doing? What hangs in the balance when they are repeated? When public and scholarly discourse leaves out other narratives of the networked past, what’s at stake? In other words, what do our histories of the internet do?
Driscoll rightfully notes that internet histories such as the ARPANET mythology have effects: these stories represent and reentrench values and are used in turn to advance arguments (for better or worse) from public policy to corporate conduct. Origin stories like these often act as a sort of founding document, taken as a blueprint for how the rest of the story should unfold.
Stories that look outside the ARPANET mythology can help us view more clearly the social and technical entanglements of the internet as they stand today.
Dominant narratives also inevitably perpetuate notions of who belongs—in this case, who belongs in the realms of high technology. Popular images of computing’s foundations are largely mapped to subjects who are typically white, male, and American. But looking to the world of BBSs instead supports a vision of the internet that is less commercial, more community-based, and more representative of the variety of people who created the sociotechnical basis for the online world as it is experienced today.
Along with the question of what current histories of the internet do, there’s also the question of what they can do, especially when conceptualized outside of typical paradigms. Driscoll suggests that stories such as those of BBSs can help tell a more accurate backstory for the internet—and give a foundation for imagining a better future. In my own research, where I have gotten lost is in attempting to draw this line directly between the recast past and a better future.
Instead, I want to suggest that stories that look outside the ARPANET mythology can help us view more clearly the social and technical entanglements of the internet as they stand today. The “internet,” after all, is a useful if inexact shorthand for the social and technical, the virtual and physical, the governmental and corporate and grassroots layers of networked computing. That is to say, what the internet is, what its problems are, and how to go about solving those problems are notoriously tricky to grasp. It might be one reason we rely on the well-worn ARPANET story that focuses on a few machines and a few people.
Understanding the internet of today is partly why I have researched its history myself: the past can offer a less volatile scenario while highlighting contemporary aspects that otherwise appear natural, that fade into the background and thus appear to be unchangeable. Looking to the rise and fall of BBSs’ popularity, for instance, accentuates the commercial consolidation of the internet that has resulted in conglomerate platforms such as Facebook or Google or Amazon, companies that in turn exercise and reentrench their overwhelming political, cultural, and economic power. Recognizing these realities is maybe one of the best things internet histories can do, and perhaps the first step in drawing that line between the internet’s past and some possibilities of a more hopeful future.
Frances Corry
Postdoctoral Fellow, Center on Digital Culture and Society
University of Pennsylvania
Innovation in Mentorship
“Academic Mentorship Needs a More Scientific Approach,” by Beronda L. Montgomery, Fátima Sancheznieto, and Maria Lund Dahlberg (Issues, Summer 2022), calls to light how well-intentioned but neglectful mentorship serves as a severe detriment in how the United States provides training in science, technology, engineering, mathematics, and medicine—the STEMM fields. Importantly, the authors point out that “mentorship is largely an ad hoc activity, with institutions delegating responsibility to graduate training programs and the PIs of individual research groups. This entrenched, informal system revolves around each scientist’s individual commitment to mentorship and personal experience with past mentors.”
Indeed, this ad hoc, do-it-yourself approach serves neither faculty members nor students well; it perpetuates inconsistent and outright bad experiences, and it ultimately hurts the research enterprise altogether by undermining the well-being and creativity of the humans who drive it. Frankly, it amounts to institutional neglect.
However, we keep doing the same thing: creating mentorship training programs that no matter their quality—and many are great—can be onerous to commit to. Absent institutional support, one’s ability to prioritize mentorship training inevitably competes with formal metrics of early-career success. In this sense, we operate in an academic ecosystem that inherently deprioritizes and disincentivizes mentorship.
We need to stop thinking about STEMM mentorship training as “if you build it, they will come.” Considering all the other pressures students, postdoctoral researchers, and early-career faculty members face, we need to build it, and then bring it to them—with institutional support that prioritizes, incentivizes, and rewards excellence in mentorship.
We operate in an academic ecosystem that inherently deprioritizes and disincentivizes mentorship.
One typically overlooked focus is teams. Thoughtful programming for research teams at universities would pay dividends. At Stanford, with generous support from the Shanahan Family Foundation, we are testing ideas—drawing from successful teams across diverse sectors (business, academia, and even sports)—to implement the practice of mentorship in a research team context.
Whole-team participation by STEMM labs can engender greater and more meaningful engagement with mentorship training and practice, while also lending itself to scale-up via institutional support, with team-focused programs serving researchers at different career stages simultaneously. For a faculty member or student to learn with one’s team is to amplify the opportunities for healthy mentorship alliances to blossom across a team at all levels. And it takes what the business consultant Patrick Lencioni, in The Five Dysfunctions of a Team, calls “vulnerable trust” for faculty and students to engage honestly and effectively.
Currently, academic researchers who become faculty members—or who go into industry—are trained on research. Period. But the hardest part of science isn’t the science; it’s the people stuff. It’s tragic that virtually no one in STEMM explicitly receives training either on how to lead a research group or how to be an effective trainee within a group. Leading a research team requires one to manage others; communicate effectively with diverse people; mediate conflicts; do budgets; set operational, research, and cultural expectations; and implement inclusive training practices. Being an effective trainee requires one to understand roles and responsibilities; grasp timelines and programmatic obligations; pursue good grades in classes; publish and present one’s research; hone effective oral and written communication skills, including self-advocacy; and give credit and encouragement to others.
The unique focus on the team—enrolling whole teams inclusive of faculty, students, postdocs, and staff—provides an opportunity for innovation in the approach to mentorship education for STEMM researchers. It enables reciprocal mentor-mentee relationships to develop through healthier team environments in which mentorship alliances throughout an organization can thrive.
Crista L. Farrell
Director, Strategic Program Development & Engagement
Associate Director, Center for STEMM Mentorship
Stanford University
Maria T. Dulay
Senior Research Scientist, Lab Manager
Associate Director, Center for STEMM Mentorship
Stanford University
Joseph M. DeSimone
Sanjiv Sam Gambhir Professor of Translational Medicine and Chemical Engineering
Faculty Director, Center for STEMM Mentorship
Stanford University
Mentoring has been a cornerstone of the scientific enterprise, as knowledge is transferred from one generation to the next. But as Beronda L. Montgomery, Fátima Sancheznieto, and Maria Lund Dahlberg describe, the traditional ad hoc form of mentoring has limitations and consequences, and it is time to reimagine how mentoring is practiced.
Mentors who benefited from being mentored early in their careers usually adopt a similar, if not identical, approach; if the relationship was toxic, they vow to do the opposite. Either way, the approach is limiting, as it lacks perspective, insight, and accountability. Mentors might try to transform their mentees into “mini-mes,” but this, as the authors describe, will only widen the diversity gap in science. Further, it discourages mentors from learning from each other, which is especially needed when facing unfamiliar or extreme situations, such as mentoring during a pandemic.
The halls of science are littered with bad mentors who, as researchers have noted, often served up numerous forms of inadequate mentoring, both active and passive. Mentors might not even realize the negative impact of their practices, from unanswered emails and months-long delays in reviewing manuscripts to preventing their mentees from collaborating with others.
The authors advocate for a scientific approach to mentoring, underscored by collaboration, a hallmark of modern science. We can learn from other industries (after all, checklists in the operating room were copied from those that pilots use before takeoff). It is common in nearly every industry to utilize expertise from other sectors. Science should take a similar approach to mentoring.
The authors also recommend institutionalizing mentoring, making it a part of promotion, tenure, and hiring practices. But some people may give the recommendation only lip service, and there is no means of holding them accountable for what they claim. Asking faculty to include a mentoring plan is a one-dimensional approach they can copy and paste across multiple applications.
It is common in nearly every industry to utilize expertise from other sectors. Science should take a similar approach to mentoring.
Instead, consider asking faculty: What is the most challenging mentoring issue you faced this year? or What is the greatest achievement of one of your mentees, and what was your role in it? These questions would force mentors to think more deeply about their actions and impact, far more than a generic plan can accomplish. Furthermore, mentees’ needs vary, and this broader approach avoids a one-size-fits-all model.
To move mentoring from a haphazard, ad hoc approach with questionable impact and scalability, academic institutions should consider a more collaborative approach:
Mentoring teams. As the authors explain, one mentor cannot have all the answers to all questions. Having a diverse group of mentors from various generations and fields is pivotal for offering the needed array of career guidance and psychosocial support.
Community of practice. Mentors need the chance to learn from each other. Being one of many will enable them to share ideas, ask for guidance on how to deal with difficult situations, and find an expert in a particular field.
Recognize the bidirectional effect. While traditionally mentors have been senior faculty, mentoring is now recognized as working in all directions. Senior faculty can learn from junior members in the lab, and peers can learn from each other.
Just as science has evolved, so must our views and approaches to mentoring. To compete, be inclusive, and stop the leaky pipeline that is plaguing science, we must take a more scientific approach to mentoring.
Reading Beronda L. Montgomery, Fátima Sancheznieto, and Maria Lund Dahlberg’s essay made me want to stand up and cheer. These authors address a critical issue that threatens aspiring scientists and the very future of science, namely the lack of systematic, data-driven approaches that place mentoring the next generation of scientists on an equal footing with other required activities such as publishing and obtaining research funding.
The authors are accurate in their assertion that with the current ad hoc approach, despite the critical importance of mentoring, this activity is neither rewarded nor incentivized. In fact, performing the labor of mentoring is not only not valued, but can, and often does, have a negative impact on the typical metrics used to measure “success” for scientists. Time and effort devoted to mentoring are inherently time and effort not spent writing a manuscript, crafting a grant proposal, or brainstorming about a new research idea. The situation is even more dire in that mentoring is absolutely essential for the continued success of the scientific enterprise, particularly to build a diverse and inclusive scientific community, but the burden of mentoring often falls disparately on a subset of individuals, typically those from the very groups that the community claims to want to lift up. Without systematic training in mentoring, even well-intentioned individuals who are willing to put in the work may not have the ability to maximize the potential and impact of their effort.
Despite the critical importance of mentoring, this activity is neither rewarded nor incentivized. In fact, performing the labor of mentoring is not only not valued, but can, and often does, have a negative impact on the typical metrics used to measure “success” for scientists.
As the authors point out, there is ample research that defines mentoring best practices, but these substantial data have not been used to develop and implement mentoring programs at the institutional level, nor have they been adopted by granting agencies. In the interest of provoking action, the authors charge scientific leaders to institutionalize mentoring. For example, creating an institutional office of mentoring that provides training and measures compliance in a manner equivalent to that required for environmental health and safety would place mentoring on an even platform with other mandated activities. A major challenge is how to build in accountability. As the authors note, while mentor training can be implemented, required, or both, oversight and regulation pose a major challenge.
One of the major threats associated with poor or damaging mentoring is the continued failure to diversify the scientific research community. The authors raise the important point that scientists from groups that are historically excluded and underrepresented in STEM fields are more likely to be impacted by negative mentoring than those in majority groups. Thus, a lack of evidence-based and systematic approaches to mentoring works against a major stated goal in STEM, namely building an inclusive and diverse scientific community that is best poised to use creative approaches to tackle future scientific challenges.
Scientists apply innovative, evidence-based approaches to their research questions. These approaches need to be used in a similar manner to develop, implement, evaluate, and reward mentoring. Kudos to these authors for continuing this important conversation and suggesting actionable approaches to address this very real threat to the future of science.
Anita H. Corbett
Samuel C. Dobbs Professor of Biology and Senior Associate Dean for Research
Emory College of Arts and Sciences
A New Model for Philanthropy?
Two respected former leaders of the Defense Advanced Research Projects Agency, Regina Dugan and Kaigham J. Gabriel, have teamed up to lead a new philanthropy-funded, DARPA-like entity—Wellcome Leap. It is supported by the United Kingdom’s Wellcome Trust, an independent charity focused on health science. In “Changing the Business of Breakthroughs” (Issues, Summer 2022), Dugan and Gabriel propose this as a model for other DARPA-like entities to be funded by philanthropy.
Science policy theorists have long studied two innovation factors: the level and direction of research and development, and the talent base behind that R&D. The first focus stems from work by the economist and Nobel laureate Robert Solow, who argued that the dominant factor in economic growth was “technological and related innovation.” The second stems from work by the economist and Nobel laureate Paul Romer, who argued that “human capital engaged in research” was the vital factor behind the R&D for technological advance. These can be considered two direct factors behind innovation (as opposed to a multitude of indirect factors). However, a third direct factor, innovation organization, is less understood and has received less scrutiny. Dugan and Gabriel are, in effect, arguing for its centrality, pressing a new approach upon us.
They suggest that an era of complex technologies makes innovation organization a necessity. The lone innovator in the garage never happened; complex innovation (as opposed to examples of discovery or invention) requires a collaborative process involving a mix of skills and disciplines, putting innovation organization at a premium. This is not a new reality; it has been true since Thomas Edison’s group at Menlo Park developed the incandescent lightbulb and proposed the electrical system behind it. But getting to the right innovation organization is a minefield, littered with many inadequate models.
Dugan and Gabriel focus on DARPA, a famously successful model they know well. It has focused on taking high risks for high rewards, on breakthroughs not incremental advances, and it relies on empowered program managers to find the best research groups to implement new technology visions. They cite former DARPA program manager Dan Wattendorf’s vision that led to a critical DARPA effort a decade ago to advance mRNA vaccine technology. The model has been successful enough to have spawned successful DARPA clones, ARPA-E (for energy technologies) and IARPA (for intelligence technologies). A new clone, ARPA-Health, is now in the offing.
The lone innovator in the garage never happened; complex innovation (as opposed to examples of discovery or invention) requires a collaborative process involving a mix of skills and disciplines, putting innovation organization at a premium.
However, governments have faced challenges in creating ARPAs. Within the Department of Homeland Security, HSARPA was well-staffed at the outset by former DARPA program managers, but was never allowed to operate independently by its departmental overseers who limited its freedom of action. Other countries that have attempted an ARPA model have faced problems of locating it within an established agency, which can limit the needed entrepreneurial culture; of controlling the level of risk it can undertake; and of finding ways to link the ARPA to the scale-up efforts that must follow initial prototyping. Governments always have trouble with failure—of spending taxpayer dollars on high-risk ventures, whatever the potential rewards.
Could philanthropy be an alternative? Dugan and Gabriel suggest that it could face fewer of these restraints, citing their own Wellcome Leap effort. They argue that while governments must innovate within national borders, many technology answers, particularly in health, will be found by creating networks across borders, and philanthropy can operate internationally.
A potential problem for philanthropy is mustering the scale of funding needed. DARPA is a $3.8 billion a year agency. But how much funding do you need to make a difference? ARPA-E has shown that you can have a tenth of that funding level and spur important progress.
Also, philanthropy has been teaming up lately. Cooperation across leading foundations working on climate technologies is now widespread. Fast Grants has brought together some of Silicon Valley’s most successful—the Chan Zuckerberg Initiative, the Collinson brothers, Elon Musk, Schmidt Futures, Reid Hoffman, and others—collaborating to pool funding for projects such as a universal coronavirus vaccine.
Could there be too many DARPAs? In the 1940s IBM chairman and CEO Thomas Watson allegedly said there was a world market for about five computers. We’re now at about 2 billion and counting. Science has truly turned out to be an endless frontier that keeps building on itself; the more innovation there is, the more opportunities it creates. The DARPA innovation model has proven an unusually viable one; there seems no good reason not to bring on the clones.
As a former CEO and senior tech executive at companies such as Xerox PARC, Sun Microsystems, and Google, I have been a direct beneficiary of the DARPA model that Regina Dugan and Kaigham J. Gabriel describe. The Defense Advanced Research Projects Agency’s critical role in creating the internet is widely appreciated, but it also helped to enable many other technological revolutions, including design software for computer chips, handheld GPS receivers, speech recognition, and intelligent assistants. Google itself grew out of a DARPA-funded project at Stanford University on digital libraries. So I am a big believer in the DARPA approach of recruiting world-class technical program managers with a compelling vision, setting ambitious but measurable goals, and backing multidisciplinary teams to achieve those goals.
I am also delighted to see the growing support for the DARPA model, including the United Kingdom’s planned launch of the Advanced Research and Invention Agency (ARIA), funding from the US Congress for an ARPA for Health, and Wellcome Leap, led by Dugan and Gabriel.
I’d like to pose three questions that, if addressed, could increase the impact of these and other future ARPAs.
What takes the place of Defense Department procurement for other ARPAs?
The original DARPA has benefited from the fact that Defense Department procurement will often create markets for the technology developed by DARPA’s R&D investments. What’s the equivalent of that for other ARPAs? Will market forces be sufficient to commercialize the results of ARPA-funded research programs, or will they get stuck in the “valley of death” between the lab and the market? One possibility to explore is what economists call “demand pull” (as opposed to “technology push”) approaches. For example, DARPA’s investment in the development of mRNA vaccines was complemented by Operation Warp Speed’s advance market commitment to purchase hundreds of millions of doses of a COVID-19 vaccine from Pfizer and Moderna.
What other pressing problems would benefit from a public or private ARPA?
For example, President Obama proposed creating an ARPA for Education, and the US House of Representatives recently provided funding for a National Center for Advanced Development in Education. What other economic, societal, and scientific challenges would benefit from an ARPA? What goals might these ARPAs set, and what are examples of projects they might support to achieve those goals?
What can we learn from the original DARPA, and what experiments should new ARPAs consider?
The original DARPA has been operating for more than six decades, and I think there is more we can learn by studying the different strategies used by DARPA program managers. For example, one highly successful DARPA program was called MOSIS, which provided shared access to semiconductor fabrication services to academic researchers and start-up companies. This accelerated the pace of innovation in microelectronics by providing access to an expensive resource, allowing more people to get involved in semiconductor design. There are dozens of these DARPA strategies that new ARPA program managers should learn from. New ARPAs should also take advantage of their ability to experiment with new models, such as Wellcome Leap’s Health Breakthrough Network.
Eric Schmidt
A STEM Workforce Debate
As a labor economist and director of a research institute, I am often asked to make forecasts about economic conditions. To be honest, I often demur because to forecast the economy means that nine times out of ten you will be wrong. Forecasting occupation demand is even more fraught because a dynamic economy will end up creating and destroying jobs at such a rapid pace as to be highly unpredictable. With that background, I read Ron Hira’s “Is There Really a STEM Workforce Shortage?” (Issues, Summer 2022) with great interest. While I mostly agree with Hira’s approach, there are aspects of this question that deserve a more nuanced discussion.
I wholeheartedly agree with Hira’s critique of Bureau of Labor Statistics employment projections, and with his conclusion that “technological disruptions, and their effects on employment, are notoriously difficult to predict.” Exhibit A is the impact of COVID-19 on the labor market. In February 2020, the United States had 152.5 million people employed, and by April 2020 over 20 million were out of work. It was only in August 2022 that employment finally exceeded February 2020 levels. These kinds of employment shocks are not anticipated and are difficult to incorporate into models. Like Hira, I recommend that people take employment projections with a grain of salt.
That said, the aftermath of the COVID-19 pandemic has created an unprecedented labor shortage. According to the Bureau of Labor Statistics, as of July 2022 there are two job openings for every unemployed worker. Recent data from the Indeed Hiring Lab suggest that the shortage of STEM workers is more acute than in other fields. According to Indeed’s dataset, which calculates the percentage change in job openings by selected occupations since the labor market peak of February 2020, job postings in STEM fields were up sharply as of July 26, 2022, including for software engineers (93.7%), medical technicians (78.8%), and electrical engineers (81.1%). In contrast, jobs in business fields were up roughly half as much—in insurance (60.9%), marketing (46.9%), and management (42.3%).
Finally, while the current shortage (or lack thereof) of STEM workers may be debatable, the lack of diversity in STEM occupations is not. According to data from the National Center for Science and Engineering Statistics, in 2019 only 29% of employed scientists and engineers were female and 15% were from historically underrepresented groups. The National Science Board’s Vision 2030 plan rightly focuses on the lack of diversity in STEM education and employment.
In the long run, unless all children receive access to a high-quality K-12 education, including sufficient coursework in mathematics and science that will prepare them to participate in STEM careers, demographic trends suggest that there will be fewer STEM workers. This lack of diversity may lead to a lack of discovery. A new study in the journal PNAS shows that gender diversity in research teams generates higher impact and more novel scientific discoveries, and the same has been found for ethnic diversity. STEM education is an investment in the nation’s economic future and should be available to all students regardless of race and gender.
Donna K. Ginther
Roy A. Roberts & Regents Distinguished Professor of Economics
Director, Institute for Policy & Social Research
University of Kansas
Research Associate, National Bureau of Economic Research
Ron Hira’s article is an exercise in the giving of good advice. He is correct to advise us to demand better data that paint a fuller picture of the STEM labor market’s nuanced realities. He is also correct that we should make more honest and responsible use of the data we already have. But the advice that strikes me most strongly is the exhortation Hira leaves unwritten. Like most good advice, it can be summarized succinctly: follow the money.
Demanding better data about the STEM workforce raises the question of why we don’t have better data already. Aspiring to more truthful STEM workforce debates leads one to ask who has an interest in keeping the debates mired exactly where they are. Hira argues that we are not suffering the national STEM worker shortage our national discourse assumes; the bulk of his text is a point-by-point dismantling of the misuses of data that sustain this mistaken view. But the question of who benefits from the prevailing view is the heart of his argument, the moral undercurrent supporting his data deep-dive.
Hira’s own answer to that question is clear. He suggests that official statistics on how many STEM jobs are offshored every year would be useful, for example—and then reminds us that both the National Academy of Engineering and Congress have sought exactly this data from federal agencies, only to be thwarted by business interests. Hira recounts Microsoft president Brad Smith’s misuse of unemployment data to suggest a worker shortage in computer-related occupations, when there was in fact a surplus. He unpacks wage data to show that contrary to the higher wages a true shortage would prompt, STEM wages have been largely stagnant for years as employers increasingly meet their STEM labor needs through lower-paid contractors and the abuse of guest-worker programs, rather than through higher pay, a more diverse talent pool, and better professional development.
The larger story, then, is about commercial interests “controlling the narrative” on the meaning and role of labor, to the detriment of workers. The STEM workforce debate is just one instance of this systemic problem. For decades, the US policymaking apparatus has given itself over to an economic orthodoxy that treats labor as merely one factor of production among many, which capital is free to reshuffle, discard, downsize, lay off, or underpay as may be required to juice the bottom line. My organization, American Compass, argues that we would do better to recognize that workers are cocreators of economic value rather than merely commodities to be purchased. Hira’s argument points in a similar direction, and reminds us that more informed and constructive STEM workforce discussions will require honesty about whose interests are being served.
Chris Griswold
Policy Director
American Compass
Long out of fashion, industrial policy has come back into vogue, amid bipartisan concerns over economic and military vulnerabilities in an intensifying sphere of global competition. Underlying much of this discussion is the fear that America lacks sufficient STEM talent to carry forward its legacy of technological innovation and to maintain its lead over China. In his article, Ron Hira raises important questions about whether such concerns are supported by the facts.
Hira acknowledges that the lack of detailed data constrains more effective analysis of imbalances between the supply of and demand for STEM talent. As he points out, traditional public data only allow for analysis at the aggregate level, and typically only through a sectoral lens. Just as in any field, STEM roles differ in the skills they require and, correspondingly, in the availability of needed talent, as illustrated in Hira’s article by the contrast between life scientists and software engineers. In the same way that a sectoral lens is insufficient to analyze labor shortages for specific STEM roles, looking only at categories of STEM roles is insufficient to analyze the availability of specific skills in demand in the market. There is no single “skills gap” in the market, but rather different gaps for different skills. Overall, conferred degrees and employer demand in what the Bureau of Labor Statistics refers to as “computing and mathematics” occupations may be in balance, but the pipeline for specific talent can still be severely anemic at the level of specific roles.
This is even more the case when we consider the question of whether existing programs of study are aligned to industry demand at the skill level. For example, while universities may be conferring more than enough STEM degrees to meet demand at the categorical level, these university programs may not be teaching enough of the specific skills that are required by industry, whether those be technical skills such as cloud architecture or soft skills like teamwork and collaboration. Significant gaps between skills taught and skills sought can be as problematic as broader imbalances—but less perceptible.
The assertion that supply and demand are in balance (or that the market is possibly even glutted) also depends on the notion that supply follows demand and not the other way around. There is an argument to be made that jobs follow talent in the knowledge economy. Rather than simply filling demand for STEM roles by entering the workforce, STEM graduates can also launch enterprises, create new products, or drive innovations that ultimately create greater demand for STEM skills. Although demand is never infinitely elastic, growing the strength of the STEM talent base is likely to stimulate demand correspondingly. Simply put, if America can reassert itself as a STEM talent hub, its innovation economy will grow, spurring further demand growth.
STEM is also a field with particularly high attrition—a phenomenon the economists David J. Deming and Kadeem L. Noray study in a recent analysis, “STEM Careers and the Changing Skill Requirements of Work.” According to their article, upon graduation, applied science majors enjoy a salary premium of 44% over their non-STEM peers; ten years out, that shrinks to 14%. Because skills are replaced so quickly in STEM, workers are less likely to enjoy an experience premium: by the time they have acquired significant on-the-job experience, many of the skills they acquired during their education are no longer seen as relevant. Accordingly, many ultimately leave STEM roles in order to continue their career progression. Given these defections, a straight demand-graduate analysis could understate gaps in the market, since estimates of the number of new graduates needed to meet demand must account for the higher attrition of existing workers, not only for new jobs created.
Hira is correct that there is a need to revisit old assumptions. New, more granular, more timely data sources will afford decisionmakers a more precise awareness of the nature of current and emerging talent gaps and provide a more effective basis for action.
Matt Sigelman
President, The Burning Glass Institute
Chairman, Lightcast
Visiting Fellow, Project of Work at the Harvard Kennedy School
Ron Hira asks whether there is really a STEM workforce shortage and, while noting differences by field, largely answers no.
I largely disagree, but also think that “shortage” is the wrong way to think about whether the United States has enough scientists and engineers. Markets tend to clear. There is neither a fixed number of positions for scientists and engineers in the labor force nor a fixed number of ways to use people with that training.
The better policy issues are whether we would benefit from more scientists and engineers and whether people receiving that training have rewarding careers. With a few exceptions, there is overwhelming evidence that the answer is yes to both. Drawing on data from the National Survey of College Graduates, the National Science Foundation Science and Engineering Indicators, and the US Bureau of Labor Statistics, we find:
85% of recent science and engineering (S&E) graduates say their jobs are related to their degrees: 80% at the BS level and 97% at the PhD level, measured one to five years after degree.
Employment in S&E occupations by individuals with bachelor’s degrees and above grew by 39% between 2010 and 2019, more than five times the growth rate of the labor force.
Degree production in S&E fields grew slightly more slowly than S&E occupational employment—by 38% at the bachelor’s degree level and 30% at the PhD level.
Unemployment rates in S&E are low. Average unemployment in 2021 was 2.4% for computer and mathematical occupations; 3.3% for architectural and engineering occupations; and 2.2% for life, physical, and social science occupations.
Pay is high for recent graduates in most S&E fields—and rising. For bachelor’s degree recipients one to five years after their degree, average salary in private industry was $61,242 in 2019. This ranged from $44,910 for physical science to $76,368 for computer and mathematical science. For recent PhD holders, average salary was $115,000.
Differences in our conclusions come from different treatment of occupation data. Counts of jobs in STEM occupations should not be compared with headcounts of degree holders. Many people with bachelor’s-level STEM degrees pursue careers in other fields, such as law and medicine. Many new PhDs have student visas and may not want, or be able, to stay. Also, many S&E graduates who report that they are doing work related to their degree are not in formal S&E occupations.
I also disagree about the meaning of changes in occupational wages as an indicator of the labor market value of skills. If the average wage for PhD geoscientists in industry were to fall from $184,000, would that mean the skill is not in high demand? Would society be better off with fewer people with that skill?
Changes in occupational wages are not even a good measure of changes in the demand for skills—fast-growing occupations grow fast by bringing in people with less direct training, less education, and less experience. For this reason, the average wage rate often falls in fast-growing occupations.
There are many career-path issues worthy of policy concern, such as whether researchers in a particular field are too old when they receive their first independent grant, or whether older programmers have trouble finding new jobs. But limiting the supply of talent, either by immigration rules or education policy, is a blunt policy tool that may have little effect on such issues. And it is probably not a good deal for the scientists and engineers who do remain: rather than enhancing careers by reducing “competition,” much R&D activity would simply leave the United States or not take place at all.
Mark Regets
Senior Fellow
National Foundation for American Policy
Ron Hira’s article presents a valid challenge to the long-standing argument that there is a STEM workforce shortage in the United States and causes us to reconsider the premise of decades of STEM education policies and initiatives that are based on the “shortage” argument. Hira proposes that this argument has not only been unsubstantiated, but is based on often flawed, incomplete, and misinterpreted data.
As an African American female chemist and social scientist from a low-income, first-generation background, I see the importance of broadening participation in STEM as paramount. In order for the United States to remain competitive in a global science and technology driven economy, we must engage all of our human capital—particularly those like myself who have been historically disenfranchised and discouraged from scientific pursuits. However, as a STEM policy adviser, I am also keenly aware that STEM policy is shaped not only by data, but by public sentiment, perception, and the loudest stakeholder voices. As Hira posits, voices such as those of students who are the targets of these policies, and of workers who are the end product of these efforts, are often excluded.
Hira lays out several factors that have fed the notion of a STEM workforce shortage, and shows how they rest on limited data and on the exclusion of dynamic processes and situational caveats. He correctly asserts that employment projections, which extrapolate current trends, may be applicable to occupations with stable trends, such as the legal field, but this method is inadequate for occupations that lack stable trends, such as the computer sciences. Moreover, as Hira notes, the seemingly low unemployment rate in STEM fields, which the evidence shows is actually high once flawed comparisons with the national unemployment rate (a composite across all labor markets) are corrected, has created inaccurate estimations. Errors of this type can have regressive impacts on progressive efforts in inclusive outreach, recruiting, and hiring, as organizations rely on these rates, projections, and forecasts to formulate staffing budgets.
Overall, Hira presents a sound, well-documented argument that the decades-long perception of a STEM workforce shortage in the United States is based on unsubstantiated evidence and flawed data and is often driven by stakeholders who do not necessarily advocate for current or future US STEM workers. Hira lays the foundation for a real and transformative conversation, not only about the validity of a STEM workforce shortage, but more importantly about the implications for policy and for current and prospective US STEM professionals. That needed conversation would be greatly enhanced by examining the layered, multifaceted factors at play, informed by disaggregated data on the persistent unemployment, underemployment, wage disparities, and barriers that affect minoritized groups, such as BIPOC individuals and persons with disabilities, who are an untapped source of US STEM talent.
Iris R. Wagstaff
Founder and Executive Director
Wagstaff STEM Solutions
The C Word: Artists, Scientists, and Patients Respond to Cancer
Max Dean, The Gross Clinic, 2016. Image courtesy of Max Dean.
After being diagnosed with prostate cancer on his sixty-second birthday, Canadian multidisciplinary artist Max Dean began to explore his prognosis through his art practice. Striving to visualize the physical and psychological manifestations of his disease, Dean employed the help of animatronic figures from the Wilderness Adventure Ride at Ontario Place, an abandoned theme park in Toronto. Deeply inspired since college by Thomas Eakins’s 1875 painting Portrait of Dr. Samuel D. Gross (The Gross Clinic), which depicts Gross performing surgery on a patient’s thigh, Dean staged an operation on the ride’s moose—exploring the interrelated themes of time, aging, and illness. His process was documented by filmmaker Katherine Knight in Still Max, which premiered at the Hot Docs film festival in 2021.
A clip from the documentary is included in the exhibition The C Word: Artists, Scientists, and Patients Respond to Cancer, which provides a platform for discussing the role of art in negotiating and reimagining humanity’s complicated relationship with cancer and the process of healing. The exhibit, which opened at 850 Phoenix Biomedical Campus on April 21, 2022, is curated by Pamela Winfrey. It represents the first five years of the Arizona Cancer Evolution Center’s Art Program, a residency program that embeds artists in research labs within Arizona State University’s Biodesign Institute.
Cleaning Up Our Mess in Space
In “A Montreal Protocol for Space Junk?” (Issues, Spring 2022), Stephen J. Garber and Lisa Ruth Rand correctly recognize the challenges of pursuing remediation to remove space debris, despite the obvious utility of the technology. Remediation alone is difficult to incentivize: despite lowered costs to access space, the incentive to remove debris remains outweighed by the cost of a dedicated remediation mission.
An alternative approach is to creatively combine multiple objectives in a single mission. The companies Northrop Grumman and Intelsat recently accomplished two satellite-servicing missions that extended the operational life of, and repositioned, Intelsat satellites. Similarly, NASA is developing a spacecraft called OSAM-1 (short for On-orbit Servicing, Assembly, and Manufacturing 1) that is designed to test on-orbit refueling of satellites. OSAM-1 (formerly called Restore-L) is intended to refuel the Earth-observing satellite Landsat-7, both to extend its mission and to demonstrate a repair capability.
Mission extension is a significant driver toward servicing a satellite. There is a common tension between using limited fuel for mission extension and removing the satellite from orbit within 25 years of mission completion (known as the “25-year rule”). Private satellite operators are held accountable by regulators to meet the 25-year rule. However, operators of public goods such as NASA’s satellites are pressured to maximize the utility of their highly valued and well-utilized science missions, particularly when a replacement satellite is delayed.
The current culture focused on near-term science does not necessarily align with the concept of timely disposal. A combined mission extension and disposal mission may offer a solution to this tension. In the case of Landsat, retaining operational continuity is key to achieving science objectives. Thus, a comfortable overlap between the operational Landsat and the developing replacement Landsat mission is often desired.
Technologies for remediation, or satellite servicing, have potential applications in the public and private sectors as well as the civilian and defense sectors. However, no one entity wants to get stuck with the bill for developing a service and sustaining that service. Creative public-private partnerships that meet the needs of nongovernment entities, rather than bespoke solutions, may serve well in this situation. In this manner, the government can encourage the development of an industrial base and be one of many customers.
It is important to remember that remediation is part of a multipronged approach. Debris mitigation continues to serve space sustainability well, but has limitations. Unplanned incidents on orbit will inevitably occur. Having an alternative solution available to support those unexpected accidents is a valuable addition to the suite of technologies that will support space sustainability.
Marissa Herron
Aerospace Engineer
NASA
Stephen J. Garber and Lisa Ruth Rand make the argument that orbital debris is a form of pollution and that it is constructive to examine past efforts to address global pollution. The authors logically turn to a successful international treaty, the Montreal Protocol on Substances That Deplete the Ozone Layer, adopted in 1987.
How successful is the Montreal Protocol? The United Nations Environment Program recently reported that signatory countries, or “Parties,” have phased out 98% of ozone-depleting substances globally compared with 1990 levels. Without the Protocol, ozone depletion would have increased tenfold by 2050. On a human scale, this would have resulted in millions of additional cases of melanoma, other cancers, and eye cataracts.
The authors highlight lessons learned from the Montreal Protocol that could apply to the planet’s burgeoning space debris problem, including:
Developing consensus on the existence of the problem;
Emphasizing government-led international collaboration to find solutions;
Devising incentives or financial assistance (“carrots”) for developing countries and punitive measures (“sticks”) for developed countries;
Evolving regulatory flexibility to align with new discoveries; and
Emphasizing the risks posed by inaction.
Emphasizing the risks of inaction is key. What would happen if Earth’s orbital regime reaches a point of no return? While there appears to be a consensus that orbital debris proliferation is a problem, we need to do more to broaden awareness. The value of space extends to all countries. Even those countries without operational satellites will benefit from space services such as increased connectivity, geolocation capabilities, and access to satellite imagery.
Over 50 countries now own and operate space assets. However, equity in space assets is not distributed evenly. Citigroup recently estimated that the space economy would generate over $1 trillion in annual sales by 2040, up from around $370 billion in 2020, but wealthier spacefaring countries are the most invested and stand to benefit the most from the expanding space economy.
Thirty-five years ago, the architects of the Montreal Protocol navigated a dire situation—a thinning ozone layer—and planned a course of action that addressed a diverse range of stakeholders with varying degrees of resources. Now the planet is facing a dangerously congested orbital environment. But the financial consequences will not be felt equally across the planet. The higher-income world has more “skin in the game,” or equity in space-based assets, and therefore more to lose if a worsening space debris cascade threatens the long-term viability of satellites. Following the spirit of the Montreal Protocol, more affluent spacefaring countries should lead while the smaller, less-invested countries should be given incentives to follow as debris mitigation policies continue to take shape.
As the ecologist Garrett Hardin noted over 50 years ago, “Freedom in a commons brings ruin to all.” If we are on the brink of a tragedy of the commons in space, now is the time for the space sector to learn from successful international cooperation efforts—and the Montreal Protocol provides a shining example.
Karen L. Jones
Space Policy Economist and Technology Strategist
The Aerospace Corporation
Stephen J. Garber and Lisa Ruth Rand provide a well-supported article on the extent of space debris and on the potential application of the Montreal Protocol, an international agreement for controlling terrestrial pollution, to controlling the risk of collisions with debris in space. They eloquently identify the three key aspects of controlling the debris population: debris mitigation, space traffic management, and debris remediation. I would like to focus on a specific attribute of the Montreal Protocol they discuss, and on the most critical, and most difficult, means of managing debris growth: debris remediation.
Debris remediation is primarily the act of removing massive derelict objects (e.g., abandoned rocket bodies and nonoperational payloads) from orbit to eliminate the possibility of future massive debris-generating collision events. A paper completed in 2019 by 19 scientists from around the world identified the top 50 statistically most concerning objects in low Earth orbit (LEO). Leading the list were 18 SL-16 rocket bodies launched by Russia primarily in the 1990s and left in a tight 40-kilometer-wide band centered around 840 km altitude, where they routinely cross orbital paths with each other, debris from the 2007 Chinese antisatellite test, and defunct US payloads and debris related to their demise. This combination has created a uniquely bad neighborhood in LEO. These objects were deposited primarily by the three major space powers—the United States, China, and the Russian Federation—before the turn of the century. For perspective, if two SL-16s were to collide, this event could singlehandedly double the debris population in LEO (i.e., add up to 15,000 large fragments).
Garber and Rand call for leadership. Indeed, leadership is critical to controlling the debris population in LEO and catalyzing a debris remediation industry. Government investment catalyzed the now largely commercial fields of space-based Earth imagery, global space-based communications, and satellite launch; likewise, the development and deployment of debris remediation solutions cannot be borne solely by emerging commercial ventures. Rather, an investment by the government entities that are responsible for the decades of debris deposited on orbit is needed.
Darren McKnight
Senior Technical Fellow
LeoLabs
Understanding Noise in Human Judgments
It was a pleasure to read the interview with Daniel Kahneman, “Try to Design an Approach to Making a Judgment” (Issues, Spring 2022). Kahneman is a world leader in research on attention and decisionmaking, in addition to his many other contributions to psychology and economics. The interview conveys that expertise and is very helpful in understanding the extent and danger of variability in human judgment.
However, there seems to me to be too strong an implication in the interview that we would be better off if everyone came to roughly the same decisions. In the cited case of insurance actuaries, a 10% variance seems tolerable, not the 50% actually found. The assumption that variability is bad is clarified somewhat in the interview as Kahneman discusses how it may aid creative problem solving by allowing diverse opinions.
To view variability as inherently bad seems to me a judgment error of the type Kahneman has discovered and identified in other situations. Even in the justice system, the effort to impose common minimum sentences for crimes may have reduced variability and increased equality, but it has also had some very bad consequences, including filling the prison system.
If we all thought more or less alike, it seems it would be a more just world, but as a species would we be better off? At first it seems that on some issues where the “correct decision” has expert consensus, such as vaccination for COVID-19 or climate change, we would be. However, in evolutionary biology variation is celebrated as insurance against some common flaw annihilating the whole species. I have been planting conifers along the river that flows through my forest. They are mixed conifers, as I was warned that planting all fir trees might be bad because a single predator or disease might cause them all to die.
Nearly all scientists say society needs to quit using fossil fuels because of the atmospheric warming they cause. However, can they really be sure that no unknown planetary adaptation might reduce the warming effect? Even though vaccination for COVID-19 has been successful, it is still possible that long-term harm from vaccination will come to some people. In many cases, expert opinion can vary greatly partly because the decision (as in the case of insurance actuaries) depends upon many somewhat separate facets. Kahneman suggests that it might be better to have multiple experts independently rate the importance of the many factors involved so that we could be more certain that all factors are taken into consideration.
Kahneman has identified an important aspect of human decisionmaking and points also to our lack of awareness of human variability. However, Kahneman and biology tell us that variability can be a good thing for the species at least in some cases, even when we think it unfair.
Michael I. Posner
Professor of Psychology, Emeritus
University of Oregon
I have a reinforcing story to tell relating to the Issues interview with Daniel Kahneman. Among many interesting observations, Kahneman points out how unreliable job interview judgments are, largely due to cognitive shortcuts and biases—what he calls “noise”—that shape, and sometimes misshape, human decisions.
While a member of a large scientific R&D institution (IBM Research) for 25 years, I had the opportunity (and job requirement) of interviewing many dozens of candidates for PhD-level research positions. At one point in my career, I had to move offices and clean out my file cabinets.
Coming across 15 years’ worth of my records of recommendations based on these job interviews, I was able to make a subjective evaluation of my own opinions, since a good many of the candidates had started their careers at my own institution or at other places where I was able to follow their progress. Aside from the outliers, the very best and the worst, I was humbled by the randomness of my decisions: there was little correlation between my interview judgments and the candidates’ subsequent success. I even misjudged a future Nobel laureate.
As a result, I did change my interview style to a more structured format, and I claim some subjective improvement, though mostly not for PhD-level candidates.
Marc H. Brodsky
Making Space for Technology Governance
In “Imagining Governance of Emerging Technologies” (Issues, Spring 2022), Debra Mathews, Rachel Fabi, and Anaeze Offodile II outline the approach of the National Academy of Medicine’s Committee on Emerging Science, Technology, and Innovation (CESTI) for anticipating broad societal impacts from emerging technologies prior to their large-scale deployment and use. In full disclosure, I was involved in developing the figure, “Principles for the Governance of Emerging Technologies,” used in the article. I helped draft its first iteration for CESTI review and further development. I believe it provides a useful guide for the more holistic assessment of emerging technologies, their potential societal impacts, and procedural and substantive policy dimensions for technological governance. I am also impressed by CESTI’s use of narratives and scenarios to explore impacts and normative dimensions upstream of technology design, deployment, and use. I commend the committee for its efforts.
Unfortunately, as a society, we are far behind in the use of such diagrams and approaches for responsible research and innovation on emerging technologies, many of which are accompanied by significant uncertainties about their impacts and important normative questions. The case the authors describe in their article, transcranial direct current stimulation and its use for medicine and enhancement without US regulatory approval or governance of its social and ethical issues, illustrates the current inadequacies of technology governance. The elephant in the room is, how can we remedy this deficit? On that question, Mathews and coauthors offer limited discussion. Models such as those proposed by CESTI are prevalent in the social science, science and technology studies, and policy literatures, but they will not have impact until policy spaces to implement them exist.
In my observations, the root cause for our inattention to the ethical and societal dimensions of emerging technologies, as well as the lack of policy spaces in which to consider them, is the lack of political will. We live in a society that is technologically optimistic, dominated by a capitalistic paradigm for technology governance. The predominant narrative is that social good is equivalent to technology development and adoption. With technology development comes capital, jobs, and economic growth. Regulations for safety, let alone considering the social and ethical dimensions or engaging diverse publics, are seen as getting in the way of these primary goals. Power for decisionmaking is concentrated in the industries developing the technology and regulators whose hands are tied by limited legal authorities and pressure from the US Congress to not overregulate (which in turn comes from industry lobbying). Technology governance takes place at the stage of regulation, and largely (almost exclusively) between the product developer and these constrained regulators.
Currently, spaces for the broader analysis and governance proposed by CESTI, and reported by Mathews and coauthors, are woefully lacking. It is imperative that these policy spaces be independently run; include voices of diverse and affected publics; and come with teeth—the ability to constrain or enable technologies—in order to execute the vision for more robust, responsible, and equitable futures with technology. We should turn our attention toward the creation of those spaces, including ways to overcome the political and economic forces, power structures, and strong techno-optimistic narratives that prevent their existence. Yet this is no easy task.
Jennifer Kuzma
Goodnight-NCGSK Foundation Distinguished Professor in the Social Sciences
Codirector, Genetic Engineering and Society Center
North Carolina State University
Debra Mathews, Rachel Fabi, and Anaeze Offodile II outline a systematic methodology developed by the National Academy of Medicine to inform a novel governance framework for disruptive technologies. The authors note that “fostering the socially beneficial development of such technologies while also mitigating risks will require a governance ecosystem that cuts across sectors and disciplinary silos and solicits and addresses the concerns of many stakeholders.”
The governance framework, however, does not adequately address the role of risk mitigation in the governance process. I propose that the Academy’s Committee on Emerging Science, Technology, and Innovation include risk mitigation in its next round of study of policy tools for governing emerging technologies, so that innovation risks are identified and managed to deliver high-quality, safe patient care.
Advances in patient care involve a learning curve with new potential sources of harm and unintended consequences. For example, data used to “train” technology enabled by artificial intelligence may not be sufficiently diverse, resulting in algorithmic bias that adversely affects certain patients. Risk assessment and mitigation thus should begin in the innovation sandbox and continue through each stage of the product lifecycle to identify, analyze, and control risks. A health care organization’s innovation risk appetite (the amount of risk an organization is willing to accept to achieve its objectives) and risk tolerance (the acceptable deviation from the organization’s risk appetite) should be incorporated into its enterprise risk management program. Since the introduction of emerging technologies presents strategic, operational, and patient safety exposures to the health system, innovation risk also should be included in the governing board’s risk agenda, consistent with the board’s oversight responsibility.
Critical risks relating to emerging technologies are many and varied, including:
Lack of integration with the patient’s electronic health record resulting in gaps in clinician documentation that could negatively affect diagnosis and treatment decisions;
Injury to patients and medical malpractice liability exposure resulting from an evolving standard of care;
Problems with the accuracy, integrity, and/or completeness of data or information utilized in the development of technologies;
Vulnerabilities that may result in data privacy breaches and/or security incidents;
Failure to appropriately determine data ownership, rights, and permitted uses in codeveloped intellectual property that adversely affects a codeveloper’s right to use, share, or sell the technology or information generated by it;
Disproportionate allocation of contractual rights, responsibilities, and liabilities among all stakeholders throughout the entire development and deployment lifecycle;
Inadequate insurance coverage for a product’s deficiencies when used by the organization during the development period and when marketed to third parties;
Violations of federal and/or state fraud and abuse laws, such as when the technology could influence a provider’s referral decisions;
Uncertain and inconsistent legal and regulatory requirements that may result in litigation, imposition of monetary penalties, or administrative agency action;
Financial risk due to unclear reimbursement rules and policies that may affect achievement of economic objectives; and
Reputational risk, such as ethical violations.
Since the dynamic nature and accelerated pace of tech-driven innovation carries inherent risks throughout the entire product lifecycle, it is prudent to include risk mitigation as a policy tool for the governance of emerging technologies.
Elisabeth Belmont
Corporate Counsel
MaineHealth
A Fresh Look at Power Thieves
One of my most disturbing childhood memories of growing up in Peru is the blackouts (apagones). Provoked by the Maoist group Shining Path as part of its strategy to seize power, the sudden lack of electricity marked (and shaped) the lives of an entire generation of Peruvians, who had to learn how to manage and repair everyday objects while protecting them from damage once power returned. That close interaction with electricity (as well as its absence) in the 1980s was present in my mind while reading Diana Montaño’s fascinating account, “Electrifying Agents and the Power Thieves of Mexico City” (Issues, Spring 2022), of electricity thieves and the rise of an infrastructure in Mexico City a century ago.
Montaño has chosen an unusual path to understanding the complex relationship between individuals and systems: those who broke the law by procuring access to electric light. By focusing on marginal subjects, the author sheds new light on technological systems. They are no longer abstract “black boxes,” but a set of artifacts that users (and potential users) aimed to understand, intervene with, and incorporate into their daily lives. One way to do this was to circumvent the formal requirements demanded by the company and gain direct access to this source of energy and prestige.
As the essay reveals, the expansion of electricity in Mexico City and elsewhere ignited the emergence of new practices. Stealing electricity was one of them, a grassroots expertise that also involved technicians and police agents. Montaño makes an excellent point in suggesting that thieves should be considered as part of the human network created by this innovation. Why not? After all, thieves (and hackers, by extension) belong to a long lineage of underrepresented actors and deserve particular attention beyond the legal/illegal judicial dichotomy.
The role of “users” in the coproduction of technoscientific knowledge constituted a crucial debate a few years ago in the field of science and technology studies, and particularly in the SCOT (Social Construction of Technology) approach. Even though the debate seems to be over, it is crucial to return to these foundational exchanges and discuss them with new approaches and case studies, such as Montaño’s own book, Electrifying Mexico, along with other works on the history of electricity, such as the historian Willie Hiatt’s current project on Peruvian apagones, or anthropologist Antina von Schnitzler’s book, Democracy’s Infrastructure: Techno-Politics and Protest after Apartheid, on local resistance to prepaid meters in South Africa. Hence, “stealing” is just another action from a broader repertoire of “moral economy” invoked by those who reclaimed their own right to gain access to certain technologies or to make them available to a larger group.
Most recently, historians of technology have made an important effort to incorporate narratives from overlooked groups as well as from areas beyond the Global North. In doing this, the field has gained a better and more nuanced perspective on how infrastructure interacted with human agents in the past and the legacies of such interventions. Only by expanding our repertoire of actors, places, and practices will we reach a comprehensive understanding of the multiple meanings of technology and the impact it has, and continues to have, on ordinary individuals.
José Ragas
Assistant Professor
Pontificia Universidad Católica de Chile
Diana Montaño highlights the importance of ordinary people’s experiences, practices, and expectations in the making of what she terms the electricscape. By showing how people not only made sense of new energy technology as it arrived (in a top-down model of technological diffusion), but also worked—and made trouble—with it in ways that were socially and culturally specific to Mexico City, Montaño reminds us to attend to a wider range of sites in which energy systems are shaped, extended, contested, and sabotaged.
Of particular interest is her focus on the ladrones de luz (power thieves). In her engaging narration of the problems raised by electric theft, Montaño points us to transgression as an entry point for understanding how energy systems function, and where they break down. The ladrones de luz show us how the theft of electricity has profoundly shaped the way we use, regulate, police, and sell energy. What Montaño identifies as an electrical script is not a product of consensus or cohesion, but rather can be read for conflicts, disagreements, and disparities of power. The electrical script defined by the power companies was not only defied by the residents of Mexico City; that defiance alerts us to the fact that capitalinos (authorized and unauthorized users) developed their own electrical script. The process of negotiating that gap shaped everyday practices and produced new legal precedents. The cases remind us how novel the problems presented by new kinds of energy can be.
Montaño’s account is filled with the rich texture of everyday life of the Mexican capital, but its insights reach far beyond the history of the city, or even the history of Mexico. At the heart of Montaño’s research is a challenge to historians of the countries that exported electric technology, finance capital, and expertise—places such as the United States and Canada, for example—to recognize that technological diffusion was not a one-way process, that capitalinos did not simply accept electricity as it was presented to them, but made it their own. What impact did the contestation over the electricscape of Mexico City, including the struggle to set the boundary between licit and illicit use of electricity, have beyond Mexico? What would it look like to bring this history into studies of the exporting countries? We might look to the processes of investment, scientific and engineering collaboration, and management to follow the impact on the companies. Equally, we might trace how migrants took their electrical scripts with them, from Mexico to elsewhere, or elsewhere to Mexico City.
Finally, I would like to briefly comment on the contemporary implications of this research. If we recognize that the legal regimes governing the distribution and appropriate use of electricity—including the definition of electric theft—grew from earlier moments of energy transition, we should consider how a similar process might play out with the efforts to decarbonize the world’s energy systems and what kinds of reimagination decarbonization might demand. How, for example, would decentralization and democratization of energy production and distribution reshape our electrical scripts, including around questions of theft? Are concepts of property and theft even the best ones we might use to pursue fairness and justice in the electricscape yet to come?
Trish Kahle
Assistant Professor
Georgetown University in Qatar
In hurricane-prone areas, everyone checks their electric power supplies before a hurricane’s landfall. Recently, days before Hurricane Ian, Floridians like me scrambled to assess how much electricity their families would need for water pumping and filtering and to run electronics, among other tasks. Then we charged our power banks, batteries, and solar panels, preparing for days without power. People today are acutely aware that access to electricity is an essential part of modern life, and some may die without it. So I sympathize with Tomás Sánchez, owner of the San Antonio mill in Diana Montaño’s history, who said in court records that he needed steady access to electricity to operate his factory or it would perish, although he did not always wish to pay for this access.
Montaño’s engaging story of power thieves is a reminder that the invisible flow of electric charge changed lifestyles around the globe. Beginning in the late nineteenth century, Latin American leaders directly funded or granted permission to private companies to build electric power plants as part of a modernization drive. The aesthetic of a civilized and orderly society was to have electric streetlights and lighted buildings. Electricity was vital to infrastructure, industrial development, and military preparedness. And it implied rising social status, offering life-changing opportunities to individuals who could afford a monthly contract to light their homes during the night and dawn hours. By the early twentieth century, as the scholars Abigail Harrison Moore and Graeme Gooday describe, electricity was common in public areas, and designers invented decorative electricity “to produce a new style that would meet both the technological and aesthetic demands of what the Art Journal (December 1901) referred to as ‘the greatest revolution to antiquated customs and appliances … electricity.’”
By the mid-twentieth century, the Mexican power thieves were not alone in stealing electricity from public lines. For instance, in Quarto de Despejo (1960), Carolina Maria de Jesus, a woman living in a São Paulo favela (shantytown) with her three children, mentions having electricity hooked up to her shack. Having light permitted her to write, read, and relax in the evenings. By the 1970s, gatos (illegal wire hookups) became commonplace in the Rocinha favela in Rio de Janeiro, with the donos do morro (owners of the hill) supplying utility services to residents, including through the illegal hookups, as reported in RioOnWatch.
As other discussants have suggested about Montaño’s case study of ladrones de luz, electricity could be widely available worldwide. However, institutions create unfair and inequitable systems of distribution, making it costly to access electricity, even though transferring and managing this energy source rests on basic knowledge. Montaño shows that historically, consumers have never been passive actors and will ingeniously discover ways to obtain needed energy for low or no cost. Since the first use of electricity in homes, people have continuously created new uses for it or expanded on its original designs. People today can live “off the grid,” using little more than a relatively inexpensive foldable solar panel system and portable power station. Like the historical characters, these individuals have made electricity their own.