In “What Fossil Preparators Can Teach Us About More Inclusive Science” (Issues, Fall 2021), Caitlin Donahue Wylie makes a compelling argument that fossil preparators can teach us a great deal about how to make science more diverse and inclusive. The study of paleontology rests almost entirely on the examination of physical specimens that survived the wreckage of time. But these rare, fragile, and valuable objects do not speak for themselves. A great deal of highly skilled labor is required to make them amenable to expert examination. That is the job of the fossil preparator.
The work of preparing paleontological specimens requires not only painstaking attention to detail. It also involves profound judgments about which parts of a fossil to highlight, and which aspects to leave in the dark. This is highly creative work, which is why preparators often compare themselves to sculptors and artists.
Given its far-reaching importance, why is the work of paleontological laboratory technicians so rarely acknowledged? As I tried to suggest in my book, Assembling the Dinosaur, their invisibility largely stems from a problem of trust. The hierarchical structure of today’s paleontological community took shape during the late nineteenth century. This period in American history is often described as the “first” Gilded Age, and it resembled our own time in many respects. An especially salient point of comparison is that high levels of economic inequality and political unrest created a deeply fractured society. As a result, scientists could not take it for granted that their ideas would be accepted among a divided public.
At the same time, paleontologists working for civic museums were charged with producing exhibits to inculcate moral lessons of right living and appropriate conduct. These exhibits often invoked a highly speculative narrative of evolutionary progress to naturalize, and thus justify, the economically stratified and white supremacist social order that prevailed at the time. They also featured large and imposing fossils designed to appeal to a mass popular audience. However, to succeed in their pedagogical function, museum exhibits had to impress visitors as a trustworthy depiction of the way life evolved over time. They did so by stressing the idea that fossils provide a direct link to the deep past, an objective window through which visitors could observe a bygone world in which “‘Nature red in tooth and claw’ had lost none of her primitive savagery,” as a visitor’s guide from the American Museum of Natural History put it in 1911.
To bolster the idea that paleontology offered an unmediated account of the deep past, museums made a strategic decision to downplay the creative work of fossil preparators. As the paleontologist and museum director Henry Fairfield Osborn argued, the American Museum was “scrupulously careful not to present theories or hypotheses, but to present facts” to its visitors. The profound role that fossil preparators played in shaping paleontological specimens threatened to undermine this claim, and with it, the museum’s ability to discipline an unruly working class.
Wylie is right to insist that we should celebrate the essential work done by fossil preparators, who tend to make up a far more diverse community than research scientists. This should also occasion a much broader discussion about the politics of knowledge. The call to diversify science raises foundational questions about whom the scientific community entrusts with the work of producing authoritative claims about nature. These are old questions. It is about time that we came up with new answers!
Lukas Rieppel
Associate Professor of History
Brown University
Caitlin Donahue Wylie raises several important issues worth in-depth exploration, especially with respect to the role of the fossil preparator within the broader discipline of paleontology.
During my 20 years as a practitioner of fossil preparation, my perspective on the nature of mechanical and chemical interventions in fossil specimens evolved along with my understanding of how other paleontologists interacted with these objects. At first, I accepted the characterization of the role as a “technician.” As I became more aware of the downstream impacts of my decisionmaking on the integrity of the research process, I began to shy away from this historical label and instead came to view myself as an active member of the scientific research team.
As Wylie noted, preparators frequently describe their work as artistic, eschewing the title of paleontologist. Some will say that they are “more an artist than scientist,” often paraphrasing the quote attributed to Michelangelo about the sculpting process: “I simply remove from the marble that which is not David.” This self-effacing approach to explaining the work of fossil preparation may insulate the preparator (and audience) from a more uncomfortable examination of their true role (especially as it relates to institutional status, recognition, and compensation), but it also glosses over the underlying joke behind the quote: that Michelangelo was one of the most brilliant artists in history. Is every preparator producing genius-level interpretations of the fossil record? Of course not. But many paleontological insights depended not on just any fossil preparator, but on a specific fossil preparator.
According to Wylie, preparators use “skill and judgment to prepare specimens instead of following top-down instructions from scientists or carrying out predetermined protocols.” Like our apocryphal sculptor, they are not merely removing excess material to achieve a prescribed result. Preparators are combining expertise in multiple domains (geology, biology, chemistry, objects conservation) with skilled mechanical aptitude for handling tools and materials to determine what does and does not constitute scientific data. To be sure, a great deal of preparation requires only basic “cleaning” to expose data, and falls into the historical classification of preparators as technicians. More complex projects require a suite of analytical processes, comparative collections research, and literature review to determine what material to remove or preserve.
In 1804 the paleontologist Georges Cuvier gathered an audience to witness a demonstration. Cuvier presented a partially exposed fossil of a previously unknown animal and anticipated the appearance of the pelvic bones still buried in the rock matrix. He proceeded to remove the matrix using a sharpened steel needle to confirm his prediction. The act of preparation was the test of his hypothesis, the paleontological experiment. I contend that most preparators are actually more scientists than artists, even if they do not recognize or admit it. The notion of preparators-as-paleontologists remains controversial in some pockets of the field, but I believe that Wylie’s research supports this perspective.
Caitlin Donahue Wylie argues that fossil preparators—the lab technicians who painstakingly remove rock from fossils—perform “significant physical and epistemic processing” to make fossils accessible to researchers. The work they do requires great skill and concentration, specialized knowledge, a steady hand, creativity, problem-solving abilities, and monastic patience.
Yet their contributions to the science of paleontology, though essential, are often underappreciated and even entirely unacknowledged. Many preparators are also unpaid. Visitors might be surprised to learn that the fossil preparation labs in many American science museums, including some of the largest and wealthiest, are staffed by legions of hardworking volunteers. Likewise, the handful of paid preparators who train and supervise these volunteers are often working for low wages, in term positions, or both. There is no standardized training for fossil preparators, nor are there regular degree requirements. Preparators learn their craft through experience, apprentice-style. They come from all walks of life: artists, mechanics, woodworkers, and so on. Are you good at puzzles? Then you can learn to be a fossil preparator too.
As Wylie points out, there are advantages to this arrangement. The informal training that fossil preparators receive “dismantles the barriers” to participation in paleontology that are imposed by, for example, stringent degree requirements. Fewer obstacles mean more opportunities to participate, which leads to greater inclusiveness. The success of the citizen scientist movement shows that people without science degrees—and this includes many (most?) fossil preparators—can nevertheless greatly impact the production of scientific knowledge through the contribution of their skilled labor. At the same time, broader participation by volunteers and citizen scientists “can help inspire greater public trust in science,” Wylie adds. I think all these claims are probably true.
Still, I can’t help wondering whether the science of paleontology and, more especially, the fossil preparators themselves wouldn’t also be well-served by reforming some of the ways that museums manage the business of paleontology. For example, would greater professionalization, including more standardized training (or even a degree requirement or a certification program), better pay, and more job stability ultimately yield higher-quality fossil preparation? Would this, in turn, lead to greater recognition or higher status positions with less turnover? Ask a paleontologist and they will no doubt acknowledge the essential nature of the lab work that preparators do on behalf of their science. But are they doing enough to help elevate the status of their essential coworkers? Maybe it’s time to start seeing fossil preparators as scientists too.
Fossil preparators in their glass-walled labs are working on the front lines of paleontology. They are often taken for paleontologists. Indeed, they are paleontologists as far as the museum-going public is concerned. Maybe it would be better practice to give them the kind of regular and rigorous training, job stability, and status that their back-of-the-house colleagues enjoy. Higher salaries would no doubt be appreciated as well.
Paul D. Brinkman
North Carolina Museum of Natural Sciences
Caitlin Donahue Wylie’s article is an important reminder of the wide variety of skilled work that goes into scientific discovery. Giving readers the opportunity to learn about fossil preparators in action, she shows that manual labor can be simultaneously intellectual labor, and that what we’ve been taught to consider brain work can’t exist without good hands and a good eye.
Wylie asks us to value the work of preparators because the scientists they work with don’t. Scientists dismiss preparators’ work as mere “cleaning” and render them invisible in scientific publications. These practices are the opposite of inclusive, as Wylie acknowledges. Indeed, they betray a deeply entrenched hierarchy in the sciences, one that elevates people with advanced degrees and marginalizes those without, no matter how necessary or substantive their contributions to the collective work of making knowledge.
Making science more inclusive would seem to require dismantling this hierarchy. Bringing more people without scientific credentials into practices of inquiry, which Wylie holds up as a goal, will hardly result in “more inclusive science” as long as they are regarded as cleaners rather than scientists in their own right. Their “perspectives, creativity, and skills” will not enrich science—at least not to the extent that they could—until scientists and scientific institutions learn to respect uncredentialed workers enough to actually recognize their creativity and skill.
Increasing public trust in science—another of Wylie’s central concerns, and rightly so—also requires rethinking the hierarchy. Mistrust, I would venture, is not based on misunderstanding of how science proceeds or an inability of people without advanced degrees to see themselves as scientists, as Wylie’s argument might suggest. Rather, there is reason to think that people without scientific credentials understand all too well that scientists hold them in low regard. They would not be at all surprised to see fossil preparators being dismissed and disdained by their scientist colleagues. Identifying with preparators would give the public very little reason to change their opinion of science—unless perhaps they saw the skills and experience of preparators become valued to the same extent that those of credentialed scientists are.
Wylie’s study of fossil preparators does a great service to the public conversation on science, by revealing the disparities in how the culture and institutions of science value their diverse workers. In contrast to her optimistic message, however, I contend that part of what fossil preparators can teach us is that until scientific institutions change to promote equality between all contributors to science, inclusion is likely to seem hollow, and trust is too much to ask.
Gwen Ottinger
Associate Professor
Department of Politics and Center for Science, Technology, and Society
Drexel University
Caitlin Donahue Wylie provides fascinating insights from her fieldwork on the practices of fossil preparation and makes useful suggestions about citizen science more generally.
While preparators spend many hours and deploy dexterity and various skills to prepare fossils, their work is often invisible to the public. It is rarely acknowledged and credited in publications or exhibitions, and within institutions it is often pejoratively called “cleaning.” Wylie’s work makes visible what lies behind the scenes of museums. She thereby contributes to the scholarly debate on the invisibility of work—with scholars having pointed to the problem of “invisible technicians” (Steven Shapin), “invisible work” (Susan Leigh Star and Anselm Strauss), and, more recently, the “in/visibilities” of maintenance and repair work (Jérôme Denis and David Pontille).
But her work does more than make visible the careful and time-consuming practices of preparators. It also shows the creativity of this work. She demonstrates that preparation does not mean just cleaning rocks, but that it requires creative and complex skills and sometimes difficult decisions. I would argue that Wylie shows that preparators’ work is also “ontological,” in that it transforms the status of objects. This transformation is both physical and conceptual. Preparators turn natural objects into “working objects”—that is, objects that are not “raw” nature but the materials from which concepts are formed and stories can be told.
In her article, Wylie also moves beyond her passion and expertise in fossil preparation to make some recommendations for making science more inclusive, for example by undertaking outreach efforts, opening up data preparation to citizens, and recognizing the value of skills rather than credentials. This raises a number of important questions. For instance, how is fossil preparation similar to, or different from, other fields of natural history (including botany, zoology, entomology, and ornithology) that have a long track record of being relatively open to amateurs? And how does this difference play out compared with more recent fields such as DIY biology, DIY medicine, or popular epidemiology?
We have seen DIY biologists working with pharmaceutical companies on large data sets on cancer, and developing biosensors to measure the pollution of canals and give communities real-time access to data. We have seen people discovering exoplanets and asteroids through data openly provided by NASA. We have seen patient organizations producing knowledge on rare diseases that companies and doctors knew almost nothing about. We have, in sum, observed citizens collecting, using, analyzing, and publicizing data. “Preparing” is the term that Wylie adds to this list of verbs—a verb both empirically grounded and theoretically fertile.
Wylie concludes by arguing that participation and engagement efforts might “inspire greater public trust in science.” Another crucial benefit could be that such efforts make scientists trust the public more.
Morgan Meyer
Director of Research
Paris Sciences et Lettres University, CNRS
Episode 7: Shaky Science in the Courtroom
Eyewitness testimony and forensic science are forms of evidence frequently relied upon in criminal cases. But over the past few decades DNA analysis—and the exonerations it has prompted—has revealed how flawed these types of evidence can be. According to the Innocence Project, mistaken eyewitness identifications played a role in about 70% of convictions that were ultimately overturned through DNA testing, and misapplied forensic science was found in nearly half of these cases.
In this episode we speak with Jed Rakoff, senior US district judge for the Southern District of New York. Judge Rakoff discusses the weaknesses in eyewitness identification and forensic science and offers thoughts on how judges, policymakers, and others can reform the use of these methods and get stronger science into the courtroom.
Rakoff is the author of the 2021 book Why the Innocent Plead Guilty and the Guilty Go Free: And Other Paradoxes of Our Broken Legal System. He cochaired the National Academies committee that wrote the 2014 report Identifying the Culprit: Assessing Eyewitness Identification, and served on the National Commission on Forensic Science from 2013 to 2017.
In 2000, Stephen Breyer, associate justice of the Supreme Court, wrote for Issues about the role of science in the justice system in “Science in the Courtroom.”
Transcript
Frueh: Welcome to The Ongoing Transformation, a podcast from Issues in Science and Technology. Issues is a quarterly journal published by the National Academies of Sciences, Engineering, and Medicine and Arizona State University. I’m Sara Frueh, consulting editor for the journal. I’m joined today by Judge Jed Rakoff to talk about flaws in forensic science and eyewitness identification, their use in the courtroom, and what can be done to address these problems. Judge Rakoff is a senior US district judge for the Southern District of New York. He co-chaired a National Academies committee that issued a landmark report on eyewitness identification in 2014, and his recent book is called Why the Innocent Plead Guilty and the Guilty Go Free: And Other Paradoxes of Our Broken Legal System.
Frueh: Welcome, Judge Rakoff, it’s great to have you with us.
Rakoff: It’s my pleasure, and thank you so much for inviting me.
Frueh: I’d like to start by talking about eyewitness identifications. People and juries often assume that if a witness in a criminal case says, “I was there, I saw that person commit the crime, I’ll never forget his face,” that it’s reliable. But several years ago, you led a study that found that these identifications often aren’t as reliable as people think. Why is that?
Rakoff: First of all, in the words of the Innocence Project, inaccurate eyewitness identification is so common that it’s the single greatest factor in the wrongful convictions that, through DNA, have led to exonerations. The Innocence Project has exonerated, to date, almost 400 people. These were people who were convicted beyond a reasonable doubt of very serious crimes—murder, rape, things like that—and yet were eventually proven to be absolutely innocent. And in over 70% of those cases, eyewitness identification evidence was introduced. In one of the earliest cases, the case involving Kirk Bloodsworth, a murder-rape case—terrible crime—no fewer than five eyewitnesses said they had seen him commit the crime or had seen him fleeing from the scene of the crime. And they were all wrong. Eventually, the DNA testing of the semen from the victim proved that it was someone else, who later confessed, but not until Mr. Bloodsworth had spent nine years in prison, [two of those years] on death row, I might add.
So it is a real conundrum for the legal system because, as you point out, this is powerful evidence. It’s not only evidence that’s frequent, particularly in state crimes, but it’s quite powerful. The eyewitness typically has no motive to lie. The eyewitness is usually a responsible citizen. The eyewitness is someone who, by the time a case goes to trial, or even when it just goes to a hearing before a judge, or even before that, when he or she is talking to the prosecutor, has become quite sure that he has identified the right guy. Juries naturally believe them. So why are they wrong? Well, sometimes they’re wrong for reasons that are understandable to a jury: bad lighting, the fact that the guy was carrying a weapon so that the eyes of the witness were focused on it, the fact that the eyewitness’s own view was obscured.
Unfortunately, that’s just the tip of the iceberg. There are all sorts of things going on in the human perception equipment and the human memory that jurors are not aware of, and that can make for false identification. Just to give you two examples: one is the racial effect. People of one race are much better at perceiving and remembering the fine facial features of someone of their own race than someone of a different race. There is some controversy over why this is so, but there’s no doubt that it is so. Another more subtle problem, and one I think you see frequently in the exoneration cases, is where two different memories have merged together—unconsciously, but in a very real way. And the way this works, for example, is you see the culprit committing the crime, and maybe you see it for a minute or two. And if one could take a photo of the inside of your brain at that point, one would see that you only had a very fuzzy perception of what the guy looked like.
But three hours later, you’re shown a photo array, seven photos. And if it’s done properly, you’re told, “No one here may be the guy, but if there is someone who is the guy, let us know.” You’re a conscientious citizen. You pore over those photographs very carefully. And finally you say, “Well, I’m not sure, but the person who looks most to me like the guy that I saw commit the crime is number two.” And in studying number two, you notice, among other things, that he has a scar over his right eyebrow. Now you did not, in fact, perceive that at the time. But over the next few weeks and months, those two memories will merge. And by the time you come to testify, you will say, “I am really absolutely sure this is the guy, because I will never forget that I saw this scar over his right eyebrow.” And what you’re really remembering is the photo, not the guy you saw; the two memories merged unconsciously and just reinforced your wrongful identification.
Frueh: In your book, you discuss those fundamental problems with our vision and memory, and you suggest that one way to limit the damage they can cause would be to educate prosecutors about those flaws and how they can lead to mistaken identifications. How would that education work, and how would it help solve the problem?
Rakoff: You have to understand that because of laws, many of which I think are unfortunate, that were passed in the 1960s, seventies, eighties, and nineties, 97% of all criminal cases are plea bargained. And this is a change: it used to be no more than 80%, sometimes 75% in some jurisdictions. Now it’s overwhelmingly the case. And so the prosecutor is really the key player. To be frank, he or she wields much more power than the judge in determining who gets charged, who goes to jail, and so forth. And most prosecutors are very well intentioned, but very young. I was a prosecutor for seven years. I was three years out of law school, two years out of a clerkship. I knew nothing about most of this stuff. I learned it on the job, so to speak. And what I regret is that I didn’t have what I had when I became a judge.
When I became a judge, like all federal judges, I was sent to something called baby judge school. It’s a week-long program in Richmond, Virginia, where you’re taught things that you probably don’t know about being a judge. Well, I would like to see the same kind of thing for prosecutors. It doesn’t necessarily have to be a whole week, but several days. It could be done, ideally, in person, but it could be done by video and pre-packaged programs and so forth in which, among other things, prosecutors would be made aware of how often eyewitness identifications are wrong and why this is so. And you see the importance of this in the cases where there were exonerations, because almost always the guy who they then discover really was guilty was one of the suspects originally. But then the eyewitness came along and said, “Oh no, it’s Jones.” And so the police stopped looking at Smith and they started focusing exclusively on Jones.
But the prosecutor, if he knows of that danger, could say to the police at that point, “Before we charge this guy, have you looked into Smith? Let’s follow up a little bit more with Smith.” Because they would know that this testimony is not as reliable, in many cases, as it appears. So that’s the thing I have in mind. Now, I have another suggestion, which I feel will probably not be adopted in the United States, but it’s based on the British system. And my idea would be that for every three years that a prosecutor is a prosecutor, he has to spend six months of those three years as a criminal defense lawyer, defending indigent people. This would have to be in another district so there wouldn’t be conflicts of interest. And that would give him a much greater feel for where weaknesses in his approach may lie. Not just in eyewitness identification, although that would be certainly a key one, but also in areas of doubtful forensic science, for example.
Frueh: Speaking of that, I’d like to switch gears and talk a little about forensic science, which in the last decade or more has faced similar revelations about not being as reliable as many people think. In your book, you talk about how the emergence of DNA analysis in the 1990s revealed the weaknesses in some other forensic science methods, like hair analysis and bite mark analysis and other techniques. In other words, that they’re not really grounded in science. What does it mean to say that a forensic science method is truly scientific?
Rakoff: So the great report—done by the National Academy of Sciences, let’s hear it for the National Academy of Sciences, and published in 2009; the co-chair of that was the very great Judge Harry Edwards from the DC Circuit—that report went through virtually all the major forms of forensic science and concluded, as you say, that most, if not all, were not really grounded in science. And what do we mean by grounded in science? Well, the Supreme Court has spoken to this in the Daubert case. We mean, first, that the theory has been tested in scientifically sound studies—blind studies that meet all the requirements of a good scientific study; second, that it has been peer-reviewed in publications that are well known as the ones that monitor developments in the relevant areas; third, that it has been used sufficiently that we can calculate an error rate, and if the error rate is too large, say more than 10%, that would cast great doubt on the reliability of the method; and finally, that it’s generally accepted, not just in the narrow community of people who administer these tests, but in the broader scientific community.
The National Academy report found that DNA was really the only one that met all four of those requirements, probably because it was developed by scientists for other reasons. Most of what we think of as forensic science was originally developed by police labs as a way of developing leads. And that’s perfectly legitimate. You don’t have to have rigid science in order to have something that may help you identify another lead to follow. But then, beginning in the early 1900s, it began to be introduced in the federal and then the state courts. And a lot of it proved to be very unreliable, although we didn’t know that until things like the Innocence Project came along. There’s something called The National Registry of Exonerations, which is put together by several law schools, which records all court-ordered exonerations since 1989. And there are now about 2,500 of those. They found that in 40% of their cases, there was inaccurate forensic science testimony introduced.
I’ll give you an even more extreme example. It used to be that the FBI and various local police people would do what’s called a microscopic hair analysis, where the theory was that everyone’s hair is unique, just like everyone’s fingerprint is unique. I think many barbers would disagree. In any event, just taking that—without ever having tested it to see if that was true or not—the theory that developed [was] that if you look carefully enough at a blown-up slide of the hair found at the scene of the crime and compare it with the hair of the suspect, you can make a match. And in the lingo of forensic science, this got introduced as, “I am sure to a reasonable degree of scientific certainty that the hair found at the scene of the crime matches the hair of the defendant, and the likelihood that it could be anyone else is extremely remote.”
Well, that turned out to be wrong. It was bad science and it wasn’t true. And finally, the FBI, to its credit, did its own study of the 3,000 cases in which its own experts had testified to the effect that I just indicated. And they concluded that in 95% of those, what the experts said was either flatly inaccurate or, more often, considerably overstated. Many courts now preclude the admission of this bogus forensic science. Now, the bad news is there are still states that allow it.
There are a lot of problems with forensic science. Not all DNA is good. Fingerprint analysis is better than things like bite marks and microscopic hair analysis, but it still has a degree of subjectivity, and that’s the last thing I’ll mention.
In a really scientific test, subjectivity plays no role, but as the National Academy found in their report, even in things like fingerprints, subjectivity plays a big role. So you have two fingerprints that you’re comparing, and it’s a subjective decision based on “experience,” which things to look at for comparison and which not. And so you may say, “You see that little swirl over there and the one that’s in the other slide, and that only occurs when it’s the same person.” Another examiner will say, “No, the swirls are really not central, and you have to look at something else.” There have been cases in which fingerprint experts have gotten it flat wrong, but it is better than some of the others.
Frueh: Is there a way to make some of the subjective forensic approaches more objective, or are they just intrinsically subjective?
Rakoff: Well, the biggest suggestion from the National Academy of Sciences’ 2009 report—one that has never been followed up, and I really do not know why—was to create a National Institute of Forensic Science staffed by high-level scientists. They would look at each form of forensic science and say, “This one’s good, but it could be made even better in the following ways. This one’s so bad, we don’t think it’s salvageable. This one’s bad, but it could be made salvageable if the following steps were taken.” And they would be in a position, as scientists, to bring to bear a kind of rigor that no judge is capable of and no party is capable of. So that would be, I think, the ideal solution—it was the proposal from the National Academy. There is some opposition to all of this from local police labs and the like. They have understandable biases in favor of what they’re doing, which I can fully appreciate. But I’m not sure why it didn’t attract greater national attention.
Frueh: Given that that hasn’t happened so far, have you seen any areas of forensic science where there has been progress made, either to put disciplines on more solid scientific footing or to make courtroom testimony more accurate and to reflect more fully the limitations of these methods? Have we made any progress at all?
Rakoff: Yes. It varies a lot from state to state. And please remember that the criminal justice system is mostly a matter of state prosecutions. Something like 90% of all criminal prosecutions are brought by the state, not by the federal government, so that’s where the action is. I think 37 of the states have now adopted Daubert and therefore, at least in theory, could subject forensic science, so-called forensic science, to the four-part analysis that I mentioned previously. And some cases have done that. There was an early case in Oklahoma, for example, where the federal judge threw out a microscopic hair analysis because after a Daubert hearing, he realized that this was just not good science at all.
A number of judges, and I would have to admit including myself, have also experimented with putting limits on what the expert can say. So I had a case a few years ago involving ballistic testimony. The theory of ballistics is that when you fire a bullet, it makes certain markings on the cartridge, and more importantly, on the barrel of the gun that are unique because the physical situation is never totally identical—it could be in a room with more pressure or less pressure, whatever. There has been doubt cast on that theory. But in the particular case that I had, and some other judges have done similar things, I said, OK, I will allow the expert to show two blowups, one of the bullet and one of the barrel of the gun, and to point out what he thinks are the commonalities of the two. But he can’t say there’s a match. And the most he can say is that it’s more likely than not, a much lower standard than proof beyond a reasonable doubt, more likely than not that this bullet came from this gun.
A number of judges have taken that approach, and I did. I’m not so sure in the end that’s the approach I would take today. I think I might throw it out altogether. But that is the approach that’s been taken. One of the reasons is judges are, I think, reluctant in a criminal case to exclude evidence that they think might make the difference between a determination of guilt or innocence. And they sort of feel, “Gee, the jury should see it all and then make their decision.” But seeing it all is not really meaningful when the jury has not the slightest scientific clue as to whether this evidence is good science or bad science. But nevertheless, that is, I think, deeply ingrained in many judges: when in doubt, let it in. And I think that happens too often in these cases.
Frueh: What about the role of the federal government in all of this? It sounds like a lot of these decisions are made on a court-by-court basis, on a state-by-state basis. Is there a role where the administration or Congress could take a more active part in driving change beyond the founding of the National Institute of Forensic Science, which I know you said was your first choice? Short of that, are there other steps that they could take to improve forensic science and its use?
Rakoff: The government, to its credit, in the last term of President Obama, created the National Commission on Forensic Science, and it brought together all the players. So there were defense lawyers, there were government prosecutors, there were hard scientists, so to speak, and scientists who had devoted themselves to particular forensic disciplines. There were lab technicians and there were some judges. There was even one federal judge—they were desperate. Anyway, over the course of the four years that the commission existed, the group made no fewer than 59 recommendations. Now, these were technically recommendations being made to the Department of Justice, but they were written in such a way that, we hoped, they would also have an impact on local police and prosecutors.
To give you one example, we came down very strongly—this was almost unanimous—against the formulation "reasonable degree of scientific certainty," saying, first, that it's not good science, because science deals in probabilities, not certainties; and second, that it conveys completely the wrong view. It basically says to the jury, "Don't be skeptical of this, it's certain." So we recommended against that formulation, and the Department of Justice adopted the recommendation. I think some states have followed suit as well. It would not be as good as the so-called National Institute, but I think a useful step would be to revive the National Commission on Forensic Science. It had a four-year term, and in the early days of the Trump administration it was allowed to lapse and was not renewed. But a majority of the members felt there was still work to be done and asked that it be renewed. I think that would still be a helpful step.
Frueh: OK, a final question. We've talked about some areas where progress has been made, sometimes piecemeal, sometimes slow, and areas where it hasn't. Do you have hope for the future of forensic science and for fundamental change in it? And if so, where does that hope lie, and what part of the system do you think is most fertile ground for change?
Rakoff: Well, being American, of course I'm optimistic. I'm always optimistic. I do think—and this goes not just to forensic science, this goes more across the board—movements for reform in the criminal justice area will go up and down depending on crime rates. So crime rates, until the last year or so, were trending down quite dramatically, and that made people more willing to consider reforms. But with the increase in violent crimes in the last year or so, I think that consensus may be breaking up. The other thing that leads to reform, though, is something that is continuing and has been quite well publicized, which is the exoneration of innocent people. You can pick up a newspaper practically every week and find another case where someone who's been in prison for a substantial period of time is found to be totally innocent and the court says, "Oops, sorry about that, you can go free."
And the American people, I do think, are fundamentally a very fair people, and they are affected, as they should be, by that knowledge. So against that big background, what needs to be done is educating the public that it is things like eyewitness testimony and bad forensic science that are such big factors in these wrongful convictions, so that they can see the connection. They're more used to the CSI type of approach they see on TV: "Oh, it's brilliant." And the investigators solve this case that could not otherwise be solved, and so forth. When you read these newspaper accounts of exonerations, rarely do they get into the weeds of why the person was wrongfully convicted. Usually it will be either wrongful eyewitness identification or bad forensic science. There are other factors as well, but those are the two big ones.
So I think the focus needs to be there. Where reform will ultimately come from, I think the courts can’t escape responsibility. These wrongful convictions occur on our turf and sure, Congress sets a lot of the rules, the president exercises a lot of discretion, but, when all is said and done, it’s in our courtrooms that the wrong occurs. And so if we don’t start doing something about it, we will have evaded our responsibility.
Frueh: Thank you, Judge Rakoff. I want to give you a hearty thanks for being with us today and just for all of the efforts that you’ve made to educate people about the importance of science in the courtroom.
Rakoff: My pleasure and thank you so much for inviting me.
Crushed House after Hurricane Irma, South Ponte Vedra Beach, Florida, USA, September 12, 2017.
For four decades, James Balog has photographed the beauty of the world’s natural resources as well as the impact of climate change on the Earth and its inhabitants. His projects explore the consequences of human behavior as it has begun to affect the stability of the natural world and the health of its citizens. He has focused his camera on the intertwined events of melting glaciers, rising seas, warming oceans, polluted air, increased temperatures, and the destructive forces of ferocious hurricanes, floods, and wildfires.
His projects explore the consequences of human behavior as it has begun to affect the stability of the natural world and the health of its citizens.
As an artist, Balog pushes aesthetic boundaries to create simultaneously engaging and disquieting individual photographs, as well as websites and films. Through his work, he aims to stimulate public awareness and mobilize action on behalf of the Earth and its populations. Widely published and exhibited as a photographer, Balog also collects data and visual evidence on climate and environmental change. He created the Extreme Ice Survey in 2007 to document and measure the retreat of glaciers around the world. All of Balog’s photographic essays relate to his conviction that human behavior is changing our globe and that these changes are, in turn, having serious impacts on humanity.
Ancient Air Bubbles Released By Melting of Greenland Ice Sheet, Greenland, 2008. Air trapped in ice more than 10,000 years ago is now being released as the Greenland ice sheet melts.
Copper Berg, Jakobshavn Isbræ, Greenland, August 24, 2007.
Icebergs that have rolled over and been scalloped by waves metamorphose into fantastic shapes.
Giant Sequoia, “Stagg,” Camp Nelson, California, USA, December 28, 2001.
Burning ethanol in a steel pan to study flame dynamics and flame buoyancy at the US Forest Service’s Missoula Fire Sciences Lab. Fire Plume #1, Missoula, Montana, USA, 2015.
Tire Tracks, Bonneville Salt Flats, Utah, USA, 2019. The Bonneville Salt Flats are a densely packed salt pan in Tooele County in northwestern Utah. The area is a remnant of the Pleistocene Lake Bonneville and is the largest of many salt flats located west of the Great Salt Lake.
Climate Change and Communities
In “A Climate Equity Agenda Informed by Community Brilliance” (Issues, Fall 2021), Jalonne L. White-Newsom discusses the predictability of climate change-related health challenges in communities whose primary residents are low-income and people of color. She points to the compounding effects of failing infrastructure, structural racism, and climate change as a primary culprit in adverse health outcomes in these communities. The evidence is certainly compelling.
In 2016, the Detroit area saw an outbreak of hepatitis A, culminating in 907 cases, 728 hospitalizations, and 28 deaths by 2018, according to the US Centers for Disease Control and Prevention. It’s no wonder that after the city underwent major water shutoffs, impacting thousands of Detroit residents, we would see such a widespread public health crisis. However, just as White-Newsom mentioned, this outcome was not unpredictable.
In the 1970s, the Clean Water Act prompted the dismantling of the Detroit Water and Sewerage Department, leaving it ripe for bankruptcy. The subsequent reallocation of the department's assets significantly contributed to the failing infrastructure, leading to the water shutoffs in a failed attempt to recoup losses. The massive shutoffs negatively impacted the city's water infrastructure, causing much less movement through the pipes; these factors have created the perfect environment for the overwhelming flooding we've seen in residential homes in recent years. Together, the water shutoffs and residential flooding create a petri dish for infectious diseases such as hepatitis A and COVID-19.
Community-led and community-based research can create more positive outcomes if our government would invest more in partnering with community scientists.
As White-Newsom indicates, there is a solution. Community-led and community-based research can create more positive outcomes if our government would invest more in partnering with community scientists. It must be noted that the water crises in the Michigan cities of Flint, Benton Harbor, and Detroit were brought to the forefront by community members who identified the water contamination. That is why we practice community-first research at We The People of Detroit.
Community-first research puts people before data. This represents a dramatic shift in perspective. One of the most significant issues researchers face is providing the public with data that offer actionable value to the communities they are working in. Our group’s Community Research Coalition’s community-first research framework helps address this issue, advocating that researchers begin their work with the end result in mind, asking what research questions are most relevant, timely, and useful to residents.
Monica Lewis-Patrick
President & CEO
We The People of Detroit
Jalonne L. White-Newsome argues that climate agendas will fail if they are not focused on the lived experience of people most vulnerable to climate change and environmental hazards. She advocates for more equitable approaches to addressing and governing climate change.
Throughout my years of research for my book Climate Change from the Streets, I witnessed environmental justice (EJ) activists being motivated by their lived and embodied experiences. They are increasingly debating with experts over issues of truth and method in science. They are also demanding a greater role in environmental health decisionmaking that impacts their lives and bodies. EJ groups are not only challenging the political use and control of science and expertise by claiming to speak credibly as experts in their own right, they are also challenging the process by which technical knowledge is produced. Conventional climate change policy often overlooks the ways in which scientific knowledge and notions of expertise develop, become institutionalized, and tend to exclude from their cognitive domain other ways of knowing and doing.
Conventional climate change policy often overlooks the ways in which scientific knowledge and notions of expertise develop, become institutionalized, and tend to exclude from their cognitive domain other ways of knowing and doing.
By extending the arena of legitimate climate change knowledge to include embodied knowledge, regulators and policymakers can better understand the insights that EJ advocates can offer to environmental problem-solving. In settings where there is high uncertainty, embodied approaches can uncover new hypotheses rather than test predetermined ones. Embodied approaches, moreover, can provide a complex (or thick) description of the environmental condition that is faithful to the lived experience of residents. Such accounts provide a cultural consciousness that the environment can inflict multiple harms on human bodies and that combining knowledge and action for social change can ultimately help improve health in the most disadvantaged communities.
In this context, EJ groups are both pushing new hypotheses and evaluating existing ones around climate problems and solutions. They are calling for multiple ways of learning and knowing about climate change. In my research and practice, I have observed how EJ groups have centered their work on telling stories of how their bodies bear the marks of environmental interactions. They framed their work on the human embodiment of climate change and carbon’s associated co-pollutants. For them, the body is where diverse points of pollution, social stratification, and poverty intersect. I call this way of knowing and learning “climate embodiment”—a concept that draws on eco-feminist studies and the field of public health.
For example, EJ advocates in Richmond, California (home to one of the world's largest oil companies, the Chevron Corporation, and California's single largest source of greenhouse gas emissions), argue for a holistic understanding of the links between the infrastructural body (that is, the extraction of raw materials to support a fossil fuel economy) and the contaminated human body. In other words, we begin to imagine a form of climate embodiment that represents a continuum, where the human body cannot be divorced from its environment, and environmental solutions cannot be isolated from the human body. Climate embodiment represents new models of engagement with climate change that make space for alternative paradigms of environmental protection.
Michael Méndez
Assistant Professor of Environmental Planning and Policy
University of California, Irvine
He previously served as a gubernatorial appointee and senior consultant during California’s passage of its climate change laws.
Designing a New AI
Immediately upon securing the Normandy beachheads in World War II, American forces were faced with a task for which they were wholly unprepared. The breakout from Normandy required Allied forces to penetrate hedgerow country that was ideally suited for German defensive operations seeking to stall the offensive. Neither infantry nor tanks could easily penetrate these hedgerows, and the defenses quickly decimated American tanks and infantry when they operated independently. "Technology" solutions soon emerged when soldiers used metal from the German beach defenses to create "teeth" on the front of American tanks. Infantry and armor soldiers soon developed new tactics, jointly maneuvering upon breaching a hedgerow to overcome the German defensive kill zones. The leadership, courage, and innovation of these American forces enabled them to break out from the landing sites before German reserves could respond decisively and push the Allies back into the English Channel.
Innovation often appears intuitive in hindsight, and the American success can appear preordained. This would be a revisionist view of the situation at the time as well as a misleading indication for future success. As organizations incorporate growing numbers of cyber-physical systems, artificial intelligence reasoning, and diverse and potentially distributed human teams, broad and rapid innovation as witnessed in the “Norman Bocage” is not guaranteed. In “An AI That’s Not Artificial at All,” (Issues, Fall 2021) John Paschkewitz, Bart Russell, and John Main lay the foundation for innovation and learning in such future organizations through their proposed concept of liminal design.
Soldiers at Normandy first attempted to use explosives to penetrate the hedgerows, but there were not enough explosives to employ this strategy broadly. Only when soldiers abstracted the problem to the function of "hedgerow penetration" could they identify the German steel beach defenses as a viable source of material. One can view this simply as "looking at the problem in a different way," but future organizations incorporating AI decision aids will struggle to perform similar composition without a structured language and design methodology for doing so. Cyber-physical systems may provide a broad set of services within current organizations, but new situations require organizations to mediate among these broadly available resources to address a local problem.
The most crucial aspect of the liminal design framework that Paschkewitz and his coauthors propose is the need for hybrid organizations to learn. Returning to the example of Normandy, tankers and infantrymen did not trust each other because they had not previously trained to work together. They had to form new hybrid teams with new tactics on the fly in the face of a stiff German defense. Imagine the challenge of seeking to create novel, hybrid human-machine teams that don't even share the same native language. The concept of digital twins that the authors describe, which would allow current organizations to experiment and build the trust needed in the face of dynamic environments, is compelling and essential.
Finally, liminal design should not be considered orthogonal to human-centered design or systems thinking, but rather a modern complement to structured problem-solving for hybrid organizations faced with pressing problems in dynamic environments. This is just as true today as it was in Normandy 75 years ago.
Philip Root
Acting Director, Defense Sciences Office
Defense Advanced Research Projects Agency
John Paschkewitz, Bart Russell, and John Main propose a new design approach, which they term liminal design, for collaboration between human and artificial intelligence agents. As a design method it mirrors many of the exciting ideas found in the best product innovation methods. These methods include human-centered design, which engages a wealth of sometimes conflicting stakeholders (examples are found in medical equipment, where the insurance provider focuses on costs, the physician on care, and the patient on comfort); functional modeling, which raises the search process to a more abstract "what does it need to do" level; configuration design, which provides a formalism for composing complete solutions from constituent components, resulting in a product that meets that abstract functionality; the concept of slack in negotiations between suppliers and product developers and producers, which results in a sweet spot of resolution that trades off different individual goals (a mediated solution); negotiations that resolve perceptual gaps between those who must contribute to a design solution (another approach to mediation); and systems that are designed to learn in the context of Industry 4.0.
The liminal design approach raises the need to explicitly address mediation between domains with divergent needs, goals, and problem representations. The formality necessary to manifest this concept may be achievable through market-based mediation, an approach that balances the maximization of outcomes (such as profit) for different participants in the design process. This formal approach could raise the negotiations that occur between players to a more optimal or at least better-resolved finality. Mediation is a current challenge that appears in domains such as infrastructure (the need to mediate between construction and engineering), automobile design (the need to mediate between engineers and studio designers), and additive manufacturing (the need to mediate between design engineers and those intimately familiar with the limitations of specific processes and printers). Mediation is perhaps one of the most pervasive and important perceptual gaps, or liminal spaces, that requires dedicated attention from both industry and academia.
Liminal design also invites the exploration of how artificial intelligence can serve the human team, especially in this mediation process. For instance, AI can help designers reach more fully articulated solutions by tracking how designs are progressing and how team members focus or fixate on different aspects, providing incentives or suggestions to shift the problem-solving direction. AI can also assess trade-off scenarios that lead to mediated solutions faster. Moreover, the possibility of elevating AI from simply a tool used by problem solvers to a proactive, adaptive, and responsive partner in the design process offers additional potential for supercharging the search for high-performance solutions, an area of research on which we, the authors, currently focus much of our own effort and interest. The creation of AI tools and partners that advance solutions to these and other wicked problems may ultimately have a transformational impact on the many liminal spaces in which we live, work, and create.
Jonathan Cagan
George Tallman and Florence Barrett Ladd Professor
Christopher McComb
Associate Professor
Department of Mechanical Engineering
Carnegie Mellon University
Episode 6: The Marvelous and the Mundane
The James Webb Space Telescope is expected to reveal secrets of every phase of cosmic history, going all the way back to the Big Bang. In this episode we talk with Washington, DC-based artist Timothy Makepeace about his exhibition Reflections on a Tool of Observation: Artwork Inspired by the James Webb Space Telescope. Makepeace’s artwork celebrates the awe-inspiring technology of the space telescope while drawing attention to the fact that it is a human endeavor, revealing the nuts, bolts, and wires of the instrument. Makepeace is joined by art historian Anne Collins Goodyear, whose research exploring the relationship between art and technology provides thought-provoking historical context.
Talasek: Welcome to The Ongoing Transformation, a podcast from Issues in Science and Technology. Issues is a quarterly journal published by the National Academies of Sciences, Engineering, and Medicine, and Arizona State University. I'm J.D. Talasek, and I'm the director of Cultural Programs at the National Academy of Sciences. On this episode, we're in discussion with DC-based artist Tim Makepeace, whose exhibit, entitled "Reflections on a Tool of Observation: Artwork Inspired by the James Webb Space Telescope," was organized by Cultural Programs of the NAS. To see Tim's artwork and get more context for this discussion, visit his website at www.tmakepeace.com, or check the notes of this podcast. Also joining us is art historian Anne Collins Goodyear, who is the co-director of the Bowdoin College Museum of Art. Welcome to you both, Tim, Anne, I'm so glad to have you with us today.
Makepeace: Thanks for having us.
Goodyear: Wonderful to be here, J.D.
Talasek: So Tim’s most recent work as an artist has been based upon his interest in the James Webb Telescope, a highly advanced instrument, which hopefully will allow us to observe parts of space that we’ve never seen before. But telescopes are not typically objects that we think of as subjects for art, which raises the question: Why? Why would an artist be drawn towards such technology for inspiration as a subject matter for their artwork? So Tim, maybe we can start with you. How did you become interested in the telescope? And what drew you to that as a subject for your art and for inspiration?
Makepeace: Well, counterintuitively, I’m not that interested in space and telescopes. I just happened to see a call for artists; it was a contest run by folks at NASA’s Goddard Space Flight Center here in Greenbelt, Maryland, just outside DC. And they were looking for some artists to come make artwork about this new telescope, which I had never heard anything about. So I applied for it, and in the meantime, I started looking up this thing. And the more I looked into it, the cooler it was, and I thought, well, I’ve just got to be a part of this. This sounds really amazing. So in the end, I was one of a handful of artists who were selected to come to the space center, where they have the telescope under construction in one of the world’s largest clean rooms, built for fabricating such a spacecraft. I got a couple of tours—I went out there twice—and took a lot of photographs. The more I learned about it, the more excited I was, and it just drove my interest. And from there, I made a lot of artwork and just kept going.
Talasek: Tim, that’s fantastic. Thank you. So Anne, the relationship of art and technology has long been an interest for you, as a scholar and as a historian. And through your work, we know that this is not an uncommon topic for artists. But I’m wondering if you could tell us, what drew you into this terrain between art and technology as an area of research?
Goodyear: Absolutely, J.D. And again, it’s a pleasure to be able to be part of this conversation. And as you mentioned, this interest of mine in the interconnection between art, science, and technology is a very deep-seated one. In fact, funnily enough, I have recollections of having done a term paper, way back in high school, about Leonardo da Vinci, who of course is somebody who is understood as a consummate unifier of art and science, especially in his own era. And in many ways, I think the type of curiosity that we associate with Leonardo da Vinci is very much at the core of why I find it so interesting to look at how artists, scientists, and technologists can come together. It does strike me that there have been very special moments throughout history at which visionary individuals have come together, sometimes in person and very deliberately, and other times perhaps in spaces that were adjacent to one another, but spaces that nevertheless pushed the development of new breakthrough ideas.
I think, for me, at the end of the day, I really do see a core principle of creativity at work in scientists and engineers and in visual artists that are helping us, quite literally, to see the world and perhaps to see the future. And ultimately, that’s what I find so compelling. I think I like looking at this question of the intersection between art, science, and technology through the eyes of an art historian because there are ways in which visual artists quite literally picture our world. That is to say that they create images that may reflect back to us or preserve for us extraordinary and exciting sites around us. But I think there are also ways in which artists distill and pick up on themes that we may not yet even recognize the significance of. I think that that often grows out of this special curiosity that an artist may bring to subjects that grab their attention. It’s one of the reasons that I’m so interested in the ways in which Tim Makepeace has chosen to picture the James Webb Space Telescope.
And I have to say, I think it’s very special to picture a telescope, in the sense that once it is in orbit, once it is in action, we will see images that are gathered by this telescope, but this may be our one and only opportunity to really picture the instrument itself. And so I think there’s a way in which Tim’s artwork is helping us to be more cognizant of some of the decisions that were actually made in engineering the instrument that will allow us to penetrate in new ways into our universe. So I think it is a very suitable and exciting topic for an artist, but I think it’s also an invaluable subject for us as human beings: to be able to reflect on the instrument that is reflecting back to us the world, the universe that we inhabit.
Talasek: I think that’s a really great observation. And to that point, since we are on a podcast right now, and we’re listening and not seeing the artwork, Tim, I wonder if we could go back to what you were saying about this clean space and this room. And I wonder if you could describe to us not only the space that you were inspired by, being in with the telescope, but also can you describe your work to us in a little bit more detail as to what it represents?
Makepeace: Sure. There are two parts: there’s the science and then there’s the art, and the art is always hard to talk about and describe. So I’ll start with the science of this beast, this beast that the NASA engineers have come up with. They spent over 20 years developing this telescope. Its purpose is, some say, to replace Hubble, which is going to burn up in a few years; it’s past its lifespan. This new telescope is being launched a million miles out into space, and it’s going to be like 100 times better than Hubble. And Hubble is an amazing instrument. So it’s very expensive, very technologically tricky, right on the edge of what is possible. When they first designed the telescope, they didn’t even know how to build it, it was so advanced. They had to invent certain technologies just to fabricate it. Every part of this thing is tricky, from where it orbits to how they get it there. It’s an infrared telescope. It’s a reflective telescope, like most large telescopes, and the reflective surface is not silver, it’s gold, because gold reflects infrared energy better than silver. So it has this stunning appearance. It’s a gold-plated dish that’s 21 feet in diameter.
It's so big, it doesn't even fit in the rocket, so it has to fold. And anytime you have moving parts, you've got complications. And because it's an infrared telescope, it images heat. To image heat, you have to be in a cold environment, you have to have cold instruments, and by cold they mean really cold: about 40 kelvins above absolute zero, or around minus 400 degrees Fahrenheit. So they have this enormous sunshade the size of a tennis court, and that has to unfold. It's just wild.
So I got to see this thing under construction, and I took a lot of photographs of it. What inspired me mainly was its very abstract quality. I was less interested in its iconic imagery, you know, what it looks like as a piece. If you think about an old-fashioned telescope, it's a tube with an eyepiece. I wasn't interested in capturing the whole and just describing it as, "here's this thing, it looks kind of like this." I was interested in narrowing my focus, cropping to tighter elements of it, which highlighted the geometry and the materials and the architectural elements of it. I like to think of myself as a modernist artist, and one of the founding principles of modernist architecture is that form follows function. So the forms that I was able to see and focus on all derived from the very specialized function they invented this machine for.
It's a very abstract concept that is the driver behind these abstract shapes, which I am composing in an abstract way to illuminate some of the really interesting sculptural things about this. My work is based on photographs; the pieces are large charcoal and pastel drawings taken directly from the photographs I took and that the official NASA photographer took, and I am drawing them in a very photographic way, very precisely. To me, it's very important not to be inventive, because given what these engineers have invented, there's no need to invent anything more, artistically. It's an amazing artistic piece in itself.
Talasek: I'd like to talk a little bit about your choice of material. I find it wonderful: if you think about it, drawing in pencil, charcoal, or pastel—any of those is a technology. To represent such a high-tech, engineered piece of technology with something very, very basic is, in my mind, quite wonderful. And I wonder if both of you would talk about that a little bit.
Goodyear: Maybe I could jump in for a moment here. I know Tim is going to have something very insightful to say in response to that. But I'm so glad that you homed in on this question of the choices that Tim is making. Because while I would certainly describe the idiom in which Tim is making his drawings as a "photorealistic idiom," at the same time I would say that there's really no such thing as an impartial transcription of reality, in the sense that each of us as human beings is going to make certain choices about what we want to represent, choices that reflect our own sensibilities. I think in some ways Tim may be selling himself a little bit short in describing the work he has created, because in fact what makes it so exceptionally compelling, both visually and intellectually, is that he has been very deliberate in a number of the choices that he's made.
J.D., I'm just going to pick up the gauntlet that you've thrown down here for a moment. You focused on the question of medium, which I think is really important and interesting, and which would be very exciting to ask you to unpack further, Tim, because you are an experienced photographer. And yet you have chosen not to present photographs; you've made a very deliberate decision to interpret photographic material, to make particular choices about what you include in an image, how you center an image.
For example, there's a beautiful drawing now on view at the National Academy of Sciences entitled "Acoustic Test Chamber." It's stunning visually, and at the center is what looks like a portal of some sort. But it's no accident that you placed that circular element so prominently in this composition. Another thing I find really exciting is your choice of a square format, a format that evades the traditional use of a horizon line in landscape painting and defies some of our expectations about how these images might be intrinsically oriented. I think that's really exciting, given that space is one of the things you're evoking. And J.D.'s point about your use of charcoal, one of the most ancient of human mediums (a point you make about your own work), to depict one of the most sophisticated tools of our era, this telescope, is incredibly exciting.
It becomes even more exciting when we realize that one of the explicit purposes of this telescope is in fact to allow us to peer back into the origins of the universe. There’s this really interesting way in which your choice of the medium, of drawing, in and of itself provides this really beautiful connection between history, the present moment, and of course, what might be possible in the future. Really, I’m just amplifying J.D.’s question. But I did want to comment for a moment on the beauty of the choices I see you making, and to emphasize the significance of those because of course, that’s what makes your work art.
Makepeace: The drawing that Anne's referring to is a drawing of the acoustic test chamber, the room where they tested the telescope for vibrations, higher-frequency vibrations. It is a specialized room, but it's a very basic thing. It has a couple of interesting things in it: this big circular tweeter speaker, a gantry, some high beams and such. But it is not something that Goddard is particularly proud of; they don't bring tourists there to show them this beautiful room. Aside from its special use, it's just a tool.
But I liked it because I'm interested in the working elements and in finding beauty in everyday things, particularly structural things and mechanical things. So this was a perfect subject. It's just a derivative element of the fabrication of this telescope, just a tool they use for the telescope, and the picture does not have the telescope in it. The picture has a large circular tweeter in the center left, and then it has a bunch of other structural or gantry elements that they can lift things up with, but the net result is this composition of lines and triangles; it's a very classic composition. It's something you could find in the Renaissance, you know, forced perspective, all these elements of classical painting or art.
They were there. I didn't invent this, I just saw it, and I composed this picture to bring these elements out. And one of the intents of that is to, in a way, glorify the mundane, glorify the workaday. That's what I was focusing on. I just love the architectural elements, the geometry. I love the purity of it, the Euclidean geometry of it.
That's what drew me to it, and I drew it in the most precise way I could. It's not just bold, gestural, rough ideas of these formal elements; it's a combination of those things, because this work is an exploration of the relationship between sculpture, photography, and abstraction: finding abstraction in the real world, finding those sculptural elements, and putting it all together in various ways. Some of the pieces are more photorealistic, some are less.
It's this combination of the exquisite and the mundane. This telescope is as exquisite an object as you can imagine. And yet I've included in the image the reflections of the telescope and some of the structural elements of the room—there's high beams, there's railings, there's fluorescent light—things that the telescope was not intended to image. But I liked it because it showed the architectural elements of the environment it was in. It's an odd contrast.
And then to the point of, why did I use charcoal? Why didn’t I just leave it as a photograph? Whenever you draw something, there’s always an element of simplification, amplification, streamlining, and you’re able to more easily emphasize what the point of that image was, you know: Was it about the sky? Or was it about the land? Or the wall? You can, subtly or not so subtly, amplify those elements and heighten either the emotional content or the physical content of it. That’s one reason I draw them.
Now, pencil would make a lot of sense; only a fool would use charcoal to try and draw something so precise. Charcoal is very hard to control; pencil is much easier to control. But a pencil is really designed for making lines, and I'm not really interested in lines. I'm more interested, as a photographer, in tones, particularly black-and-white tones. With charcoal, you can smudge it, smear it, and you can get tones very easily. It's inherent in the medium to get tones from black to dark gray, medium gray, light, just by smearing it. That's what attracted me to photography, and I'm able to bring that part of it into these drawings. That's inherent in charcoal. The trick is, how do you control it? You figure out a few tricks, making masks and so forth. A side interest is how primitive the idea of using charcoal is. People have been using charcoal to describe their environment for 50,000 years. And here I am, describing the most advanced engineering thing they've made, with charcoal.
Goodyear: I find it so exciting, as an art historian who has looked at the history of the emergence of the NASA art program in the 1960s, that we actually are talking about a telescope that is named for James Webb, who I believe was the second administrator of NASA. At any rate, James Webb was definitely the initiator of the NASA art program in 1962. And so I also find it really exciting that almost to the day, it is 60 years later, we are reflecting upon his legacy, with respect to creating an expectation of programmatic strategies by which artists can, and could, and perhaps even should reflect upon the emergence of space technology.
If I may, maybe I’ll just share a comment—this was actually in 1963, so it’s a little bit after the program was begun. But Jim Webb said, in June of 1963, that “an artistic record of this nation’s [program] of space exploration will have great historical value for future generations and [may] make a substantial contribution to the history of American art.” So I think Webb recognized, as did some of his contemporaries, the important ways in which it actually took the vision of an artist, perhaps, to create ways of picturing and recording space exploration that could bring together the sense of wonder that these extraordinary, ambitious technological undertakings stimulate in our souls. It is exciting to imagine—right now it’s hard to—but it’s exciting to imagine what it will be like to see those first images that come back.
Makepeace: Part of my excitement when I first saw the thing was knowing that I’m standing there, 30 feet from this telescope that soon will be a million miles from Earth for eternity. And this is a very brief moment in time when it is in this position, in this place on Earth, available to see—it was a very special thing to be part of. And so that also was super inspirational.
Talasek: I so appreciate that there are so many threads here. That brings up a question in my mind, and I'm so glad that you brought up Administrator Webb's creation of the NASA art program. It's actually one of the first things I thought of when I saw Tim's work: this ode to constructivism, which has influenced Tim's work, and the Russian space program and the propaganda associated with it, and the space programs of other countries as well. I was thinking about the idea of this as propaganda versus artwork with the artist's voice, and about what you both have referred to as the mundane and the marvelous, where we might think of propaganda as serving a single, basic objective, whereas the marvelous is where the artist reminds us of the humanness and the wonder and the marvel. I wonder, Anne, if you could talk a little bit more about the role of the artist in that context. You quoted Webb talking about how the artist's role was to help us imagine the history of space and art in years to come. I'm reminded of a phrase I heard: that it's not the role of the artist to communicate or to teach or to be didactic, but to expand our imagination. I love that, and I wonder if you could talk about that in the context of the history of arts programs within space exploration?
Goodyear: I couldn't agree more; I think the role of the arts at the end of the day is always about expanding our imagination, no matter the medium in which that artistic vision is carried forward. This question of the relationship of art and propaganda is an extremely interesting one. Probably, at the end of the day, it has a lot to do with context: partly the circumstances under which artists may be permitted certain privileges of viewing something, but perhaps more important, the context in which artwork is being framed, particularly by the state. I think most of us tend to associate propaganda with political outcomes.
One artist who we might immediately say has nothing to do with propaganda is the artist Jackson Pollock. I mean, here he is doing drip paintings that are totally non-objective. And yet even Jackson Pollock arguably gets pulled into the arena of propaganda during the Cold War, in the form of art exhibitions that are being organized by MoMA and other institutions, but with the support of the State Department. It’s a very subtle and nuanced question, what we mean by propaganda, but if an image is held up as being emblematic of a particular set of values for the sake of putting forward a political message, we might say that that image is being used for propagandistic purposes.
However, I think it's really important to separate that from what individual artists are trying to achieve. Now, there may be artists whose goal is to forward certain political objectives, and maybe they cheerfully engage in work that might be understood to be "propagandistic." But I think the very best art that has come out of the observation of the adventures this country and others have had in the realm of space exploration has been art that, in its own right, has sought to break boundaries, has sought to reorient the way in which we see and understand the world. And I certainly see those values very much present in Tim's work. What we never know about art is how far a genuinely creative vision will carry, and I feel like this is why it's so important that we sponsor its creation to the very utmost of our ability. I do think that, unlike propaganda, art understands the relationship of the mundane to the marvelous. Things created specifically as propaganda are often only boosterism; they don't necessarily also embrace the mundane that must be a part of our existence.
That's one of the reasons I love, Tim, the fact that you very explicitly think about the relationship of those two types of human experience. But ultimately, I think the reason it matters so much to protect and incubate and nurture artistic expression is precisely because in the fine arts, in these creative moments, we see individuals who are seeking, ideally, to bring together ideas out of particles that may not previously have been fused. And ultimately, it's that spark of the imagination, when it is transmitted to a viewer, to a reader, to a listener, that in turn stimulates and elicits more creative responses. It is the creative imagination that allows us to see things in a new way. It is the creative imagination that opens up new realms of exploration. It's the creative imagination that solves problems. And at the end of the day, what really differentiates art from any attempt at propaganda is that when art is functioning in a propagandistic culture, it becomes an illustration.
Whereas I think when art is functioning in its creative universe, it is demonstrating both the struggle and the glory of trying to forge a particular vision. Artists don't make their images automatically. They are hard-won, hard-wrought products that bring together inspiration with blood, sweat, and tears. And I think that sense of passion that has brought somebody forward to create something truly marvelous and worthy of our attention, that sense of creative inspiration, is what is transmitted, and it's transmitted across generations.
That’s why we still care about what Leonardo and Michelangelo were doing. That’s why we’re so interested in even the mathematics of Pythagoras. We may have moved beyond it, in some ways, but we’re still standing on the inspiration that was behind those achievements. So there’s not an expiration date on art. And actually, maybe in some ways, that distinguishes it from technology, which is a tool that is developed to solve a particular problem at a particular moment in time. Art is always going to transmit ideas across generations and continue to have that power to inspire, even if—and in fact, maybe we would say because—artists are so plugged in to the questions of their own era.
Talasek: Tim and Anne, I want to thank you so much for this conversation. And to everyone listening, thank you for joining us for another episode of The Ongoing Transformation. To see more of Tim’s work, visit his site at www.tmakepeace.com, and find out more about his exhibit at the National Academy of Sciences by visiting www.cpnas.org. You can follow Anne’s current work at the Bowdoin College Museum of Art by going to the college website, www.bowdoin.edu/art-museum. Check out our show notes for these links and much more. Thanks for joining us.
Codeswitch
A Visionary Agenda—in Quilts, Mars, and Pound Cake
Sanford Biggers, The Talk, 2016. Antique quilt, fabric, tar, and glitter, 80 x 84 inches. Courtesy of the artist.
Artists and poets have unique ways to communicate salient truths about the human experience. In “Quilting the Black-Eyed Pea (We’re Going to Mars),” poet Nikki Giovanni conjures imagery that, on the surface, seems whimsical—space travel to Mars accompanied by the songs of Billie Holiday and slices of lemon pound cake. Yet her vivid language reminds us that we can learn from the past to imagine how we might construct our shared destiny on this planet, and on others. In one sense, her aim is practical: space exploration (and indeed all exploration) benefits from diverse perspectives. But on a metaphorical level, Giovanni expansively connects the past experiences of Black Americans to the future, which is “ours to take.” In the telling, she creates entirely new narratives of what interplanetary inclusivity could mean.
As with Giovanni’s poem, the artist Sanford Biggers revisits an American tradition of quilting and storytelling to create an imaginative bridge between the past and the future. In his work, the lives of Black Americans are an evolving and complex story with shifting meanings. Inspired by the idea that quilts may have provided coded information to African Americans navigating the Underground Railroad before the Civil War, Biggers adds new layers of information and meaning to the antique quilts. His work suggests that lessons learned through one of the darkest moments in American history can be reimagined, and, as in Giovanni’s poem, layered into a visionary agenda that embraces innovation and joy.
Over the last two decades, Biggers has been developing a singular body of work informed by African American history and traditions. Sanford Biggers: Codeswitch, the first survey of the artist’s quilt-based works, features nearly 50 pieces that seamlessly weave together references to contemporary art, urban culture, sacred geometry, the body, and American symbolism. The exhibition’s title refers both to the artist’s quilt series, known as the Codex series, and to the idea of code-switching, or shifting from one linguistic code to another depending on social situation.
Codeswitch is on display at the California African American Museum in Los Angeles from July 28, 2021, through January 23, 2022.
Images of works by Sanford Biggers courtesy of the California African American Museum.
Scientific Cooperation with China
The recent deterioration of the US-China relationship could not have come at a worse time for global science. With China's sustained effort over the last 40 years to catch up in scientific capabilities, the benefits of collaborating with China have already increased tremendously, and will grow further over time. These collaborations can help, among other things, to address some of the unprecedented challenges we are facing, such as climate change and COVID-19. Therefore, amid the political rush to set up barriers that impede science collaboration with China, Valerie Karplus, M. Granger Morgan, and David G. Victor, in their article "Finding Safe Zones for Science" (Issues, Fall 2021), offer some fresh and sensible ideas that are practical in preserving valued collaborations with China yet mindful of the domestic political reality in the United States.
The key feature the authors present is a framework that helps identify areas with potentially large gains and areas with high political risks. Such a framework can help US policymakers, including Congress and the Biden administration, to act in a more rational way so as to reduce the damage to the global science enterprise. In addition, if accepted by policymakers, the framework can be useful to the US scientific community by ensuring that people who engage in collaborative research activities in the safe zones do not have to worry that they would be investigated or charged some day for working with their Chinese colleagues. Further, such a framework can also help to identify potential areas where collaboration between the two countries may yield huge rewards. To this end, the United States and China should try to revive some formal or semiformal channels of communication in science, such as the US-China Innovation Dialogue that existed between 2010 and 2016.
At the same time, there are practical challenges in adopting this framework for policy purposes. First, sorting different research areas into the four quadrants the authors describe is not easy. For example, technology standards in the lower-right quadrant can be questionable for some industries. Likewise, tracing the origin of COVID-19 is not intrinsically high risk; the rare incident of politicizing a pandemic made it high risk. A more fundamental issue is whether, in the current political climate in the United States, there will be changes in some of the basic principles held dear by the global science community. For example, in basic research, people collaborate and publish internationally without any concern for where their partners are from and how their knowledge will be used. The recent US investigations of scientists who are of Chinese origin or who are engaged in collaboration with Chinese institutions undermine many of these principles.
Finally, scientists in the Chinese research community, many of whom studied in the United States as graduate students or visiting scholars, still treasure their friendships and collaborative relationships with their US colleagues. These relationships are the joint achievement of generations of scientists in both countries since the 1970s. They should be valued and cultivated in our joint work to address the common challenges we face, instead of falling victim to the haste to contain China's emergence.
Lan Xue
Cheung Kong Chair Distinguished Professor and Dean of Schwarzman College
Tsinghua University
Complexity and Visual Systems
Art, in both creation and experience, is one of the most complex of human endeavors. Artist Ellen K. Levy engages the mental loop of seeing, connecting, and processing by juxtaposing imagery that creates meaning from unexpected and often disconnected relationships. Printing, painting, and animating images of complex systems relating to society, biology, and economics, she creates visual contexts that critique technological progress gained at the cost of ignoring the importance of the environment and society.
Ellen K. Levy, Mining: A Brief History, 2021, acrylic and gel over print with augmented reality component, 40 x 60 inches
For the past decade, Levy has incorporated renderings of US Patent Office drawings into digital collages made vivid with paint. About her latest series, Re-Inventions, she writes, “Most inventions are reinventions; they spin from developments in prior innovations. In my works I explore unintended consequences of technology and include (re)drafted plans of some of the patented inventions that cause them (e.g., steam engines leading to cumulative carbon dioxide emissions). Some of the patents propose remedies resulting from yet other (patented) technologies (e.g., protection from nuclear radiation).”
Levy, who is based in New York, has been exploring the interrelationships among art, science, and technology through her exhibitions, educational programs, publications, and curatorial work since the mid-1980s. As guest editor of Art Journal in 1996, she published the first widely distributed academic publication on contemporary art and the genetic code. With Charissa Terranova, she is coeditor of D’Arcy Wentworth Thompson’s Generative Influences in Art, Design, and Architecture (Bloomsbury Press, 2021), and with Barbara Larson, she is coeditor of the Routledge book series Science and the Arts since 1750.
Levy’s work is a part of a group exhibition in Vienna titled EXTR-Activism: Decolonising Space Mining, curated by Saskia Vermeylen. More information about the exhibit can be found at https://www.wuk.at/en/events/extr-activism/.
All images courtesy of the artist.
Ellen K. Levy, Transmission, 2019, mixed media on paper, 60 x 40 inches
Ellen K. Levy, Messenger, 2021, acrylic and gel over print, 40 x 60 inches
Ellen K. Levy, 2020 Vision, 2007, mixed media on paper, each 80 x 20 inches
Episode 5: Dinosaurs!
It may surprise you to learn that the enormous dinosaur skeletons that wow museum visitors were not assembled by paleontologists. The specialized and critical task of removing fossilized bones from surrounding rock, and then reconstructing the fragments into a specimen that a scientist can research or a member of the public can view, is the work of fossil preparators. Many of these preparators are volunteers without scientific credentials, working long hours to assemble the fossils on which scientific knowledge of the prehistoric world is built. In this episode we speak with social scientist and University of Virginia professor Caitlin Donahue Wylie, who takes us inside the paleontology lab to uncover a complex world of status hierarchies, glue controversies, phones that don’t work—and, potentially, a way to open up the scientific enterprise to far more people.
Jason Lloyd: Welcome to The Ongoing Transformation, a podcast from Issues in Science and Technology. Issues is a quarterly journal published by the National Academies of Sciences, Engineering, and Medicine and Arizona State University. I’m Jason Lloyd, the managing editor of Issues. On this episode, I’m talking with Caitlin Wylie. She’s an assistant professor of science, technology, and society at the University of Virginia. She wrote an essay for the fall 2021 Issue called “What Fossil Preparators Can Teach Us About More Inclusive Science.” And she recently wrote a book, Preparing Dinosaurs: The Work Behind the Scenes, published by MIT Press, which is about the workers in fossil preparation labs and their often unacknowledged contributions to science. So Caitlin, thank you very much for joining us today on our podcast.
Caitlin Wylie: My pleasure. Thanks for having me.
Lloyd: You wrote a fantastic piece for the fall Issue about preparing dinosaurs and what the role of fossil preparators is in paleontology research. So I thought a good place to start might be if you could talk a little bit about Keith, who’s one of the fossil preparators you describe in your essay.
Wylie: Yeah, thanks. So Keith—and that’s not his name, that’s a pseudonym—he is a volunteer at a museum and he works in the fossil prep lab. And he’s a pretty typical volunteer in the sense that he’s retired. He was self-employed, so he ran a business for most of his career. He’s a veteran. And for him, the point of being in a lab was really to be able to contribute something back to society—which is interesting, because usually when we retire, we think we’ve done enough for society. But he was really attached to the idea that he was serving science by preparing fossils. And of course, volunteering in a lab is different from volunteering as a museum docent, say, or at a soup kitchen, because the work of preparing fossils is really skillful. So Keith had to invest a lot of time learning how to work with these specimens under the guidance of more experienced preparators. And once he put the time in, he was showing up every day, sometimes putting in a full work day just because he found it so satisfying and rewarding.
Lloyd: I’m curious how Keith got interested in this job. Was he really into paleontology? Was he a frequent museum goer? What was the initial reason that he did this in retirement?
Wylie: It’s interesting because a lot of volunteers say, “I’ve always loved dinosaurs. I’ve loved dinosaurs since I was a kid.” But Keith was unusual in that he did not love dinosaurs. He wasn’t all that interested in dinosaurs. But he was very good with his hands. He really liked doing home improvement projects. He was into carpentry as a hobby, and he liked the idea of working in a lab where the tools are tools that he’s familiar with: basic hammers and chisels and little drills. And he found that setting very familiar and comforting.
So for him, it wasn’t so much about the dinosaurs as about the work itself. He often said that he found it relaxing. He was scheduled to come like, I don’t know, Mondays and Wednesdays every week. And then sometimes he would show up on a Thursday and the staff preparators would say, “Hey Keith, what are you doing here?” And he would say, “Well, I did all of my chores at home and I ran out of things to do, so here I am.” But for him it was really a place to go. He had a lot of friends among the other volunteers. He loved to talk to the staff preparators. So for him, the community aspect was strong and the tools were something that he loved, but not so much the science, which is interesting.
Lloyd: So what did he do as a volunteer preparator? What was he doing?
Wylie: Yeah. So the staff preparators would assign volunteer preparators a bone, and that would be their bone until it was finished. And that’s a really important way of training volunteers, that they really do a bone from start to finish and see all of the steps along the way. So I followed Keith because I was following the bone that he was working on, which was a vertebra of a hadrosaur, which is like the cow of the dinosaur era. They’re pretty common, and the vertebra he was working on was broken into several pieces. So basically he was handed this field jacket, and he had to dig the rock out and find out where the fossil was in all of this wrapping that they’ve put around it to protect it on the journey to the museum.
And then once he did that, he put it under a microscope and used what’s called an air scribe, which is a pneumatic, air-powered sort of hammer-and-chisel type thing. It’s basically a little handheld drill, used to get the rest of the rock off the bone so that you can see the surface. And then he moved into a reconstruction phase where he had all these bits of dinosaur bone, and he was trying to piece them together to make the vertebra look more like it would in life. For that, he used a variety of different glues and adhesives—which is something that fossil preparators care about a lot. Glue is absolutely central to vertebrate paleontology because the specimens are always fragmentary because of the process of fossilization.
You can’t really follow a bone without following the person who’s working on it. So I sat next to Keith for a lot of hours as he was chipping the rock off or trying to piece bits together and telling me what he’s working on. He would narrate while I was watching.
Lloyd: When he gets the jacket and starts opening it up and trying to find the fossil inside, is it fairly clear what’s bone and what’s not, or is that part of the skillset of the preparator?
Wylie: That is a crucial skillset of the preparator. So no, it’s often not at all clear what is fossil and what is rock, because, of course, the fossil is itself rock. They’re made of the same minerals; the bone has been replaced with minerals over millions of years. And so they are, materially, often identical. Sometimes there’s a difference in color, which is very helpful. Sometimes there’s a slight difference in texture. And of course, bones are porous, whereas rocks usually are not. So if you see little bumps or little holes in the specimen, you know that’s bone and you’ve gone too far, because you’ve penetrated into the inner bone instead of the surface.
Usually to become a preparator, you have to pass what they call the “prep test,” where you walk into the lab as an applicant and they hand you a crappy fossil, usually a fish or something that museums have a lot of and that is not very scientifically important. And they hand you a tool and they say, “Take the rock off,” with no training. And if you pass that—basically if you don’t damage the fossil—then they’ll take you on and train you. And they think that that prep test is testing for certain innate skills that you have to have to be a fossil preparator that you cannot learn. Isn’t that fascinating?
Lloyd: Yeah. That’s really interesting. What are those innate things that they think this tests for?
Wylie: Attention to detail, manual dexterity, so like fine motor skills, and patience. Are you willing to work really slowly? And basically, if the applicant passes, then they start the process of training with the staff preparators, where basically they just get another fossil to work on and the staff preparator checks in on them a bunch to see how they’re doing and give them advice. So for example, I’ve seen preparators use a Sharpie to mark the rock, to show the volunteer what to remove, to show them the distinction between the fossil and the rock. And through instruction like that and through lots of time spent staring at these materials, that’s how volunteers learn how to distinguish fossil from rock.
Lloyd: You mentioned adhesives before, and those sound extremely important for reconstructing a fossil. And one of the things that I found really interesting—I don’t think you mentioned it in the essay, but you do talk about it a bit in your book—is that different institutions and different places have different cultures around the kind of adhesive they use and whether they use it or not. Could you talk a little bit about that? This gets to maybe how you got interested in this subject when you were studying abroad in the United Kingdom, right?
Wylie: Totally. Yeah, thanks. The cyanoacrylate controversy was a massive disagreement—continues to be a massive disagreement among the community of fossil preparators. Cyanoacrylate is the chemical name for super glue. It bonds two bones together in an instant, and then you can’t take it off, you can’t dissolve it. And some preparators think that’s great, because it’s really strong and it works fast. You don’t have to wait for it. And their thought is, “I’ve put this piece together perfectly. Why would anyone ever want to take it apart?” So that’s the camp that really believes strongly in cyanoacrylate. It tends to be a more American-heavy camp. And the opposing view is the idea, borrowed from conservators, that all materials put on a specimen should be removable. The conservation-minded camp argues for adhesives that are based on solvents, so you can re-dissolve them if you wanted to take them off and actually remove the chemicals altogether. But those take a long time, because for them to adhere, the solvent has to evaporate. So you have to sit there and hold the bits of bone together in a precise location while it dries. And it’s not as strong as cyanoacrylate.
I learned about those two opposing views because I was a student preparator at the University of Chicago, and I was taught to use cyanoacrylate—probably because I was working on not very important bones, right? I was a student, I was learning. And so probably no one ever will take those bones apart. And then as you said, I volunteered at the Natural History Museum in London for a semester. And they were, like, appalled that I asked for cyanoacrylate, and they introduced me to this other family of adhesives which are solvent-based.
I found them really hard to work with because they were so different from cyanoacrylate, in terms of how you line up the joints and how you apply them. In college, I just constantly had glue on my fingers. My fingers were just permanently super glued. And so for that not to be a part of fossil preparation, I struggled to learn that. That made me wonder, how can fossils be prepared in such different ways—and be compared and studied in the same ways? And none of that preparation work is documented in scientific papers. It’s beginning to be documented in specimen records, but even that is not universal. So it just blew my mind that the work of making a fossil researchable could be so different in the United States versus the United Kingdom and other places. And yet, the fossils are considered basically the same kinds of data.
Lloyd: Yeah. Actually, that really segues really well into the preparator’s role in research. So as the preparators work in putting together fossils, what role is that playing in paleontological research?
Wylie: So you might think that scientists would prepare their own specimens. And this is true in paleoanthropology, because there are so few fossils of human ancestors that scientists do prepare them themselves. But in vertebrate paleontology, there are so many vertebrate fossils compared to hominins that the bulk of it is just too much. It would take scientists forever to prepare the specimens and study them. It’s not sustainable in that sense, so they needed a division of labor.
The interesting thing about vertebrate paleontology is that the division of labor is so strong. Very few vertebrate paleontologists know how to prepare a fossil; they automatically take it to a preparator. And the vast majority of preparators have no idea how to study a fossil. I guess they would know what species they’re working on, but they don’t know how to distinguish one species from another. They don’t see that as relevant to the work they’re doing of revealing that specimen. And so that’s the part that I find really interesting, that that division is so strong even though they’re working on the same specimen. The handoff between fossil preparation and fossil research is a clean break. No pun intended.
Lloyd: Yeah. That’s really interesting. How do preparators talk about what they do?
Wylie: They say that they are serving science. They take a very long-term view, and they see that as their responsibility, to take a long-term view of the benefit, or the protection of, the specimen itself. They consider themselves advocates for the fossils, sometimes against their bosses. Scientists are the ones who hire staff preparators and pay for them through their grants or through institutional funds. So technically, the preparators work for the scientists. But preparators would say that they work for science in general. So not just this scientist who needs the fossil tomorrow to write a paper, they’re in a big hurry to publish or perish. Whereas preparators would say, “No, I need more time to prepare this fossil well,” or “I need more time to prepare it in a way that is conservation friendly so that the fossil lasts for another generation and isn’t just useful for you tomorrow.” So in some ways, fossil preparators are like the mediators between the specimens and the scientists, even though they’re all arguably working towards the same goal, which is learning about the distant past.
Lloyd: I’m interested in the disagreements that occur. I think you mentioned this in the book, that occasionally scientists will not allow a preparator to look at a jacket or an unprepared fossil yet, or vice versa. The preparator will keep a specimen from the scientist until he or she is ready to hand it off. Do those disagreements, those kinds of conflicts, occur frequently, or is that a pretty rare occasion?
Wylie: That’s a good question. The disagreements arise when preparators do research and when researchers do preparation. So there’s very much a territorial sense. That’s when they get upset. For example, in one lab, there was a scientist who would sneak into the lab at lunchtime when nobody was there and work on fossils. And he found it relaxing and just described it as like, “I just needed something brainless to do,” which is pretty offensive to the preparators.
And so the preparators installed locks on the specimen drawers so that he couldn’t get the specimens out, because he was hurting them. He was causing damage and, mostly, he was insulting the preparators by invading their territory. And the reverse happened too, in the sense that I talked to preparators who wanted to do research on fossils, like to write a paper describing a new species or comparing species, and their bosses, the scientists, would say, “No, that’s not your job.”
Some preparators got pretty dressed down by their bosses, yelled at. Others got fired for doing too much research. That wasn’t considered part of their job. And so the disagreements are very unequal, in the sense that scientists have more power than preparators do. Preparators can’t fire a scientist for coming into the lab and breaking a specimen by trying to prepare it. But of course, scientists can fire a preparator. So yeah, both groups do a lot of work to distinguish themselves from each other, if you see what I mean. And those conflicts are one way in which they do it.
Lloyd: Yeah. And that brainless comment made by the researcher sort of hints at the larger power dynamics. But just to get into that, how does the scientist conceive of what the preparator is doing? How do they describe what the volunteers are up to?
Wylie: Yeah. The difference in language here is really interesting. Often, scientists will say that preparators are cleaning fossils. And cleaning’s pretty easy, right? Anybody could wipe the dust off a countertop. And preparators never say that they’re cleaning fossils. They say that they’re preparing fossils, or sometimes they say that they’re sculpting fossils because those micro-decisions of “what is rock” and “what is fossil” feel like they’re sculpting or they’re creating. And those are really different ways of talking about this work, in the sense of the scientists kind of dismissing it as merely technical or grunt work that anybody could do, whereas preparators are likening it to art, which is much higher status than technical work.
Lloyd: That really gives you a sense of the hierarchies in the lab. So just to describe that hierarchy with some specificity, am I right in thinking that volunteer preparators are the lowest status and then there’s staff preparators and then it’s the research scientist, the paleontologists themselves? Is that sort of the general order?
Wylie: Officially, yeah. So in a museum, in the list of job titles and salaries, yeah, that’s the order, and in terms of formal institutional power. In practice, it really depends on the context. For example, when fossils get broken—often by scientists who are trying to study them, and these fossils are super heavy and super fragile and they break under their own weight, so it might not even be a mishandling. But if a scientist breaks a fossil, it’s amazing how the power dynamic immediately shifts to the preparator. Then whatever the preparator says is going to happen. The preparator might say, “I’m just going to glue this for you right now and give it back and you can keep working on it,” or the preparator might say, “You mishandled this. You can’t study it anymore.” Which is amazing, right? Very different from the formal hierarchy of the scientists having power over the technicians. In that case, preparators have that much power because they are the only ones who know how to fix that broken fossil.
The other case in which preparators have power is over the volunteers. So the scientists usually say, “The volunteers are not my problem.” And so it falls to the preparators to train them, select them, manage them as a workforce. And actually, that’s an incredible amount of power for technicians. And especially for technicians who don’t have standard credentials or a shared degree. In that sense, I think that volunteers are a major source of empowerment for preparators. And it also means that the preparators are in charge in the lab. Deciding what preparation methods to use totally falls to the preparators, scientists have no say in that. Partly as a power thing—preparators would never listen to a scientist who said, “You must use this tool,” because it’s not their expertise—and partly as a knowledge thing, that scientists really don’t know which tool to use. They wouldn’t know what recommendation to make.
So in that sense, preparators have a lot of power within their domain of the lab, over the other workers, volunteers, over how they’re going to prepare those specimens. And so then the power that scientist have really comes down to funding and what specimens the preparators are working on. So the scientists say, “I really want to study this bone. I need it in six months to write this paper.” And then the preparators do whatever they think is best to achieve that scientist’s goals.
Lloyd: So do the staff preparators, the paid preparators, do they have similarly varied backgrounds to the volunteer preparators, or do they generally have more of a scientific background or a post-secondary degree in some sort of science?
Wylie: Yes, all of those things. Almost all preparators start as volunteers, which is interesting because the number of volunteers who become preparators is very small, percentage-wise. But almost all of the staff preparators begin as volunteers and get that early training and exposure. Some of them have PhDs in paleontology, some of them have PhDs in literature. Some of them have only a high school education. It’s a really wide variety.
Lloyd: Does that very general hierarchy apply throughout paleontology, or do different institutions or maybe different kinds of institutions, such as museum labs versus university research labs, are there differences there? Or is it generally pretty much the same?
Wylie: It’s pretty much the same. I studied 14 labs in three countries. About half were in museums and about half were in university labs, and it seemed pretty much the same. The major difference was the number of workers. Generally, museums have a larger staff of everybody, more scientists, more preparators, more volunteers, whereas university research labs might only have one or two preparators. And they’re generally doing more specialized work. A scientist might only study fossil lizards, and then that lab’s only going to work on fossil lizards. The university ones tend to be more specialized in that sense.
There’s a slight difference in responsibilities towards the public. In universities, preparators work with grad students. Not necessarily to teach them how to prepare, but more to prepare specimens for them. And then in museums, preparators are somewhat responsible for the mission of the museum, as are all the staff, which is outreach and education. So they do a lot of lab tours. Training the volunteers is a form of outreach. I think if you work in a research lab, a university lab, you probably do less outreach and a little more work with students than in a museum lab. But yeah, those are the main differences.
And also, there are a couple of labs in museums that are for demonstration only. They’re not specifically research labs, and those are really different from taking a lab and just turning the walls into windows, which is the basis of most of the glass-walled labs that I studied. But there are a couple where they just have a couple of tools lying around, a couple of junky fossils. And they prepare them to show groups of school children, for example, as a demonstration, rather than actually preparing the specimens to be studied.
Lloyd: Oh, okay. That’s interesting. They’re like ersatz labs that are only for showing kids how it works.
Wylie: Yeah. So that’s more of a traditional display as opposed to an actual workplace that you can watch.
Lloyd: But there are some museum labs that are sort of fishbowl glass-walled labs, where the preparators are actually doing research and the public can see them doing what they’re doing.
Wylie: Totally. And I would say that’s the majority.
Lloyd: Okay. How do preparators feel? Do they like being on display in that sense, or are they annoyed by the attention, or do they just not really even think about it and get used to having people looking over their shoulder?
Wylie: It depends. A lot of the volunteers really like it. They like to be seen as someone who gets to work in a science lab. So they’ll wave at kids through the windows, or sometimes they’ll go outside and chat with visitors and stop working and take a break and serve as a public face of the lab. But staff preparators generally think it’s a drag, right? They’re in this business because they want to work with fossils. I don’t think I’ve ever met an extroverted fossil preparator. They really prefer the sort of solitary, focused work and they do outreach, like working in the glass-walled lab, as kind of a chore, as a service, but not as their favorite thing, for sure. And almost all museums that have glass-walled labs also have backstage labs, behind-the-scenes labs, and that’s usually where staff work.
And then it’s often the volunteers who are out in the public-facing lab. Part of that is because staff preparators are working on more complicated and more important fossils, so it’s just easier to do that in a place that’s quiet and has ideal air filtration and all the noisy tools that aren’t allowed on the museum floor, for example. I heard a couple of stories from labs that had been designed so that visitors and preparators could talk, so visitors could ask a question while a preparator’s working. And almost every single lab then removed that feature, because it meant that the preparators were just answering questions all day long and not preparing fossils. And they found that infuriating. So yeah, lots of these labs have a telephone on the wall that doesn’t work anymore but used to connect into the lab.
Lloyd: It’s really fascinating that the most public-facing aspect of this research that occurs in museums would be enacted by these folks who are essentially members of the public themselves. They don’t necessarily have specialized scientific credentials, and they’re on a volunteer basis, and they’re the closest to the public. That just seems really interesting. Do they see themselves as sort of citizen scientists? That’s a very broad movement that comprises a lot of different sorts of research and people. But I wonder if they’re sort of enveloped in that broader movement to open up science a little bit more to the public.
Wylie: I think so. I don’t think they would identify as citizen scientists. They describe themselves as volunteers because most of them, like Keith, see themselves as distanced from the science. So they’re serving science, or they’re doing this work to help out scientists or to help out the museum, but they don’t see themselves as researchers. And most of them are like, “Why would I want to be a researcher?” They’re kind of dismissive of the very idea. And Keith would say things like, “I like working alongside people who are furthering the world of knowledge, and I’m just along for the ride.” And so there’s this sense of being science-adjacent that people like, rather than actually doing research.
Lloyd: That’s an interesting conception of their role. When the average, I don’t know if you studied this, when the average museum goer goes and sees this person working in a glass-walled lab, who do they think that person is? Do they think that that’s the paleontologist, or do they know what’s going on in the lab? Maybe this gets to why they were installing telephones that no longer work to ask them questions. Do you think the public has a sense of who these folks are and what they’re doing?
Wylie: No, they don’t. I thought a lot about what these labs are doing, because there are text panels around these labs and they say things like, “This is an air scribe, this is a microscope.” None of them, I never saw a sign that said, “These are volunteers. If you want to volunteer, take this flyer.” I never saw a recruitment form, I never saw any information about who these people were or what they were doing. So yes, I would stand outside the lab and eavesdrop on visitors to try to understand what they thought this was. And they would mostly say things like, “Look at the scientists” or, “Are those robots?”
Lloyd: Like at Disney World?
Wylie: Exactly, because that’s what you expect to see in a museum. You don’t expect to see people at work. And of course, fossil preparators don’t move very much. The movements they’re making are very small, so you can believe that it’s not a person, it’s a stuffed model or something. The conclusion I came to about the purpose of these labs is partly that they’re a scientific workplace. Volunteers are producing specimens that are going to be studied, in most cases. And the other function they serve I think is to show that a museum is a home of research. So we might think of museums as being a home of just dead stuff and finished facts written on these authoritative text panels. But actually, they’re housing a lot of research, and this is one way to show visitors this is a research lab. Research is a process, research is work, research is done by ordinary looking people wearing jeans and drinking coffee and chatting.
So it’s a pretty different portrayal of science from the rest of a typical natural history museum. And the coolest part about it, I think, is that again, the text panels don’t really explain what the preparators are doing. They’re usually about the specimens or the tools, not about the people. And so I think that creates an opportunity for visitors to actually practice skills of scientific meaning making. You’re making observations, you’re trying to make sense of what you’re seeing. You’re asking yourself questions: “What are they doing? Who are these people?” And then you’re drawing conclusions. And that’s what scientists do. I think that’s what museums want to be teaching the public: how to think like a scientist.
In that sense, these labs are very good for that because people don’t understand what’s going on. And the funny thing is that sometimes their conclusions are not what the preparators or the scientists would intend for their conclusions to be. I heard one woman approach one of these labs with a little kid and she said to the little kid, with great excitement, “Look, people making fossils.” So no scientist, no preparator would ever say they’re making a fossil. But yeah, you can understand why she got that idea, right? There’s plaster everywhere, there’s tools all over the place. You could totally understand why she would think that. And that’s drawing an evidence-based conclusion.
Lloyd: I did not realize that about the signs around the glass-walled labs, that they just don’t even mention who’s in there. It’s just maybe the tools that they’re using. But that does get to one of the things that you talk about where the preparators get very little, if any, credit for their work anywhere. And one of the things you talk about is potentially doing, in order to give credit, potentially provide some authorship to papers or to research papers that the scientists have written or, even just acknowledging them in the methods section. And I was wondering, is that a moral stance, or would that have some effect on the research itself or the products of research?
Wylie: Yeah, that’s a great question. So I started this project from a Marxist perspective where I’m like, I’m going to go empower the proletariat, these oppressed workers who get no credit. And I very quickly abandoned that perspective because I realized how much power preparators actually have. They control the volunteer workforce. In effect, they control the space of the lab and the work that happens there and the decisions that go into each fossil. They choose their tools, they choose their materials. And so I started to think that actually being missing from scientific papers provides that space for preparators to have autonomy over their work and their workforce. And there’s some evidence that as work becomes documented, surveillance increases.
The classic example is nursing. Fifty years ago, nurses pretty much did their own work because they were trusted as experts and professionals. And then as more and more documentation became common in the medical workplace, nurses lost some of that autonomy. So instead of saying, “I checked on the patient,” you had to document how you checked and what measurements you took, and so nursing lost that space for creative problem solving and judgment as it became more documented. I worry that adding preparators to papers might increase scientists’ involvement in fossil prep decisions, which actually would be bad news for the preparators, because that’s their main area of power. So I’m not sure that authorship is the right answer.
What I do think should be transparent is preparation methods. And preparators agree with me on this, and they really push each other to improve their documentation practices because it’s not a typical, traditional part of their work. For example, certain glues will screw up geochemical tests. So if you try to carbon date a fossil that has cyanoacrylate on it, it’s not going to work.
So that’s important to know for a scientist in 50 years who wants to date a particular specimen, and they have no idea what glue is in there—that’s going to impact what tests they can do. Keeping track of those kinds of materials and also who prepared it, because preparator skills are really different, and their decisions are really different, I think would be an awesome contribution to science, as part of the metadata of that specimen. And it would serve as a form of recognition, right? So if it’s in an institutional database of each specimen that includes the name of the preparator and all the materials they used and when it was prepared, that would make the preparator’s work look more legitimate, I think, more respectable, more scientific in a sense. But it would protect them from the surveillance that might come from being part of scientific papers.
Lloyd: Yeah. That’s really fascinating. I didn’t know that that would be a concern. It makes sense. And actually, that nursing example is helpful to extrapolate a little bit beyond the focus of your research: Are there positions comparable to fossil preparators in other fields? Nursing wouldn’t be one of them, they’re very specialized with a great deal of education. But I’m just thinking of other fields that may have people who come in on a volunteer basis, or maybe don’t have a scientific background, and do similarly very critical work for the research.
Wylie: Yeah. I’ve been thinking about this a lot. I would argue that the skill-based nature of research is ubiquitous. Even scientists have embodied skills of doing experiments, for example, that they don’t really describe. Those skills get written out of papers, so the rest of us outsiders wouldn’t know about them. So the dependence of science on skill, I think, is ubiquitous. But the lack of credentials is really unusual in science. Lots of sciences have a history of lots of amateur participation—think of anything from natural history, botany, mycology, people who collect fungus as a hobby or people who study astronomy as a hobby. Those are very long histories of public involvement, but most of those fields have now specialized or credentialed those positions. And so now, if you like to look at the stars in your backyard, you’re not going to be considered a contributor to science; that’s your hobby.
I think preparators are not amateurs, because they’re part of an institution. They’re working in a museum, they’re working in a research lab. Even if they’re volunteers, I would say they’re not amateurs, because they’re not doing it on their own. I guess I would love for other scholars to tell me whether this position for people with a wider variety of backgrounds exists in other fields, because I suspect that it does.
And one way which I know it does is with undergraduates. We all think that undergraduate research experience is a good thing for students. I’ve done a lot of research on undergraduate engineers, and I’m finding that it’s actually an excellent thing for the labs that these undergraduates work in, because undergrads bring this very interdisciplinary mindset that the grad students and the professors don’t tend to have, because they’re so much more specialized, right? They’ve had so much more education in engineering than the undergrads. So in that sense, the undergrads are kind of playing the role of preparators in the sense that they’re bringing in outside information, they’re having a different approach to problems that the professors think about in a very specific way.
Lloyd: So what would that look like, if there was a bigger focus on skills, maybe, rather than credentials, at different levels of the scientific enterprise, if you had to guess?
Wylie: I know, right. I’m a professor in an engineering school, so I hate to make this argument, but if we broaden paths to doing scientific work, that can only be a good thing. So I’m not arguing against STEM education, but education in science and engineering has a long history of discrimination and exclusion. I hope that we all will someday overcome that. That day is not today, it’s an ongoing process of making science and engineering education available to anybody. And so in the meantime, yeah, I think it would be awesome for science to include more kinds of people as volunteers, as technicians, as people watching from the outside—even that is a way of extending science beyond the lab. And I think this is good for people to participate in science, to learn that it’s not as elite and exclusive as it might seem, because that spreads scientific literacy. It spreads a sense of appreciation for science. It makes public trust in science stronger if people understand that science is just work done by people. It’s not magic.
And the other crucial thing it would do, I think, if we had a more diverse workforce in science, would be to bring ideas to scientists that are different. So to expose scientists to people who have backgrounds very different from theirs, which will bring in new skills that science, at the moment, doesn’t have or new ideas or new ways of understanding things that will improve the science for all of us. And crucially, watching the scientists chat with the preparators and chat with the volunteers, there’s a lot of knowledge exchange that happens just by having people around, hanging out together in the same space, talking about the same bone, people share a lot of knowledge.
And I think that that information sharing can help scientists learn to ask more relevant research questions. So for example, how can they use fossils to study how species adapt to climate change? Something that is crucial to our world now. How can they use fossils to study how environments change over time or change in response to rapid flooding, natural disasters, widespread wildfires, things that we’re experiencing that paleontology actually has enormous insights to offer? But I’m not sure those are the questions that scientists would come up with on their own. I think they need our help.
Lloyd: That’s a fantastic message for inspiring people to get more involved. So thank you for joining us for this episode of The Ongoing Transformation and thank you to our guest, Caitlin Wylie, for talking to us about the work fossil preparators do behind the scenes. Check out the show notes to find links to her Issues article, “What Fossil Preparators Can Teach Us About More Inclusive Science,” and to her book, Preparing Dinosaurs: The Work Behind the Scenes.
Please email us at [email protected] with any comments or suggestions. And if you enjoy conversations like this one, you should visit us at issues.org for many more discussions and articles. And I encourage you to subscribe to our print magazine, which is filled with incredible art, poetry, interviews, and in-depth articles. I’m Jason Lloyd, managing editor of Issues in Science and Technology. Thank you for joining us.
A New Compact for S&T Policy
Since I came to Congress in 1993, increasing diversity in science and technology has been a driving focus of mine. I know from experience that talent is everywhere and that far too often students from underserved communities are left behind. Unfortunately, while I and many passionate leaders such as Alondra Nelson, the deputy director for science and society in the White House Office of Science and Technology Policy, have spent our careers working to advance diversity, equity, and inclusion in science, technology, engineering, and medicine—the STEM fields—there is still so much more to be done. Nelson ably presented some of the challenges in her recent Issues interview (Fall 2021). Through my leadership of the House Committee on Science, Space, and Technology, I have listened to Nelson and numerous other experts and have reframed the problem, and the suite of solutions available to us.
Inclusive innovation is not just about representation. It is not just about creating new opportunities and breaking down barriers for historically marginalized groups to enter and remain in STEM fields, although that is a necessary step. To promote STEM diversity and equity, I developed the STEM Opportunities Act, the MSI STEM Achievement Act, and the Combatting Sexual Harassment in STEM Act. My committee also developed the Rural STEM Education Act and the Regional Innovation Act to address the geographic diversity of innovation.
I know from experience that talent is everywhere and that far too often students from underserved communities are left behind.
But diversity alone will not catalyze the paradigm shift we need to see. We need to rethink, at the highest levels, how we prioritize our investments in science and technology. To date, national security and economic competitiveness have dominated the discussion. This focus has served the nation well in many ways, but it has failed to address many of the challenges Americans are facing in their lives. We are faced with a web of complex and interconnected societal challenges ripe for innovative solutions—access to safe drinking water, gaping economic inequality, misinformation, addiction and mental health crises, climate change, and the list goes on. For too many Americans, science and technology is an abstraction that has no bearing on their daily lives. I echo Alondra Nelson’s call for increased transparency and accountability in US science and technology policy. And I commend President Biden for establishing the Science and Society Division at the Office of Science and Technology Policy.
Last year, led by my committee, Congress enacted legislation to establish a National Artificial Intelligence Initiative that has trustworthiness, transparency, equity, fairness, and diversity as core principles. I will make full use of my final year as a member of Congress and Chairwoman of the Science Committee to advance the congressional conversation around inclusive innovation. Already, I have proposed that the new Technology, Innovation, and Partnerships Directorate at the National Science Foundation be focused not only on competing with China, but on addressing the full breadth of challenges we face. Moreover, the legislation I introduced pushes NSF to take a much more expansive view of who gets to have input to the research agenda. We cannot let China set our agenda. We lead only by being the best possible version of ourselves. I believe we should steer our science and technology policy toward that goal and that, in doing so, we will strengthen this country, and its innovative capacity, from the inside out.
Eddie Bernice Johnson
Member, US House of Representatives (D-TX)
Chairwoman, House Committee on Science, Space, and Technology
In her interview, Alondra Nelson lays out her vision for what it means to bring social science knowledge to the work the White House Office of Science and Technology Policy will undertake. The creation of its new Science and Society Division and the selection of Nelson to lead it are exceptionally welcome initiatives of the Biden-Harris administration’s agenda. As a renowned expert with deep knowledge about the links among science and technology, social inequities, and access inequalities, Nelson is ideally situated to bring social science to this policy table. Reflecting sociologists’ value commitments, her vision is anchored in a serious concern for justice, access, inclusion, and transparency.
I would like to highlight two points from her interview, as neither seems to have been prioritized in previous initiatives of science and technology policy. First and foremost is her vision for inclusivity, equality, and justice. This vision incorporates efforts to embrace all who are interested in studying and then working in technology, regardless of socioeconomic, racial, or any other form of inequality, including, I would like to imagine, immigration background. Her broad tent for inclusivity also seeks to incorporate in technology policy the diverse approaches, thinking, innovation, and creativity that those from different social backgrounds may bring to solving a problem or creating policy. In this thoroughly globalized world in which we are increasingly aware of the harms of exclusion, this is perhaps key to progress but also to a more just society that broadly fosters equality and inclusion. Nelson’s vision goes to the core of what is needed to confront the challenges of this moment in history. I see it encapsulated in her description of what she would like science, technology, engineering, and mathematics—the STEM fields—to look like: “to look like all of us, that reflects all of us, in the classroom and in the boardroom.”
Nelson’s vision goes to the core of what is needed to confront the challenges of this moment in history.
Second, I want to remark on another aspect of her broad vision for inclusivity, and that is to incorporate social science knowledge as key to technological innovation and attend to the effects of technological advancements. Social science can shed light on the tensions in society that Nelson mentions, and how to reconcile them to create more equitable conditions to expand opportunities for all. Social science research is also equipped to contribute knowledge on how organizations and institutions work; it can provide critical research on organizational culture and on how team members’ social characteristics shape organizational hierarchies, which often determine the success of a project and ultimately better policy solutions. It can also help to illuminate the social effects of new technologies and how they may reconfigure human interaction.
We in the social science fields look with excitement to the many possibilities for science and technology to progress in equitable, just, and inclusive fashion with Alondra Nelson in the lead. With a sociologist at the helm of this new top-level division, we trust that our value commitments as sociologists will be reflected in progressive, transparent, and just policy for all.
Cecilia Menjívar
Dorothy L. Meier Chair in Social Equities
Department of Sociology
University of California, Los Angeles
President, American Sociological Association, 2021–2022
Alondra Nelson highlights the importance of science, and social science in particular, for developing effective interventions across all policy domains. We applaud the Biden administration for elevating the Office of Science and Technology Policy to cabinet level and for bringing Nelson’s expertise as a social scientist into the upper echelon of its leadership. As Nelson noted, science and technology policy in the United States has not historically incorporated all voices or responded well to the needs of all Americans. We share her assertion that community partnership is fundamental for moving the nation forward in a more inclusive and equitable way.
From our point of view as sociologists, Nelson’s focus on involving communities in the policymaking process is an important step toward achieving racial and social justice. Such a focus goes beyond simply informing communities about policy initiatives or getting feedback from community members after implementation. Rather, policymakers should seek to understand the needs of communities from community members and engage communities directly in creating and articulating the kinds of interventions that can most effectively address those needs.
There is a long tradition of community-based sociological scholarship, but it has often been marginalized. Our sense, as supported by Nelson’s comments, is that such work is increasingly central not only within our discipline but within the academy more broadly. The American Sociological Association (ASA) has sought to elevate this work, including running a longstanding funding program for research collaborations between sociologists and community partners, and the Winter 2022 issue of our online magazine, Footnotes, is devoted to community-focused research. Universities, including Syracuse University, the University of Minnesota, and the University of Wisconsin, have begun to prioritize and reward community-focused research in recommendations for tenure and promotion.
Policymakers should seek to understand the needs of communities from community members and engage communities directly in creating and articulating the kinds of interventions that can most effectively address those needs.
Also important for generating more equitable and inclusive policy is the training of the next generation of community-focused scholars. Students of color often enter graduate school with aspirations of studying issues that affect the communities from which they originate. The ASA is committed to supporting graduate students of color in their research endeavors through the Minority Fellowship Program, a predoctoral fellowship initiative that has funded more than 450 fellows across almost 50 years. Programs such as this can play an important role in diversifying the scientific workforce and serve to bring scholars and communities into the policymaking process who have often been excluded.
Our hope is that institutional support—from scholarly societies, colleges and universities, the top levels of government, and beyond—will indeed move the scientific enterprise as a whole toward incorporating true understanding of and consideration for all populations into the policymaking process. Such a shift would be entirely consistent with what sociologists have known for a long time and Alondra Nelson has illuminated: humans are at the center of all science. Failing to incorporate the full range of human voices into policy development is not an option if we seek a truly democratic nation.
Nancy Kidd
Executive Director
Heather M. Washington
Director of Diversity, Equity, and Inclusion
American Sociological Association
Bridging Divides Through Science Diplomacy
The COVID-19 pandemic has presented the international community with a series of unprecedented scientific, social, and public policy challenges. Particularly in the early days of the pandemic, the world experienced a shift toward geopolitical tribalism exemplified by nationalistic quests for personal protective equipment, testing supplies, and therapies. Rhetoric focused on “self-reliance” cast a shadow beyond the political and into the scientific, further magnifying perceptions that science is a competitive rather than a collaborative endeavor and increasing concerns that such actions may be encouraging a retreat into research secrecy.
Nowhere has the retreat from international cooperation been more drastic and consequential than between the governments of China and the United States, where increasingly antagonistic dialogue has exacerbated existing tensions between the two countries. If continued, the growing geopolitical conflict between the United States and China and declining faith in multilateralism could dominate a post-COVID world.
We argue that it is critical to foster international cooperation in the face of global crises. Early-career researchers (ECRs) like us are in a unique position to create new and lasting ties among scientists, with implications for improved international relations and the progress of science more broadly. However, helping ECRs develop the necessary skills requires investment by both research institutions and governments.
COVID-19 has highlighted the importance of international cooperation in confronting global threats, as seen in the role that publications and knowledge exchanges played in quickly characterizing the virus. Collaboration appears to be an important way forward as the world looks to make meaningful progress in tackling both the pandemic and other pressing issues, such as climate change.
Early-career researchers are in a unique position to create new and lasting ties among scientists, with implications for improved international relations and the progress of science more broadly.
In mapping out his own experiences with “science for diplomacy,” President Obama’s science adviser John Holdren wrote that international science and technology (S&T) collaboration “foments personal relationships of mutual respect and trust across international boundaries that can bring unexpected dividends when the scientists and engineers involved end up in positions to play active roles in international diplomacy around issues with significant S&T content—e.g., climate change, nuclear arms control, and intellectual property.”
We define ECRs to include undergraduate and graduate students as well as those still in the early stages of their careers, in any sector. Precisely because they are early in their careers, ECRs are uniquely positioned to create bonds with foreign researchers now that can mature and strengthen over the coming decades.
ECRs are also key to collaborative efforts in low-risk research areas, which are nonpolitical and concern only basic scientific questions of mutual interest. Valerie Karplus, Granger Morgan, and David Victor noted in Issues that these “safe zones” could include research necessary to address climate change, such as advanced battery chemistry or carbon capture and sequestration, among other topics. These areas are unlikely to have immediate commercial or military applications and thus are fertile ground for developing international cooperative partnerships.
Looking back to another time of heightened geopolitical tensions—the Cold War—reveals that scientific cooperation at the level of individual laboratories, or through the exchange of students and scholars, was a popular and effective way of carrying out international cooperation. In the case of the United States and the Soviet Union, interpersonal relationships between scientists proved beneficial as the countries sought to cooperate on discrete space-related activities. Acknowledging the caveat that the political circumstances of the two periods are not identical, this type of approach, focusing on particular projects and individual relationships, could be used as a model to facilitate communication between China and the United States.
We also believe that any framework for scientific cooperation between the United States and other countries should center the role of ECRs. Areas of mutual interest present a prime opportunity for extending international collaboration beyond individual scientists to the level of research institutions and government agencies. Cooperation in such areas could be politically feasible despite geopolitical tensions: the 1985 Cold War-era agreement between the United States and the Soviet Union to jointly develop the International Thermonuclear Experimental Reactor (ITER), an international nuclear fusion facility, is an illustrative example. The ITER project continues today with an expanded coalition of international partners aiming to develop nuclear fusion as a sustainable energy source. A more contemporary example is US-Russia collaboration on spaceflight programs. Although space cooperation between the United States and China is less likely in the face of political tensions, expanding cooperation in health security could present a more feasible opportunity to warm relations.
Areas of mutual interest present a prime opportunity for extending international collaboration beyond individual scientists to the level of research institutions and government agencies.
By actively contributing to these projects, ECRs can play crucial roles in developing research agendas as well as in building relationships with individual researchers. The interpersonal relationships that develop among ECRs over the course of cross-border collaborations could prove instrumental as these scientists rise through the professional ranks in diplomatic or research arenas. In his op-ed, Holdren credited the relationship he developed with the Soviet scientist Evgeny Velikhov during US-Soviet collaboration in the field of nuclear fusion with the success of the bilateral commission on the disposal of excess plutonium in the post-Soviet era.
Through this process of long-term relationship building, scientific cooperation at the level of individual scientists could play a central role in building trust between countries. Over time, countries involved in individual-level collaborations may become more amenable to broader collaborative efforts, even in the field of commercial technologies.
As an example of how early-career personal relationships can lead to cross-institutional and even cross-national trust, as well as far-reaching research progress, consider the relationship between Mark Levine and Zhou Dadi. Levine, director of the US-China Clean Energy Research Center (CERC) at the Department of Energy’s Lawrence Berkeley National Laboratory (LBNL), and Zhou, then an ECR in energy efficiency, began working closely in 1988 at the start of the LBNL initiative to support international clean energy research, development, and deployment. Twenty years later, by the time momentum was growing for a US-China agreement to address climate change, Zhou had become an advisor on energy issues to premier Wen Jiabao and director of the China Energy Research Institute.
Scientific cooperation at the level of individual scientists could play a central role in building trust between countries.
Zhou and Levine’s relationship created a foundation for progress on climate-related research. It also fostered mutual trust in intellectual property protections, easing the way for more expansive agreements in the future. For instance, CERC developed an intellectual property protection plan that “may ultimately play an important role in building trust among the consortia participants, which could lead to even more constructive collaborations in the future, and serve as a model for future bilateral cooperation agreements,” according to a 2014 examination of the program. Thus, Zhou and Levine’s ECR relationship provided a powerful connection between the two countries that grew to support meaningful progress on the broader issue of climate change.
Although ECRs could be highly effective in making meaningful progress on a range of S&T issues, the lack of awareness about science diplomacy career pathways and the dearth of training opportunities have inhibited their ability to participate in this arena. Thus far, science diplomacy has been largely taught to ECRs through extracurricular courses and workshops or within general science policy programs (see Table 1). But there is a clear case for increasing support for ECRs to receive science diplomacy training.
Universities and research institutions can play a crucial role by creating new science diplomacy courses or certificate programs and ensuring that students and scholars have opportunities to pursue work experience in science diplomacy and other policy-related fields. Workshops and seminars involving professionals working in these fields could help expose ECRs to the various available avenues and provide potential mentors.
The lack of awareness about science diplomacy career pathways and the dearth of training opportunities have inhibited ECRs’ ability to participate in this arena.
We argue that science diplomacy should be taught as an elective course and included in career development discussions. One possibility is to build on existing virtual courses, such as those offered by S4D4C and the DiploFoundation, among others.
Informal communities and networks can also be valuable resources for researchers interested in learning more about science diplomacy, providing a platform for networking and opportunities for engagement. The National Science Policy Network (NSPN), where several of the authors met to collaborate on this article, is one such community. That community’s informal environment facilitates open dialogue and discussion of innovative solutions to confront global challenges. NSPN’s Science Diplomacy Exchange and Learning program (SciDEAL), which completed its inaugural year, facilitates collaborative work between ECRs and science diplomacy institutions, including nonprofit organizations, embassies, and consulates.
The move into a post-COVID world requires all hands on deck to build the international collaboration that will help science most effectively address pressing global issues. With additional training opportunities and mentorship, ECRs can play an even greater role in building trust between countries—a fact illustrated through recent historical examples. ECRs, including us, are looking to gain experience in cross-border cooperative projects now so that as we move along our career trajectories in academic and science policy spaces, we can help shape a policy environment that promotes science for diplomacy.
Table 1. Selected opportunities for early-career researchers to train in science diplomacy.
Organization: The American Association for the Advancement of Science (AAAS), Washington, DC, USA, and The World Academy of Sciences, Trieste, Italy.
Courses: One course exposes participants to key contemporary international policy issues relating to science, technology, environment, and health. A second, one-hour course, hosted by the AAAS Center for Science Diplomacy, covers the basic definitions and frameworks of science diplomacy as well as its evolution in history, using several case studies.

Organization: The Barcelona Science and Technology Diplomacy Hub (SciTech DiploHub) and Institut Barcelona d’Estudis Internacionals (IBEI).
Course: The summer school offers an intensive course of more than 40 hours covering the most pressing issues in science and technology diplomacy, such as sustainable development and technology diplomacy, with a special focus on Europe, the Mediterranean, and the role of global cities.

Organization: European Academy of Diplomacy and InsSciDE (Inventing a Shared Science Diplomacy for Europe).
Courses: The Warsaw Science Diplomacy School allows young diplomats and scientists from across Europe to build diplomatic skills and create a new network of science diplomats. The European Science Diplomacy Online Course introduces participants to science diplomacy, including its conceptual framing and the variety of stakeholders and networks involved.

Organization: National Science Policy Network’s Science Diplomacy Exchange and Learning (SciDEAL) Program.
Course: This new program provides ECRs with opportunities to pursue project-based collaborations between early-career scientists and science diplomacy institutions, including nonprofit organizations, embassies, and consulates. Participants create tangible outputs while also learning about science diplomacy and cooperation.

Organization: The Institute of International Relations (IRI-USP) and the Institute of Advanced Studies (IEA-USP).
Course: The São Paulo School of Advanced Science on Science Diplomacy and Innovation Diplomacy (InnSciD SP) organizes an annual summer school introducing participants to multidisciplinary aspects of science diplomacy and innovation.
In the early days of the pandemic, communities began singing together over balconies, banging pans, and engaging in other forms of collective support, release, and creativity. Artists have also been creatively responding to this global event. In this episode, we explore how artists help us deal with a crisis such as COVID-19 by documenting, preserving, and helping us process our experiences. Over the course of 2020, San Francisco artist James Gouldthorpe created a visual journal starting at the very onset of the pandemic to record its personal, societal, and historical impacts. We spoke with Gouldthorpe and Dominic Montagu, a professor of epidemiology and biostatistics at the University of California, San Francisco.
Recommended reading
See a selection of James Gouldthorpe’s artwork from the COVID Artifacts series.
Transcript
Host: Hello and welcome to The Ongoing Transformation, a podcast from Issues in Science and Technology, a quarterly journal published by the National Academies of Sciences, Engineering, and Medicine, and Arizona State University. You can find us at issues.org.
J. D. Talasek: Hi everyone. I’m J. D. Talasek, and I’m the director of Cultural Programs at the National Academy of Sciences. Welcome to The Ongoing Transformation podcast. For over 10 years, my colleague Alana Quinn and I have had the privilege of working with the journal, Issues in Science and Technology. We get to suggest artists to feature in the magazine, and it has been a real joy to do so. We believe that not only do artists have a unique perspective, they also have a unique way of communicating that perspective.
For this episode, I’m joined by one of these artists, James Gouldthorpe, who is based in the San Francisco area. We’re also joined in discussion by Dominic Montagu, who is a professor of epidemiology and biostatistics at the University of California, San Francisco.
James, Dominic, welcome. We’re glad you’re here.
Gouldthorpe: Hello.
Montagu: Pleasure to be here.
Talasek: So I’d like to just start by asking you how you met. It sounds like the start of a bad joke: an artist and a scientist walk into a bar. So why don’t you tell us what the real story is? How did you guys meet?
Gouldthorpe: You want me to go, Dominic? It’s actually all about parenting. Our sons, who are both now in their mid-twenties, met in middle school, and they and some other boys formed this really tight group of delinquents that have remained friends for many years now. And through them, we got to know each other as parents. Dominic’s home became the sanctuary for all these boys as they roamed the streets. So, we always knew where they were when it came time to track them down.
Talasek: Well, it just reminds me—we talk about cross-disciplinary discussions and the way that different disciplines interact. What you just said reminds us that it’s because we’re all human and that we have other ways of connecting through just our systems of knowledge.
James, we reached out to you because of a body of work that you’ve done called COVID Artifacts. And I wonder if you could tell us about that project—how it started, maybe just describe it for us, as well as how you view it now, after a year or so?
Gouldthorpe: Like a lot of people at the beginning of the pandemic, there was a certain level of panic. We had been sent home. I actually work at SF MoMA, and we had been sent home with the idea that we were going to check in in two weeks and all come back to work at that point. And as we all know, it didn’t happen that way. So at home I was panicking, kind of spinning out, and I just retreated to my studio to start working. I don’t know if you remember back at the initial start of the pandemic, there was this video that went around of this nurse showing you how to disinfect your groceries. He took each one out, wiped it down. It was a bit excessive, but back then we didn’t know.
And I remember my wife and I did our first trip to the grocery store, a little local grocery store. And we came back and spent over an hour wiping down every item. It had started to occur to me that the things we had taken for granted, our regular daily items, had suddenly become this vector for death. We had no idea how dangerous these things were now. Suddenly, a bag of potato chips could kill you.
I got the urge to represent that somehow, so I sat down and I painted a bag of groceries, which now was weaponized. It was this terrifying thing that was part of our daily lives, but it had this feeling of danger around it. And I discovered that doing that, just staying in my studio, kept me from spinning out. It started to really help my mental health. So I began reviewing the daily news feeds, which got brutal. I mean, there are people who chose to look away from the news feeds. I did a deep dive and then every day I would try to find something new to represent in a painting.
Talasek: Dominic, I’m wondering if you can remember the first time that you saw this work that James was doing and what your response was initially to it?
Montagu: I didn’t see any of the paintings until I went to see the show at SF MoMA. And then it was just extraordinary because at UCSF, we realized, I think, in January that something rather dramatic was going on in Asia, and the university started having weekly updates tracking COVID-19. And the first cases—do you remember the boat that came into Oakland and they were identifying positive cases and sending them on airplanes to North Dakota? And Trump was saying, “It’s okay, there’s only 13 cases. So we think we’re fine. Look at Asia. China has 50 cases.” And I spent a year looking at the infection numbers, from when it was single digits in the US, and forecasting how bad it was going to get and worrying about that.
It was a really stressful year for all of the reasons that James said, as well. And I forgot all of the daily events. And I forgot the individuals. I forgot what it was like when that boat got towed through San Francisco Bay and those first few weeks, when we worried about groceries. The friends that I had who went to New York to support the doctors and nurses when New York seemed like it was overwhelmed and that was going to be the end of the world. And each of those episodes got replaced by a new trauma or by avoiding those traumas by focusing on the infection numbers and the statistics, or the mechanisms of infection, what we were learning. Is it aerated? Is it aerosol? Is it just droplets on objects that don’t absorb liquids? Do we only have to worry about droplets on metal? How well do we have to do all of this? Each new worry meant you had something to focus on that was pragmatic and you could control it a bit by understanding it and everything else got forgotten.
And so this was an amazing thing, to look at all of James’s paintings and have it all come just rushing back—both the human impacts that were so vivid in the moment and just returned, or even the impacts on all of us, remembering what it was like to get the first bag of groceries and “how worried should we be?” I remember hearing about people in Italy using bleach to wipe down every apple that they got and thinking, “We’re not doing that. Should we?” And yeah, it was incredibly impactful.
Talasek: Your account of that is almost exactly like mine. It was that I had forgotten that this happened. I had forgotten that we had experienced that. And I’m wondering also, Dominic, how does that feed back into your work and into your research? How does that inform a scientist, to have that sort of moment of reflection?
Montagu: A lot of epidemiology, a lot of biostatistics, is not thinking about individuals. We look at aggregate, we look at infection rates, at mortality rates per hundred thousand. And 600,000 people—we’re close to that [number of] deaths from COVID-19 in the US—I immediately want to think, “Well, but it’d be hundreds of thousands of people who would’ve died from other diseases if we didn’t have COVID.” I contextualize everything in abstracts. And so it’s quite powerful to have a collection of images that breaks you out of that and forces you to constantly think about the human context, the human importance that is behind all of the numbers. That, I think, matters—it’s why the numbers matter rather than the inverse. It’s not because there’s lots of people that we get excited by statistics. It’s the other way around. The statistics only have value because they represent people. And if you forget that, you lose an enormous amount. You’re doing things for the wrong reasons. So, it’s been very important to me.
Gouldthorpe: When I look back, particularly at the early works, they become these icons of human behavior in the face of near-apocalyptic events. And you see what becomes the focus. Suddenly we have a shortage of toilet paper, which never fully made sense to me. It was this sort of irrational response. And then as events went on, there was a period where I was like, “How am I going to keep painting these objects?”
But suddenly, society, social norms started to unravel. When the George Floyd murder happened, there was this explosion of protests and the exposing of just how deep [systemic] racism is. And then events just began to accelerate. Some people say to me that this project was a great idea, but it wasn’t really an idea. It was a reaction. I was just like, I want to stay ahead of this. I want to note how we behave, how we’re responding to this, and what layers are being exposed as we move along.
It’s interesting in retrospect, because even I forgot what some of the images were about. Things went by so fast. I was clicking and I was like, “I don’t know what that is, but it’s tragic.” I don’t want to forget, but then at the same time, it was so accelerated that, as a painter, I had a hard time maintaining the momentum, because there was so much happening.
And it keeps shifting. Out here in California, we ended up with the wildfires, and we had this apocalyptic sky that was very Blade Runner-esque that went on for a day. And then it seemed like the events grew larger and larger in their consequences as it went along, and it all seemed to stem from the pandemic. The pandemic seemed to be the foundation for this unraveling of society, I guess, is a very dramatic way to say it.
Talasek: I’ve heard you talk about your work, James, in terms of storytelling and in terms of narrative. And certainly what you’re describing here exactly fits into that larger impetus of your work. And I think that it also ties in with what Dominic was talking about. The work that he does in the lab is statistics and you’re dealing with numbers, but then the power of the narrative, such as is represented in your work, to humanize that and to connect that very necessary study of the numbers with what the numbers mean in our real lives.
I’m wondering, Dominic, in your work as a scientist, how does storytelling manifest itself for you? Once you crunch the numbers, so to speak, at what point is a narrative, like what James is creating, helpful?
Montagu: It becomes very important for communicating the visceral information that’s behind statistical reality, but it always works in the opposite direction of what James’s paintings have done. At least, as a scientist, you do the analysis, you look at the data, and then you identify stories that illustrate the data rather than being outliers to the data. You might have a great story, but it turns out it’s the one in a thousand where the person, they survived against all odds. Or they died, but not from the disease that you’re looking at; they got hit by a truck. And so it might be a great story, but you wouldn’t choose that because you’re choosing stories that illustrate data.
I think what James has done and why this resonated so strongly for me was it’s completely the opposite. It’s a collection of 365 and counting items of information, each of which is incredibly powerful. And the story is built from the collection of all of them. You don’t look at averages. You can see a shared narrative. In many ways, COVID turned us all sitting at home into observers of the world, much more so than we had been before. Before, we participated in real life more than we somehow did during that year.
And so, what you get is a story that is more like a reflection of real life, where it’s many different things which all built up to a collective influence. And I think you see that, and that doesn’t come out in data. Nobody analyzes data to produce that story. So, it’s been really interesting for me to try and think about the relative position, the relative utility of those different ways of approaching the creation of a narrative to reflect back something that’s happened.
I think that the paintings are really useful because they show a really complicated narrative and experience. I assume, James, that for any one person, 60% or 70% of the paintings will resonate with them. And the other ones—one of the paintings that I love the most is this enormous crowd of people on the Golden Gate Bridge. I don’t remember that. I never saw that. I really like it, but it doesn’t viscerally hit me. And yet there’s so much overlap between what you experienced and what I or anyone else experienced, that we build a bond there.
And the bond is much more interesting because it’s an imperfect overlap. If it was just the average experience, if it was just the statistically calculated median, it’d be much duller. It would reflect what all of us share, which is probably Trump and doctors in New York and three or four other things, but it wouldn’t be as nuanced and it wouldn’t be as powerful. What we didn’t both see is as interesting in this story as the things that we both saw on TV or in the newspaper.
Talasek: That’s an amazing description of what this is. And James, I’d like to get your response to what Dominic just said.
Montagu: Come on James, I want to hear you say, “I disagree completely.”
Gouldthorpe: I’m leaving right now [laughs]. One of the benefits of working at SF MoMA and having this exhibition at SF MoMA is I can go into the galleries and sit in the corner and basically loiter. I’m the doughy middle-aged guy in the corner that’s a little creepy. But I get to watch people as they review the year. The work that’s up is not the entire year. It goes basically from the start of the pandemic until just post-election. There’s a lot of other work that’s not on the wall. And I can watch the recognition go across people’s faces. And something that I’ve been trying to do with my work over the past years is to create a communal event, in a way, when you come and visit my work, that you linger and you read it like a book or a painting.
And I’m pretty excited to see that people come to it and they’re all pointing at different paintings and sharing a story about it, sharing that moment. Or, “I don’t remember this, what was the…” Trying to get other people and they’ll gather and all discuss it and review it. And it’s humbling, for one thing, because I wasn’t thinking about that when I was painting them. I wasn’t thinking exhibition, I was just literally in my own head trying to get through the day. But I am witnessing this shared narrative, this global narrative.
It’s now this archive of the pandemic. Now, whether it’s going to be able to exist with as much intention after the pandemic’s over, I don’t know, because our memory’s going to fade even more. But at the moment it’s fresh enough that people are definitely finding a collective memory out of it. It’s interesting to watch, and it was unexpected. I’m really enjoying lurking in there and seeing how people respond.
Talasek: Well, it is interesting. James, do you see this as, or was it part of your original intent, for it to be such a healing process? I mean, you talk about it as originally just your needing to do something creative in response and then you see people coming together and you know that’s a healing conversation to have. Was that your original intention?
Gouldthorpe: It was not. I can’t claim there was any intention. The intention was really to keep my hand moving and my mind occupied. But then, I chose to use social media, which is something that I generally avoid. I don’t like the endorphin rush that you get addicted to with social media, but I decided to start posting daily. And as it went along, I started getting responses from people who were very appreciative of the work. A lot of frontline workers, when I would paint hospital scenarios and nurses and doctors, would write in their appreciation for my depiction. And over time, even the subjects of certain specific paintings got in touch with me. I painted a young man getting arrested in St. Louis, and his girlfriend wrote and talked to me about that day.
And Rahul Dubey, who was the gentleman who gave sanctuary to the people in Washington [DC] during Trump’s little stroll over to the church—they were all about to be arrested for curfew. And he threw open his doors and he brought them all in and he had 70 people and they spent the night. And I did a portrait of him and he wrote me and now we’re in regular communication. His portrait’s on the wall and he was so excited that his portrait was in the museum.
Here’s the thing. I’m basically an introvert, so it was a little strange to have these strangers reaching out to me. But then that became my way to stay connected outside the studio. I was watching the news feeds, but to actually start to hear from people who were having the genuine experience that I was painting exposed a reality that I was only experiencing through my laptop screen. When I was invited to do the exhibition, I was shocked, for one thing. But then, to be able to do this, to see people—and people still write me now and say, “I saw your show and it moved me in this way.”
It is unexpected. And I’m still processing what that means. I hope that it has a life beyond the exhibition and that I can continue to do work that has this meaning for people, because that’s really what I’m trying to find in my work. A lot of art deals with deeper conceptual things that have a limited audience that can begin to understand and to dissect that work. I like, if I can, that my work can actually have a human element that can reach people in unexpected ways.
Montagu: I have a question for you, James. Because of this forum, because you made the clear mistake of inviting me on to also talk with you about this, when you’re painting in general, or with this series of paintings specifically, do you think about the differences between recording a lived experience and a scientific analysis of the world—whether that’s about something specific to diseases or physics or chemistry or other aspects of science? What do you think about what it means to have an artistic, or an artist’s, perspective on the world, versus a scientist’s, and how those either are completely unrelated or how they complement each other?
Gouldthorpe: I think they’re very related. I think that if you go through contemporary art, you’ll find a lot of artists who are working very specifically within the sciences as well. And it’s interesting. For my part, I’ve long had an interest in science, and I’ve tried to work it into my art, but it hasn’t always been successful. I’m perhaps realizing that just because you have an interest doesn’t mean you have to make art about it.
I have seen artists who have been successful at it, but in the case of the pandemic, the science was so integrated into what was happening that it was a crucial part of the narrative of the year. So in this case, I was able to review the science and represent it in the painting.
Now my other work, the large narratives that I do, consists of fictionalized stories set in the past, and the element of science is not there. I haven’t figured out a way to do that that makes any kind of sense. Whenever I do it, it feels forced and inauthentic.
But if you start looking around and you look at artists, there are artists who work with scientists and do an amazing job bringing the two elements together. Not two elements, there’s multiple elements in this. I don’t think they’re very different, science and art. They’re both explorations of ideas. And I think that they both manifest crucial elements of our human existence. Wow, that was—I’m sorry. You can use that or not. That sounded ridiculous coming out of my mouth when I said it.
Montagu: One thing that this raised for me is, Why was there so little shared memory? Why was there so little art, certainly none that I’ve seen, that came out of the Spanish flu of 1918? OK, in part because the news was shut down because of World War I, but this was immensely traumatic. And there was very little scientific understanding of what was happening, little analysis of the disease that was helpful. And it also didn’t get shared or discussed.
And I wonder if those two things are related. That this was simply a shared trauma that had no explanation and no answer, and that somehow that’s quite different from World War I, which produced… The answers and the resolutions came at the end of the war. And so, it became cathartic or useful to explore what happened in the war through art. And that somehow those two things relate: having a resolution, through better understanding or through the conclusion of an event, helps.
Talasek: It seems interesting to me, Dominic, going back to not having these communal experiences during the earlier catastrophes that you described. A lot of creative outlets were probably lost. And I think that’s why work such as what James is doing is so very important. Around the same time that the pandemic hit, Cultural Programs of the National Academy of Sciences started collecting creative responses from artists, engineers, and scientists. Everyone was responding to it in different ways.
We started collecting those and that’s how we actually found out about James’s work. So thank you so much, James, because now it is part of the archive of our collective experience that will hopefully live on.
Talasek: Thank you for joining us for this episode of The Ongoing Transformation. I’d like to take a moment to thank our guests, James and Dominic. Thank you for taking your time to be with us, for sharing your insights on art, science, COVID-19, and our collective memory.
“Disaster recovery” is a generous concept, in theory. But the reality is much muddier, and the issues plaguing the communities that Nicholas Pinter describes in “True Stories of Managed Retreat From Rising Water” (Issues, Summer 2021) are daunting.
Rural river towns are often critical for surrounding agricultural production, and in some lucky cases, such as Gays Mills, Wisconsin, are home to important manufacturing facilities. But their location, part of their appeal, can render them prone to floods. Often, many of these communities also face pre-disaster challenges, such as internal community fracturing, financial shortages, an aging population, and outdated housing stock. The dangers for residents during rescue events, as well as their frustrations with repeated cleanups, are not trivial.
A lack of resources to move anywhere before, or after, an event is also a reality for many. Given current programs and policies, it is unlikely that government funding or insurance coverage could cover buyouts that would allow people or communities to relocate to safer locations. New types of collaboration involving the private sector will be needed, along with new goals for development. Rather than focusing on building spotty new developments that can further sprawl, the aim should be to design for flexibility that incorporates housing, businesses, shared green spaces, and facilities that foster independence. This strategy can attract people to safer and better new sites, and do so in advance of disaster.
The good news is that there have been tremendous advances both in understanding what is needed for preparedness and in the building sciences, so that it is increasingly possible to quickly develop flexible and attractive state-of-the-art housing. Human needs for connection, green space, and self-sufficiency need to be embraced. Rebuilding or relocating a town to make each resident “whole” is unlikely in the coming years; technology, planning, and a cultural understanding suggest that moving to smarter communities, and perhaps to different types of shelters, might in some cases be the wisest use of resources.
To capitalize on the opportunities, we need private planners and developers, along with federal leadership, to promote innovation that will help create attractive mixed-use rural communities that can become the vibrant, sustainable choices of the future. We need to realize that doing so will result in better management of limited resources of time, money, and materials. Refurbishing power grids to have backup capability to support self-sufficiency can also mitigate wholesale disaster.
And in a more profound shift, residents and policymakers in flood-prone areas will benefit from embracing the cultural reality that moving to a safer location is not a failure, or to be feared, but rather a smart strategy—environmentally, financially, and from a quality of life perspective—regardless of the disaster relocation funds available. Of course, government or private-sector aid can make moving an economically easier choice. Over time, smart planning and development investment in smarter places will become a natural transition, rather than scrambling under the pressure of disaster recovery.
As the managed retreat case studies that Pinter describes have been whispering, there are better ways to prepare, rebuild elsewhere, and embrace a new lifestyle without breaking the collective bank or tolerating years of trauma after a disaster. Getting on with life comes from rapid response, embracing transitions, and new ideas. Managed retreat can work, but only if the cultural mindset accepts that being “made whole” or remaining in place is not always a real option. Managed planning and innovation, via public and private collaboration, will save us.
Julia Henley
Former Gays Mills Recovery Coordinator, 2009-2013
She is a business, housing, and community developer in Southwest Wisconsin
In his review of the rich 140-year history of relocation projects to respond to and protect from floods in the United States, Nicholas Pinter provides important insights that can be applied in implementing managed retreat in other countries as well. While managed retreat can eliminate disaster risks, there are many challenges to implementing projects.
As the author’s examination of Japanese cases of managed retreat shows, the country has promoted large-scale relocation programs following the Great East Japan Earthquake and Tsunami in 2011. The disaster killed over 20,000 people, completely destroyed some 130,000 buildings, and partially damaged 1 million more. Local governments in affected areas prohibited the construction of new houses and bought up land in tsunami-prone areas of the Tohoku Region. In all, Japan has conducted managed retreat for more than 100 years, dating to the recovery from the 1896 tsunami in the Tohoku Region.
Japan and the United States share common lessons from managed retreat projects, but can learn from each other as well. Furthermore, they can share these lessons with the rest of the world.
Japan was able to reconstruct local communities in safe areas through managed retreat, just as the United States reduced flood risks. Japan experiences the same complicated implementation processes as the United States in building consensus among the affected people, securing funding, and supporting vulnerable and low-income groups. In addition, the population in affected areas along the seacoast has declined and some local communities have collapsed. Some members of local communities cannot wait years for managed retreat to be completed, and move to major cities that provide better education and job opportunities. These are challenges in promoting managed retreat in any country.
Support for local governments is essential in promoting managed retreat. Generally speaking, local governments have limited capacity to implement the complicated processes of managed retreat. In both countries, specialists are involved in planning and implementation.
Japan should learn from the approaches of the United States to sustain local businesses. Community members relocated to safer, higher ground currently face difficulties in accessing shopping centers and commercial facilities. Considering residential, industrial, and commercial areas together is essential in rehabilitating people’s lives at relocation sites.
The Japanese system of managed retreat includes not only buyouts of damaged sites but also the development of relocation sites so that communities can be maintained there. The country has constructed 393 relocation sites, which contain some 48,000 houses and 30,000 units of public apartments. The tsunami recovery programs include support measures for vulnerable groups. Older people and members of low-income groups, who cannot afford to construct new houses, can live in public apartments with subsidized rents. Local governments send support teams to ensure that older adults do not become isolated from their communities. A nongovernmental organization operates an “Ibasho” house that supports the daily lives of older people.
Countries that are vulnerable to natural hazards can apply managed retreat as an adaptation measure to increased disaster risks due to climate change. By exchanging knowledge, countries can strengthen policies and approaches to promote managed retreat to make societies more resilient to natural disasters.
Mikio Ishiwatari
Visiting Professor, Graduate School of Frontier Sciences
University of Tokyo, Japan
Time to Modernize Privacy Risk Assessment
In 2018, media reports revealed that a company called Cambridge Analytica had harvested data from millions of Facebook users, creating psychometric profiles and models that were then used for political manipulation. For Facebook, it was a high-profile privacy debacle. For the information technology and privacy communities, it was a particularly high-profile wake-up call.
This privacy failure was far too complex to be called simply a breach; it occurred across multiple layers and involved several companies. At the foundation of the scheme was data gleaned from Facebook’s “like” button, which Cambridge Analytica used to infer users’ personality traits. The company gained access to more people and more information through users’ profiles and Facebook friends (i.e., social networks). This unchecked flow of data was enabled by Facebook’s privacy policy, the way the platform interacted with third-party apps, and its desire to support social science research. Amazon’s Mechanical Turk, a platform for virtual paid piecework, also played a key role, as did various sources of public information, including US Census data. At first glance, the whole mess looks like a textbook example of an emergent property of a complex system: the interactions of multiple actors and systems producing completely unanticipated results.
It’s possible that Facebook didn’t see the potential for such a disaster brewing in advance because of outdated and inadequate methods for defining and evaluating privacy risks. Despite dizzying socio-technical changes over the past quarter of a century, organizations still rely heavily on assessments of privacy impacts with simplistic forms and functions that are poor matches for the layered complexity of today’s technologies. The United States is not alone in this; the data protection impact assessments required by the European Union’s General Data Protection Regulation, although an improvement in some respects, are similarly lacking.
As long as this dependence continues, we can expect new, more frequent, and ever-stranger privacy incidents. AI-based decisionmaking tools, for example, which often require large amounts of personal information for training their algorithms, can encode bias in their operation and injure or expose individuals to harm in everything from criminal justice proceedings to benefits eligibility determinations. The Internet of Things raises difficult issues of data aggregation—in which seemingly innocuous data points acquire much greater significance when combined—and of ubiquity, where multiple platforms can create mosaics of individuals’ activities. Biometrics, especially facial recognition, create additional potential for persistent surveillance as well as for problematic inference of individual attributes from physiological features. As these technologies develop, public- and private-sector organizations must update their approach to effectively manage privacy risk.
How we got here
In the early 1970s, the public was concerned about the potential implications of modern data processing systems, then standalone mainframes, for civil liberties. The US Department of Health, Education, and Welfare ordered a 1973 report by the Secretary’s Advisory Committee on Automated Personal Data Systems. The committee articulated a set of guidelines called the “Code of Fair Information Practices.”
That code formed the basis of the federal Privacy Act of 1974 and prompted the development of numerous, slightly varying, and expanded sets of best practices for protecting privacy. These practices—which typically included consideration of data collection, retention, and use as well as training, security, transparency, consent, access, and redress—were eventually dubbed Fair Information Practice Principles (FIPPs). Around the world, they became the de facto approach to protecting informational privacy. Most privacy statutes and regulations today are built on some version of FIPPs.
Chief among the approaches enabled by FIPPs are Privacy Impact Assessments (PIAs), which bear a name and constitute an approach partly inspired by environmental impact statements and assessments. Echoing these roots, a PIA is both the process of assessing a system’s privacy risks and the name of the statement that results. In the evolution of impact assessments, PIAs act as tools for addressing one particular societal value. However, they have been constructed in a way that renders them less about privacy as a human value and more about procedural niceties.
PIAs (and FIPPs) became further embedded in US law and practice when they were required for federal information systems by the E-Government Act of 2002. Today’s PIAs largely retain their original form—a set of written questions and answers about each of the FIPPs—and the same function, firmly rooted in identifying potential violations. Because FIPPs provide the principal structure of PIAs, they have become so intertwined with these processes and artifacts that they have together taken on a perception of inseparability. And as the interactions between society and computational technology become more complex, that perceived indivisibility increasingly poses problems.
Problems with the status quo
By using FIPPs to define privacy practices without requiring more expansive analysis, PIAs today maintain a relatively narrow and inelastic view of privacy. This static conception offers a very circumscribed model for imagining and understanding the risks that technological systems could pose to privacy. An ideal risk model describes possible threats, identifies vulnerabilities that might be exploited by them, and lays out what would happen if each exploit were realized, including its likelihood and severity. However, because FIPPs are the risk model utilized by PIAs, today’s consideration of privacy risks is largely restricted to violations of FIPPs. Furthermore, the close integration of PIAs and FIPPs, together with FIPPs-based compliance obligations, effectively discourages the use of other privacy risk models and assessment methods.
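To make the contrast concrete, the elements of such an ideal risk model can be spelled out as a minimal data structure. This is only an illustrative sketch; the field names and example values here are invented, not drawn from any standard or regulation:

```python
# Minimal sketch of what an ideal privacy risk model captures, following
# the elements described above: threat, vulnerability, consequence,
# likelihood, and severity. Names and values are illustrative only.
from dataclasses import dataclass

@dataclass
class PrivacyRisk:
    threat: str          # who or what might act against privacy
    vulnerability: str   # the weakness that threat could exploit
    consequence: str     # what happens to individuals if it is realized
    likelihood: float    # estimated probability of the exploit (0 to 1)
    severity: int        # estimated harm if realized (e.g., 1 to 10)

# A FIPPs-only assessment effectively collapses all of this into a single
# question, "was a principle violated?", discarding the likelihood,
# severity, and human-consequence dimensions.
example = PrivacyRisk(
    threat="third-party app harvesting profile data",
    vulnerability="unchecked sharing through users' friend networks",
    consequence="psychometric profiling used for political manipulation",
    likelihood=0.4,
    severity=8,
)
```

The example values echo the Cambridge Analytica episode described earlier: a FIPPs-based assessment could answer every procedural question satisfactorily while never estimating a row like this at all.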
PIAs also suffer from two problems that have been significantly exacerbated by the evolution of technologies. First, PIAs tend to emphasize description over analysis, which prejudices them toward addressing privacy in a checklist fashion. Second, even when PIAs do explicitly invite discussion of possible privacy risks and potential mitigation strategies, risks are typically construed narrowly. They tend to be first-order problems, issues that might arise as the immediate result of system operation. Potential knock-on effects are seldom considered, nor are potential problems involving indirect cause and effect.
These problems are compounded by the largely procedural nature of FIPPs. Consequences for individuals’ privacy are often framed only as possible FIPP violations—as violations of privacy-related procedure—rather than as violations of privacy per se. Many real results from privacy violations, such as embarrassment, lost opportunities, discrimination, physical danger (e.g., stalking), and more, are overlooked.
Another limitation of FIPPs is that they ignore the social context of systems, preventing analysts from considering potential harms originating in the external environment. Finally, FIPPs are so dependent on a system’s purpose, without carefully evaluating whether that purpose is fundamentally objectionable, that an unethical purpose can sometimes serve as the basis for satisfied FIPPs. If a system had the purpose of maintaining individual political dossiers on members of the general public, for example, a purely FIPPs-based analysis would take this as its unquestioned starting point and assess each principle relative to that disturbing purpose.
Other risk models and methods
Over the past two decades, other, more capable privacy risk models and assessment methods have been developed that could address the inadequacies of FIPPs and PIAs. Law professor Ryan Calo’s dichotomous privacy harms, for example, categorizes all privacy injuries as either subjective or objective, with the former forcing explicit consideration of potential impacts on individuals’ mental states—something often ignored by FIPPs-based models. Another model, a taxonomy of privacy developed by law professor Daniel J. Solove, proposes 16 different kinds of privacy problems divided into four groups relating to information collection, information processing, information dissemination, and invasions. This granular categorization enables more precise identification of privacy harms, again forcing more nuanced consideration of potential adverse privacy consequences.
Other risk models address vulnerabilities or threats. The contextual integrity heuristic, developed by information science professor Helen Nissenbaum, aims to identify violations of informational norms, which can be construed as privacy vulnerabilities. The model is noteworthy for explicitly recognizing the existence of social standards of privacy in various spheres of life, something FIPPs avoid by design. In contrast, frameworks such as LINDDUN, a privacy threat model and methodology, focus on modeling threats at the level of system architecture, considering factors such as potential attempts to link together system elements pertinent to individuals (data, processes, flows, etc.). Although situated at notably different levels, all these models attempt to discover issues that might ultimately affect privacy as experienced by individuals, rather than primarily looking for procedural problems.
Just as there are models beyond FIPPs, there are also new privacy risk assessment methodologies that could replace or complement PIAs. For example, the National Institute of Standards and Technology has developed a Privacy Risk Assessment Methodology that addresses systemic privacy vulnerabilities, defined as “problematic data actions,” and consequences, defined as “problems for individuals.” This methodology features numeric scores that explicitly estimate the likelihood and severity of privacy consequences. There are also more advanced quantitative options using statistical analysis, such as privacy expert R. Jason Cronk’s adaptation of the Factor Analysis for Information Risk framework, as well as a wholly qualitative but rigorous methodology called System-Theoretic Process Analysis for Privacy (STPA-Priv), which uses an approach originally developed to address the safety properties of system control structures. These models and methods have distinct emphases and orientations and could be mixed and matched for best effect.
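The likelihood-and-severity scoring that distinguishes these quantitative methodologies can be sketched in a few lines. The data actions, scales, and numbers below are illustrative assumptions for the sake of the sketch, not values drawn from NIST's methodology or any other published framework:

```python
# A minimal sketch of likelihood-times-severity privacy risk scoring, the core
# idea behind quantitative assessment methods. All names and numbers here are
# hypothetical examples, not part of any standard.
from dataclasses import dataclass

@dataclass
class DataAction:
    name: str          # a potentially problematic data action
    likelihood: float  # estimated probability of causing a problem (0.0-1.0)
    severity: int      # estimated impact on individuals (1 = minor, 10 = severe)

    def risk_score(self) -> float:
        # The central calculation: risk = likelihood x severity.
        return self.likelihood * self.severity

def prioritize(actions: list[DataAction]) -> list[DataAction]:
    # Rank data actions so mitigation effort goes to the highest risks first.
    return sorted(actions, key=lambda a: a.risk_score(), reverse=True)

actions = [
    DataAction("unanticipated third-party sharing", likelihood=0.4, severity=9),
    DataAction("over-collection of profile fields", likelihood=0.7, severity=3),
    DataAction("re-identification of 'anonymous' logs", likelihood=0.2, severity=8),
]

for action in prioritize(actions):
    print(f"{action.name}: {action.risk_score():.1f}")
```

Even this toy version illustrates why such methods force more nuanced analysis than a checklist: an analyst must commit to explicit estimates of how likely each problem is and how badly it would hurt individuals, and those estimates can then be compared and contested.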
Cambridge Analytica revisited
Could using other risk models and assessment methods have helped Facebook avert the Cambridge Analytica scandal? It’s not clear what, if any, privacy risk analysis was performed by the company, but it’s likely that more innovative approaches could have anticipated some of the factors that eventually led to the attempted political manipulation of Facebook users.
One fundamental reason Facebook users were vulnerable is that they were part of a social network. The privacy of any specific Facebook user depends in part on others in their network. This is, of course, part and parcel of being on Facebook, but that connectedness tends to be viewed exclusively as a feature—not a vulnerability. Cambridge Analytica exploited this weakness, termed “passthrough” by information science professors Solon Barocas and Karen Levy, in which the connections of one user enable access to the information of other users.
More recent risk models could have illuminated the threat of manipulation amid the tempestuous political climate of the time. Solove’s taxonomy, which considers decisional interference a significant privacy problem, might have suggested the potential consequence of inappropriately influencing voters. And if Facebook had performed an analysis using STPA-Priv, looking at the combination of technology and social context through the lens of hierarchical control and feedback loops, it might even have found the specific control failure scenarios that actually led to abuse.
Using one or several of the models available, Facebook almost certainly could have identified and addressed at least some of the relevant control weaknesses, which might have prevented the debacle. These weaknesses included inadequate monitoring of researcher data use, insufficient restrictions on app access to user and friend profiles, and targeting of users across platforms. That apparently none of these weaknesses provoked concern at the time highlights the importance of adopting more capable approaches to privacy risk.
The complex role of technology in society demands that public and private entities expand their privacy risk assessment toolbox beyond FIPPs and PIAs. In the United States, the Federal Trade Commission should issue guidance for the private sector encouraging the adoption of a broader range of privacy risk models and assessment processes. The National Institute of Standards and Technology, through its privacy engineering program, should develop guidance and tools to assist organizations in comparing and selecting appropriate privacy risk models and assessment methods.
The White House Office of Management and Budget should update and supplement its existing PIA guidance for federal agencies, directing them to actively consider and deploy privacy risk models and assessment methods in addition to FIPPs and PIAs. Finally, the National Science Foundation should encourage and support research explicitly focused on enhancing privacy risk models and assessment methods, consistent with the 2016 National Privacy Research Strategy.
FIPPs and PIAs were innovative in their early days, but the world has changed dramatically. Modern technologies and systems require complementary and flexible approaches to privacy risk that are more likely to discover serious and unexpected issues. FIPPs and PIAs by themselves are no longer enough. Moving forward, organizations need to employ privacy risk assessments that ultimately serve the public interest.
Artificial Intelligence and Galileo’s Telescope
In 2018 Henry Kissinger published a remarkable essay in The Atlantic on artificial intelligence. At a time when most foreign policy experts interested in AI were laser-focused on the rise of China, Kissinger pointed to a different challenge. In "How the Enlightenment Ends," Kissinger warned that the Age of Reason may come crashing down as machines displace people with decisions we cannot comprehend and outcomes we cannot control. "We must expect AI to make mistakes faster—and of greater magnitude—than humans do," he wrote.
This sentiment is nowhere to be found in The Age of AI: And Our Human Future, coauthored by Kissinger, Eric Schmidt, and Daniel Huttenlocher. If Kissinger’s entry into the AI world appeared surprising, Schmidt and Huttenlocher’s should not be. Schmidt, the former head of Google, has just wrapped up a two-year stint as chair of the National Security Commission on Artificial Intelligence. Huttenlocher is the inaugural dean of the College of Computing at the Massachusetts Institute of Technology.
The stories they tell in The Age of AI are familiar. AlphaZero defeated the reigning chess program in 2017 by teaching itself the game rather than incorporating the knowledge of grandmasters. AI-driven protein folding tackled the enormously complex problem of understanding the 3D structure of proteins, uncovering molecular qualities that humans had not previously recognized. GPT-3, a natural language processor, produces text that is surprisingly humanlike. We are somewhere beyond the Turing test, the challenge to mimic human behavior, and into a realm where machines produce results we do not fully understand and cannot replicate or prove. But the results are impressive.
Once past the recent successes of AI, a deep current of technological determinism underlies the authors’ views of the AI future and our place in that world. They state that the advance of AI is inevitable and warn that those who might oppose its development “merely cede the future to the element of humanity courageous enough to face the implications of its own inventiveness.” Given the choice, most readers will opt for Team Courage. And if there are any doubters, the authors warn there could be consequences. If the AI is better than a human at a given task, “failing to apply that AI … may appear increasingly as perverse or even negligent.” Early in the book, the authors suggest that military commanders might defer to the AI to sacrifice some number of citizens if a larger number can be saved, although later on they propose a more reasoned approach to strategic defense. Elsewhere, readers are instructed that “as AI can predict what is relevant to our lives,” the role of human reason will change—a dangerous invitation to disarm the human intellect.
The authors’ technological determinism, and their unquestioned assertion of inevitability, operates on several levels. The AI that will dominate our world, they assert, is of a particular form. “Since machine learning will drive AI for the foreseeable future, humans will remain unaware of what it is learning and how it knows what it has learned.” In an earlier AI world, systems could be tested and tweaked based on outcomes and human insight. If a chess program sacrificed pieces too freely, a few coefficients were adjusted, and the results could then be assessed. That process, by the way, is the essence of the scientific method: a constant testing of hypotheses based on the careful examination of data.
As the current AI world faces increasingly opaque systems, a debate rages over transparency and accountability—how to validate AI outputs when they cannot be replicated. The authors sidestep this important debate and propose licensing to validate proficiency, but a smart AI can evade compliance. Consider the well-known instances of systems designed to skirt regulation: Volkswagen hacked emissions testing by ensuring compliance while in testing mode but otherwise ignoring regulatory obligations, and Uber pulled a similar tactic with its Greyball tool, which used data collected from its app to circumvent authorities. Imagine the ability of a sophisticated AI system with access to extensive training data on enforcement actions concerning health, consumer safety, or environmental protection.
Determinism is also a handy technique to assume an outcome that could otherwise be contested. The authors write that with “the rise of AI, the definition of the human role, human aspiration, and human fulfillment will change.” In The Age of AI, the authors argue that people should simply accept, without explanation, an AI’s determination of the denial of credit, the loss of a job interview, or the determination that research is not worth pursuing. Parents who “want to push their children to succeed” are admonished not to limit access to AI. Elsewhere, those who reject AI are likened to the Amish and the Mennonites. But even they will be caught in The Matrix as AI’s reach, according to the authors, “may prove all but inescapable.” You will be assimilated.
The pro-AI bias is also reflected in the authors’ tour de table of Western philosophy. Making much of the German Enlightenment thinker Immanuel Kant’s description of the imprecision of human knowledge (from the Critique of Pure Reason), the authors suggest that the philosopher’s insight can prepare us for an era when AI has knowledge of a reality beyond our perception.
Kant certainly recognized the limitations of human knowledge, but in his “What is Enlightenment?” essay he also argued for the centrality of human reason. “Dare to know! (Sapere aude.) ‘Have the courage to use your own understanding’ is therefore the motto of the enlightenment,” he explained. Kant was particularly concerned about deferring to “guardians who imposed their judgment on others.” Reason, in all matters, is the basis of human freedom. It is difficult to imagine, as the authors of The Age of AI contend, that one of the most influential figures from the Age of Enlightenment would welcome a world dominated by opaque and unaccountable machines.
On this philosophical journey, we also confront a central teleological question: Should we adapt to AI or should AI adapt to us? On this point, the authors appear to side with the machines: “it is incumbent on societies across the globe to understand these changes so they reconcile them with their values, structures, and social contracts.” In fact, many governments have chosen a very different course, seeking to ensure that AI is aligned with human values, described in many national strategic plans as “trustworthy” and “human-centric” AI. As more countries around the world have engaged on this question, the expectation that AI aligns with human values has only increased.
A related question is whether the Age of AI, as presented by the authors, is a step forward beyond the Age of Reason or a step backward to an Age of Faith. Increasingly, we are asked by the AI priesthood to accept without questioning the Delphic predictions that their devices produce. Those who challenge these outcomes, a form of skepticism traditionally associated with innovation and progress, could now be considered heretics. This alignment of technology with the power of a reigning elite stands in sharp contrast to previous innovations, such as Galileo’s telescope, that challenged an existing order and carried forward human knowledge.
There is also an apologia that runs through much of the book, a purposeful decision to elide the hard problems that AI poses. Among the most widely discussed AI problems today is the replication of bias, the encoding of past discrimination in hiring, housing, medical care, and criminal sentencing. To the credit of many AI ethicists and the White House Office of Science and Technology Policy, considerable work is now underway to understand and correct this problem. Maybe the solution requires better data sets. Maybe it requires a closer examination of decision-making and the decisionmakers. Maybe it requires limiting the use of AI. Maybe it cannot be solved until larger social problems are addressed.
But for the authors, this central problem is not such a big deal. “Of course,” they write, “the problem of bias in technology is not limited to AI,” before going on to explain that the pulse oximeter, a (non-AI) medical device that estimates blood oxygen levels, has been found to overestimate oxygen saturation in dark-skinned individuals. If that example is too narrow, the authors encourage us to recognize that “bias besets all aspects of society.”
The authors also ignore a growing problem with internet search when they write that search is optimized to benefit the interests of the end-user. That description doesn’t fit the current business model that prioritizes advertising revenue, a company’s related products and services, and keeping the user on the website (or affiliated websites) for as long as possible. Traditional methods for organizing access to information, such as the Library of Congress Classification system, are transparent. The organizing system is known to the person providing information and the person seeking information. Knowledge is symmetric. AI-enabled search does not replicate that experience.
The book is not without warnings. On the issue of democratic deliberation, the authors warn that artificial intelligence will amplify disinformation and wisely admonish that AI speech should not be protected as part of democratic discourse. On this point, though, a more useful legal rule would impose transparency obligations to enable independent assessment, allowing us to distinguish bots from human speakers.
Toward the end of their journey through the Age of AI, the authors allow that some restrictions on AI may be necessary. They acknowledge the effort of the European Union to develop comprehensive legislation for AI, although Schmidt had previously criticized the EU initiative, most notably for the effort to make AI transparent.
Much has happened in the AI policy world in the three years since Kissinger warned that human society is unprepared for the rise of artificial intelligence. International organizations have moved to establish new legal norms for the governance of AI. The Organisation for Economic Co-operation and Development, made up of leading democratic nations, set out the OECD Principles on Artificial Intelligence in 2019. The G20 countries, which include Russia and China, backed similar guidelines in 2019. Earlier in 2021, the top human rights official at the United Nations, Michelle Bachelet, called for a prohibition on AI techniques that fail to comply with international human rights law. In November 2021, UNESCO endorsed a comprehensive Recommendation on the Ethics of Artificial Intelligence that may actually limit the ability of China to go forward with its AI-enabled social credit system for evaluating—and disciplining—citizens based on their behavior and trustworthiness.
The more governments have studied the benefits as well as the risks of AI, the more they have supported these policy initiatives. That shouldn’t be surprising. One can be impressed by a world-class chess program and acknowledge advances in medical science, and still see that autonomous vehicles, opaque evaluations of employees and students, and the enormous energy requirements of datasets with trillions of elements will pose new challenges for society.
The United States has stood mostly on the sidelines as other nations define rules for the Age of AI. But "democratic values" has appeared repeatedly in the US formulation of AI policy as the Biden administration attempts to connect with European allies and to sharpen the contrast between AI policies that promote pluralism and open societies and those that concentrate the power of authoritarian governments. That is an important contribution for a leading democratic nation.
In his 2018 “How the Enlightenment Ends” essay, Kissinger seemed well aware of the threat AI posed to democratic institutions. Information overwhelms wisdom. Political leaders are deprived of opportunity to think or reflect on context. AI itself is unstable, he wrote, as “uncertainty and ambiguity are inherent in its results.” He outlined three areas of particular concern: AI may achieve unintended results; AI may alter human reasoning (“Do we want children to learn values through untethered algorithms?”); and AI may achieve results that cannot be explained (“Will AI’s decision making surpass the explanatory powers of human language and reason?”). Throughout human history, civilizations have created ways to explain the world around them, if not through reason, then through religion, ideology, or history. How do we exist in a world we are told we can never comprehend?
Kissinger observed in 2018 that other countries have made it a priority to assess the human implications of AI and urged the establishment of a national commission in the United States to investigate these topics. His essay ended with another warning: “If we do not start this effort soon, before long we shall discover we started too late.” That work is still to be done.
Managing Retreat Equitably
In "A Concerted and Equitable Approach to Managed Retreat" (Issues, Summer 2021), Kavitha Chintam, Christopher Jackson, Fiona Dunn, Caitlyn Hall, Sindhu Nathan, and Bernat Navarro-Serer call for expanded efforts by the US Federal Emergency Management Agency (FEMA) to support managed retreat—a strategy to reduce risk by relocating homes and other infrastructure away from hazard-prone areas—in an equitable manner. They describe how inequalities in community resources to apply for and administer federal funds may exacerbate historical social inequalities, and they call for greater support for persons displaced by climate change and natural hazards. Some of the changes they propose are already in place. For example, relocation assistance for renters is already required by the Uniform Relocation Act; FEMA incentivizes properties that experience "substantial damage" to relocate through requirements to rebuild at higher elevations; and FEMA's Building Resilient Infrastructure and Communities program is an explicit attempt to provide more risk mitigation funding. But the authors' overarching point that existing measures are often insufficient remains important.
Developing strategic support for managed retreat will require coordinated actions by numerous federal agencies and state and local governments. The Department of Housing and Urban Development, for example, oversees more postdisaster funding than any other agency, including FEMA, and has funded numerous relocations, in whole or part, through its Community Development Block Grant programs. In fact, almost every federal agency receives funding following a major disaster, and where and how they spend those funds shapes the willingness and ability of communities to relocate or to receive displaced persons. Relocation is influenced, for example, by where schools are rebuilt, using funds from the Department of Education; what roads are elevated, using funds from the Department of Transportation; what small businesses recover, using loans from the Small Business Administration; and where floodwalls are built by the US Army Corps of Engineers.
State and local governments determine where new buildings are constructed and to what standards. They, and not the federal government, have authority over building codes and land-use laws. According to a recent study by the Government Accountability Office, over 80% of properties that have received FEMA funding to address repeat flood risk received that funding as a buyout. Managed retreat, through buyouts, is therefore FEMA’s primary means of addressing repetitive flood loss. Nevertheless, the number of homes at risk of repeat flooding has increased over the past two decades. This is, in part, because state and local governments have not exercised their authority to redirect new construction away from the most flood-prone areas. Federal reform, to encourage risk reduction, will need to create greater incentives for local governments to act and will need to build local capacity to meet their greater responsibilities.
I agree with Chintam et al. that managed retreat requires a more holistic approach than has been the case to date. However, I am cautious about expanding the role of FEMA to address land use, housing, development, and employment. FEMA was originally established to provide federal assistance in response to disasters that overwhelm local and state resources. Over time, FEMA has been required to take a larger role in reducing risk, and climate change undoubtedly requires a different approach to disaster management and risk reduction.
However, it is probable that other agencies, such as the Department of Housing and Urban Development, have more experience in directing development and establishing incentives or training programs to entice or enable displaced persons to resettle in less risk-prone areas, and it is possible that state governments could or should play a larger role in guiding development (housing, business, and infrastructure) toward safer areas within their borders. FEMA could, and probably should, provide additional funding and incentives for local governments to engage in relocation. But as Chintam et al. note, local involvement is likely to remain crucial in tailoring future relocation programs to local contexts, to avoid losing histories or erasing identities.
A. R. Siders
Assistant Professor of Geography and Public Policy
University of Delaware
Ethics in Animal Research
It was reassuring to read Jane Johnson’s frank assessment of the limitations of animal research, presented in “Lost in Translation: Why Animal Research Fails to Deliver on Its Promise” (Issues, Summer 2021). As a practicing physician who has worked in public health, clinical research, and research ethics, I appreciate Johnson’s appraisal of the practical and fundamental problems with animal research.
As Johnson notes, scientific problems with animal research are entangled with its ethical problems. Exaggerations of the potential benefits of animal research, and the confounding effects of stress on animals used in research, cannot be extricated from decisions about the ethical permissibility of animal research.
In 2011, in the journal PLOS ONE, my colleagues and I published the first of multiple papers showing how chimpanzees used in laboratory research demonstrated signs of depression, anxiety, and posttraumatic stress disorder. Other authors have shown how various nonhuman species experience acute and chronic pain and a range of physical and mental disorders. In laboratories, these physical and psychological injuries accrue. As Johnson notes, the animals, the people who care for them, and patients pay the price of these cumulative harms.
As Johnson also observes, improved standards in human clinical trials are relevant. Although she highlights the relevance of improvements in methodological interventions, advancements in ethical standards in human research offer more salient guidance.
In a 2015 Cambridge Quarterly of Healthcare Ethics article honoring the pioneering medical ethicist and investigator Henry K. Beecher—for his approach to moral problems in human research and his landmark 1966 article in the New England Journal of Medicine—John P. Gluck and I identified problems in animal research that are analogous to those Beecher described. These include, for example, inattention to the issue of consent, incomplete surveys of harms, and inequitable burdens on research subjects in the absence of benefits to them. Beecher noted how these ethical deficiencies were bad for science.
Fortunately, by the middle of the twentieth century, concerns about human research practices in the United States led to the establishment of the National Commission for the Protection of Human Subjects of Biomedical and Behavioral Research, which resulted in the publication of the Belmont Report in 1979. Like the World Medical Association's Declaration of Helsinki, the Belmont Report emphasizes key ethical principles: respect for autonomy; duties to nonmaleficence, beneficence, and justice; and special protections for vulnerable groups and individuals. Human research has become more ethical, and improved ethical expectations have enhanced the scientific merit of human research studies.
Despite significant advancements in our understanding of animals’ capacities and growing consensus on the limitations of animal research, no similar effort has addressed the use of animals in research. But it could. Extending principles such as respect for autonomy and duties to nonmaleficence and justice to decisions about the use of animals in research could lead to the needed shift in culture that Johnson stresses. It could also lead to positive changes in education and training and a national research agenda that favors more translatable, human-centered, modern research methods.
Hope Ferdowsian
Associate Professor, University of New Mexico School of Medicine
President/CEO, Phoenix Zones Initiative
Episode 3: Eternal Memory of the Facebook Mind
Social media and streaming platforms such as Facebook and Spotify analyze huge quantities of data from users before feeding selections back as personal “memories.” How do the algorithms select which content to turn into memories? And how does this feature affect the way we remember—and even what we think memory is? We spoke to David Beer, professor of sociology at the University of York, about how algorithms and classifications play an increasingly important role in producing and shaping what we remember about the past.
Recommended reading
David Beer reviews Streaming Culture: Subscription Platforms and the Unending Consumption of Culture by David Arditi: “More and More and More Culture”
Spotify Wrapped, Spotify’s yearly wrap-up of your listening habits.
Transcript
Jason Lloyd: Welcome to The Ongoing Transformation, a podcast from Issues in Science and Technology. Issues is a quarterly journal published by the National Academies of Sciences, Engineering, and Medicine, and Arizona State University. I’m Jason Lloyd, the managing editor of Issues. On this episode, I’m talking to David Beer about social media and how these platforms, algorithms, and classifications play a role in shaping our memories and reality. David is a professor of sociology at the University of York. His new book, Social Media and the Automatic Production of Memory, which he co-authored with Ben Jacobsen, explores these themes.
Dave, thanks for joining us today. I just finished your book and it got my brain spinning in really terrific ways. You also recently reviewed David Arditi's book Streaming Culture for Issues, which dealt with similar topics, and may be a good place to start. Could you tell me a little bit about "Chamber Psych"?
David Beer: Thanks. The “Chamber Psych” thing I started the review with, that was from Spotify, and it was the End of Year Review thing that they produce. It’s like an automated narrative of your music tastes that Spotify creates on your behalf and provides you with. It tells a bit of a story. And one of the aspects of the story is about genre.
So I've had this interest in genre as part of a broader interest in archiving—how media, new types of media platforms, [operate] through an archival lens. And then you start to think about how they are organized, what the classifications are that are going on within these spaces, and that sort of thing. The grids that we put culture into are really interesting and very vibrant, I think, as a result of the new types of media structures.
So my top five genres last year—I can’t remember now which number it was at, but in the top five was “Chamber Psych.” I didn’t know what that referred to, or which artist, which songs that referred to. It was a genre label I was unfamiliar with.
I found myself, as I mentioned in the review, searching for the genre that described my own taste so that I could understand how my tastes were being classified, really, or categorized. I thought that was interesting. I think the thing that drove it—I don’t mention this in the review—I think the thing that drove it was listening to the band Super Furry Animals and then a couple of other things that I’d listened to must have been categorized as that as well, I think.
Lloyd: And that’s how the platform slots those artists, or does it categorize them by album, or song?
Beer: Yeah, I think it’s by song. But I suspect it filters through from the artist. So the songs, I imagine, can be tagged with more than one classificatory label, but that seems to be the one. But I’ve been listening to Super [Furry] Animals for about 25 years and I’ve never heard of Chamber Psych.
It’s interesting how these labels then start to take on a presence within these platforms, even if they’re not perhaps labels that we might use ourselves. The other genres are more familiar, but it just struck me that it’s indicative of the vibrancy of the kind of classificatory systems that are going on in media structures.
Lloyd: I was really struck by the fact, I think you mentioned in the review, that Spotify has more than 2,000 different types of classifications for the various types of music that they host—Chamber Psych, obviously, being one of them. Could you talk a little bit more about the role of classification—obviously, on a streaming platform like Spotify, but also in other social media platforms such as Facebook?
Beer: Well, I think consumer culture is full of this kind of classificatory system. The comparable thing on Spotify you get on Netflix, you get on Amazon: these labels, very specific, granular types of labels that are used to organize content. I think this is driven, in part, by people’s involvement in the platforms, but it’s also to do with the amount of content that there is out there, and that needs to be organized.
We’re familiar with ideas about algorithms presenting content back to us, but I think alongside that, we sometimes focus less upon the classificatory systems, the kind of archival systems that are active. So once you get all this massive, massive content—like you get all these films, all these TV shows, all these podcasts, all these songs available, and you move to an access-type cultural consumption—you need ways of organizing that, so that it’s manageable. It needs to be rendered fathomable to a consumer, and one of the ways this works is through classificatory systems taking on a greater level of significance within the media structures, so that people can then find their way around and find culture they might be interested in.
So you get the automated thing presenting you with suggestions, but there’s also the classificatory structures that allow us to organize this content in different ways. You get that in the consumer-culture platforms like Spotify, Netflix, Amazon, and so on, but you also then get it within social media with people using hashtags and stuff to try to organize content within those spaces. There’s classificatory systems that work there as well, and you had it with tagging in the past—tagging photos and so on.
So these are classificatory systems. Some of them are user-generated classifications, others feed off that but are led by the platforms, and there’s this interesting mix there of everyday classifications that people use, combined with the classificatory schemas that are applied, or imposed, onto culture by the platforms themselves, or actors within them. It’s quite an interesting mixture, I think, of agendas going on within the classification of culture and content. But behind it all, there’s so much there. You need ways, then, of managing it. These are vast archives of content, as I see them, that classificatory systems allow us to access and allow us to retrieve the things that we might be interested in.
Lloyd: I was really struck; I had not given any thought to how a platform like Facebook takes a post and assesses it for how it’s going to classify it, then how it ranks it, and then how it determines whether or not it’s going to feed it back to you as a memory. I assume it has some sort of AI that looks at the image itself or classifies the image in some way, maybe fairly generally as “two people on a beach” or something like that, and then it also looks at how it’s tagged and the comments on it (I guess they do some linguistic analysis on whether those comments tend to be good or bad), and so the platform purports to have a sense of whether this post or this photograph will be positive or negative for the user. I had not thought about that.
Beer: And that’s where the interesting thing is: these things are archival. Thinking about classifications means understanding the kind of politics of those archival structures. What they allow to be said, what comes back to us, what we see, what we encounter are all a product of those archival structures and that kind of politics.
Memory, then, is a part of that. So that was the collaborative work I’d done with Ben Jacobsen, who has written widely about algorithmic memory as well. And, yes, we were trying to understand, in particular, the classification and ranking processes, and then how people responded to them. That’s the three movements in the book.
One of the starting points, really, was understanding that Facebook had a taxonomy of memories—types—and the content was then slotted into, literally, a grid. And, in the way that you’ve described, it’s assessed, and the images, the comments, and so on are used to place memories within a grid. … As we live in social media, they become memory devices. And that past content then slots into these pigeonholes, these categories of types. And in that moment, you’re deciding, “Well, what, of that content we create, constitutes a memory?” And also then “What type of memory is it?” So at that moment, memory becomes part of the logic of social media. Because the types of things that social media want us to engage with as memories—[those] are going to be the types of things driven by the logic of stickiness, and sharing, and commenting, and engagement.
We thought that was quite interesting to see the grid. We use Facebook in this book, but what we were pointing towards is a broader set of trends in social media through memories: a memory culture. And we’ve also got mobile devices doing something similar, but we used Facebook as a way into that because we’d got this taxonomy. So we did that, yes. And then, once classified, they’re ranked for their worth or value, which is partly to do with the prediction of what the person will want to remember and when, and then we looked at how people respond to those recirculated and sorted versions of their own past.
Lloyd: You touched on this just now, but when I think about the goals of the platform, what this kind of classification scheme, the targeting and regurgitating of your memories, what the objective is for that, I’m thinking of other related concepts like James Scott’s ideas about legibility in the state: in order to get visibility and control over the population, they need a census, a sense of who lives there and where they are and what they do. In this case, it wouldn’t be for something like taxation or conscription—why is this useful for Facebook? What do they do with these systems?
Beer: Well, because people have been living on social media platforms and using social media platforms for a significant amount of time, we’ve built up a series of biographical traces about our lives within those platforms. Now, instantly then, if you run a platform and you want to have maximized personalization, then having people’s biographical traces is a significant, valuable resource for knowing them. It’s a way of knowing people through those pasts, and that becomes a resource for maintaining engagement in the present.
Obviously, things like nostalgia and memories are powerful things for people and their understanding of themselves, but also for their understandings of their friendships and relationships, connections with other people, and collective understanding of what’s going on. So this can generate significant activity, because what you are looking to do is keep people on that platform for as long as possible each day.
That’s the objective, because, really, some of these social media platforms, they can’t grow much bigger in terms of the number of users. What they’re looking to grow is the level of engagement that people have with the platform, particularly as they’re competing more with each other, I suppose, a little bit on this now as well with generational shifts and so on.
It doesn’t have to be a long time in the past, it can just be a previous year or whatever, you don’t have to be recalling long periods of time for it to work, but it gets people engaging with their own past and with shared moments that then recirculate and trigger activity in the present.
Therefore it fills the gaps. It fills the voids within social media for us to say something, or respond, or act within the platform—to be active. So it gives us an option, a little bit like memes, I think; they become these anchor points for activity that allow people to fill the spaces of social media and satisfy the obligation for activity, I suppose.
Lloyd: Yeah, and it seems really effective. Part of the book is about, as you mentioned, the user response to the memory feature. Could you talk a little bit about what you found? You did focus groups and some structured interviews, right?
Beer: So this was off Ben Jacobsen’s project. He’d performed his interviews, and has been writing about those, and this became a kind of side project to that, really, around classification and ranking. It was a kind of unexpected insight that came off the back of that. We started to think about ranking and classification and then how people were responding to their past content being classified and ranked. And we found some of what, to use Imogen Tyler’s term, we call “classificatory struggles” within the space. So it’s not like these things fit seamlessly in. They do generate activity and they do generate content and stickiness, but they also create other outcomes. We detail this a bit in the book, but these are things to do with misunderstandings of memory. So, presenting back things that weren’t that significant to the individual.
That was one of the examples, one of the things we looked at. We also look at the way that, sometimes, it can feel invasive. It is part of the surveillance, I suppose it’s that creepy over-surveillance you sometimes get from these platforms that unsettles you.
And in other instances, it was almost like this reaction against the polished, sleek version of their past that didn’t feel quite right. It didn’t sit well with people to have their past packaged in quite such a neat way. That was one of the other things that we found. So people were engaging and showed that they were entertained and amused by these memories, or they saw it as useful in some instances, but there were also these struggles, uncertainties, unsettling properties to it as well, sometimes, that people found.
Lloyd: It strikes me as paradoxical that people would find these too polished, because the conventional wisdom about what you put on social media is that it’s your vacation photos and the studio pics of your baby and things like that. And so the idea that in a year’s time it would come back to you as a memory and you’d find it too sleek is surprising.
Beer: People use those types of terms in their response to it, but maybe part of this is that an automated story of your life can create a kind of unsettling presence or it can clash with your own version of your own past, and therefore feels wrong. Or it might just be that people have an uneasy sense of automation within the space.
We use Walter Benjamin in the book; there’s an illustrative fragment from Walter Benjamin about how memories gain authenticity through the digging, through actually digging them up. Unearthing the memory actively is how it gains legitimacy. And here, maybe it’s the fact that because people aren’t doing the digging for themselves when memories are presented back, there’s a sense that they lack authenticity, or lack a kind of legitimacy.
We problematize notions of authenticity in the book, but you can see how it might be communicated as a sense of not liking the kind of polished nature of what’s presented to them. That’s part of their response, perhaps, to automation.
Lloyd: That’s really interesting. So, you point to this in the book a bit as a potential path for future research, but I was wondering, if you were to speculate: what effect will this automated production of memory, what these algorithms are doing, have on the individual and maybe on social relations more broadly?
Beer: It will change what we remember and how and when, because the things we’re encountering from our own pasts are coming up through the devices. So what we remember and how and when we remember it is going to be filtered through this archival structure. I think that’s already happening, that’s in place.
I think that there’s the potential there for a reworking of what the notion of a memory is. What we understand to be a memory could change as a result of this, that it is something that’s in the platform, as well, and that’s automated, and that is provided to us. And the selection of what a memory is by this system could then lead us to see memory through that lens, potentially. So there’s a possibility for that there. And then, I think, the third thing is that this will have consequences for individual notions of self, potentially, and identity, but also collective remembering.
I’m not sure we understand fully what the implications are for collective memory, and therefore for solidarity, social connections, and social divisions that could come from a transformation in the collective memory, when memory is something that’s personalized by algorithms and is fragmented at the level of the individual, potentially.
So I think there’s three things there: when we remember, what we remember, what we understand the memory to be, but also how individual and collective memory might operate in the future, particularly as these things become more and more embedded, more active, and, potentially, more predictive.
Lloyd: This type of feature seems to be everywhere. I assume it’s in part because they’ve found it to be such an effective way of increasing engagement. But it’s on your phone, and Spotify now somewhat famously has this year-end feature where they feed you back a memory of your year in music. So it doesn’t seem like it’s going away anytime soon.
Beer: I don’t think so. The first thing I did on the memory thing was maybe about four or five years ago, and you can just see it escalating. Most of these platforms and devices have got their version of presenting your past to you, or the automatic production of our past.
That seems to me to be spreading out into these platforms and devices, and they’ve just got more and more biographical traces to draw on, but also now the accumulation of data about people’s engagement with those recirculated memories, which then feeds into the system itself. So they can use that to try to be predictive about which types of memories work and what to rank as being the memory to send back to you.
The consequences of that are difficult to predict, really, because it might be that that narrows down memory, or it might be that they find they want to try to create ways of being unpredictable, because that’s what people are. So you don’t know, but it’s going to get coded into the algorithms.
Lloyd: So you and your co-author on this book, Ben Jacobsen, did a really deep dive into this [memory] feature that is a fairly significant part of, but in some ways tangential to, the overall structure of the platform and what they try to do, which is increase engagement. So I’m wondering, what do you think about what social media is doing overall, after having looked at this particular feature that seems to have all these complexities and tensions in it, potentially a manipulative approach, although not necessarily—but it seems like this particular feature is such a rich source of research and tension. How does it make you think about the larger platform or social media itself?
Beer: Yeah, you’re right. This project is part of trying to build up a bigger picture around the way that these systems work, and what their objectives are, and what the politics of platforms is, and data, and algorithms, and that kind of thing. And I think you can understand this in terms of the broader transformation that we’ve seen through social media as it builds up.
I did another book called The Data Gaze, which is about how this gaze is exercised on us, how we’re watched through platforms and by data. So I think you can see the memory thing in terms of the broader political economy of platforms, and particularly social media, which is about the data.
The data of the archived users is really where the value is in social media. That’s where the predictions are, because the idea is you can use the data to be more predictive about individuals, and, therefore, target content towards them in ways where value can be extracted. So I think you can see the memory thing in that broad term.
So what you want to do is keep people engaging with the platforms as much as possible, because that generates the maximum amount of data about those individuals, which then lends it value. Now, I’m not saying they can use the data to achieve the things they say they can achieve, but it’s the notion that that data is of value. The ideas around value attached to data are the really important things in terms of understanding the activities of a number of these platforms, I think. So you can see the memories thing, I think, through the broader ideas around data capitalism, probably.
Lloyd: And engagement.
Beer: Engagement really equals data production and stickiness. These are things that create, that increase the amount of data gathered, and therefore maximize the opportunities for value to be generated—or for notions of value to be generated, at least.
Lloyd: What’s been your experience with this feature on social media?
Beer: Apart from Spotify, which I think—I did the calculation—I think I spent 3.64% of 2020 on Spotify (you can work it out from the hours it gives you), I don’t actually have any social media profiles. And a student asked me about this recently, actually, and I said, “Well, the reason I don’t have any social media profiles is because I do research them. It’s not that I’m some sort of social media enthusiast.”
So I did my first project on social media in about 2006 or 2007, when I was working on the e-Society program, funded by the ESRC, with a colleague called Roger Burrows. It was called Web 2.0 then, and I created a Facebook profile, and I found it very unsettling. So I deleted that after we had done a little bit of research on it.
And then I did have a Twitter account for about six or seven years, but I deleted that. The only thing I’ve really stuck with is blogging: blogs and working on blogs. I used Medium for a bit, and I’ve been using Substack, just experimenting with those types of mediums as a way of writing and communicating and being part of an online community. But it’s not quite the same thing, is it?
So personally, I never get presented with any memories about my past. That’s a problem: “How do you understand social media from the outside?” is something that I’m always working with, because I teach this as well. I actually find it quite useful, because you can look across platforms, internationally, to try to understand it, rather than being led by a targeted, personalized experience of the social media space, I think. That’s the way I justify it to myself, anyway. I have a kind of discomfort with social media, but I can see the value in it and understand people’s engagement with it; I absolutely do.
I see my job as trying to think skeptically about what’s going on and to think in sociological terms about broader transformations.
Lloyd: Thinking skeptically about the world and ongoing transformations is also our goal here at Issues. I’m grateful you joined us for this episode, Dave.
If you’d like to read more about the new process of digital memory making, check out his book, called Social Media and the Automatic Production of Memory, and visit us at issues.org for more conversations and articles. And of course, you can read Dave’s review there. I’m Jason Lloyd, managing editor at Issues. Thank you for joining us for this episode of The Ongoing Transformation.
Digital Learning and Employment Records
In “Everything You’ve Ever Learned” (Issues, Summer 2021), Isabel Cardenas-Navia and Shalin Jyotishi present a compelling and timely argument for the important role digital learning and employment records (LERs) can play. While many organizations have piloted LERs or moved in the direction of issuing microcredentials or other types of digital credentials, it is clear they are valuable only if stakeholders can easily decipher their quality and credibility. Developing governance structures internally within institutions, let alone across an ecosystem that has many actors, is a primary challenge.
Institutions of higher education are poised to play a major role in this development if they can take a fully learner-centered approach and be willing to engage with industry openly and intentionally. LERs can increase responsiveness to the market and facilitate a learner’s ability to package what they have learned to be meaningful and understood outside the walls of academia. While many institutions work with advisory boards or have recruiting relationships, the depth of these conversations often does not reach an understanding of the competency-development process. Leveraging the expertise of institutions when it comes to assessment is critical—and relying on them to be the posting and vetting authority for external skills or credentials could help to mitigate concerns regarding validity.
Further, by being leaders in developing LERs, institutions of higher learning will better support adult learners and currently enrolled students. However, this requires focusing on competency-based education and pushing institutions beyond what is currently a very rigid academic structure. LERs utilized in this way can increase access to higher education, especially for adult learners, and give students who are working while learning or engaging in cocurricular activities the ability to document skills that will make them more competitive after graduation. If more institutions move toward a competency-based model, LERs would quickly begin to demonstrate returns on investment. Further, a system such as this—one that can be taken with students as they graduate—facilitates the development of a “lifelong learner” mindset and drives an institution’s ability to market programs to alumni for reskilling and upskilling.
The call to include workers as well as institutions of higher learning and corporations is critical. As institutions pilot these programs, involving their offices of engagement and continuing education offers additional perspectives in terms of what is meaningful to learners who are not formally enrolled. Expanding into these areas can enable institutions to play a critical service role in validating skills and “hosting” these LERs for the community at large, and not just students. For anchor institutions, this is already part of their mission, and would enable them to not only better serve their communities, but better develop pathways that could lead to additional stackable credentials.
To be successful, taking a learner-centered rather than an institutional-centered approach will be imperative. Institutions of higher learning are poised to play a critical role based on their expertise in documenting learning. However, their input and expertise are only as valuable as their ability to engage with industry and move toward a more flexible and nimble approach to learning.
Bridgette Cram
Assistant Vice President for Academic and Student Affairs
Florida International University
Isabel Cardenas-Navia and Shalin Jyotishi outline the challenges workers face in attempting to articulate their knowledge, skills, and abilities learned and demonstrated in informal learning environments. Their solution is the adoption of comprehensive learning and employment records stored and distributed digitally.
Nowhere do we see a better example of the need for verified and trusted digital records than in the United States’ withdrawal from Afghanistan. Anecdotal reports of Afghan scholars arriving at checkpoints only to have their academic credentials and visa documents destroyed are a perfect example of the need for trusted and verified credentials—immutable and secured digitally. While the ethics and self-sovereignty of these data are a debate for another day, the fact remains that many scholars around the world would benefit from digital LERs, similar to the workers highlighted in the opening paragraphs by Cardenas-Navia and Jyotishi.
The authors accurately describe a critical issue within the workforce ecosystem: those skills most desired by employers, earned through informal learning experiences, are the most difficult for job seekers to describe and the least likely to be included in transcripts or certificates. When we consider the thousands of quality nondegree postsecondary credentials offered by the University Professional and Continuing Education Association’s member institutions, along with those offered by informal or noninstitutional education providers, we should question how we will ever make progress on ensuring equity and fairness in hiring when only a subset of learners—degree-holders—possess evidence of their learning, even if it was achieved in contexts independent of, and perhaps unrelated to, the world of work.
My hope is that readers will return to Cardenas-Navia and Jyotishi’s contribution and consider the equity issues they raise, as well as those raised here. We should all feel compelled to examine the nature of credentials and the role they play in hiring decisions, to consider the nature of assessment as well as demonstrations of learning, and to better appreciate how the use of digital learning and employment records could impact a global workforce.
Julie Uranis
Vice President of Online and Strategic Initiatives
Managing Director, National Council for Online Education
University Professional and Continuing Education Association
Isabel Cardenas-Navia and Shalin Jyotishi’s article explores the potential for learning and employment records to address key market failures in the US labor market associated with hiring/job finding and ongoing learning. The authors maintain that LERs may make some aspects of the labor market process function better. They also note several challenges that will play a major role in shaping whether LERs can truly increase equity and expand talent pools, such as how LERs can verify quality, protect privacy, and counteract bias.
I have two observations to add. First, there is an underlying tension and political struggle over decentralized versus centralized governance. Education is highly decentralized in the United States. This makes it difficult to evaluate or change education systems at scale, because innovations require coordinated action across multiple fragmented programs and overlapping, often conflicting authority structures. Decentralization also has contributed to the reification of formal higher education institutions and an overreliance on a narrow set of education delivery models. It has similarly constrained private-sector education technology innovations due to the challenges of scaling solutions across a fragmented, chaotic landscape. In short, the lack of a consistent “rules of the game” framework prevents private-sector education technology actors from consolidating a market in which to innovate.
LERs offer a potential opportunity to transcend this tension over governance because the technology allows for both a distributed administrative setup and a centralized (harmonized) set of data standards to ensure interoperability. However, in order to succeed, it will require legislative and policy changes focused on standardization, data security, and regulation (e.g., quality assurance) at a level that some stakeholders, especially employers in the private sector, often are unwilling to get behind. Standardization and clear rules will be necessary for LERs to produce data that are useful across states and localities, as well as to ensure that the data and signals that LERs produce are consistently meaningful and trustworthy. Without an explicit conversation about the roles of the public and private sectors, as well as the need for redesigning authority structures, LERs will be very difficult to scale and will struggle to gain a foothold.
Second, the development of LERs should be considered alongside the growing evidence that the blockchain technologies needed to use these digital records consume tremendous amounts of energy. The impacts of climate change disproportionately burden the same communities that are structurally excluded from access to quality education and formal higher education. Given the climate emergency, any effort to plan for LER implementation should be paired with incentives and commitments to align it with a clean energy transition in the power grid.
Overall, this is just the beginning of what will be a long-running conversation about the potential for LERs to address structural and signaling problems in the US labor market and close the opportunity gap. I agree with the authors that a technical fix alone is probably not enough.
Annelies M. Goger
Fellow, Metropolitan Policy Program
Brookings Institution
Looking Beyond Economic Growth
In “When the Unspeakable Is No Longer Taboo: Growth Without Economic Growth” (Issues, Summer 2021), Zora Kovacic, Lorenzo Benini, Ana Jesus, Roger Strand, and Silvio Funtowicz call for radical transformation in how, why, and by whom society is governed. As they rightly observe, the “obsession with measuring growth seems to have derailed public policy.” Focusing on economic growth, and on measuring gross domestic product (considered the broadest measure of goods and services produced by an economy), has led to governance and policy lock-ins that are inconsistent with the radical transformation required to respond to the climate and ecological crises. These reflections should spur urgent and deep questioning among policy actors, knowledge actors, and community actors on their assumptions, modes of working, and values.
Although policymakers in the European Union acknowledge the need for transformation, the implications of this are poorly reflected in policy and governance plans. Many programs, such as the European Green Deal, advance sustainability policies on paper, but they do not necessarily question the underlying assumptions of the desirability, fairness, or attainability of continued economic (“green”) growth. Research on institutional change warns us that EU institutions are sticky, that change is often slow and incremental. Procedural rules, customs, habits, culture, and institutionalized values all interact to prevent radical transformation. The realization of institutional or policy change frequently requires the pressure of external crises, combined with the fruition of good ideas.
But the development of integrated, transformational policy and governance programs also requires integrated knowledge and wisdom. What does this mean for our academic knowledge systems? There is an urgent need to break down the artificial disciplinary silos upon which the academic knowledge system relies. When it comes to societal and planetary challenges, no single discipline can provide insights on when, where, how best, or how not to develop governance responses. Relying on single-indicator or aggregated indicator analyses to assess something as qualitatively subjective as “well-being” is flawed, and this means that scientific knowledge must integrate knowledge and wisdom beyond economics, incorporating all the social sciences, arts and humanities, and natural and physical sciences. Such integrated, interdisciplinary knowledge is key not only for providing sufficient and relevant evidence to the policymaking process, but also for establishing valid forms of evaluation and learning as policies are implemented.
Furthermore, insights from research on the quality of democracy in the EU highlight the importance of citizen participation. The EU has long suffered from a perceived democratic deficit. Research on deliberative democratic processes that engage citizens at all stages of policymaking has shown that such processes can alleviate perceptions of democratic underrepresentation, and can also be particularly appropriate when developing policies for sustainable transformation—policies that tend to have direct impacts on citizens’ lives. Such citizen participation can occur at the stage of knowledge creation, by drawing on local, indigenous, and lived experiences and wisdom to cocreate “actionable” knowledge or wisdom for policymakers. Deliberative processes with citizens help codevelop appropriate policy options. Groups of citizens can decide together with policymakers on the final policy option and on the scope of the evaluation and learning processes to follow implementation.
Imagining, investigating, and implementing a radical, societal, and governance transformation that moves away from entrenched ideas of growth is a collective endeavor. The call raised by the authors challenges policy, knowledge, community, and other societal actors to face the implications of this necessary transformation.
Claire Dupont
Assistant Professor of European Governance
Ghent University, Belgium
Zora Kovacic and her colleagues provide a valuable critique of gross domestic product and the challenges of uncoupling economic growth from environmental damage. The idea that getting rich helps the environment, as some proponents advocate, is simply magical thinking. Currently, the faster GDP grows, the faster we destroy the natural world that supports us, and a semicircular economy spinning faster may actually end up doing more damage to nature than a sluggish economy.
I applaud the authors’ call to recast governments as facilitators of local deliberation, seeking plural views on progress beyond GDP growth. The European Commission’s vision to “live well within the limits of the planet” is useful rhetoric, but what is needed is more facilitated dialogue on the key elements of “living well” (and presumably on how to reconcile trade-offs when one person’s good life impacts others’).
For the latter part of the commission’s vision—“within the limits of the planet”—there is a growing number of initiatives to measure sustainable progress at the local level that go beyond GDP. These include, for example, city-level assessments of progress toward the United Nations’ Sustainable Development Goals, or attempts to downscale the planetary boundaries concept so that economic activities do not exceed the biophysical thresholds supporting human flourishing.
These creative approaches must continue “breaking the taboo,” as Kovacic et al. put it, of focusing primarily on economic growth. Yet if such approaches are to be successful, they might also need to tackle another taboo: questioning the self-identity and attitudes of citizens.
The authors highlight how materialist values influence people’s response to the idea of limits on economic growth. There is evidence, too, linking self-identity, values, and attitudes to the exceedance of planetary boundaries. Excessive individualism and narcissism are associated with fewer pro-environmental behaviors, while a greater sense of connection to other people and the natural world promotes greener actions, such as recycling and reducing carbon dioxide emissions.
Many Western democracies are reluctant to influence the self-identity of citizens. Perhaps this is unsurprising given the tragic history of interventions in some communist and fascist regimes, which aimed to transform (“brainwash”) the characters of citizens. Nonetheless, the laissez-faire attitude of liberal governments toward self-identity does not mean citizens go uninfluenced: people’s mindsets are continually shaped by media, business, education systems, and government action (even if unintentionally). Over the past half century, evidence shows self-identity shifting toward more individualistic values and attitudes in most countries, accompanied by a greater focus on the accumulation of material wealth.
The paradox is that by framing progress primarily in terms of economic growth, governments have all along been modeling (potentially misleading) views of what people need to live well, while at the same time undermining tendencies to live within planetary limits.
Although tracking citizen attitudes is increasingly common, active intervention is still somewhat taboo. To deal with the sustainability crisis, however, it is a matter that must be navigated soon. The role of governments in stewarding self-identity for planetary health is a ripe area for ethical research.
Tom Oliver
Professor of Applied Ecology
University of Reading, United Kingdom
Episode 2: Doing Science With Everyone at the Table
Could we create more knowledge by changing the way we do scientific research? We spoke with the NASA Psyche mission’s principal investigator and ASU Interplanetary Initiative vice president Lindy Elkins-Tanton about the limitations of “hero science,” and how she is using an inclusive model where collaborative teams pursue “profound and important questions.”
Lisa Margonelli: Welcome to The Ongoing Transformation, a podcast from Issues in Science and Technology. Issues is a quarterly journal published by the National Academies of Sciences, Engineering, and Medicine, and Arizona State University. I’m Lisa Margonelli, editor-in-chief of Issues, and on this episode we’re interviewing Lindy Elkins-Tanton. Lindy is the vice president of the ASU Interplanetary Initiative, and she’s also principal investigator of the NASA Psyche mission, which launches in 2022 to explore a unique metallic asteroid orbiting the sun between Mars and Jupiter. In her August 2021 Issues essay, Lindy argues for a radical restructuring of how we do research, divesting from big names and asking teams instead to focus on big questions and ambitious goals. The future of humankind, she says, requires that we hear all the voices at the table—not only the loudest.
Margonelli: Lindy, thank you so much for joining us today. I’d like to ask you how you got interested in science. Was there some sort of ideal picture of what a scientist was?
Elkins-Tanton: That’s a great question. Thanks, Lisa. It’s really great to be here to talk about this. I’m looking forward to the conversation. You know, I often ask when I give a public lecture, especially to people interested in astronomy and planetary science: What was the moment in your life when you knew that you wanted to do this, or be interested in this, or follow this? It’s like the question you just asked me. And probably almost a third of the audience says it was when they saw Jupiter or Saturn through a telescope when they were 10 or 11 or 12. It’s just this formative moment. For the rest, it’s mostly Carl Sagan — Cosmos, Star Trek, Star Wars, and NASA. And I had all those things, including the Jupiter and Saturn sighting when I was about that age. But I still wanted to be a veterinarian.
I had this tremendous, all-consuming interest in natural sciences that carried me across all the disciplines. And even though as an undergraduate I studied science, I was not quite ready to go to graduate school. So for me, it’s been not a real direct path into science, but instead, a real passion that grew largely in the decade after my undergraduate degree in how teams of people work together. What is it that makes for not just a good outcome for the project, but a good outcome for the person? The thing that made me come back was the knowledge that in research science, the questions can be as challenging as you want. You need never get bored; you can always challenge yourself with a greater question. And it came along with the beautiful opportunity to teach. Those are the things that drove me back to science, so I’ve had multiple, multiple drives all along.
Margonelli: That’s a very unusual path, especially to where you’re working now. I want to know, when did you realize that science was cutthroat?
Elkins-Tanton: You know, I feel like that is an education that most of us get—as I got—during our PhDs. Some people are clever enough to cotton on to this a little sooner. But for the rest of us, there’s really a bit of a professional education during our PhDs, where we learn that we need to stand up and fight for our ideas. We shed that sweet, naive notion that if I do a fantastic study that gives us new insight into the world around us, and I publish it, and it’s peer reviewed, then there it is—people will understand it, and they will adopt it, and it will change human thought.
Very quickly, you begin to realize that that’s not enough. You can publish a brilliant piece of work, but unless you go out on the conference circuit, give talks, engage with other people, have what can be heated conversations, and you’re determined, your information doesn’t really spread. It’s little epiphanies like that, that begin to help us understand what the culture really is.
Margonelli: Did you have a particular epiphany about what the culture really was, where you realized, “Oh, this is really, really highly competitive?”
Elkins-Tanton: There wasn’t one specific epiphany, but I was at MIT for my PhD, a place that I love and have huge loyalty for, but which is also absolutely a series of warring city-states among the faculty.
People really are fighting for their name, for their results, to be known and not to be dismissed and not to be disrespected, but instead to be adopted by the field and seen for what it is. And I got the feeling while I was there, and also a little bit later in my career, that talking about things like the culture of the laboratory wasn’t welcome. This was around 2000, so it’s only 20 years ago. It’s not totally ancient history. I got the feeling that talking about things like, “let’s definitely take turns speaking at team meetings” and “maybe when you criticize someone else’s work, you could go about it in a more supportive way”—those were thought to be for people who were too weak to make it in the real way. And that if you were really meant to be a serious, top-notch research scientist, you didn’t need to worry about those kinds of things because you’re ready to play hardball. And it took me, oh, about 15 years to figure out what the rebuttal to that was. It took a long time.
Margonelli: I want to move to your rebuttal in a second. I think it’s so interesting because many of us have a really heroic ideal of scientists from movies, from the books that we read, just from our culture. We see them as explorers, visionaries, people who solve problems, moral exemplars, the whole bit. And we don’t really like to think of them as competitive, cutthroat, potentially underhanded, undermining, loud, maybe mean. But let’s talk about this thing that started happening after you’d been in the field for 15 years, and you start to look closer at what was going on around you. You saw something wrong, and you called it the “hero model.” What did you see?
Elkins-Tanton: To address exactly these words that you’re using, I think a lot of scientists are adventurers and explorers and visionaries. And I think a lot of scientists are truly driving forward human knowledge. That’s what science is about: it’s a way to apprehend the world around us and deepen human knowledge in a way that we hope eliminates or reduces our implicit and explicit bias about what we are observing. We’re just trying to be better observers.
But if you think for a moment, science is a human endeavor. Everything humans do is a human endeavor, made up of humans with all of our faults and foibles and all of our inclinations. And of course, there are people in science who want to be famous. And of course, there are people in science who want to be lauded as excellent and people who want to win awards. I think it’s true in every field of human endeavor. And in science, unfortunately, it does pull us a little bit away from the reason that we’re there in the first place.
While I was a management consultant, I had this sort of epiphany moment around what it can be to work together—where it’s not always each person wanting their own reputation to be the more famous, it’s not always each person trying to be so careful not to ask a question that might be viewed as stupid, or to show weakness. Instead, you can have a circumstance where everyone is working together to create an outcome which is more important than their personal fame. This was a moment working with what was then Touche Ross Management Consulting in Philadelphia, which is now part of Deloitte. We were working with a big client around an issue that the client had. And we started as a team, envisioning how we could organize ourselves and the client in such a way that we would have a better outcome. And we made up this construct in our heads, and then we convinced everyone to do it. It sounds so simple, right? We all sat around, thought of a way to change, and discussed it. And then it happened. It was an organizational change. It was how the team was going to be organized, the actions that they were going to take, and the outcomes that they were going to make.
That was in stark contrast to the kind of science I was doing, where you can’t just imagine what the outcome is and then make it happen. You don’t make it up in your head and then it becomes real. Suddenly, I realized that in the human endeavor, that is what we do. We agree upon how we’re going to organize ourselves, we agree upon the culture we’re going to take, and we agree upon the outcomes we’re trying to create, and magic—it happens. And that was the reason why I realized that in science, we could be doing these great outcomes, we could be creating this new knowledge, but in a construct that was more human and inclusive and positive and effective. We could make up that part of it.
Margonelli: If I can just back up, I think that what happened here is that a management consultant went to some of the fanciest labs in the country and said, “Why are they managed this way? Why are people interacting this way?” I think that’s what you’re saying. And I need you to give me a picture of how science labs are organized and why you called it “hero science.”
Elkins-Tanton: Yeah. Let’s go back to what was happening in 19th-century Germany that was then carried forward to other parts of Europe and to the United States. And I’m going to give you what’s a little bit of a caricature, but for anyone who’s active in research science, I think you’ll also absolutely recognize it. It’s a circumstance where one professor is the person who personifies their subdiscipline at that university. They own that field, they own that body of knowledge, they are the expert in it. And they also own a pyramid of resources. In extreme cases, that includes junior faculty hires along with lab techs and staff to run their organization, and graduate students, and sometimes undergraduate interns, postdocs, and then budgets and equipment, and access to that equipment. So there’s a big pyramid of resources, and on the top is the “hero” professor. So, you know, what could go wrong?
Margonelli: So they started this way back in Germany, in the 1500s or 1600s—this was the beginning of the German research university?
Elkins-Tanton: Yeah. And then it really got developed in the 18th and 19th centuries, when there was actually a recognition there that to become a leading faculty member, you actually had to have charisma and fame. And that was part of your job: to stand up there and assert, “I am the expert. Listen to me, I’ll use my deep convincing voice. And I will never end my sentences with an upturned question inflection.”
There was this culture to create the hero. It was a purposeful culture; we wanted our senior faculty to stand up and be heroes.
Margonelli: And now what we’ve done is we’ve imported that over here, many, many years later, doing a completely different kind of science. We’re not looking at ants and asking, “Where are their ovaries?” We’re doing a completely different kind of science that pervades our entire lives. And we still have funding, fame, students’ education, discovery, and equipment tied to an individual.
Elkins-Tanton: That’s right.
Margonelli: And so that has become, in effect, a management culture of science. So how did that model go adrift?
Elkins-Tanton: It served us well for almost a millennium, didn’t it? You know, alas, we’re no longer Lord Kelvin, we can’t any longer discover fundamental chemistry in our kitchen. And it’s very hard to make gigantic breakthroughs in individual subdisciplines unconnected to other subdisciplines.
There are many different ways that it can and has gone wrong and ways that it’s still working really well, too. There are subdisciplines that are super fruitful in this model. But one problem is there is a limit to the resources that are available, so people become very protective of their pyramid of resources. In some cases, this even means that they don’t like their graduate students to spend time with other faculty or research in other labs because they want all of their time and attention in that one discipline on their thesis.
So this kind of “team” culture that’s led entirely by one senior person—who, I might add, in general has never had any leadership or management training, or HR training of any kind; they come to it purely as an individual scientist—can be rife with bullying and harassment. And often, there’s very little transparency to outsiders or other people in the organization, and few paths for help. This is something we’ve heard so much about since the Me Too movement began; there’s been a big National Academies report—we know that there are problems with harassment and bullying in science and engineering and STEM fields.
Part of it is this: there is not a network of resources available for the people in the pyramid, and their entire careers are dependent upon that senior person. So those are some of the ways [the hero model] goes wrong.
And I would just add that another really critical way it can go wrong is that the senior scientist, the hero scientist, is very motivated to protect their intellectual property and not have other people, at their own institution or others, who claim to have exactly the same or better expertise in that area. So new discoveries tend to be in incremental slivers of real estate around that pyramid of resources and knowledge, up until they bump against another subdiscipline. Right away that paradigm is something that has to be broken. We have to be welcome and rewarded for connecting outside of our pyramid.
Margonelli: It’s interesting. So you’re saying there are two issues here. One is there’s a set of incentives that drive people toward competitive behaviors. There might be bullying, there might be harassment. I think Science in 2017 published an article by two academics called “Bullying Is Real,” which is kind of a wild stage on which to have that realization. And then there’s also this problem with reproducing the science. Nature interviewed 1,500 scientists in 2016 and found that 70% of them said they couldn’t reproduce their colleagues’ studies, which means that the incentives in place to publish do not also incentivize good, reproducible research. So there’s a set of incentives for negative behaviors. And then there’s another set of incentives, or perhaps the same set, hampering progress on big questions.
Elkins-Tanton: That’s right. That’s exactly right. So we would like the questions to be bigger, and we’d like progress toward them to be faster. And we would also like the process to be more rewarding and inclusive for everyone who wants to participate. Here’s really the bottom line. To me, the absolute bottom line is that science is the best way that humans have ever invented to create lasting knowledge, knowledge that we don’t immediately find out is wrong, knowledge that we can actually make progress based upon. It’s the knowledge that gave us the Pfizer and the Moderna vaccinations. These are things that really matter, and this is a process that really matters.
But of course, it’s imperfect. It’s imperfect because it’s done by humans. It’s not that science is either this perfect thing or we stop believing in it. It’s that science as a human endeavor, and like every human endeavor, we can improve it.
So here are some ways that we could make it better: We can remove some of the things that make harassment and bullying possible. We can create new connections. We can reward scientists and other researchers for working across disciplines. And then, how do we stretch out of the subdiscipline model? That’s the second part of it. How do we ask bigger questions?
Margonelli: One of the things that came up in reading your story and talking to you is that while we’re all kind of hung up on the hero model, because it seems totally normal to us and it’s a big part of our popular culture, there are in fact places like NASA that don’t use it. They have a different organizational model. Can you explain to me what these other models could be, and the models that you’re thinking about?
Elkins-Tanton: Yeah, let’s consider a kind of axis of models where on the one hand, we’ve got this hero model of the person sitting on top of their mountain and asserting that what they know is true. And so the product here is knowledge, but it is produced by a person—in fact, a personality, I would say—and that’s what leads to the hero aspect. On the other end might be something where you’re just focused on the product, where you really are looking at an outcome, and the people are a way to create that outcome. A corporate setting is often a situation where that happens, and any place where there’s a project that’s bigger than the individual.
That’s what happens a lot of times with NASA missions. I’m working on one right now, and working on this mission really did lead to a lot of epiphanies for me about how things can work. This is not to say that NASA is without heroes. In a lot of ways, NASA, and all space exploration, is all about heroes. But it doesn’t have to be. Everything we do can be more inclusive, more voices heard, focused on the outcome. It doesn’t have to be about making individual people more famous.
Margonelli: So there’s a couple of things that you do. You’re doing the Psyche mission, I think that has 800 people involved in it. So obviously your management training is a really big deal there, being able to think in terms of what do you do with 800 people. But the other thing is that you’re working with ASU’s Interplanetary Initiative. And you’re thinking about how to create learning environments at the same time, because one of the issues is that the heroes are supposed to train students. And they do train students, but there are a lot of other incentives involved in here which may not end up with students who are set to go to work. So let’s stop and talk just a little bit about heroes and students, and then talk about your approach.
Elkins-Tanton: Yeah, I’d love to, thanks. Of course, faculty at universities and colleges teach classes to undergraduates. So that’s one very important part of our purpose. And our addition to society is teaching people not just content, but how to learn. Teaching people to be learners, teaching people to have agency, teaching people to go out and be effective in the world and in their lives.
That takes on its most focused version when faculty are working with graduate students, students who are getting their master’s or their PhDs. They’ve really entered into that pyramid of resources because usually they’re doing original research that is based upon an idea that the faculty member had. It’s the faculty member’s idea—usually, not always—and the student’s job is to carry it out and simultaneously to learn. It’s an apprenticeship model. Now, apprenticeship [models], when they’re done well—totally brilliant. The students learn to be a top expert in their aspect of this subdiscipline, and they’re supported by their faculty member, who then writes them supportive letters, and helps them get jobs, and talks to their colleagues about how great they are, and sets them up for talks, and does all the things that a dedicated mentor can do to help launch their career.
Now, I don’t need to say, that is a lot of work. It takes some emotional intelligence as well as an intellectual and emotional commitment to the student. So you can immediately see, if you haven’t experienced it yourself, how this can either be a beautiful, effective thing or a tremendous tragedy for the student.
So we’re working at the Interplanetary Initiative at ASU not just on different ways to put together teams for more rapid and effective outcomes, and also more positive ones for the team members, but also on the education side. I’ve been focusing a lot on undergraduates because here’s the divide that I’ve been seeing in education: undergraduates, in its sort of pure end-member state, listen to lectures and read textbooks and give back the information on a test, which is incredibly passive. We’ve known for decades that that is not the most effective way to learn. But it’s the way that we faculty think undergraduates have to learn in order to get all the content that we need to cram into their heads during these four precious years that we have to influence what they know.
But of course, we’re now in the information age, where all information is everywhere. So how about if we teach students instead the skills that they would otherwise have to wait [until] graduate school to learn? What if we teach them how to find information, decide upon its biases and its verity, and know what to do with that information, decide what outcomes they’re looking for, and figure out how to do those outcomes? In other words, how to be a master learner—someone who can actually execute with expertise, someone who can decide for themselves whether the answer is right or wrong.
These things are not what’s usually taught in undergraduate [education]. And they lead graduate students to have existential crises because all the ways they’d been judged a good student in their lives—test scores, grades, sitting still and listening—are now no longer useful. In fact, they’re the opposite of what a graduate student needs. The graduate student needs to think for themselves, find their own information, decide for themselves when it’s right, decide how to take action. So we’re trying to teach all those things to undergraduates; we’re trying to give the agency and the voice to everyone in the pyramid, not just to the hero.
Margonelli: Wow. Okay. So now let’s talk a little bit about how the Interplanetary Initiative is trying to move away from the hero model into a different way of doing research.
Elkins-Tanton: I’m excited to talk about this. So people talk a lot about how do we bring together art and science. And what I’ve mainly seen happen, from a scientist’s point of view, is there’s a hero scientist who’s running this research project. And an artist is just seconded onto their team almost like a mascot, who’s going to follow them around, learn about this, and create some art. And I haven’t seen many cases where that drove forward the science or the art. So I felt like that was an unsuccessful way to become interdisciplinary.
Meanwhile, I start working on the Psyche mission at Jet Propulsion Laboratory and with our many other partners across the country and around the world. As you said, at peak, we’ve had 800 people working on this team. And I see meetings where, in the room, we’ve got, say, three engineers and a couple of scientists, a graphic designer, a scheduler, a budgeter, a photographer, and we’re all working together and everyone is speaking. And we’re all creating these plans and these actions.
It really struck me like, this is such a different model for how people actually sit around a table, plan their actions, and then go off and produce a product. And the thing that I realized was that, in this model, the goal that we’re pursuing is exterior to ourselves. Everyone is there because they themselves, and their specific knowledge, are required to reach that goal. And that’s not the same thing as in a scientific lab where the one person has the idea—so the goal is almost internal to the leader—and the other people are brought along, maybe almost as observers, in some cases.
So at Interplanetary Initiative, we’re trying to use this other model, where we agree upon an external goal. It doesn’t just come from one person who’s the leader and the thought proposer, it comes from the whole group. We decide on an external goal, we assemble the team of disciplines that are required to reach that goal, and then everyone’s there for a reason, everyone’s voice gets heard, everyone’s knowledge is necessary. You immediately start with a much more equal and collaborative culture, working toward a goal that everyone equally values.
Margonelli: The culture I’ve been brought up in, which isn’t even the culture of science, says, “Well, you know, that’s just much too squishy. Expertise has to have some edges to it, and if you let everything in, you’re no longer experts.” Give me a really close-up look of like, how do you come up with the questions? And how do you compose the teams?
Elkins-Tanton: So coming up with the questions, we’ve been experimenting with different processes, so I’ll describe to you the one that we’re using right now that seems to be working pretty well. But I want to start with a little preamble, which might be a question of, how do scientists and engineers decide the questions that they’re pursuing? Did I start, when I was purely an academic scientist, thinking to myself, “What is the most important thing I could possibly solve with my time and effort here on Earth?” Generally, not. Generally, I start with, “What is the next really cool question that could possibly be addressed with the tools that I have in my tool belt?”—which is a different question, which is a different way to come about your research. That’s not true for everyone. There are labs all over the world where people are saying, “The very most important thing I can solve with my knowledge in the world today is blah, blah, blah.” And whatever it is, they’re going for it. It’s a really big, important goal. But a lot of us start with a little bit closer horizon.
And so what we’ve been doing instead is we do something we call the big questions process, where we bring as many people as we can into a room. The first time we did it, it was 40 or 50 people from the university and from the community.
Margonelli: They weren’t all scientists?
Elkins-Tanton: No, right. So I just invited everyone I thought I could convince to come because it was such a kind of flyer experiment that I was running. This was in 2017. And we’ve updated a little bit, but basically, the process was I invited people I thought would come. I had some deans, I had somebody from business school, somebody from public service, I had somebody from science, I had faculty. And then I had graduate students, and even undergraduates, and also some members of the general community outside the university who were just interested in what we were doing. So 40 or 50 people, very wide range of disciplines and very wide range of experiences.
And we started with a really classic brainstorm, meaning no criticism. Meaning everyone’s idea is received with a welcome. That’s very important so as not to cause people to shut up from pressure. And what we were trying to do is discover what the questions were that needed to be answered to create a positive human space future. What are the most important questions for us to answer to create a positive human space future? And people started thinking of ideas. One idea would be, “How do we make sure that when we are settled on another body, when we become interplanetary on the Moon or Mars, that humans have a structure to interact? And we understand what we’re going to be—who’s our governance, how they relate to each other?” Questions like that, all the way to, “What is a faster propulsion system that will get us to Mars?” And also, “How do we educate humans here at home on Earth so they’ll be ready to be interplanetary?”
So many questions across such a wide range of things. So we wrote them all down. And after we were finished writing them down on the board, we voted on them. We talked about them a little bit but didn’t want to get people into their critical mode.
Margonelli: This actually gives a very interesting view into your mindset, which is that you’re really looking at interactions with humans and then thinking about results, rather than looking to, in the crudest terms, separate the sheep from the goats, which has often been a winnowing process in science of separating out the people who don’t get to talk. And so this is much more about using every bit of information to structure some set of results that you might deliver or act upon.
Elkins-Tanton: That’s so right. It’s the fundamental belief that I have that science and engineering is in service of all humanity. It’s not in service of a tiny club of your closest peers who could recognize or contest what you’ve discovered. That is not a sufficient use of our time and resources. It’s really in service of all of humanity. And so let’s involve everyone in thinking about what’s important and feeling like they’re a part of the conversation.
Now, very, very important distinction: this is not getting rid of the idea of an expert. It’s not downplaying in any way the importance of a deeply rigorous education and an absolutely unflagging determination to find something that is true and not just guided by your own biases. You have to have that. You need to have disciplinary expertise of the deepest variety. But the thing that’s different is that we can bring those disciplinary experts together in groups of people who include non-disciplinary experts and find directions that are even more important for all of us.
Margonelli: I want you to actually talk about what happens when people get interdisciplinary. Then you set up these teams, and then the teams work in a really different way. Can you just talk about that a little bit?
Elkins-Tanton: Right. Let me start by saying that we give little bits of seed funding to these projects to kind of get them going. And of course, the traditional way that a seed funding program works is that individual heroes come and say, “Here’s my proposal for this brilliant idea.” And then they get some money to take back and do with as they normally would.
So that’s not what we do. We do these big questions. It’s a group project. And then around each of the highest voted questions, we invite people to volunteer into teams—all happening in the same afternoon. This isn’t a go-home exercise, this is all happening in real time. And then they have a couple of different jobs to do while they’re sitting together in the room for an hour. What are some milestones that we could reach in one year with a modest amount of funding that would get us on the track toward a solution for this very big question? Some of these questions are questions that would take a lifetime or several lifetimes to answer. But you can make a milestone for the year.
So first of all, setting really big outcomes and goals. And then you have to identify the disciplines that are needed for your milestones that you don't yet have. Who are the empty seats at the table, so to speak? Then we pick a leader—there's no leader till then. We pick a leader and we send them away, and we give them about two weeks to come back with a budget and a team and the fleshed-out milestones. The budgets aren't big, you know, $10,000 or $20,000 per year—they don't even pay for a whole student. But if you have a leader, if you've picked a leader who can come back in two weeks with those things, then they're probably effective enough to go for the year.
And then the big difference is we put them under professional project management. So we actually hold them to their milestones and their goals and their budget. And we support them if they need extra help in a different way. That’s not usual in academia, and I expected people to kind of run screaming—but it turned out people loved it. We’ve had very few teams disband. People really respond to having a question that’s bigger than just themselves. And that’s about being on a motivated team and having the supportive structure. It turns out, it really connects to something deeply human among us, and it’s been really successful.
Margonelli: Wow, that’s such an inspiring model. And you make it sound kind of fun.
If you want to read more about how successful Lindy's ideas have been, check out her article at issues.org. The way we conduct research could be very, very different.
Thank you for joining us for this episode of The Ongoing Transformation. And thank you so much to our guest, Lindy Elkins-Tanton, for talking to us about the problems of the hero model of science, how we can change it, and how to train the next generation of science leaders. Visit us at issues.org for more conversations and articles. I’m Lisa Margonelli, editor-in-chief of Issues in Science and Technology. See you next time.
Episode 1: Science Policymakers’ Required Reading
Every Monday afternoon, the Washington, DC, science policy community clicks open an email newsletter from the American Institute of Physics’ science policy news service, FYI, to learn what they’ve missed. We spoke with Mitch Ambrose and Will Thomas about this amazing must-read: how it comes together in real time and what it reveals about the ever-changing world of science policy itself.
Recommended reading
Find FYI’s trackers and subscribe to their newsletters at aip.org/fyi.
Transcript
Josh Trapani: Welcome to The Ongoing Transformation, a podcast from Issues in Science and Technology. Issues is a quarterly journal published by the National Academies of Sciences, Engineering, and Medicine and Arizona State University. I’m Joshua Trapani, senior editor at Issues in Science and Technology. I’m joined by Mitch Ambrose and Will Thomas from the American Institute of Physics science policy news service called FYI. Their newsletters and tools for tracking science policy budgets and legislation are key assets in the science policy community. On this episode, we’ll talk to Mitch and Will about their view of science policy and get a look under the hood at what goes into creating FYI’s newsletters and resources. Welcome, Mitch and Will, it’s a real pleasure to have you with us.
Thomas: Thanks very much. We’re pleased to be here.
Ambrose: Great to be on.
Trapani: So FYI describes itself as an authoritative news and resource center for federal science policy. And I’d like to start with a big picture question: How do you define science policy?
Ambrose: So there’s a very classical formulation of that: the two sides of the coin of science for policy and policy for science. And I have nothing against that formulation; I think it is helpful to broadly bin the types of issues you come across. But we don’t really think about it that way at FYI. We approach it in a variety of ways. We’re not thinking, “Oh, this is a policy for science story,” and “Oh, this is a science for policy story.” We’re focused on various aspects of the process. You know, there’s the very formal set of budget documents that goes through the annual appropriations process, from the President’s budget request to the House and Senate appropriations bills to the final outcome. There’s a whole procedure around that and a whole cast of characters involved in that process, and really getting a sense of what individual people’s priorities are, and the whole machinery of how priorities get set. And that’s just one lane of science policy. But then there are all sorts of other lanes as well that we pay attention to. So we take a very procedural focus, I would say.
Thomas: Yeah, I think of it as a very empirical approach. What is science policy? Well, it’s what policymakers are talking about. What challenges are they facing? What opportunities do they see? What proposals are they putting out there? And what kinds of arguments are they making for and against different sorts of things? And when you do that, you pick up on a lot of things that maybe you didn’t even think about as science policy ahead of time, or you find that there’s an issue in some area of policy, like trade relations, for example, that turns out to have a very technical dimension to it. The nice thing about taking that approach as a news organization is that you’re always talking about things that are definitely on policymakers’ agendas, whether it’s in Congress, or in the agencies, or in universities, or among advocacy groups.
Trapani: How do you draw that line? If there’s something that maybe hasn’t traditionally been part of science policy but it comes up, how do you decide: this is in or this is out? Or is that not how it works?
Thomas: Yeah, I think it’s a really cogent question that we’re asking ourselves all the time. You know, we work for the American Institute of Physics, which is a federation of 10 member societies—American Physical Society, Optica, the American Astronomical Society, and so on. And so we’re always asking ourselves, “What sorts of issues might they be interested in?” That’s a very practical way of delimiting ourselves because we’re a team of four people and we can only cover so much.
And then there are certain issues that just kind of creep onto our agenda after a while. For example, the meteorologists have been concerned for quite some time about federal allocation of radio spectrum because with new 5G devices coming online, that can interfere with weather satellite observations, for example. And so for a long time, we were interested in this issue in a very, very top-level way. We just saw spectrum meetings, and we took note of the fact—“Oh, there’s something with spectrum going on.” And then starting about two, three years ago, this became a really, really serious issue. We decided that we had to learn about it because there was a lot of action going on, a lot of arguments between federal agencies—between the Federal Communications Commission, NASA, the National Oceanic and Atmospheric Administration, Department of Defense—and so suddenly, something that had not been part of our agenda at all was part of our agenda simply because it cropped up and you couldn’t ignore it anymore.
Ambrose: I’d like to build on Will’s comments there. To the broader point of how would you bound science policy—setting aside bandwidth constraints—how would you bound the topic space? I would say, well, one approach would be: OK. There are various committees in Congress that have control over science. There’s the appropriations committees, and there’s no one appropriations committee for science; it’s distributed across many different subcommittees. So the subcommittee that funds the Department of Energy also funds the Army Corps of Engineers. And then there’s a separate subcommittee that funds NASA, NSF, and NIST, but also the FBI, and all sorts of other agencies that have nothing really to do with science policy, in a narrow sense. So you could take a very structured approach to just looking at specific committees that have jurisdiction over science.
But what we found, especially over the past few years, is that science policy is cropping up across many, many committees that we would never expect. The Judiciary Committee, for instance, is considering immigration reforms, some of which have very big implications, potentially, for the science workforce. You have many, many committees, beyond even the Intelligence Committee, that are getting interested in the topic of what I’ll call research security, which is largely tied to the US-China dynamic, where there’s many people across committees in Congress that believe that China’s taking advantage of the US research system in various ways. That’s just become such a burning topic that it’s showing up in all sorts of places we never looked at.
We have a congressional tracking service. And I had a keyword search for various science terms. All of a sudden, I started hearing the FBI director start talking about science. And I’m like, “Huh, that’s interesting.” So there’s a whole new set of institutions that we had to learn about, as essentially an emerging area of science policy. And I would say as well, and we can get into this later—there’s a whole series of prosecutions of scientists through the Justice Department’s China Initiative. For my first few years in this job, I never looked at court documents at all—it just didn’t come up as an area of physical science policy. But now, when scientists are starting to get prosecuted, now I’m looking through the PACER system, and it’s a whole new set of procedures that I would argue is now part of science policy based on the current dynamics.
Thomas: It’s an interesting thing. I mean, in 2018, when the FBI director first started talking about this, we were one of the very few organizations that was really paying attention. And we noticed that it started cropping up in additional congressional committees, and there were a series of members of Congress who were really interested in the issue. So now you have large petitions at Stanford University and other universities and large protest movements against this China Initiative. I’ve seen it on the evening news—but that’s been only within the past year or two. By taking this empirical approach, we’ve been there all along, and we’ve been tracing the different facets of the issue. That’s one area where our, for the lack of a better term, empirical approach has really kind of paid off.
Trapani: This is really interesting, you have this really broad, comprehensive, holistic view of science policy that lets you almost see out ahead of where things are. I was wondering if you wanted to provide any insights on what you see as the most important things happening right now, that people either aren’t paying attention to or aren’t paying sufficient attention to, in the realm of science policy?
Ambrose: To build on what Will just said, I would say the China Initiative itself is something that wasn’t being paid attention to enough until fairly recently. And now, as Will mentioned, you have these campaigns of scientists at different universities that are starting to really mobilize around that issue. When this initiative was first announced, I think in late 2018, it took quite a while, and a few of these prosecutions of scientists, for the effects on the academic community to sink in. Now people are paying much more attention to it, and it’s getting a lot more media coverage broadly. So I would say that issue is now getting the attention it warrants.
But there are others. The spectrum issue that Will mentioned is another one that really burst onto the scene. If you had been following the FCC’s filing documents going back several years, you could have seen it coming. This was actually the topic of a recent hearing in the Science Committee, where the chair, Representative Eddie Bernice Johnson, made the point that had the science agencies been paying closer attention to these FCC proceedings, they would have been able to see this issue of spectrum interference with Earth observation satellites and astronomical observations coming years in advance. But it was just a foreign area of policy, even to the science agencies themselves, and it’s quite arcane. She remarked that, essentially, you need lawyers to decipher this sort of thing for you. But then the issue blew up, you had these fights between agencies over spectrum allocations, and now it’s getting quite a bit of attention.
One other one that I’ll mention quickly is this issue of light pollution from satellite mega-constellations. What it really took was that first launch of a bunch of satellites from SpaceX’s Starlink constellation, and then the astronomers were like, “Oh, no, this is gonna be a huge deal.” So now it’s getting a ton of attention. There are a few issues like that, that just within the past few years have burst onto the scene for our reporting, that I think if people had perhaps been a bit savvier, [they] would have seen coming down the pipe. We ourselves didn’t forecast those issues until they burst into public view, so we’re not claiming special knowledge in this area. But I think those are some good recent examples of how these hot topics can really come out of nowhere, almost, in science policy.
Thomas: One thing you asked, is enough attention being paid to an issue? And the question is really, attention by whom? Sometimes there are people who are fairly niche who are really, really interested in an issue, and nobody else pays any attention to it whatsoever. So FCC filings, for example, the telecommunications industry is paying attention to that all the time. But scientists weren’t. The scientists didn’t know how to do it. Scientists’ lawyers … didn’t know how to pay attention to it. And so it’s only recently, years after the initial filing, that they really glommed on to it and said, “Hey, actually, this is a really important issue, and it could cause us some fairly serious problems.”
Similarly, we have two issues that are really big in science policy right now—we mentioned the China Initiative and all these arrests of people with Chinese backgrounds, be they immigrants from China or visitors from China or simply Chinese Americans, and then you have other sets of people who are interested in diversity, equity, and inclusion issues. And those really aren’t the same groups, even though they’re united by a common cause of justice and civil liberties and that sort of thing. So one of the things that we hope that we do—we don’t know if we do it or not; we don’t know how effective we are in doing it—is if there’s a fairly niche issue, or if there’s a community that should be paying attention to it, that we can help alert them to the existence of these issues and help to get them up to speed on the nitty gritty of it as best as we can.
Trapani: I’d like to turn to FYI itself. There is a lot of reporting, there’s a weekly newsletter, there’s a budget tracker, you track people in the science policy world. And it gets circulated around the science policy world quite broadly. Before I came to Issues in Science and Technology, I was in several other science policy roles, and when I first learned about FYI, I was like, “Oh my gosh, I have to go subscribe to this immediately.” I learned about it in a way that a lot of people do, which is people will forward it on or forward on chunks of it.
The thing is that once you subscribe, you realize that a lot of the really smart people who seem like they’re in the know in your organization are actually just forwarding on bits and pieces of FYI. And then you get to laugh at those people in your mind. But within a few weeks, you find yourself turning around and engaging in exactly the same behavior, because it is just such a valuable resource for the community. I was wondering if you could just talk a little bit, because it is so comprehensive, about how do you go about gathering up all the things that go into the weekly newsletter or the other tools that you have? And what kind of analysis goes into that?
Thomas: That is really our secret sauce. I’ll let Mitch take the lead.
Ambrose: I’d first like to sketch a bit of the history of FYI; I think it’d be instructive at this stage. It was started in the late 1980s by essentially one person, Dick Jones. And at that time, it was distributed literally by paper mail for the first few years of its existence, but it did have certain elements that have continued through today.
As I mentioned at the outset, we have this very formalized way of covering the federal budget process, for instance. There’s a series of documents that are produced through that: there’s the President’s budget request, then the House Appropriations Committee advances its set of bills, which have reports with all sorts of detailed policy instruction. Then the Senate will eventually do its version of that same set of reports. And then they finally, usually several weeks late, have a final agreement. There are documents associated with every stage, and from the outset FYI’s bread and butter has been stepping through those foundational science policy documents. That continues to the current day, except we’re much more in-depth than we used to be, which I’ll get into in a bit. FYI also covered a lot of speeches from policymakers, and did still have that kind of people-focused approach. But it was essentially just one person, for the most part, up until the founder, Dick Jones, retired in about 2015.
At that point AIP reflected: people seemed to really like this type of information, so let’s really scale this up. Over the next couple of years, we scaled up to four people. And that has really enabled us to [make] a sea change in FYI reporting, where we launched this weekly newsletter, called FYI This Week, that gives you a preview of what’s coming down the pipe in the coming week or so, a summary of the big things that happened in the previous week, and then all sorts of additional information like an event calendar, a roundup of job opportunities, and also a roundup of other people’s reporting. And we’re very generous in acknowledging just good science policy reporting that we see. Every edition has about 100 or so links at the end. It’s almost like this little appendix of interesting science policy articles that the team sees throughout the previous week.
I’m always floored at how much science policy reporting there is, if you just know where to look. And it was in the process of constructing this very comprehensive weekly newsletter that we started to really formalize a way of surveilling what’s going on. We have all sorts of fishing lines, I like to think, out looking for relevant events and relevant reports; there’s a series of information streams that we’ve set up in order to have this week-over-week reporting on what’s happening. And we try not just to catch the newsiest things. We do give those more attention, but we also include all sorts of links to less newsy things where you can kind of see something’s bubbling up. So across the whole landscape, we try to pay attention, essentially, to everything, or as much as we can at once. By paying attention to the entire landscape, or as much of it as you can at one time, you can start to see these little deltas of activity in different committees or different agencies. And eventually, that might bubble up into something that we write a full article about. For that, we have this thing called the FYI Bulletin, which has existed from the beginning and is our full-length reporting. So we have this interplay between the weekly newsletter, which is, OK, here’s the week-to-week churn, and then once something becomes a big enough story, we do a Bulletin on it. I’ll stop there and see if, Will, you want to add to that.
Thomas: Yeah, I mean, it’s really just being knowledgeable about what sorts of documents are apt to contain… We develop a baseline knowledge of what exists right now, we call it the landscape of science policy. And the more you can know about that, the more you can see where it changed. Mitch is apt to call this a delta, with his physics background—and then you learn about the windows where these things are apt to come out.
So I mentioned the documents, but there are also congressional hearings. And you know, 95% of what’s said at a congressional hearing is not, frankly, going to be very interesting—to be honest, like 99%. But there’s always going to be some little thing, maybe it’s in the opening statement, maybe it’s later on in the hearing, maybe it’s something the witnesses say, and you can glom onto that, if you know what’s already out there, and say, “That’s new, I have not heard that before. This is something that we have to pay attention to.”
All the federal agencies have these advisory committees of outside scientists, and that tends to be where they talk about what’s going on with their programs. Is something over budget, is there something that they’re worried about, what’s their latest initiative in research or in some other aspect of their activities. It used to be that we would be listening to these things live, and that ultimately became untenable because one, there’s only going to be a little bit that you really, truly need to pay attention to, and also there’s lots of different things going on at once.
So we started to be a little bit smarter about recording these things, feeding them into these new AI transcription services so that we can scan what was said a lot more easily. It’s really been a series of small innovations that lets us consume more and more; even though we’re a really small team, we can pay attention to an astonishingly large amount of things. And then for the things that we miss, we depend on reporters from other outlets. Science magazine has excellent reporters, SpaceNews does just awesome, awesome reporting in the space sector, Nature—it goes on and on. What is it, Mitch, National Journal that’s been reporting on the science policy legislation?
Ambrose: Yeah, there’s a particular reporter at National Journal who’s gotten very interested in science policy.
Thomas: And they just came out of nowhere, and they do a lot of important work for you. And we always say, we’re only four people; we’re not going to cite only ourselves, because there are a lot of people who are paying attention to a lot of things we simply can’t pay attention to. And we want to acknowledge them as part of this science policy news ecosystem.
Trapani: It’s remarkable how much information you all process and put into your stuff. I would have thought that there would have been an army over there, so I was really curious as to how you did it. It sounds like FYI has grown—in terms of sophistication, in terms of people—in terms of the issues over the last few years. What do you see as coming next for FYI? Or what would you like to see next?
Ambrose: One other thing I didn’t mention that we launched over the past few years, in addition to the weekly newsletter, is we have this series of trackers. They’re essentially landing pages on our website. We have a budget tracker, which has very fine-grained information on, for a given agency, what is the funding outlook for [a] particular project? Then we have a leadership tracker, which is the “who are in positions of power over the science agencies in some way?”—both people going through the Senate confirmation process, but also a whole constellation of people who are career officials that don’t typically turn over with a given administration. And then finally, we have a bill tracker, which is an index of key legislation relevant to the physical sciences.
It just gives you this whole map, across these different categories of data—budget data, people data, and legislative data—of what’s going on. To your question about some new things we’d like to do: right now, each of those trackers is in a pretty rudimentary stage. We’d really like to take each of them to the next level and build out what we call among ourselves an information architecture. How do you provide some extra context around all the information that’s provided in there? Particularly with budget information for large facilities as they’re going through the process, for instance, it can be difficult to interpret the significance of certain changes in the funding profile.
Thomas: If I can offer an example: NASA launches science missions, right? And their funding will go on an arc, and one day, you’ll see they’re going to cut the budget for the science mission by 80%. Well, yes, that’s because it’s launching—it’s not because they don’t like it. And we don’t communicate that in any way in the budget tracker. As it stands right now, you just have to be aware of that, whether by reading our bulletins or because you’re an insider. So that’s what Mitch means by context.
Ambrose: We think you could even go beyond just the contextual information and build it into a richer resource for people. The astronomy and astrophysics community just came out with their latest decadal survey, an extremely important prioritization report for that discipline. And a big part of that exercise is, you know, constructing different budget wedges: How much money will we have in a given amount of time to do a flagship space telescope mission versus a ground-based telescope? And how do we fit that under certain budget guidance that we’ve gotten from the agencies? We feel like we could, for instance, build out our budget tracker into a tool to help with those types of planning exercises: really look at what past budget wedges were like for different sets of projects and how they fit under a given constraint, and also look forward at what the current projections are, adding them up in what’s known as a sand chart and seeing if you’re going to be able to fit under a certain budget target. That’s just one of many examples I could give of how you could make richer information resources that aren’t strictly news, per se, but that we think could help, both in aiding our own understanding of these processes and in providing tools for the scientific community to understand what’s going on. A lot of this falls under a concept we’ve thought about: establishing almost a research hub for science policy to complement the journalism that FYI does.
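The budget-wedge arithmetic Ambrose describes, stacking per-project funding projections into a sand chart and checking the stacked total against a budget cap, can be sketched in a few lines of Python. The project names and dollar figures here are invented for illustration; they are not FYI or decadal-survey data:

```python
# Hypothetical budget wedges: projected spending per fiscal year, in $M.
# All names and numbers are illustrative, not real agency data.
projected = {
    "flagship_space_telescope": [200, 350, 500, 450],
    "ground_based_telescope":   [100, 150, 200, 250],
    "ongoing_operations":       [300, 300, 310, 320],
}
budget_targets = [650, 800, 1000, 1000]  # assumed funding cap per year, $M

def sand_chart_totals(wedges):
    """Stack the wedges: total projected spending for each year."""
    return [sum(year) for year in zip(*wedges.values())]

def fits_under_target(wedges, targets):
    """For each year, does the stacked total stay under the cap?"""
    return [total <= cap for total, cap in zip(sand_chart_totals(wedges), targets)]

print(sand_chart_totals(projected))                  # [600, 800, 1010, 1020]
print(fits_under_target(projected, budget_targets))  # [True, True, False, False]
```

In an actual sand chart these yearly totals would be drawn as stacked areas; the overrun in the last two years is the kind of signal a planning exercise, or an enriched budget tracker, would flag.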
Thomas: When you’re a news organization, you accumulate a lot of information over time. Something you knew as a fact last year may no longer be true this year. And you can follow FYI, maybe if you do very studiously, you’ll be really up on the issue. But if you haven’t been an absolute scholar on this issue, we’d like to have a place where you can go so that you can learn everything that we’ve learned about this issue and have the most up-to-date information. And that would really make us almost as much of a research organization as it would make us a news organization.
The fact is, we have four people who work for FYI: me and Mitch, Adria Schwarber and Andrea Peterson, and none of us have backgrounds in journalism. We all are in science or history of science or something like that. We’re all researchers in one way or another. And so we’re not, in some sense, content just to write news articles—we want to share our knowledge with the world, so to speak. That’s kind of the central idea, is becoming more of a research organization. There are multiple ways in which we can do that. And expanding our trackers, creating these issue guides, those are two facets of what we’d like to do. We just have to find a logical way to expand that doesn’t put too much pressure on us because we’re pretty much at the red line as it is.
Trapani: Figuring out how to expand, reach new audiences, and create new resources while at the red line is a challenge for Issues too. We’re inspired by what you’re doing at FYI. On behalf of myself, Issues in Science and Technology, and probably thousands of people who work in science policy fields, I’d like to thank you for all you do and all the tools that you put out there.
Thomas: Thanks so much! It’s been really enjoyable.
Ambrose: And thanks for the opportunity to come on the podcast.
Trapani: Thank you for joining us for this episode of The Ongoing Transformation. If you have any comments, please email us at [email protected] and visit us at issues.org for more conversations and articles. I’m Josh Trapani, senior editor of Issues in Science and Technology. See you next time.
A Veneer of Objectivity
In “Unmasking Scientific Expertise” (Issues, Summer 2021), M. Anthony Mills exposes the danger of the vacuous “follow the science” slogan that has been used by politicians, scientists, and others throughout the COVID-19 pandemic to command allegiance to particular scientific conclusions or policies and to shut down what is sometimes reasonable disagreement. The pandemic is rife with disagreements over the science or the scientific backing of public health actions. Some of those disagreements are militant enough to evoke the (admittedly overused) metaphor of a science war. The possible explanations for scientific disagreements are many. Here is a non-exhaustive list of explanations for the sometimes-stark disagreements among scientists, public health experts, and other science advisers during the pandemic, some of which Mills discusses.
Normal science in real time. Reasonable uncertainty over unsettled science generates normal, rational disagreement. There is nothing unusual here in need of a special explanation. It seems unusual only to outsiders who are not used to seeing scientific disagreements livestreamed and live-Tweeted.
Fast science, bad science. The pandemic has provided a breeding ground for bad science owing to the urgency of the situation. Fast science promotes bad science, and bad science promotes scientific disagreement.
Belief factions. Belief factions are rival networks of knowledge users, sometimes though not always formed along lines of political affiliation, that preferentially believe, endorse, or share information coming from within the network. Even seemingly politically neutral matters such as whether hydroxychloroquine is effective can become polarized by belief factions. Different science experts may be part of distinct networks.
Epistemic trespassing. Given the enormity and multidimensional nature of the problems faced, experts from different fields have become COVID researchers or thought leaders. They commit epistemic trespassing when they overstep their expertise, potentially leading them to spuriously challenge the “real experts.”
Different disciplines, different disciplinary frameworks. Individuals from different research traditions such as evidence-based medicine and public health epidemiology sometimes rely on different standards or principles of evidence, reasoning, and decisionmaking, leading to disagreements that can be resolved only through higher order analysis.
Policy proxy wars. Policy conflicts rooted in disagreements over values or decisionmaking can masquerade as disagreements over science or evidence, fought by appealing to (or producing) research favorable to one’s preferred policy and criticizing or discrediting unfavorable research rather than deliberating over the values and decisionmaking at issue.
Pandemic theater. Disagreements among experts may be exaggerated, amplified, dramatized, or concocted in network media, on social media, by politicians, or by others.
Of course, a list of explanations for disagreements among politicians and members of the wider public would look a bit different. Distinct explanations might better explain distinct disagreements. Because these distinct explanations often demand different responses, it is important to consider which explanations apply in a given case.
Finally, absent from this consideration is the notion that experts are not actually following the science. Though nonexperts may sometimes ignore the science, when scientific experts disagree it is more likely that they are interpreting or weighing research findings differently, perhaps for the reasons given above.
Jonathan Fuller
Assistant Professor
Department of History and Philosophy of Science
University of Pittsburgh
M. Anthony Mills argues that the technocratic rhetoric of “following the science” hides the role of judgment and values behind a veneer of objectivity. On Mills’s analysis, this mismatch between the appearance of value-freedom and the reality of value-ladenness has contributed to the twin crises of loss of trust in scientific expertise and general political polarization.
I agree with Mills’s diagnosis. Policy-relevant science is necessarily “shot through with values,” to use the phrase of the philosopher of science Janet Kourany. And the mismatch between the value-free ideal and value-laden reality has indeed caused significant problems. But the underlying mechanisms are more complex than Mills indicates.
Trust in scientific expertise is itself a partisan phenomenon. Survey studies by the sociologist Gordon Gauchat and the Pew Research Center show that over the past five decades, liberals have had steady or even increasing trust in science and scientists, while conservatives have gradually lost trust. But even this is an oversimplification, as conservatives have maintained trust in what the sociologists Aaron McCright and Riley Dunlap call “production science” (science as used by industry) and lost trust only in “impact science” (science as used by regulatory agencies for goals such as restricting pollution and protecting human health). At the same time, many conservative voters support environmental and public health policies, even when their elected representatives do not. For example, long-running surveys by the Yale Program on Climate Change Communication indicate that about half of conservative Republicans have supported regulating carbon dioxide as a pollutant since at least 2008.
This paradoxical set of conservative attitudes toward science policy is plausibly due to the way that certain industries have used public relations campaigns and “merchants of doubt”—a term introduced by the historians Naomi Oreskes and Erik Conway to refer to scientists paid by industry to raise often-specious concerns about impact science. Merchants of doubt have sometimes weaponized the value-free ideal in these public scientific controversies, attacking the work of climate scientists or environmental epidemiologists as “politically motivated” “junk science.” Meanwhile, these industries’ own scientific staff typically know about the hazards posed by their products, at the same time as outsiders are being paid to act as merchants of doubt. This dual strategy, hiring merchants of doubt to attack impact scientists while concealing the findings of their own regulatory scientists, has evidently been effective in confusing the public—especially conservatives—and delaying regulation.
As the science policy scholar Sheila Jasanoff has demonstrated, the value-free ideal was supposed to ensure the legitimacy of technocratic policymaking at agencies such as the Centers for Disease Control and Prevention, the Food and Drug Administration, and the Environmental Protection Agency. By being value-neutral, science was supposed to provide an apolitical foundation for policy, immune to partisan politics. Instead, the value-free ideal has been weaponized by regulated industries to challenge the legitimacy of any unfavorable policies. The value-free ideal has undermined itself not so much because of general scientific hubris, but more because it has been susceptible to profit-motivated exploitation.
Daniel J. Hicks
Assistant Professor of Philosophy
Department of Cognitive and Information Sciences
University of California, Merced
M. Anthony Mills calls for us to rethink the proper place of scientific expertise in policymaking and public deliberation. His inventory of the consequences of “follow the science” politics is sobering, applying to COVID-19 no less than to climate change and nuclear energy. When scientific advice is framed as unassailable and value-free, about-faces in policy undermine public trust in authorities. When “following the science” stifles debate, conflicts become a battle between competing experts and studies.
We must grapple with the complex and difficult trade-offs and judgment calls out in the open, rather than hide behind people in lab coats, if we are to successfully and democratically navigate the conflicts and crises that we face.
I want to expand on one of Mills’s points, namely that public conversation is increasingly preoccupied with who is or isn’t following the science. Our democracy is pathologically tribalized, as Mills says, when science becomes “a shibboleth,” and rules “begin to resemble cultural prohibitions more than public policies: taboos to be ritualistically followed or transgressed.”
Perhaps the most pernicious consequence of following the science is what it does to us as political beings. Debate, negotiation, and compromise are shunted aside as disagreements take on a Manichean good/evil character. Resistance to mandates about masking, restaurant shutdowns, or vaccines is no longer understood in terms of mistrust in authorities, concerns about unanticipated consequences, or political interests. It is cast as the rebellion against rationality writ large. The political correspondent Tim Dickinson in the February issue of Rolling Stone didn’t blink when blaming Americans’ vaccine hesitancy on their “surrender” to a “kind of unreal thinking.”
But “you can’t fix stupid,” as people across the political spectrum often chant. And because democracy offers little recourse to “correct” what opponents see as each other’s irredeemable cognitive defects, our political discourse takes on a fanatical impatience. News headlines have noted the increasing anger among the vaccinated. Editorials shame the unvaccinated for their “idiocy” or “arrogance,” and social media are filled with comments proposing that we let the willingly unvaccinated die. All the while, vaccine hesitancy transforms into outright hostility.
Fanaticized discourse, in turn, legitimates strong-arm policy. The Biden White House, which brands itself as an administration that “respects” and “follows” science, will restrict nursing homes’ access to Medicare and Medicaid unless staff meet vaccination quotas. This move mirrors threats made by Governor Greg Abbott of Texas and Governor Ron DeSantis of Florida to defund mask-mandating school districts. Both follow-the-science policy and its populist nemesis prefer executive decree over democracy, which risks making our gridlocked political system even worse.
The philosopher Karl Popper warned about this in The Open Society and Its Enemies: “They split mankind into friends and foes; into the few who share in reason with the gods, and the many who don’t.… Once we have done this, political equalitarianism becomes practically impossible.” Although his book was more concerned with Marxists and Fascists who claimed to know the essence of human society, Popper’s warning applies equally to the effort to make science politically authoritative. What we need most right now is not a society that respects science, but one that respects disagreement.
As the public health establishment stared down the oncoming pandemic in early 2020, quite a few members of this community pointed out a conundrum: if they convinced the country to ramp up a massive response to SARS-CoV-2, and, by doing so, successfully prevented it from becoming a serious problem, critics would nevertheless bemoan the waste of public resources. What pandemic, they’d say, smugly and stupidly.
But noting this possibility hardly settles the question. Massive anticipatory responses to novel pathogens, or potential hurricanes, or date-sensitive computer glitches, really can be wasteful, and self-serving for bureaucracies that hold themselves forth as fixers.
My colleague Anthony Mills’s invocation in his Issues article of the 1976 “swine-flu fiasco,” as the New York Times called it, shows us that the political perils of success in heading off a serious problem are not merely theoretical. Because the problem with swine flu remained potential—because catastrophe failed to materialize—the preparations to combat it were seen as a politically motivated stunt.
Former New York governor Andrew Cuomo’s run as a media darling in 2020 shows something like the opposite. Even as his state suffered some of the country’s worst COVID-19 outcomes, Cuomo’s willingness to hold himself forth as a responsible, science-following leader left commentators musing about whether he could replace Joe Biden atop the Democratic ticket. Cuomo showed that policy failure could be spun into political gold, at least for a time. Sometimes seeming good beats being good.
Mills tells us, “Reestablishing an appropriate role for science in our politics … requires restoring the central role of politics itself in making policy decisions.” I heartily concur. But I worry that this makes a saner discourse sound much too easy to achieve. Because I fear that what the public wants, at bottom, is someone who will do exactly the right thing, every time, without any vexing complications.
That is not a realistic expectation, of course. Once conflicting values—held by different individuals, or even by single persons—are taken into account, it is usually not even a sensible concept. But “follow the science” has been a siren song precisely because it tells people that they do not have to confront the unrealism of this desire. If all we have to do is be led to the science, the burdens of self-government fall away.
Doing politics is the proper way to resolve difficult questions such as whether it is worth it for us to force people to wear masks—but it is painful. That makes getting to a healthier, more openly political discourse very difficult. If one side begins the process, the other side can simply call them “political” (often an effective slander) and congratulate themselves on their willingness to be “scientific.” This is one of those problems for which a clear sense of what is wrong does not immediately lead to a solution.
That said, a discourse that understands the proper relationship between politics and science can’t hurt, and we should be glad that Mills is leading the way.
Philip Wallach
Senior Fellow
American Enterprise Institute
Accounting for Lives Lost
By some estimates, at least 1.8 million Africans lost their lives during the transatlantic slave trade. Using an online database called Slave Voyages, artist Kathie Foley-Meyer studied maps detailing the paths that slave ships took from Africa to the Americas. Foley-Meyer, a PhD student in visual studies at the University of California, Irvine, created the painting In the Wake: With the Bones of Our Ancestors in an effort to remember those who perished before making landfall. “I just became obsessed with the lives of these human beings that are accounted for, but not really,” she told a university reporter. “They exist as bodies that disintegrated when they were put into the ocean and became part of the oceanic life cycle.”
KATHIE FOLEY-MEYER In the Wake: With the Bones of Our Ancestors, 2018 Watercolor, chalk, and wax on paper, 41.5 x 29 inches Collection of the National Academy of Sciences
She continued, “I remember looking at the statistics of human cargo—the number of people that survived the voyage to the New World and the number of people who did not. I began to wonder, other than numbers, how do you account for those lives lost? Those people were taken from their homeland and deposited in the ocean for one reason or another that rendered them disposable and not recognized as human beings.”
Foley-Meyer is a participant in the Ocean Memory Project, a collaboration of scientists, artists, engineers, and designers who are exploring the question, “Does the ocean have a memory?” The project is funded by the National Academies Keck Futures Initiative and is led by National Academy of Sciences member Jody Deming.
Image courtesy of the artist.
Ethics and Policymaking
The ongoing public health crisis is a moment of reckoning for those of us who work in the field that has come to be known as bioethics. As R. Alta Charo notes in her interview (Issues, Summer 2021), the word was coined by Van Rensselaer Potter, a biochemist at the University of Wisconsin, her former institution. For Potter, bioethics signified the integration of biology and moral values for the sake of human survival. In those days there was an emerging awareness of the fragility of the ecosystem upon which human life on the planet depends, culminating in the first Earth Day in 1970 and in the publication of The Limits to Growth, commissioned by the Club of Rome, in 1972.
Oddly, though, the word was captured not by environmentalism (Potter later tried to rename his concept “global bioethics,” to no avail) but by another emerging field, one captured in the unwieldy original name coined by the Hastings Center in 1969 as the intersection of “society, ethics, and the life sciences.” When Georgetown University’s Kennedy Institute was founded in 1971, the word used was simply ethics, as continues to be the case in the formal name of the institute today. Perhaps an earnest archivist will connect the dots that led to the expedient adoption of “bioethics” by the original participants. What is certain is that by the mid-1970s the word was ensconced in the early literature and in a growing media presence.
Other dynamics in the late 1960s and early 1970s are relevant to the biography of the word bioethics. The full import of the Nuremberg Code’s insistence on “voluntary consent” became more evident as a series of research ethics scandals were reported in the media, tying into the contemporary civil rights movements. By the time the National Commission for the Protection of Human Subjects of Biomedical and Behavioral Research published its Belmont Report, in 1979, “respect for persons” (defined as respect for autonomy) was a core value, perhaps the primary principle. On the other hand, there was growing confidence in the benefits of mass vaccination, including the beginnings of the World Health Organization’s smallpox eradication program, allowing some optimism about ending the scourge of infectious diseases. In short, bioethics appeared as individual rights were gaining public interest and threats to public health seemed to be diminishing.
Now, with public health of immediate concern amid the COVID-19 pandemic, the “first among equals” standing of respect for autonomy requires closer examination. Too often opponents of elementary public health practices such as masking have exploited the legitimacy that the academic literature has granted appeals to individual autonomy. My own preference is for a greater role for reciprocity in the pantheon of bioethical values, and not limited to specific cases such as organ donation. For bioethics scholarship, there is much more work to be done to specify reciprocity in concepts of justice and fairness, in effect shifting the balance of power among bioethics principles.
Jonathan D. Moreno
David and Lyn Silfen University Professor
University of Pennsylvania
S&T Policy and Economic Security
The post-World War II period ushered in a rapid increase in and unprecedented level of international scientific cooperation and knowledge flows. The end of the Cold War and the opening and rise of China, economically and scientifically, have further boosted this trend. For three decades, the global enterprise of science benefitted from a strong consensus on the positive value of international collaboration. Throughout most of this period, the United States has been the undisputed leader in terms of knowledge resources, international attractiveness for global talent, and economic prowess.
In more recent times, however, policymakers around the world are reassessing whether the internationalization, or globalization, of research and development is unequivocally positive. They are also revising their view of the United States as an undisputed beacon in the global science landscape.
Guile and Wagner point to the declining US dominance in the global science and technology enterprise and argue for a different approach to international science and technology cooperation. Tyson and Guile see an urgent need for linking science, technology, and innovation more closely to issues of economic security. They warn of rising cross-border supply chain vulnerabilities and the risk of other countries appropriating the gains of new technologies and knowledge-intensive ventures originating in the United States.
Both pieces argue that the United States needs to rethink its role in the global knowledge landscape and adopt a more strategic and coordinated approach to science and technology policy and to international cooperation. The United States, the authors write, should focus less on generating all knowledge at home and more on tapping into and benefitting from the knowledge and innovation created outside its borders.
Working in the European scientific and policymaking community, I see a growing concern over the United States wavering in its commitment to the global enterprise of science. Some of its closest traditional allies worry that it is prioritizing its own narrow national interests over the benefits of international cooperation, even pitting the two against each other or instrumentalizing the latter in pursuit of the former in a zero-sum fashion. They also observe US actions that seemingly reflect a lack of awareness of a rapidly changing role and perception of the United States in the rest of the world, not least among its closest friends. Large parts of the world and the global scientific community are looking to the United States to redefine its role and regain its credibility as a pillar of responsible research and innovation in a rapidly changing global context. As someone who is firmly rooted in both Europe and the United States, I echo that hope.
Sylvia Schwaag Serger
Professor of Research Policy
Lund University, Sweden
Bruce Guile, Caroline S. Wagner, and Laura D. Tyson present strong arguments for better research and development policies (Issues, Summer 2021). In “A New S&T Policy for a New Global Reality,” Guile and Wagner correctly acknowledge that America is no longer the only country that does intensive R&D. Europe, Japan, and more recently Korea, Taiwan, and China also contribute a growing percentage of academic papers, patents, and commercialized high-tech products and services. As their article subtitle suggests: “US policies need to be reconfigured to respond.”
In “Innovation-Based Economic Security,” Tyson and Guile also correctly argue that innovation is an issue of economic security. Without innovation, American companies, workers, and institutions will suffer. And to innovate, America must make holding onto new, emerging technologies a national priority, and, as the authors stress, improve its “ability to capture economic or national security value from scientific and engineering advances originating outside the United States.” They argue that a better integration and coherence of US policies can help.
On the other hand, the articles don’t acknowledge that innovation has dramatically slowed over the past decade, meaning fewer new technologies are being commercialized not only in the United States but throughout the world. The result is slowing productivity and few new manufacturing industries to employ America’s middle-class workers.
For instance, the 2010s produced fewer new digital industries than did the 1990s and 2000s. The 1990s gave us e-commerce and enterprise software and the 2000s gave us smartphones and cloud computing, each producing more than $100 billion in revenues by the end of their respective decades. In contrast, in the decade ending in 2020, only one new digital technology, video streaming, had achieved $50 billion in sales, while “big data” analytic software and services, tablet computers, and OLED displays contributed between $20 and $50 billion. Artificial intelligence, virtual reality, augmented reality, commercial drones, smart homes, and blockchain have even smaller markets (as do nondigital technologies such as nanotechnology, quantum computers, and fusion).
A lack of new technologies is also a big reason why today’s so-called unicorn start-ups—those with a valuation of $1 billion or more—have much higher losses than those of previous decades. My recent analysis published on the MarketWatch website found that 51 of 76 unicorn start-ups have more than $500 million in cumulative losses, 27 more than $1 billion, and 6 more than Amazon’s peak cumulative losses of $3 billion. Many of these start-ups will be unable to overcome these losses because half of them had losses greater than 20% of revenues and one-fourth had losses greater than 40% in 2020.
The small markets for these technologies and the disappointing performance of America’s start-ups suggest that new ways of doing R&D are needed. We need university researchers to bring technologies closer to commercialization and not just write academic papers, and we need companies to commercialize more of this university research. How can America’s policymakers encourage companies and universities to work more closely together? A first step is for funding agencies to measure output from university research more by commercialized products and services than by numbers of academic papers and their citations. Improving America’s R&D system is a big challenge.
Jeffrey Funk
Independent technology consultant
Solar Climate Intervention
The “moral hazard” of solar geoengineering that Daniel Bodansky and Andy Parker examine in “Research on Solar Climate Intervention Is the Cure for Moral Hazard” (Issues, Summer 2021) is an illustration of a general phenomenon: introducing a new, potentially low-cost opportunity for reducing the risk of a loss may weaken the incentive to take other actions that prevent that risk from occurring. Some climate policy stakeholders have opposed solar geoengineering (SG) research and deployment out of concern that SG would discourage and hence substitute for emission mitigation. This prospect of new strategies influencing the use of existing strategies to combat climate change raises two important policy and political economy questions.
First, how is SG different from other approaches that reduce the risks of a changing climate? Substitution among climate change risk-reduction strategies already characterizes climate policy in practice. Investing in solar panels reduces the emission-cutting returns of energy efficiency investments, and vice versa. R&D on battery storage may enable dispatching of intermittent solar power, and reduce the returns to R&D on carbon capture and storage technology.
One may argue that substitution within emission mitigation is fine, but different from SG substitution, since the former represents various ways of preventing climate change risk, while SG would only potentially ameliorate that risk. The same logic, however, applies to climate adaptation and resilience efforts. The emerging acceptance of the need for adaptation is clear evidence of insufficient emission mitigation over the past three decades. The failure of the single-pronged emission mitigation strategy has strengthened the incentives of individuals, businesses, and governments to invest in climate-adaptation programs.
Second, how could policymakers craft and implement a portfolio approach to climate change risk reduction? For example, would SG substitute for or complement emission mitigation? The underlying logic of the SG moral hazard critique is that decisionmakers optimize their risk reduction strategies. The analysis that SG deployment reduces the social return for a unit of emission mitigation, thereby causing decisionmakers to undertake less emission mitigation, presumes that decisionmakers already pursue optimal emission mitigation. The myriad imperfections and inadequacies of mitigation policy to date undermine this assumption and should give us pause about the prospect of optimizing the deployment of SG (and adaptation) to displace some emission mitigation.
Pursuing SG research and enhancing its salience among policymakers, stakeholders, and the public may represent an “awful action alert”—considering actions to block some of the incoming sunlight may galvanize public attention and enhance support for mitigation and adaptation. As my colleague Richard Zeckhauser and I emphasize in our paper “Three Prongs for Prudent Climate Policy,” such an awful action alert may spur greater emission mitigation and increase support for using every tool for reducing climate change risks. As Bodansky and Parker note in their compelling case for SG research, there is already preliminary social science research consistent with this notion. Going forward, we need to better understand the political economy of a portfolio approach to climate change risks. This suggests that an SG research agenda should address the political, economic, sociological, and international relations dimensions of SG research and deployment, in addition to the engineering and scientific dimensions.
Joseph E. Aldy
Professor of the Practice of Public Policy
Harvard Kennedy School
Daniel Bodansky and Andy Parker’s call for more research into solar geoengineering rests on a neat but false dichotomy. They imply that research must be either constrained or extended. In practice, what is needed is neither a ban nor a free-for-all, but appropriately regulated multilateral research.
The authors are concerned about fears of mitigation deterrence or “moral hazard,” using the latter term despite widespread criticism of its inappropriateness. They argue that such fears will motivate more opposition to research, of the sort recently mounted by an international coalition of Indigenous peoples and environmental groups when Harvard researchers prepared to conduct solar geoengineering experiments in northern Sweden without first engaging with the local Saami people, or indeed other Swedish and European stakeholders.
In defending this sort of careless research management, Bodansky and Parker do not help their own case. They also slip into a rather one-sided review of the existing literature on moral hazard and mitigation deterrence, foregrounding individual effects rather than political, systemic, and emergent ones. Though it is generally accepted that in rich Western populations, exposure to ideas of solar geoengineering tends to galvanize concern over climate change, there is a striking contrast between the German and American experiments the authors cite. The German researchers showed that their participants supported stronger mitigation measures, while the Americans merely revealed that some individuals expressed more concern about climate change when they were told about a possible response that would not mean restricting their emissions. In other words, one of the experiments that Bodansky and Parker cite as rejecting moral hazard actually illustrated it.
Moreover, as the authors themselves acknowledge, politicians and businesses face stronger incentives than individuals to grasp at excuses for delay in climate action. Their solution is often to ignore the problem or hope for the best, deflecting attention to the reasonable—but tangential—concern that more research is necessary to deter future decisionmakers, faced with serious climate impacts, from ill-informed efforts at geoengineering. Unfortunately, the record of solar geoengineering research in providing such practical guidance is poor, with most modeling-based studies presuming away a whole range of technical and political limitations and risks that would make the carefully designed and modulated interventions they consider impossible in practice.
More research of this sort risks reinforcing unrealistic expectations of the possibilities. The authors might retort that this is exactly why more experimental research should be undertaken. Sadly, while small-scale experiments might help us understand how particular chemicals will react in the stratosphere, they offer little scope to understand large-scale climate system responses, or to help accurately attribute climate effects to geoengineering interventions. As has been long recognized, the only experiments that could answer such questions would actually constitute global-scale long-term interventions.
But the central problem of Bodansky and Parker’s piece is not their limited and partial coverage of the literature, nor their “knowledge-gap” theory of research that overestimates the learning that could be achieved through more experimentation, but their presumption that the choice we face is binary. There is a middle way, in which research is conducted in ways that minimize the risks of mitigation deterrence through prior development of binding international governance standards and procedures, including requirements for appropriate advance public engagement. Advocates for geoengineering research need to stop attempting to dismiss the risks of mitigation deterrence, and accept the challenge to collectively design research processes that minimize those risks.
Duncan McLaren
Research Fellow, Lancaster Environment Centre
Lancaster University, United Kingdom
A New Model for Research Teams
In “Time to Say Goodbye to Our Heroes?” (Issues, Summer 2021), Lindy Elkins-Tanton challenges the conventional wisdom on how we organize our research enterprises. She calls our current approach the “hero model,” where professors in subdisciplines control a pyramid of resources—mini-fiefdoms that end up vying for attention, students, and budget. This model has tended to disincentivize collaboration, encourage cutthroat competition for resources, and in the worst cases, facilitate bullying and harassment. Without collaboration, research tends away from interdisciplinary work, where many of the true breakthroughs in science and technology emerge.
Even more worryingly, the hero model has produced a personality-based environment, driving away many students who could have truly contributed. It might retain the students who thrive in a highly competitive environment, but not necessarily the best or most creative scientists. It has helped suppress diversity and discouraged inclusion.
Instead, Elkins-Tanton, who is a colleague of mine, suggests that the research community could move toward a more team-based model, with multidisciplinary groups addressing big challenges in science and society. In order to solve big problems such as climate change, we need multiple skillsets and voices. Both she and I have seen this model work extremely well at NASA, where multidisciplinary teams have conceptualized and implemented missions that explore our solar system and the universe. Our most significant challenges require interdisciplinary work, and require us to include all voices.
Most research enterprises are organized much the way universities have been for hundreds of years. To truly move science and technology forward, it is time to break this paradigm and rethink how we conduct our enterprise. Heroes can’t save us—we all need to be part of the solutions.
Ellen R. Stofan
Under Secretary for Science and Research
Smithsonian Institution
In her thought-provoking essay, Lindy Elkins-Tanton urges her fellow scientists to “ask ourselves whether we are solving the biggest and most urgent problems, and whether we are lifting up our colleagues and the next generation to do the same.” At universities, the stark answer to this critical question is no. However, given the challenges facing all of us across the globe, we need to change our approach so we can answer yes—and we need to do it right now. The challenges are too complex, too impactful, and too urgent to continue as we are.
We need to change the way we do research, and Elkins-Tanton offers an indispensable framework: identifying questions, creating an interdisciplinary team, using seed funding, and making a professional project manager a key member of the team. She cites no loss of scholarly output from these changes; in fact, they provide the added benefits of increased speed of innovation, incorporation of goals not usually pursued, and a transformative change in culture.
Importantly, this framework motivates a focus on big questions that matter not only to scientists but also to people in the community, whom Elkins-Tanton invites to participate in the problem-formulation stage. By emphasizing expansive interdisciplinary teams, she places diversity and inclusion at the center of ethical and pragmatic science, where they belong. As she writes, “The collective future of humankind requires that we hear all the voices at the table, not just the loudest.”
The challenges are too complex, too impactful, and too urgent to continue as we are.
As a statistician, I would urge anyone considering implementing Elkins-Tanton’s model—and I hope many do, quickly—to include from the start robust assessment tools and the collection and analysis of data. Her proposed framework deserves a rigorous empirical understanding of what is working and why, so that the model can be improved with each iteration. The resulting evidence will also promote the model’s adoption.
Changing the way we do research, of course, cannot be achieved with the snap of our fingers. Elkins-Tanton alludes to the need to alter incentives around hiring, promotion, and tenure—issues that are often allergic to risk-taking, team-based projects and scholarship derived from societal needs. Fortunately, there is work being done to identify ways to evolve universities’ existing practices, as evidenced by a workshop I participated in led by the Meta-Research Innovation Center at Stanford, the conclusions of which appeared in an article by David Moher and colleagues in PLOS Biology in 2018, as well as in an article by Moher et al. in Issues the same year.
It will take strong leadership across all universities to evolve faculty incentives, but that work is worth it because until academics can answer Elkins-Tanton’s key question in the affirmative, we are not serving the true needs of humanity. We are serving only ourselves.
Sally C. Morton
Executive Vice President, Knowledge Enterprise
Professor of Health Solutions, and Mathematical and Statistical Sciences
Arizona State University
In “Time to Say Goodbye to Our Heroes?” Lindy Elkins-Tanton not only asks and answers this critical question about our traditional academic structure, but pushes us to reevaluate the underlying value system and reward structure of the knowledge creation enterprise. She suggests that knowledge creation be driven by “big questions” rather than “big names.” I expect that the “heroes” themselves were originally motivated by such big questions, but the current funding structures and conservatism of review panels make it difficult to shift to the bigger, more complex questions asked by modern society.
As Elkins-Tanton also describes, research development is most innovative and fruitful when led by a diverse, creative, empowered team equipped with the opportunity and safety to bring its best ideas. I myself am a product of the traditional “hero” system, but only now—being solidly mid-career with tenure—am I able to realize the full potential of a collaborative and diverse team.
Research development is most innovative and fruitful when led by a diverse, creative, empowered team equipped with the opportunity and safety to bring its best ideas.
We work exclusively with this model in our research projects at Arizona State University’s Interplanetary Initiative, of which Elkins-Tanton is vice president and I am an associate director. And our experiment is working!
Since the hero model is increasingly in conflict with the societal shift toward teamwork, interdisciplinarity, and the inclusion of diverse voices, it’s time to broaden the scope of our experiment.
Here are a few opportunities for bringing these values to the wider academic system:
Review panels for any resource allocation should be double-blind when possible, such that the research questions and proposed experiment methodology are evaluated rather than the principal investigator. This has been shown to work in at least the few cases I am particularly familiar with, such as the time allocation process of the Hubble Space Telescope and some smaller grant programs within NASA and the National Science Foundation.
Universities’ promotion and tenure criteria should include an explicit evaluation of these values, so that people coming up the ranks with these newer research perspectives are able to reach positions of influence and promote the values-evolution process.
We need to teach students at the undergraduate level how to ask big questions and guide their own learning as part of interdisciplinary and diverse teams, while training people to be both leaders and collaborators. Our Interplanetary Initiative is now in its second year of offering a Technological Leadership bachelor of science degree, which is designed to do exactly this, aligning the next generation of learners with the needs of modern society.
There are many more changes, both systemic and specific, we need to make as our values in the knowledge creation enterprise shift away from the hero model. It won’t be easy, but it is necessary to ask and answer the big questions society faces today.
Evgenya L. Shkolnik
Associate Professor of Astrophysics, School of Earth and Space Exploration
Associate Director, Interplanetary Initiative
Arizona State University
When Lindy Elkins-Tanton asks if it’s “time to say goodbye to our heroes,” I respond: “Most definitely!” Her article focused on the social and productive benefits of teamwork, specifically mentioning NASA mission teams. As cochair of the National Academies of Sciences, Engineering, and Medicine’s committee charged with “Increasing Diversity and Inclusion in the Leadership of Competed Space Missions” proposed to NASA’s Science Mission Directorate, I’ve been following her teamwork approach—especially on the Psyche mission.
But I think it’s clear that the problem starts long before the time of graduate students and junior researchers that she mentions. I’ve been following the demographics regularly posted by the American Institute of Physics’ Statistical Research Center, which show that the big drop in participation in science, technology, engineering, and mathematics—the STEM fields—by historically underrepresented communities happens earlier along the career pathway. The “pinch-point” is somewhere between high school and the first couple years of college. It’s those 400-student Physics 1 classes where the “hero” culture hits home.
It’s clear that the problem starts long before the time of graduate students and junior researchers that she mentions.
True, many universities have moved on from the “chalk and talk” lecture mode. But despite increases in class demonstrations, group discussions, and the use of classroom response systems known as “clickers,” there’s still a culture of the person in the front (still most likely to be an older white man) knowing it all and telling you—perhaps with a well-meaning smile—the facts you need to memorize. Studies reported in the 1997 book Talking About Leaving: Why Undergraduates Leave the Sciences, by Elaine Seymour and Nancy M. Hewitt, showed that both women and men had similar negative reactions to such teaching, but the men tended to stay while the (equally capable) women tended to leave. A follow-up book in 2019, Talking About Leaving Revisited, showed that such issues persist, and extend beyond the factor of gender to race and ethnicity.
I fully respect the work of Elkins-Tanton and her Interplanetary Initiative. The much harder job will be changing the education system to increase the embarrassingly low (and demographically narrow) US per-capita production of STEM bachelor’s degrees, as shown in, among other sources, The Perils of Complacency: America at a Tipping Point in Science & Engineering, published in 2020. Achieving that goal will require not just saying goodbye to the heroes but also making serious national investments in education.
Fran Bagenal
Assistant Director for Planetary Science
Laboratory for Atmospheric and Space Physics
University of Colorado Boulder
Climate Scenarios and Reality
Progress on the important issue of climate change requires a framework for evaluating the likely consequences of different courses of action. Science can powerfully inform public decisions on energy systems, infrastructure, and economic policy when researchers explore, using the best available evidence, a range of possible futures through emissions scenarios. The process of constructing, describing, and using these scenarios is challenging for many reasons. The continued evolution and improvement of emissions scenarios is an important element of the future of climate-change research. But in “How Climate Scenarios Lost Touch With Reality” (Issues, Summer 2021), Roger Pielke Jr. and Justin Ritchie are wildly off base in declaring that the “misuse of scenarios in climate research has become pervasive and consequential—so much so that we view it as one of the most significant failures of scientific integrity in the twenty-first century thus far.”
Their characterization is wrong for three main reasons. First, the scenario developers and the Intergovernmental Panel on Climate Change have been explicit about the features of the scenarios and the limits on their relevance to specific applications. In particular, the high-emissions RCP8.5 scenario has long been described as a “business-as-usual” pathway with a continued emphasis on energy from fossil fuels with no climate policies in place. This remains 100% accurate, even if RCP8.5 does not appear to be the most likely high-emissions pathway.
The scenario developers and the Intergovernmental Panel on Climate Change have been explicit about the features of the scenarios and the limits on their relevance to specific applications.
Second, one of the main motivations for emissions scenarios is to provide a basis for comparing futures with and without policies related to climate change. Until recently, it has been reasonable to expect that a no-policy future would be a world of continuing high emissions and ongoing emphasis on fossil fuels, namely RCP8.5. As greater understanding of climate change spurs new policies and advances in technology, the notion of a no-policy world becomes increasingly abstract. But a no-policy endpoint remains an important point for comparison, even after the world has begun to diverge from the no-policy path. Referring to this no-policy endpoint as business-as-usual is imprecise, but it is not a significant failure of scientific integrity.
Third, at least part of the reason that the world is moving away from RCP8.5 and toward lower emissions is that effective communication of risks from a changing climate (and the unacceptable consequences to society of the business-as-usual scenario) has stimulated technology advances, incentives, and policies that now make RCP8.5 unlikely. Progress in tackling the risks of a changing climate, even if progress is still too slow, should be celebrated. It should not be converted into an implied failure of scientific integrity. Around the world, tens of thousands of scientists are working hard to understand the details of climate change and the risks it brings. The research tools are imperfect, and the future has many features that are unknowable. In this setting, the key to maintaining the highest standards of scientific integrity is maintaining commitments to professionalism and transparency, including continuing to fine-tune the development, use, and interpretation of emissions scenarios.
Chris Field
Perry L. McCarty Director of the Stanford Woods Institute for the Environment
Stanford University
Marcia McNutt
President
National Academy of Sciences
“All models are wrong,” said the renowned statistician George Box, “but some are useful.” The same could be said of future predictions. Climate models have proved enormously useful and minimally wrong: they have captured the observed pattern and magnitude of human-caused global warming stunningly well. But they don’t even try to predict the future. Instead, they make projections: incomplete but informative pictures of possible worlds conditional on different carbon dioxide emissions scenarios.
I agree with Roger Pielke Jr. and Justin Ritchie’s statement that we shouldn’t call the high-emissions RCP8.5 scenario “business as usual,” and they are right to call for the climate community to end this sloppy wording. The world appears to be off that particular nightmare trajectory, but horrors still await us if we fail to rein in greenhouse gas emissions. We don’t know what the future holds, but we are clear that the biggest wild card is completely within our control. This is the message that emerges from the best available climate science, a complex and remarkable picture assembled from climate models; basic theory; observations of temperature, ice, precipitation, sea level, cloud cover, and many other variables; as well as reconstructions of past climate.
Climate models have proved enormously useful and minimally wrong: they have captured the observed pattern and magnitude of human-caused global warming stunningly well.
I was, however, saddened and confused by the authors’ contention that the use of RCP8.5 threatens the integrity of that science. Neither the most recent Intergovernmental Panel on Climate Change report nor the National Climate Assessment claims RCP8.5 is “business as usual,” but even an unrealistic scenario can yield interesting science if used appropriately. After all, we can do experiments in a climate model that we’d never be able or allowed to do in the real world. We abruptly quadruple carbon dioxide in the atmosphere, return it to preindustrial levels, or increase it steadily by 1% every year. I am using RCP8.5 in my research right now—not because I believe it to be business as usual or our inevitable future, but because I am interested in what happens to the climate as Earth passes temperature thresholds as it warms. There is not much difference between a world that passes 1.5 degrees Centigrade and eventually warms by three degrees and a world that exceeds that threshold on its way to something hotter.
Thousands of scientists use this scenario for other perfectly legitimate reasons: to understand signals of forced change against a background of natural variability, for instance, or to compare state-of-the-art climate models to earlier generations. They do so while facing constant criticism, much of which I worry is in bad faith. As Pielke Jr. and Ritchie note, “Groups such as the Global Warming Policy Foundation in London and the Competitive Enterprise Institute in Washington, DC, are highlighting the misuse of RCP8.5 to call into question the quality and legitimacy of climate science and assessments as a whole.” I think it’s wrong to claim that the existence of a high-forcing scenario compromises scientific integrity. But for some, it’s certainly useful.
Kate Marvel
Research Scientist
Columbia University and NASA Goddard Institute for Space Studies
Roger Pielke Jr. and Justin Ritchie make a number of provocative claims that deserve additional scrutiny.
Since the beginning of global climate modeling, scientists have been acutely aware of the need to maximize the ratio of climate change signals to the noise of chaotic internal variability. Two approaches are widely used. One is employing large-magnitude “forcings” (such as projecting abrupt increases of carbon dioxide concentrations by as much as four times current levels, or increasing carbon dioxide levels by 1% annually) to establish patterns of future climate change. The second is using wide spreads of storyline-based scenarios, where emissions and land use/land cover change as functions of varying underlying assumptions about energy use, economic growth, and other factors. These will hopefully bracket potential future changes and explore thresholds and non-linearities in the transient climate system response.
However, as climate models have become more comprehensive, creating coherent storyline-based scenarios for all relevant inputs has become more challenging. The increased coherence requires a substantial length of time (years) for the mostly unfunded volunteer economic and energy modelers around the world to create the input files for the climate modelers who, in turn, take another couple of years to complete the multi-model simulations and make the results available. It is thus neither remarkable nor surprising that the literature available for assessments such as those by the Intergovernmental Panel on Climate Change (IPCC) relies heavily on scenarios established a decade ago, including a high-emissions scenario (RCP8.5) that was originally described as “business-as-usual,” in the event society made no efforts to cut greenhouse gas emissions.
Over time, assumptions underlying the storylines can become more or less plausible, and specific scenarios, more or less useful. This was true for scenarios devised in the early 1980s that didn’t envisage the success of 1987’s Montreal Protocol on Substances that Deplete the Ozone Layer in curbing emissions of chlorofluorocarbons or foresee China’s rapid industrialization. Notably, we agree that the concept of a business-as-usual scenario in today’s fast-moving policy environment is poorly defined—particularly for a general audience—though neither recent IPCC reports nor the National Climate Assessment use such terminology.
As climate models have become more comprehensive, creating coherent storyline-based scenarios for all relevant inputs has become more challenging.
Despite claims by Pielke Jr. and Ritchie, the use of a wide range of plausible scenarios is neither a blunder on par with misidentified cancer cell lines (an absurd claim) nor an issue of “scientific integrity.” Rather, the scientific community is already responding to the need for increased diversity and real-world grounding of projections, as well as new conceptual approaches. New scenarios are continually developed for many different purposes, for example, to assess the climate impact of the COVID-19 pandemic. Additionally, there is already movement to assess impacts based on the commonly projected “scenario-free” global warming levels of 1.5 degrees Centigrade, 2°C, 3°C, and so on, which can be used broadly to quantify impacts for any new proposed scenarios.
Updates could be accelerated by institutionalizing scenario development and associated climate model input files. More focus on scenario-free analyses would also be useful. Certainly, increased communication between economic and energy modelers, climate modelers, and impact modelers is welcome. We stress that the use of a scenario such as RCP8.5 tells us nothing about whether the results depend on the realism of the scenario itself. Thus, assessing the worth of scientific contributions by counting which scenarios are mentioned is like assessing honesty by counting the number of times the word integrity is used in an article; it is both pointless and misleading.
Gavin A. Schmidt
Director, NASA Goddard Institute for Space Studies
Senior Climate Science Advisor to the NASA Administrator
Peter H. Jacobs
Strategic Science Advisor, Earth Communications
NASA Goddard Space Flight Center
Note: Chris Field and Marcia McNutt’s letter has been updated to include a more complete quotation from the original essay by Roger Pielke Jr. and Justin Ritchie.