Forum

Protecting the census

Census Day is right around the corner. The US Government Accountability Office shares Constance F. Citro’s assessment of the challenges facing the upcoming enumeration, as laid out in “Protecting the Accuracy of the 2020 Census” (Issues, Summer 2019). A complete count is inherently difficult given the size and diversity of the country, and the stakes couldn’t be higher: data from the decennial census, which is mandated by the Constitution, are used for such essential purposes as apportioning seats in the House of Representatives, drawing congressional districts, and allocating hundreds of billions of dollars each year in federal financial assistance.

As Citro points out, previous enumerations have had their risks and challenges, and 2020 is no exception. Over the past decade, our work has recommended steps the Census Bureau can take to ensure a more cost-effective and secure count of the nation’s population, and in February 2017 we added the 2020 Decennial Census to our list of high-risk government programs. This is because, among other things:

  • The Bureau is using innovations that are not expected to be fully tested before being used in 2020 Census operations. These innovations, which include allowing the public to respond using the internet, show promise for controlling costs. But they also introduce new risks because, in part, they have not been used extensively, if at all, in earlier decennials.
  • The Bureau faces challenges in implementing information technology (IT) systems. In July 2019, we reported that the Bureau was at risk of not meeting near-term IT system development and testing schedule milestones. These schedule-management challenges may compress the time available for remaining system development and testing, and increase the risk that systems will not function as intended.
  • The Bureau faces significant cybersecurity risks to its systems and data. For example, as of the end of May 2019, the Bureau had over 330 corrective actions from its security assessments that needed to be addressed, including 217 that were considered “high risk” or “very high risk.”
  • The Bureau is seeking to control the cost of the census, which has been escalating with each decade. According to the Bureau, the 2010 Census cost about $12.3 billion (in constant 2020 dollars), while the 2020 Census is estimated to cost approximately $15.6 billion.

Continued management attention and oversight will be vital for ensuring that risks are managed, preparations stay on track, and the Bureau is held accountable for implementing the enumeration as planned. As of July 2019, the Government Accountability Office has made 107 recommendations to help address these risks and other concerns, 32 of which have not been fully implemented. To ensure a high-quality, cost-effective, and secure count of the population, it is important that the Census Bureau continue to address these recommendations.

Gene L. Dodaro

Comptroller General of the United States


As Constance Citro ably catalogued, every decennial census has its challenges and controversies. The 2020 Census, I believe, faces a set of unprecedented challenges that collectively could create a perfect storm and threaten a successful census—that is, one that counts all communities equally well.

To be sure, there have been noteworthy advancements in census methods and operations, as well as measurable improvement in census accuracy, over time. Nevertheless, disproportionate undercounting of blacks, Latinos, American Indians living on reservations, renters, and children under age five persists, and recent censuses have overcounted non-Hispanic whites, as well as homeowners and older Americans in some race and gender cohorts. In short, the census is not yet an equal opportunity enumeration.

For the 2020 Census, the consequences of funding shortfalls and test cancellations throughout the planning cycle could fall hardest on activities specifically designed to reduce this disproportionate undercounting, such as language-appropriate promotion, community-based assistance centers, and a sufficiently large army of census takers to follow up with reluctant and, yes, fearful households. Unfortunately, the current administration’s effort to add an untested citizenship question to the 2020 Census exacerbated concerns among immigrants that their census responses would be used to harm them or their families. Despite the US Supreme Court decision effectively quashing the citizenship question for 2020, President Trump’s July 11, 2019, executive order directing the Census Bureau to produce census data on citizenship and legal (immigration) status using administrative records continues to raise concerns about the administration’s motives. I am not surprised by the skepticism: the terms “census” and “immigration enforcement” should never appear in the same official document. The census is only as good as the public’s willingness to participate. If public confidence in the Bureau’s statistical mission and the motives for producing data falls, the agency’s ability to fulfill its mission could be in jeopardy.

Funding shortfalls this decade also prevented sufficient investment in early research and testing that could have fostered the “cumulative learning for evidence-based planning decisions” that Citro highlighted. She noted that there have been “sharp increases in costs per housing unit” over the past five censuses. As lawmakers work to spend federal tax dollars prudently, the Census Bureau certainly must do its part to keep costs in check, even when faced with a growing, diversifying population. Technology undoubtedly offers opportunities to improve cost-effectiveness and productivity; administrative and commercial data can streamline address verification and help fill in some missing data, although the jury is still out on whether administrative records can replace efforts to count entire households directly. Still, I do not think we can cut corners or settle for uneven results when the very strength of our democracy’s foundation is at stake. The digital divide could prevent a not-insignificant number of households from answering the census online. Administrative datasets often cover harder-to-enumerate population groups less well and do not include information consistent with Bureau rules for determining where people should be counted. Overall, personal outreach from community leaders and “trusted messengers” will still be necessary to improve participation in historically undercounted communities.

Are Americans ready for a markedly different census—one that does not rely significantly on personal outreach, persuasion, and response? I’m not convinced. The census is the nation’s largest, most inclusive civic engagement exercise. It gives most residents an opportunity to participate directly in an activity that empowers them and their communities. Everyone is counted—regardless of age, citizenship or legal status, or prior incarceration—which means everyone counts. And that is a powerful message for a nation whose cohesiveness and common understanding of our unique place in the modern world feels increasingly fragile and even fractured.

Terri Ann Lowenthal

Census consultant and former staff director, US House of Representatives census oversight subcommittee, 1987–1994


Constance Citro provides an excellent short history of United States censuses, along with detailed information on the successes and shortcomings of the five most recent ones. Her information is consistent with what I learned and observed during my tenure at the Census Bureau (1977–79, 1983–90, 1996–2004). In my last position there, I oversaw research, methodology, and quality for all Bureau programs. In particular, I oversaw an immense evaluation program for the 1998 dress rehearsal mandated by Congress and a later evaluation of the 2000 census, and was heavily involved in the discussions regarding coverage adjustment of the 2000 census.

The census was instituted to inform political processes—reapportionment and redistricting. At its heart, this constitutional rationale remains. Additionally, through legislation, census counts have come to be used to provide accurate estimates regarding funding needed to meet numerous societal needs. During the past five censuses, the political purpose of the census operation has occasioned conflict with efficient information-gathering.

I share many of Citro’s concerns. Conducting a mandatory census of the US population is an immense peacetime operation—with critical time constraints set in law. The relative infrequency of the decennial census leads to loss of institutional memory. In contrast, to meet mandated time deadlines, the Census Bureau has often been forced to return to past practices that were no longer technologically optimal.

Funding is always an issue, particularly as legislators never seem to understand that new procedures used in a large operation must be tested years before the operation. Familiar technologies have been adopted only tardily. In the 2010 census, even smartphones were used only hesitantly. To my knowledge, nobody has ever created a gold standard Master Address File. This is crucial, given that it is the dwelling that shelters the people being counted.

My greatest concern for the 2020 census is the potential for high levels of nonresponse and for coverage error. It is not clear that procedures are in place to deal with either under- or over-coverage. Both the debate over a citizenship question and a general fear of government have made it harder to carry out data collection and the outreach needed to encourage response.

Internet collection will begin with a letter sent to each housing unit on the Master Address File, providing an identification number used to access a census form. But individuals will also be allowed to obtain and complete a census form without an identification number; the Census Bureau will then match those forms to their housing units. I fear that this could lead to yet another type of over-coverage (in addition to what regularly happens with college students and children in joint-custody families, among other cases), and funding will not be available to correct this over-count.

Cynthia Z. F. Clark

Executive Director
Council of Professional Associations on Federal Statistics

Expanding the CRISPR conversation

In “Lessons From the He Jiankui Incident” (Issues, Summer 2019), Xiaomei Zhai, Ruipeng Lei, and Renzong Qiu call for a regulatory system that would involve governmental authority as well as oversight from the research community. They make a compelling case that self-regulation by researchers is not sufficient to curb rogue scientists from embarking on risky investigations, such as the germline experiment the Chinese researcher He Jiankui conducted. I agree that a “bottom-up” approach is needed in addition to the “top-down” system of government regulation. The authors mention political leaders, researchers, humanities and social science scholars, and public stakeholders as necessary participants in this process.

More needs to be said about the “public stakeholders.” Zhai, Lei, and Qiu do not elaborate on this suggestion; it is not clear in their account whether ordinary citizens count as stakeholders. I contend that they should be. How, then, should members of the public be chosen for involvement in oversight of emerging science and technology? Given the highly technical aspects of much cutting-edge research, what kind of scientific background—if any—should such public stakeholders have? The same questions could be asked about the political leaders and humanities and social science scholars the authors mention as participants in the bottom-up oversight they recommend. As a past member of multidisciplinary bodies at the institutional, national, and international levels during my long career in bioethics, I am confident that thoughtful, dedicated members of the public can acquire the necessary knowledge to be useful members of such committees. Would it be best to have a semipermanent body, such as the national bioethics commissions that exist in many countries today? Should ad hoc committees be formed for each new scientific endeavor that requires such oversight? These and other questions deserve careful scrutiny and an examination of existing models of public engagement.

Zhai, Lei, and Qiu mention a critical element in this process: the need to be alert to potential conflicts of interest. This is obvious in the case of scientists who are directly involved in research and experimentation. But it can also be true of members of the public. For example, in North America patient advocacy organizations often receive financial support from the pharmaceutical industry. As has been noted elsewhere, neither the industry nor advocacy organizations are required to fully and routinely disclose their financial ties. A robust system is needed to prevent the appointment of individuals who have conflicts of interest to serve on the proposed bottom-up bodies to oversee gene editing. It is not sufficient simply to require disclosure of conflicts; individuals with such conflicts should be disallowed as members.

Many details have to be put in place to realize the forward-looking recommendations the three Chinese professors propose. But their article is a good start in a much-needed direction.

Ruth Macklin

Distinguished University Professor Emerita
Albert Einstein College of Medicine


The article by the Chinese bioethicists Xiaomei Zhai, Ruipeng Lei, and Renzong Qiu on the use of CRISPR to genetically modify the germline DNA of human babies offers an important perspective on regulating this emerging technology. But it represents only a narrow, elite view of what transpired and what human gene editing might mean in China. A broader spectrum of perspectives on the He Jiankui incident is available for those willing to look.

To wit, the first few months following He’s announcement witnessed publication of a large number of articles about the event by Chinese scholars not directly involved in the biomedical field. Duan Weiwen of the Chinese Academy of Social Sciences, for example, argued that the event reveals weaknesses in bioethics. And Tian Song of Beijing Normal University pointed out that the event provides an opportunity for society to develop appropriate legal mechanisms to protect society from possible harms from science.

What is even more remarkable, however, is the extent to which the public immediately became involved in responding to and thinking about human germline engineering. The “public” here refers primarily to Chinese internet users, or “netizens,” since the attitudes of people offline are difficult to gauge, and Chinese people increasingly access scientific and technological information through internet searches.

Some commentators have speculated that netizen discussions of the gene-edited babies event suggest a watershed in Chinese public attitudes toward science: from overwhelmingly positive to increasingly doubtful and questioning. A sample of messages posted on CCTV news point in this direction: “If [genes] can be edited casually, it is a potential threat to the natural development of mankind.” “It’s as if … Pandora’s box has been opened.” “Scientific research does need to be based on ethics and law. Technology without ethical and legal foundations is not a blessing but a disaster!”

Weibo, a popular microblogging platform, is the primary channel through which netizens express opinions in China. Postings there reveal a nuanced discussion about the gene-edited babies event. Here it is possible to identify four different types of comments: those criticizing the scientists (“shameless”); those supporting the scientists (“The Chinese also treated the railway like this more than 100 years ago”); those criticizing the government (“How come there’s no more to come out of this? What about the results?”); and those supporting the government (“The results of the investigation prove that the research results released by He Jiankui are … prohibited by the state”).

Despite the range of participants discussing the gene-edited babies event, one stakeholder has been conspicuously marginalized, both in China and abroad: Bai hualin (the White Birch Forest organization), a charity foundation for AIDS patients. According to paperwork for He Jiankui’s project, Bai hualin was the institution from which He recruited couples for gene editing. After reviewing information provided by He, the public welfare organization screened out volunteers who did not meet his protocol requirements and then, with their consent, introduced 50 people to him. Although questions have been raised about the degree of informed consent, as far as we can determine no serious effort has been made to solicit views from this organization or its clientele.

One can undertake ex ante and ex post assessments of the gene-edited babies event. Ex ante, the gene editing was clearly deficient in taking all stakeholders into account. Ex post, however, it seems reasonable to argue that a range of stakeholders, including government, academics, media, and the public, have participated in critical discussions. Although they have made different contributions to the emerging assessment in China, there has in fact been a broad participation of multiple stakeholders—broader, we suggest, than has been the case with regard to this particular issue outside China.

Yan Ping

School of Marxism Studies
Dalian University of Technology

Carl Mitcham

School of Philosophy
Renmin University of China

Socializing artificial intelligence

What is a social interaction? What is a relationship? What is a friendship? Justine Cassell, the author of “Artificial Intelligence for a Social World” (Issues, Summer 2019), built her research program in artificial intelligence around a theoretical model grounded in linguistics, psychology, and computer science. I’ve taken a different approach, examining these kinds of questions from a developmental psychology, communications, and computer science perspective. In particular, children’s longstanding experiences with media characters provide a window for understanding how children treat nonliving entities in terms of their feelings about them, called parasocial relationships, and their parasocial interactions with them, in which a “conversation” is created by having a character ask questions, pause for a reply, and then act as if they heard what a child said. For both of us, socially contingent interactions are a defining quality of what it means to be a virtual human, and are modelled after linguistic and behavioral exchanges with actual people, particularly children’s friends.

A key question for Cassell and colleagues involves how artificial intelligence can help us understand social interaction. They find that a virtual peer who is created to function interdependently with children and aligned with who children are (e.g., dialect patterns) leads to beneficial social and academic outcomes in science and math. Similarly, our team finds that virtual characters are effective learning companions when children interact with them and feel stronger parasocial relationships with them, defined as perceptions that a character is a trusted friend who makes them feel safe. Why would that not be the case? We create artificial beings based on who we are, on what our needs are. We model their actions and behaviors on us, and we, in turn, treat them as if they are human.

One place where our results differ is the role of rapport, Cassell’s “chit chat” that takes place at the “water cooler.” For Cassell and colleagues, rapport is linked to learning. For our team, parasocial “math talk” improves math skills, but “small talk” does not. Perhaps these outcomes differ because Cassell’s virtual peers are novel and our virtual character is well known to children, one with which they have already established a parasocial relationship. Indeed, the conversational patterns that children share with one another, which Cassell uses to build virtual peers, vary based on whether children are friends or not. Nevertheless, both research lines point to the importance of trust and friendship in children’s learning from virtual companions.

Although the promise of intelligent entities to teach social and academic skills is very real, so too is the risk that they will somehow replace something that is fundamentally human. In truth, intelligent beings reflect what is best and worst in us. A robot feigning to be afraid of the dark elicits empathy from children. A virtual peer who acts differently from children can elicit abuse. Our capacity to build virtual learning companions who respond contingently to children in socially sensitive ways is a challenge before us, one that will influence children’s developmental outcomes as their virtual and physical worlds become increasingly intertwined and interdependent.

Sandra L. Calvert

Professor of Psychology
Georgetown University
Director, Children’s Digital Media Center

Social media pollution

In “Deciding the Facebook Question” (Issues, Summer 2019), Clarke Cooper really digs under today’s debates about regulating the large internet companies (the “FAANGs”) to reveal a much deeper problem with the internet. And that problem, like the classic challenge of all environmental problems, was immortalized by the comic strip character Pogo over 30 years ago: “We have met the enemy and he is us.” Yes, Cooper discusses the business shenanigans in which the FAANGs—Facebook, Apple, Amazon, Netflix, and Google—are engaged, and notes that better antitrust action and regulation might help address the standard problems that emerge from monopolies and market manipulation. But he shows us that even the strictest government antitrust rules will not address the more fundamental problem of today’s internet: information pollution. By driving the cost of speech, broadcast, and conversation to zero, we have created a system that encourages all of us to dump everything online, act on impulse, and ultimately accept whatever the algorithms tell us our predicted preferences are. Why take the time to think when you can immediately fulfill your initial impulse by pressing a button to “like,” “retweet,” or “1-Click shop?”

Cooper offers two ways to clean up this information pollution and the resulting loss of personal agency. The first would be a technical fix that would reintroduce costs and scarcity into the internet. The second would be through social deliberation that would create something like a constitutional protection for what a person is and what control that person has over his or her identity.

I kept wishing for easier ways. I imagined organizing well-regulated militias of Minutemen (Minutepeople?) armed with apps instead of muskets who would muster to protect our civil liberties, or adopting rules that would force companies to pay for the personal data that we would own and they could only rent. But the first approach requires too many volunteers with little prospect of matching the torrent of information, and the second is just as likely to be subverted when people unthinkingly assent to being monitored in return for an annual $5 discount coupon on their next purchase.

The dynamic unleashed by zero-cost information on the internet seems unstoppable. And it essentially undermines our agency—whether as consumers or citizens. I’m afraid that Cooper is right: asserting our own agency in public deliberation is really the only robust way to confront this threat to, well, our agency. Clearing Pogo’s Okefenokee Swamp of our discarded junk requires us to collectively agree on the rules that we will collectively follow. Clearing the internet polluted by our unthinking behaviors—reposting fake news, letting others make decisions for us, undermining what it is to be a citizen—requires us to collectively agree on the boundaries of personhood that are eroded by all that unthinking behavior. Cooper’s article is a call to start that discussion.

William Savedoff

Brunswick, Maine

Powering energy innovation

In “Clean Power From the Pentagon” (Issues, Summer 2019), Dorothy Robyn and Jeffrey Marqusee cogently make the case for using the pull of military energy needs to both advance more rapid development of low-carbon energy technologies and meet military mission needs for more energy efficient, lighter, and advanced power for the battlefield. The authors have decades of Pentagon experience—Marqusee worked for me running the Department of Defense (DOD) environment and facility energy technology programs (SERDP and ESTCP) in the 1990s. Robyn led DOD’s energy and environmental programs during the Obama administration, a position comparable to the one I held in the Clinton administration.

The Department of Energy (DOE) is charged with funding and conducting energy research for the nation. DOD is the nation’s single largest energy user; even though it accounts for only about 1% of total US energy use, it has the ability to drive market demand for new energy technologies.

Leaders at both DOD and DOE have recognized the value of deeper collaboration on energy technology, and have signed a Memorandum of Agreement to advance this collaboration. Unfortunately, the stovepipes of the individual DOD and DOE labs, combined with the rigid bureaucratic structures through which each agency reports, have in many ways held back what could be even more productive collaborations in the nation’s interest to advance energy technologies.

The most compelling story the authors tell is about opportunities to advance battery technology for military power needs, and the enormous boost that “next gen” batteries would give to commercial energy storage for renewables. DOD’s demands for better batteries, for everything from lighter loads for soldier power to advanced drones and other autonomous systems, are immense.

The key for DOE is that the collaboration recognize and target DOD end users as an early adoption market. In my own experience, DOE labs are increasingly interested in and open to supporting military needs through their research, especially when encouraged by DOE headquarters to do so. DOD, for its part, needs to ensure that it has good channels for conveying its military energy needs across the DOE enterprise of labs and senior scientists.

Equally valuable would be collaborative R&D planning at an early enough stage that DOE research resources can meaningfully be devoted to defense needs. Such collaborative planning takes effort at the headquarters and program manager level, but can have meaningful payoff to improve the military’s readiness to power the next battle.

Sherri Goodman

Former Deputy Undersecretary of Defense (Environmental Security)
Founder of the CNA Military Advisory Board on Energy, Climate Change, and National Security
Senior Fellow, Woodrow Wilson International Center for Scholars


Robyn and Marqusee envision DOD as a partner of DOE in research, development, and early implementation of new energy technologies to meet both defense and commercial needs. They highlight specific opportunities for collaboration in cutting-edge energy technologies, note how DOD can be a lead customer for advanced technologies that may be too expensive or insufficiently proven for immediate commercial applications, and argue that the two departments are failing to take advantage of each other’s strengths to pursue shared goals for developing new energy technologies.

In essence the authors have restated the arguments for dual-use technologies, but have done so in the context of explicit cooperation between the federal government’s largest funder of energy R&D and its largest user of energy.

I found one aspect of their argument especially compelling. They note that “a partnership [of DOE] with DOD in areas where the military’s needs are aligned with those of commercial users would introduce much-needed demand-pull into DOE’s R&D process” (emphasis added). They list several reasons that the demand for new energy technologies in general is attenuated, although it is notable that they do not address why the demand for new energy technologies from DOE is not particularly strong. DOD is accustomed to using R&D and innovation to meet specific demanding mission needs in a way that commercial energy markets are not. As a “customer” for DOE-generated energy technology, DOD could provide a ready market for new high-performance or clean energy technologies, or both.

One shouldn’t underestimate the bureaucratic and cultural challenges of DOD/DOE partnerships. Each department has constituencies in Congress, in the Office of Management and Budget, and among interested publics that may not welcome such interactions. National security classification is of paramount concern in DOD, whereas it may be of little or no concern to the parts of DOE of interest to this argument. Priority-setting mechanisms and processes in the two departments are unlikely to be compatible. Even DARPA and ARPA-E, the departments’ advanced R&D agencies that are arguably the most similar, follow different protocols for identifying project priorities, selecting and evaluating performers, and putting results into practice.

Robyn and Marqusee argue successfully, I believe, for the potential value of energy R&D and implementation cooperation between DOD and DOE. They are largely silent, however, on how such cooperation might actually be accomplished. One approach would be for the agencies, under the watchful eye of Congress, to sign on to a high-level, high-stakes, “all-in” commitment to work together across a broad range of energy R&D and innovation—to create an “umbrella agreement” under which cooperation could be endorsed and enabled. A second approach would be for the two departments, acting through DARPA and ARPA-E, to experiment by engaging in several ad hoc joint projects or programs both to develop new advanced energy technologies and to “learn by doing” how to resolve the inevitable issues that will arise in their collaborations. My vote is for the second approach.

Christopher T. Hill

Professor Emeritus, George Mason University
Partner, Technology Policy International

Troubles in climate journalism

Matthew Nisbet’s column, “The Trouble With Climate Emergency Journalism” (Issues, Summer 2019), highlights two persistent problems at the heart of public debates on climate change over the past three decades: the relentless pursuit of consensus and message discipline, combined with a flinching from the difficult political arguments that are required to make progress on climate policy.

The recent gilets jaunes protests in France against fuel tax increases illustrate the difficulties of climate politics, and how public support for climate policies can be upended when those policies exacerbate economic inequalities. A recent editorial in the supposedly standard-setting Guardian newspaper belittled the fuel tax protests as “support for the destruction of the planet” rather than acknowledging how climate politics needs to engage with the unequal resource consumption that is driving climate change. The declaration of a “climate emergency” reinforces this disavowal of politics, as science-inflected urgency displaces inclusive debate about what the transition to a zero-carbon society means for citizens who are more concerned about the end of the month than the end of the world.

Hand in hand with this aversion to political argument is the promotion of consensus in climate politics that, the Guardian claims, “must ultimately transcend left-right distinctions.” Such a view is a logical extension of Nisbet’s observation that climate journalists “portray science and scientists as truth’s ultimate custodians,” but displaces the values-first discussion we need in order to imagine what zero-carbon societies will look like and, crucially, how we can get there. Social scientists and media scholars have a crucial role to play in facilitating such approaches, which hold more promise than unimaginative attempts to delineate who has the right to contribute to public debate based on consensus rather than actual expertise. When the role of climate science in political debate is questioned, as in recent citizens’ assemblies, citizens note how the participation of scientists and the framing of scientific knowledge may be neither neutral nor helpful.

These issues have deep roots, and will be hard to address. The long-established science-first framing has facilitated climate change being seen as a discrete problem, rather than a social issue that intersects with acute policy challenges such as poverty and inequality. Such framing is seductive, as it allows left-leaning journalists and activists to swerve the deep, difficult debates we need to build coalitions for climate policy. Yet as the gilets jaunes experience demonstrates, climate change will always be a political issue that inevitably prompts tension and dissensus. The sooner climate journalists focus on these politically productive conversations, the better.

Warren Pearce

Senior Lecturer, Department of Sociological Studies
University of Sheffield

Shock and thaw

In “The Empty Radicalism of the Climate Apocalypse” (Issues, Summer 2019), Ted Nordhaus imagines a future where a President Jay Inslee mobilizes and nationalizes in the first genuine effort to address climate change. The parable ends with Inslee aboard Air Force One on the way to deliberate with India and China. What if this were not Inslee the climate hawk? What if it were another hawkish president, more like George W. Bush? That administration might fly to India or China, but instead of Air Force One it might fly B-1B bombers. Instead of pursuing deliberations, it might cow major greenhouse-gas-emitting countries with military force, obliging them to join global climate policy. Climate shock and awe. Another radical path climate activists have yet to pursue.

Nordhaus is doubtful today’s environmentalists have the stomach for nationalization and large-scale technological innovation. I am doubtful—and thankful—that we also don’t have the stomach for climate shock and awe. But why not?

Nordhaus offers three reasons environmentalists have not advocated nationalization. First, techno-anxiety. Second, environmentalists don’t trust government, and are more comfortable relying on arm’s-length regulation than active management. Third, environmentalists are hampered by democracy. Environmental goals, Nordhaus writes, “require top-down, centralized, technocratic measures that most environmentalists are unwilling to seriously embrace.” But, he adds, the environmentalists’ ideal is one of “egalitarian politics,” “will of the people,” and “bottom-up democracy.”

I disagree. Democracy doesn’t obstruct climate goals. Democracy is the forum from which goals emerge in the first instance. A welfare economist might derive a two-degree Celsius goal from monetized benefits. A rights theorist might push the same goal to minimize suffering. As Nordhaus notes, this is all “endlessly contestable.” Neither rights-based pleas nor economic computations, as just two examples, deliver certainty. Each offers only fodder for democratic contestation. Environmentalists are right to rely on democratic practice that empowers individuals to make decisions and set collective goals. Environmentalists are right to embrace democracy even if, as Nordhaus suggests, democracy slows progress. Because without democracy, progress to what?

Without democracy, how do we draw the line between Nordhaus’s radical techno-nationalized future—which is at least worth pondering—and the radical militaristic future of climate shock and awe? Military aggression has the potential to address a lingering objection to US climate action: that acting is pointless if India, China, and others continue to increase their emissions. The two hawkish futures both call for mobilization and technological sophistication. They are both departures from mainstream proposals to tax, regulate, or subsidize. Yet I suspect Nordhaus would join me in rejecting the military alternative. But we can’t reject that future only with fundamental commitments of economics, ecology, or philosophy. Democracy is not merely an instrument to balance between competing prepolitical desires. Democracy shapes preferences and constructs civic goals. If there is not yet will for radical action, only democracy can build that will. We need to decide together that we care about our future, that violence is not a fitting tool to protect that future, that democratic politics is an opportunity not a burden.

Josh Galperin

Visiting Associate Professor of Law
University of Pittsburgh School of Law
Special Advisor for Environmental Law Programs
Yale School of Forestry & Environmental Studies

Ecology under threat?

Mark Sagoff’s critiques of the National Ecological Observatory Network’s management in “Will NEON Kill Ecology?” (Issues, Summer 2019) are indisputable. (Two authors of this note made similar points in a 2015 article, “Big Questions, Big Science: Meeting the Challenges of Global Ecology,” in the journal Oecologia.) But Sagoff’s perspective on NEON’s intellectual role is not. As John Magnuson, one of the deans of limnology, has noted, a “lack of historical perspective can place … studies in the invisible present, where a lack of … perspective can produce misleading conclusions.”

Sagoff’s perspective on NEON’s science is drawn from the dawn of ecology, and based on the views of a small selection of senior ecologists. One of us (Gram) conducted a survey in 2009 that found that mid-career ecologists were only “somewhat likely” to use NEON, but “un-ecologists” (undergraduate through untenured) were “very likely” to use NEON. In 2018, Gram and her team repeated this survey and found similar results, with “uninterested” respondents being older and later in their careers. In contrast to Sagoff’s opinion that “if you took the same amount of money, and used it to enhance … grants to young people, we’d get (better) science,” young ecologists look to NEON as a way to do new science.

Place-based science plays a crucial role in understanding the living world. But ecologists are also concerned with larger-scale patterns, to reveal how ecosystems are shaped and to provide principles to guide management more generally. This requires strategies of what to measure and where to measure across places and diverse systems. What to measure? In another NEON survey, and at a critical early workshop in NEON’s implementation, scientists stated that standard, tested, published methods were crucial and that the main reason for variants on basic measurements was simply the lack of standard methods. NEON and the National Science Foundation (NSF) invested in this need.

Where to measure? NEON’s analysis showed that many US ecosystems were un- or under-sampled, including the nation’s most productive and diverse systems. In 2016, one of us (Schimel) and colleagues published a paper showing that most ecological studies were in the less diverse, lower carbon storage, and less productive regions globally. Lacking an overarching strategy, ecologists work where it is convenient, close to home and inexpensive. NEON’s sampling strategy addressed this failure of ecological, geographical, intellectual, and social equity.

Ecological breakthroughs now often result from analyses of Big Data. Martin Jung led an effort using machine learning to estimate global primary productivity and disproved a number of concepts entrenched in the literature. Ethan Butler and colleagues used a massive data compilation to map global diversity of plant function. Patricia Soranno and colleagues argued that managers are responsible for vast landscapes and that case studies are of limited use: they applied Big Data to the multiple causes of lake eutrophication. Many other examples where Big Data were used to test theory or inform management can be cited.

Sagoff presents an incomplete view of how the discipline of ecology tests theory, and a one-sided view of NSF’s decision-making, which, though often flawed, has moved ecology from a collection of just-so stories to a systematic field, paralleling developments in other sciences that study the earth.

David Schimel

Senior Research Scientist
Jet Propulsion Laboratory, California Institute of Technology
Founding CEO and first Principal Investigator for NEON

Michael Keller

US Forest Service, International Institute of Tropical Forestry
First Chief of Science for NEON

Wendy Gram

First Chief of Education and Engagement for NEON
Independent ecologist and science educator
COMET Program Implementation Manager
University Corporation for Atmospheric Research


Mark Sagoff described the National Ecological Observatory Network as a project to “turn ecology into large-scale Big Data science whether it wants to be transformed or not.” In his account, the idea for a major observatory to detect long-term ecological change at the continental scale was generated and promoted by National Science Foundation staff, who foisted an infrastructure project devoid of scientific questions on the ecological community. As a community participant in many early NEON workshops and working groups, and much later a rotating NSF program officer (2014–2015), I feel that this narrative neglects to mention the many, many scientists who have strongly advocated for a national facility to test hypotheses about the effects of environmental change on ecological processes at multiple scales. Although there are a number of mischaracterizations in Sagoff’s historical account, here I will focus on only two issues: the scientific rationale for an ecological observatory, and the role of ecologists in formulating a hypothesis-driven research agenda.

First, NEON was developed to answer specific questions about ecological change that could affect society, as described in NEON’s Integrated Science and Education Plan, published in 2006. Questions included, for example: how will ecosystems and their components respond to changes in natural- and human-induced forcings such as climate, land use, and invasive species across a range of spatial and temporal scales? There are numerous hypotheses in ecology about how terrestrial and aquatic systems will respond to changing atmospheric composition, climate, urbanization, and biodiversity loss. Early versions of NEON’s design were closely tied to testing these hypotheses. To evaluate the effects of climate change, many proponents favored large-scale experimental manipulations of elevated atmospheric carbon dioxide, increased temperature, and altered rainfall patterns. To address land use change, the design discussed around 2005 (and one I personally favored, long before I worked at NSF) included replicated urban-to-rural land use gradients across the continent.

These early designs were not ultimately implemented for two reasons. First and foremost was cost. Over the years, as Sagoff noted, the scope of the observatory was resized to meet budgetary constraints. Second, I would argue that ecologists themselves have become more reluctant in recent years to prioritize particular environmental research questions and grand challenges. This has focused NEON on general change detection and ecological forecasting across levels of biological organization, rather than on a potentially narrower set of questions.

Yet the ecological community has in the past come to consensus on urgent environmental research questions that required large-scale collaborations. For example, the Ecological Society of America’s Sustainable Biosphere Initiative (SBI) outlined a detailed list of prioritized questions that set the research agenda for decades afterward, and indeed, is reflected in the subsequent proposals for a national ecological observatory. This initiative is now three decades old, and in the interim I do not think there has been a comparable, comprehensive effort to identify research questions and goals across ecology. The reasons for this are numerous, but are likely related, at least in part, to the increasingly heated political discourse about environmental biology outside the scientific discipline. That said, there are a few notable exceptions, such as the 2001 National Research Council report Grand Challenges in Environmental Sciences, which is tied to NEON’s research agenda.

As a result, I don’t find it accurate to characterize either the rise of “big data” or an external set of priorities imposed by NSF as the main impetus for NEON. It was strongly driven by past scientific efforts such as the SBI. Can and should NEON operations be more hypothesis or question-driven in the future, as some researchers have proposed? Possibly so. I concur that in the past several years, its question-driven focus has become secondary to building out a facility that will meet the needs of a wide cross section of the scientific community.

In the coming months and years it is entirely possible to revive community-based efforts such as the SBI, in which ecologists and stakeholders come together to develop ambitious, high-priority research objectives that serve both science and society. This is not the role of NSF alone—it is a joint responsibility of scientists and diverse stakeholders—and it will not “kill ecology” but move it forward through an era of rapid environmental and social change.

Diane Pataki

Professor of Biological Sciences
University of Utah

Inside research ethics

Jim Ross’s article, “Research Ethics From the Inside” (Issues, Summer 2019), provides a good accounting of how institutional review boards (IRBs) work as well as some of the challenges they face. As a member of an IRB for nearly 10 years, I appreciate what he has done. However, I have to note that there continues to be a need to further explain to the lay public basically what an IRB is and why we have them. Without assurances that ethical procedures will be followed, confidence and trust in government-sponsored research will continue to erode.

Identity theft and misuse of data are problems that currently generate high levels of concern. Response rates to many government surveys are now at a level that would have been considered abysmal a couple decades ago. Respondents need assurances that personally identifiable information collected on them in any form of research will be kept private and held in strict confidentiality. IRBs have a responsibility to examine what data safeguards will be employed and how these will be explained to potential respondents or other research subjects within informed consent documents: in fact, this will receive increased attention from my IRB. This holds true for clinical research as well as surveys.

Accounts of the infamous Tuskegee syphilis experiments conducted by the US Public Health Service on African American men, or the tale of how researchers took cells from the young black cancer patient Henrietta Lacks without her consent and created an “immortal” cell line now widely used in medicine, far too often simply end, with no explanation of the steps the government has taken to prevent such unethical activities from occurring today. These steps notably include adoption of the Federal Policy for Protection of Human Research Subjects, also known as the Common Rule, which defines the processes for IRB review and approval of research with human subjects. The public needs to know what IRBs are, why we have them, and what steps continue to be taken to ensure their rights as research subjects.

Dale Hitchcock

Member, Institutional Review Board for the National Center for Health Statistics at the US Centers for Disease Control and Prevention
