How to Bring Global R&D into Latin America: Lessons from Chile

Over the past several decades, a growing number of multinational companies have established research and development (R&D) facilities outside of their home countries. More recently, some universities and public research organizations have followed suit, including American universities such as Stanford, MIT, and Georgia Tech. From the perspective of host countries, attracting global R&D can facilitate the absorption of foreign knowledge and strengthen national technological capabilities, thereby helping to close technology gaps.

But emerging countries—with a few notable exceptions, such as China—have been largely left out of this picture. For the most part, these countries lack the large and dynamic markets, the scientific infrastructure, the human capital, and the specialized industrial clusters that typically attract foreign investments in R&D. Latin American countries, in particular, have struggled to attract foreign R&D. This is reflected in a recent report by the United Nations Economic Commission for Latin America and the Caribbean, which found that in 2013 the region attracted only 3 percent of global R&D foreign direct investment projects, whereas China attracted 34 percent.

Some countries in the region, however, are now trying to turn this around. We offer Chile as a case in point. Chile is making the attraction of foreign R&D a more explicit priority—part of a broader strategy to strengthen the internationalization of its national innovation system. Most critically, efforts include new policy programs that offer incentives to compensate for other location disadvantages.

Designing for success

The story begins in 2008, when a government economic development agency called Corfo, working through its InnovaChile Committee, launched the International Centers of Excellence (ICE) program. It is one of the few programs in the world—and the first of its kind in Latin America—explicitly aimed at creating R&D centers where foreign universities, public research organizations, and private corporations carry out R&D, technology transfer, and commercialization activities. The centers, known as ICEs, are selected based on their potential to boost the competitiveness of Chilean industry. To help reach this goal, the ICEs are required to hire a significant number of local scientists, establish collaboration agreements with domestic universities, and contract with local companies to conduct research.

The ICE program’s annual budget is currently around $30 million (in U.S. dollars), which makes it the largest of Corfo’s programs designed to promote innovation in Chile. Rather than distributing this budget among a large number of projects, the program selects a limited number of R&D centers and provides them with substantial funding to help them reach critical mass relatively fast. Chilean embassies promoted the program through direct contacts with leading universities and research institutes around the world. Centers were selected through a competitive process. Corfo invited proposals and then evaluated them with the support of an international panel of experts.

The program’s first call for proposals, in 2009, focused on attracting foreign universities and public research organizations. Its second call, in 2012, expanded to include multinational companies. Four ICEs were established through the first call, each receiving a grant of up to $19.5 million over a 10-year period. The recipients were expected to contribute at least the equivalent of 59.5 percent of the grant to their respective center’s funding. In the second call, the maximum grant for universities and research organizations was reduced to $12.8 million per center over eight years, while the minimum co-financing increased to 87.5 percent of the grant. Grants made to companies were limited to $8 million over four years, with each recipient being required to contribute at least twice the amount of the grant.
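
To make the co-financing requirements concrete, here is a minimal arithmetic sketch (in Python, purely for illustration) of the smallest total budget a center would command if it received the maximum grant under each call. The figures are the program caps described above; actual awards and co-financing varied by center.

```python
# Illustrative arithmetic only, using the ICE program caps described in the text.
# Actual grants and co-financing varied by center.

def minimum_total_budget(grant_millions, cofinancing_share_of_grant):
    """Grant plus the minimum required co-financing (expressed as a share of the grant), in millions of U.S. dollars."""
    return grant_millions * (1 + cofinancing_share_of_grant)

# First call (2009), universities and public research organizations:
# up to $19.5 million over 10 years, co-financing of at least 59.5 percent of the grant.
print(minimum_total_budget(19.5, 0.595))  # ~31.1

# Second call (2012), universities and research organizations:
# up to $12.8 million over eight years, co-financing of at least 87.5 percent of the grant.
print(minimum_total_budget(12.8, 0.875))  # 24.0

# Second call, companies: up to $8 million over four years,
# co-financing of at least twice the grant.
print(minimum_total_budget(8.0, 2.0))     # 24.0
```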

A total of 12 R&D centers have been established so far through the ICE program, comprising eight research organizations and four multinational corporations from seven different countries (Table 1). The roster includes one university and two corporations from the United States. The University of California, Davis, opened a center—its first R&D center outside the United States—to focus on agricultural technology, including plant breeding, postharvest technologies, and technologies for adapting to global climate change. The pharmaceutical giant Pfizer established a center—its first R&D center in Latin America—to focus on developing new genome-based diagnosis technologies for cancer. And Emerson, a manufacturing and technology company, opened a center to focus on developing new technology to improve the productivity and efficiency of mining operations.

Table 1

The ICEs vary in size, from over 120 researchers in the one run by the German research organization Fraunhofer to around 25 in the one run by Wageningen UR of the Netherlands. All are clearly aligned with the needs of Chilean industry, but some focus on specific sectors (such as mining, nutrition, or renewable energy), whereas others embrace platform technologies with applications across several industries (such as information technology, biotechnology, or nanotechnology). The four ICEs established through the first call are still in their early years of operation, but they have already obtained visible results. For example, during its first three years of operation, the Fraunhofer center has been granted one patent, with 10 additional patent applications pending; has spun off one independent company; has started 33 projects with various domestic industries; and has published 16 scientific papers in journals listed in the Thomson Reuters citation index. The eight ICEs selected through the second call have only very recently begun operations in the country or are in the process of doing so.

Challenges and future outlook

As part of its efforts to ensure that the ICE program is meeting its goals, Corfo commissioned one of us (Guimon) in 2014 to conduct an interim evaluation. Building on interviews with 10 key stakeholders—including the directors of the centers, representatives of Corfo, and government policy makers—the evaluation revealed a number of concerns and challenges.

Throughout its history, critics of the ICE program have argued that the funding provided to foreign groups should be used instead to strengthen universities and R&D institutes within Chile that are in great need of additional investments to build critical mass. Another frequently mentioned concern is that the program might lead to a sort of “techno-colonialism,” whereby foreign centers focus on commercializing in Chile technologies they have already developed in their home countries, while paying less attention to local technology priorities and needs. Indeed, some previous studies have found that the R&D of multinational companies in emerging countries normally entails “familiar science” (that is, applications currently used by the firm or its competitors) rather than “new science.”

Despite such global-local frictions, since 2008 the ICE program has survived two changes of government, which usually come with substantial shifts in policy strategies. Long-term commitment to the ICE program beyond the political cycle is critical given that its full returns can accrue only over an extended period of time. This makes evaluation efforts important, especially regarding whether or not the grant funds might be better dedicated to R&D centers of Chilean ownership. The answer here will turn on the capacity of the ICEs to develop new solutions for Chilean industry and to instigate a systemic change in the national innovation system, thereby improving university-industry collaboration and enhancing the commercialization capacity of the national science base, while forging closer linkages with foreign sources of knowledge. Forthcoming evaluations will require a complex and flexible model, balancing quantitative metrics and qualitative assessments, accommodating differences between scientific fields, and ensuring high levels of transparency.

To overcome frictions and barriers to cooperation in the near future, we believe Corfo should adopt a more assertive approach, acting as a cross-border broker between the ICEs and Chilean research centers and firms. Corfo should also seek to foster closer cooperation among the ICEs themselves, to share good practices and identify possible areas for scientific collaboration, while coordinating research agendas to exploit synergies and avoid duplication. Furthermore, Corfo will need to decide whether it will issue new calls for proposals for this program in the next few years or concentrate resources on existing ICEs and perhaps provide the best of them with additional funding. Another controversial decision that needs to be addressed shortly is whether to penalize the worst-performing centers—those found in the interim evaluation not to be meeting their original commitments—by curtailing their level of public funding.

Looking further into the future, the key challenge will be to ensure that the ICEs remain active and expand beyond the 10 years of public support envisioned under the first call of the program. The sustainability of the ICEs depends on their capacity to earn income from contract research, patent licensing, and other sources of competitive public funding, in addition to core funding. Based on the standards of international best practice, we believe the ICEs should evolve toward a funding model of around one-third income from industry, one-third from competitive public funding, and one-third from core public funding. Core funding may come from the Chilean government as well as from the countries in which the ICE grant recipients are based.

Therefore, it will be important for the program to demonstrate not only that the centers are having an impact locally, but that they are providing benefits for the home country that could justify future funding. Such benefits may be assessed in terms of new opportunities for firms from those countries to expand into Chile using the centers as local intermediaries, new technology generated in Chile that can be used at home or exported to other countries, and new opportunities for international scientific collaboration, among other possibilities. Moreover, some of the ICE managers indicated that, for research institutions interested in becoming global players, the program might well provide an opportunity to experiment with the challenges of internationalization in a small and safe country such as Chile.

But the primary goal of the ICE program is to enhance Chile’s scientific and technological capabilities. In 2014, the Chilean government launched the Productivity, Innovation, and Growth Agenda, a new economic strategy focused on diversifying the country’s economy by adding more technology-intensive industries, while improving the operations of existing industries so that they can provide new products and services of higher added value. The ICEs are expected to play a strong role in this strategy, both by bringing new capacities and technologies from their countries of origin to existing industries and by helping to catalyze the emergence of new technology-driven sectors.

Learning model

Chile’s experience with the ICE program can serve as a learning model for other emerging countries in Latin America and beyond that want to attract global R&D. In fact, it has already served as inspiration for Peru, which launched a similar initiative in 2014 called the Formula C Program. However, we stress that for such programs to succeed, they must be grounded in a dynamic ecosystem in which local researchers, universities, firms, and entrepreneurs are prepared to absorb and capitalize on the spillovers that global R&D is expected to generate.

The ICE program is a good example of emerging modes of North-South research collaboration to build the kind of research and innovation capacities that can help emerging countries catch up economically and close development gaps. The program represents a shift from the traditional “one-way” knowledge transfer mentality that has so often marked international scientific cooperation for development, toward a deeper collaboration and coproduction of knowledge among equal partners with mutual benefits.

A Policy Experiment Is Worth a Million Lives

Smoking cigarettes is a public health disaster in the United States and the rest of the world. Every year, around 500,000 smokers in the United States die prematurely, and the Surgeon General considers smoking the single largest preventable cause of death in the country. But not all tobacco products are alike in the risks they bring to users and those around them. This diversity of ways to deliver nicotine to the user offers regulatory pathways to reducing—perhaps radically—the toll of smoking. Policy experiments at the state level could allow governments to explore and learn from these pathways, but the dominant, hard-line anti-tobacco ideology may stand in the way of implementing the necessary regulations.

Since 2007, an ever-expanding variety of electronic cigarettes have reached the market (although some of them in fact look nothing like cigarettes). These devices are popularly and generically known as “vapes” or “vaping products,” and, by researchers, as electronic nicotine delivery systems (ENDS). Tobacco combustion products are the cause of virtually all of the serious health risks from cigarettes. ENDS do not burn tobacco. Rather, they use electricity to heat solutions that often contain nicotine, allowing the user to inhale a nicotine-laden vapor. Public Health England, whose mission is to “protect and improve the nation’s health and wellbeing, and reduce health inequalities,” supports the estimate that vaping is around 95 percent less dangerous than smoking cigarettes. The evidence is even stronger for oral smokeless tobacco products, such as Swedish snus, with one recent epidemiological review concluding that snus was 99 percent less dangerous than smoking. Unlike so many debates over relative risk (such as those relating to dietary choices or chemical food additives), which often hinge on very small differences between risks, or very large uncertainties, there is no doubt that the risks of vape and snus are dramatically lower than those of smoking. The question is how to craft policies that take advantage of these differences to achieve public health benefits. The first step toward addressing that question is simply to acknowledge that the deadlier smoked products are widely and legally available to adult consumers—and that this reality won’t change anytime soon.

And the bans play on

It is commonly said of cigarettes that if they had been submitted to the Food and Drug Administration (FDA) for drug review, they never would have been allowed on the market. True enough. History did not allow us to make that choice (and if it had, we probably would have regretted it, as the lessons of Prohibition suggest). In any case, U.S. tobacco law does not allow the banning of tobacco products. Even though it discourages youth access, FDA regulation preserves adult access to these products, free from encumbrances such as being available by prescription only (as might be the case for other dangerous, pharmacologically active products). Yet some anti-tobacco forces such as the Centers for Disease Control and Prevention (CDC) are disinclined to acknowledge or make the public aware of significant differences in product risk. They stress that all tobacco and nicotine products are “not safe” and worry that any “safer” tobacco product might worsen the total public health costs of tobacco use. The European Union, with the exception of Sweden, still bans oral smokeless tobacco such as snus. Most regulatory bodies seem driven by the belief that all tobacco products should be regulated uniformly. Although the FDA does not currently have jurisdiction over e-cigarettes, it is seeking it.

The perspective I take here is that given the reality of cigarette availability, it is a serious analytical and policy mistake to view all tobacco products as similar when they are so different. The zeal to stamp out tobacco products needs to be tempered by regulatory and market realities that offer no prospect of eliminating tobacco products. Policies and positions premised on banning tobacco products are a distraction from those aimed at managing the product options that do exist in ways that might deliver big public health gains.

With the variety of non-combustion tobacco products now available, promising policy alternatives emerge from a “harm reduction” perspective. Harm reduction is essentially pragmatic rather than absolutist. For smoking, it means encouraging less harmful products as substitutes for the more harmful ones. The rapidly growing popularity of ENDS offers the potential to use these products as an important tool to reduce smoking, especially as evidence mounts that vape can compete with cigarettes in the marketplace. Harm reduction would happen in two fundamental ways: when individuals who would have become smokers choose instead to use a less harmful product like vape or snus, or when people who already smoke are able to substitute the less harmful product for cigarettes.

Skepticism among regulators and public health activists about harm reduction is not without reason. In the 1960s, lower-tar cigarettes were marketed as “light” and “mild” and were broadly perceived as safer than regular cigarettes, even though they offered little to no reduction in harm for individual smokers. “Harm reduction” does not occur if the substitute product is not really safer! Today, FDA rules require evidence that a product marketed with a harm-reduction claim will not lead to net negative public health outcomes. And although e-cigarettes and smokeless tobacco products are significantly less dangerous than cigarettes, empirical questions remain about the possible net public health effects of promoting non-combusted tobacco and nicotine products. Just as trying beer and liking it could increase the chances of someone moving on to try liquor, one commonly voiced concern is that vape or snus might act as a new gateway into smoking by offering a safer nicotine product that leads its users toward unsafe use of cigarettes. To date, however, the evidence that ENDS or smokeless tobacco act as causal gateways to frequent smoking for most users ranges from very limited to nonexistent. On the contrary, the rise in use of vape by youth in the United States (much of it experimental rather than habitual) has been accompanied by further drops in cigarette smoking. In Sweden, many years of research have failed to detect a causal gateway between snus and cigarettes.

Another concern is that vaping might help smokers cope with restrictions against smoking in the workplace or other public places and therefore act as a kind of crutch for people who might otherwise try to quit. But even if smoking alternatives did create some (as yet unknown) health risk, it would require an enormous increase in the percentage of the public using non-combustion tobacco products to come close to equaling the harm caused by cigarettes and other combusted products. At this point, the focus of research and analysis should not be on the public health risks of harm reduction, but on the policy alternatives that could best deliver public health benefits.

When should you choose your poison?

All cultures make some distinction between adolescence and adulthood with respect to rights and privileges of consuming risky products and engaging in risky activities. The age of “adulthood” varies around the world and differs for different products and activities. And of course good judgment is as much a quality of the individual as of age; we have all known wise 16-year-olds and foolhardy 30-year-olds. Thus, there is inevitably a degree of arbitrariness in age-based qualifications. Federal law allows licensed firearms dealers to sell handguns to 21-year-olds and rifles to 18-year-olds. Americans can enlist in the military, an admittedly unsafe enterprise, at age 18, or at 17 with parental consent. In contrast, U.S. law penalizes states that allow people under the age of 21 to purchase alcoholic beverages. (For most other nations, the legal purchase age is 18.)

Setting an age for legal purchase of tobacco products has been a standard practice in modern tobacco control. But what’s the right age, and for what types of tobacco products? Well-heeled from tobacco industry fees, the FDA commissioned the Institute of Medicine (IOM) in 2013 to conduct a study of the “public health implications of raising the minimum age of legal tobacco sale to 19, 21, or even 25 years old.” The IOM experts used statistical models to identify a kind of “sweet spot” at an access age of 21, judging that this policy would have valuable public health effects on smoking prevalence and on deaths and disability from smoking.

Ironically, the Family Smoking Prevention and Tobacco Control Act, the law that gives the FDA its tobacco authority, does not permit the federal government to raise the legal age for purchase of tobacco products beyond 18 years. Fortunately, states and localities are free to set their own minimum age of purchase, which for most states is 18. A few states have set the age of legal purchase at 19, and in June 2015, Hawaii became the first state to increase the tobacco sale age to 21. Eighty localities, most notably New York City, have set the legal sale age at 21. Several other states, including California, are now considering increasing the age to 21. Setting the purchase age at 25 would offer even more protection against recruitment to smoking, but retailers would complain about the more complex rules, and such a large departure from current standards for the age of purchase of dangerous adult products would face significant political challenges.

The IOM report notes in its summary: “The committee assumes that the MLA [minimum legal age] will be increased for all tobacco products, including nicotine delivery systems (ENDS), and that the intensity of enforcement will be the same for all tobacco products.” Indeed, the momentum in the United States has been, when in doubt, treat all tobacco products as equally dangerous. For example, Hawaii treated ENDS just like cigarettes when raising the legal purchase age to 21.

The IOM committee raised concerns about ENDS as a possible gateway to cigarettes, but it also acknowledged that vaping could prevent or delay recruitment to smoking in some adolescents or young adults who would otherwise start smoking. The committee notes: “Presumably FDA and state policy makers will take these possibilities into account in setting the MLA and will carefully monitor the promotion and use of ENDS, especially by adolescents and young adults.” This presumption that FDA and state policy makers will actually evaluate the various possibilities may be overly optimistic, in light of the common regulatory party line that all tobacco products should be treated as if they were simply unsafe and even equally risky.

Yet, the flexibility that states have to adopt different legal purchase ages for tobacco, combined with the increasing availability and market penetration of non-combustion tobacco products, provides an unparalleled opportunity for quasi-experiments that can test the contributions of harm reduction policies to public health, with the potential for saving literally millions of lives in the long run. Such quasi-experiments, in fact, proved valuable for alcohol regulation, where the de facto standard legal drinking age of 21 was arrived at, in part, by comparing the experience of different state age limits. As more states consider raising the age of legal tobacco purchase, now is the time for public health scientists, policy makers, and activists to consider selectively and differentially raising the legal purchase age of tobacco and nicotine products according to their risks to health.

What a difference a year (or two) could make

Why not keep the legal purchase age for ENDS and snus lower than that for cigarettes? Multiple options for age differentials could be explored empirically. One could, for example, set the age for ENDS or snus at 18, and move cigarettes to 19, 20, or 21. The point would be to determine the benefits of pushing the age for cigarettes above that for products that are much less dangerous. If some state or other jurisdiction were to set the age of legal purchase differentially, in light of the dramatic differences in health risks, this would allow more than hypothetical data to be collected on the consequences of access to various tobacco products—with an emphasis above all on reducing the use of cigarettes. One suggestive historical study compared smoking prevalence in states that did not restrict access to vape with states that had already set the purchase age for vape at 18; the states that had not yet set an age limit showed somewhat greater reductions in smoking prevalence among 12- to 17-year-olds. This finding provides preliminary support for the idea that a differential age of purchase might have benefits for public health.

Those who are inclined toward a “ban them all” approach should also recognize the potential benefits that differential age limits could have in addressing the contribution of cigarette smoking to growing health disparities in the United States. Twenty-four percent of adults without high-school diplomas are smokers; among the college-educated, the figure drops to 9 percent. Twenty-nine percent of adults living below the poverty line are smokers; in the rest of the population, only 16 percent smoke. The mentally ill are especially susceptible, with smoking rates of 36 percent, versus 21 percent for those without mental illness. “Hard to reach” populations may not hear or respond to public health warnings or changing social norms. But even people who do not receive regular health care, advice from health care professionals, or constructive social pressure from work and friends will get the message from structural changes such as a variable age of legal purchase, which could help delay or prevent cigarette use among those who have been hard to reach with other tobacco control methods.

Setting the legal age of purchase is an important decision that sends a message about risk to all consumers. It is a powerful message, but it has never been completely effective in preventing underage use of products. Despite efforts to educate and restrict, adolescence is a time for recruitment to risky activities: unprotected sexual activity, smoking, drinking, and other recreational drug use. As the poet Richard Wilbur observed, a dangerous activity can be “made safe by rashness.” Neuroscience research and behavioral research on adolescence have made it clear that adolescents have the cognitive abilities to understand messages and lessons about risk, but this information has little effect on their inclinations toward risk-taking until they mature beyond adolescence (which biologically is most likely after age 25). Differential access to tobacco products offers the potential to influence the choices of this “hard to reach” (but in a very different sense) group in ways that might deflect them from smoking during the crucial rash years of adolescence.

Lumping all tobacco and nicotine products under the same standard ignores the considerable differences in harm to users and bystanders caused by different products. The science base for setting a single age for all products is slight, and the evidence that their risks differ by a very large magnitude is substantial. If some states allowed the purchase of snus and vape at an earlier age than the purchase of cigarettes, would smoking decline overall? Would smoking-related health problems in those states decline relative to states that regulate all tobacco products uniformly? We can’t know the answers to these questions until we do the experiment, but if we fail to do the experiment we miss the opportunity for potentially major public health benefits. Importantly, the potential that differential smoking ages would lead to unanticipated net public health harms seems very small. Yet if such harms did begin to emerge, the mistake could easily be corrected by moving the legal age for vape and snus up to the age for cigarettes. And if the benefits of harm reduction policies began to make themselves felt, then other states could adopt differential age limits as well.

There is a kind of timidity of purpose and failure of analysis that would treat sticks of dynamite and firecrackers as if each should be subject to the same restrictions on use. The FDA tobacco products division has made it a priority to support research on low-nicotine, low-addiction cigarettes as a means of discouraging cigarette use. This has been viewed as a way to productively push cigarette smokers to less dangerous tobacco products. But states and municipalities have at their fingertips, as it were, an opportunity to move immediately and decisively beyond the serious limitations of federal tobacco law and develop a possible new way to reduce the harm of tobacco use. The precautionary bias that treats all tobacco products the same obscures an opportunity to reduce the most dangerous form of tobacco use and its costs to society—costs that are increasingly borne by those who are already unhealthy, uneducated, and poor. If the states are, as Louis D. Brandeis, former associate justice on the U.S. Supreme Court, called them, “the laboratories of democracy,” now is the time for them to initiate policy experiments that could help the nation take the next major step in reducing one of its worst public health scourges.

Lynn T. Kozlowski ([email protected]) is a professor of community health and health behavior, and former dean of the School of Public Health and Health Professions, at the University at Buffalo, State University of New York.

Fact Check: Scientific Research in the National Interest Act

Since its creation in 1950, the National Science Foundation (NSF) has served a mission that helps make the United States a world leader in science and innovation. NSF invests about $6 billion of public funds each year in research projects and related activities. In recent years, however, NSF has seemed to stray from its original purpose and has funded a number of grants that few Americans would consider to be in the national interest.

Congress has the constitutional responsibility to provide oversight of government spending. In carrying out that duty, my committee has questioned why NSF spent $700,000 of taxpayer money on a climate change musical, $220,000 to study animal photos in National Geographic magazine, or $50,000 to study lawsuits in Peru from 1600 to 1700, among dozens of examples. There may be good justifications for such work, but NSF has an obligation to the public to provide those explanations when asked.

In July 2015, I introduced the Scientific Research in the National Interest Act (H.R. 3293), a bipartisan bill that ensures that the grant process at NSF is transparent and accountable to the American taxpayer, whose money funds the research the agency supports. The bill has been approved by the House Science, Space, and Technology Committee that I chair.

The Scientific Research in the National Interest Act seeks to hold NSF accountable for its funding decisions by requiring the agency to explain in writing and in non-technical language how each research grant awarded supports the national interest and is worthy of federal funding. The bill also sets forth that NSF grants should meet at least one of seven broad criteria to demonstrate that the grant is in the national interest. This language restores the original intent of the 1950 legislation, which requires NSF to adhere to a “national interest” certification for each grant.

Some opponents of our work to bring accountability and transparency to taxpayer-funded scientific research have spread a number of falsehoods designed to scare the scientific community into opposing the legislation. Let me set the record straight.

First, opponents claim that the bill interferes with the merit review process for approving grants. False. Anyone who has taken the time to read the three-page bill will note that it clearly states, “nothing in this section [of the bill] shall be construed as altering the Foundation’s intellectual merit or broader impacts criteria for evaluating grant applications.” Since 1997, NSF has evaluated grant applications on both “intellectual merit” and “broader societal impact” criteria when determining the worthiness of a project for federal funding. The bill does not change that process.

What our proposal does do is ensure that the results of the peer review process are transparent and that the “broader societal impact” of the research is better communicated to the public. Over the past two years, House Science Committee staff have reviewed numerous NSF grants. The committee found that in many cases the benefits of a proposed project were not made evident in the summary or public description of the project. Our bill would require that those benefits be made clear.

A second falsehood being spread to scare scientists is that the legislation means that research projects will be judged by their titles as to whether or not they are worthy of federal funding. Again, false. The bill actually helps correct a past problem with some NSF-funded grants. Often the title and an incomprehensible summary were all that was publicly available about a research grant, which left such grants open to criticism.

For example, a grant titled “Accuracy in the Cross-Cultural Understanding of Others’ Emotions” is a project with a head-scratching title. But it turns out the research is intended to help American soldiers better identify potential security threats. That certainly seems worthy of federal funding and the National Interest bill would help ensure such a project’s benefits were better communicated to earn the public’s support and trust. Researchers should embrace the opportunity to better explain to the American people the potential value of their work.

Third, opponents claim that the bill discourages high-risk, high-reward research. False. I can think of nothing more worthy of federal funding or more in the national interest than research with the potential to be groundbreaking. Research that has the potential to address some of society’s greatest challenges is what NSF should be funding. Improving cybersecurity, discovering new energy sources, and creating new advanced materials are just some of the ways that NSF-funded research can help create millions of new jobs and transform society in a positive way.

On the other hand, how is spending $700,000 on a climate change musical encouraging transformative research? What is high-risk, high-reward about spending $340,000 to study early human-set fires in New Zealand? What is groundbreaking about spending $487,000 to study the Icelandic textile industry during the Viking era? There may well be good answers to those questions, but we weren’t able to come up with them. When NSF funds projects that don’t meet such standards, there is less money to support scientific research that keeps our country at the forefront of innovation.

Finally, some critics say the bill attempts to solve a problem that doesn’t exist. False. In January 2015, NSF director France Córdova began to implement new internal policies that acknowledged the need for NSF to communicate clearly and in non-technical terms about the research projects it funds and how they are in the national interest. She testified before the Science Committee earlier this year that the Scientific Research in the National Interest Act is compatible and consistent with the new NSF policy.

Dr. Córdova is challenging the agency to become more transparent and accountable to the American public. My bill seeks to ensure that the policy outlasts the current administration and helps maintain taxpayer support for basic scientific research. 

Today, NSF is able to fund only one out of every five proposals submitted by scientists and research institutions. With a national debt that exceeds $18 trillion and continues to climb by hundreds of billions of dollars each year, taxpayers cannot afford to fund every research proposal, much less frivolous ones.

We owe it to American taxpayers and the scientific community to ensure that every grant funded is worthy and in the national interest. 

Rep. Lamar Smith (R-TX) chairs the House Committee on Science, Space, and Technology.

My Climate Change

Decades of reporting on climate science and the climate policy debate have led me through a long evolution in my thinking, and I hope to a little practical wisdom.

Some things just seem too momentous to keep in mind. One is the planet we’re living on. We’re on the third rock from the sun twenty-four hours a day, but I’ve only been to one place where that awareness is enforced by nature. Squatting on a floe of eight-foot-thick sea ice at the North Pole, drifting on the 14,300-foot-deep Arctic Ocean hundreds of miles from land, with everything in every direction south and the sun circling the horizon, you absolutely feel you are on a planet.

Another momentous thing we hardly ever think about is the thing we think with: the brain. I think about mine now quite a bit, ever since a hot July day in 2011 when my eyes started telling me conflicting stories about the nature of the world as I huffed and strained to keep up with my far fitter son running up a steep trail in the woods near my home.

My left eye told me the world was paisley. The right eye insisted all was well. I called out; we returned home. I took a shower and some aspirin, wondering if I could be having a stroke. My son drove me to the hospital. It wasn’t a stroke . . . yet.

By the next morning, it was. From my hospital bed, I began reporting and blogging and tweeting about stroke risk and treatment—at least as well as I could in a hunt-and-peck way, given that my right hand didn’t work well for about a month. The stroke made me confront that critical human operating system in my skull for the first time.

I’d had the rarer kind of stroke that hits younger people who are not typical stroke candidates. Part of my drive to write about my experience was fueled by my desire to raise awareness; one tweet from the hospital was, “Don’t stress your carotid arteries if you like your brain and the things it does for you.” But my writing wasn’t all selfless. Turning to journalism allowed no emotional space for absorbing the jarring reality that the white spots in my brain scan showed I was breakable—that something as basic as dexterity, let alone a long healthy life, was no longer a given. (Fortunately, I recovered fully, but there was no guarantee that would be the case.) Reporting on my stroke as a medical and health care problem allowed me to treat it as an intellectual puzzle rather than an emotional crisis—to levitate above my mortality instead of confronting it, deeply feeling it, embracing it.

Some challenges are so grand and momentous that anxiety seems, at best, a waste of time and energy in confronting them. It occurs to me, looking back, that my approach to my stroke parallels, in a strange way, my approach to another almost incomprehensibly large challenge: that of how we face climate change. I have spent thirty years covering the growing human influence on the atmosphere and climate—how profoundly, irreversibly, and consequentially we are changing one of Earth’s critical operating systems. In essence, we have been learning, as uncomfortably as we navigate puberty, that our only planet is somewhat breakable.

And yet, I find global warming doesn’t worry me—at least not in a gut-twisting, obsessive way. Rather, a stripped-down agnostic version of the Serenity Prayer has come to mind lately as I’ve grappled with humanity’s “only one planet” predicament: change what can be changed, accept what can’t, and know the difference. Science can help clarify which is which.

With that mix in mind, in both making the most of a finite life and limiting regrets related to global climate change, it seems necessary to integrate two seemingly incompatible traits: urgency and patience. Since my stroke, I’ve struggled to balance the need to slow down with a rising sense of urgency related to the years ticking down. In my environmental journalism, the result has been lifelong engagement and, more recently, acceptance (if not full-scale embrace) of a lot of inconvenient truths that weren’t in Al Gore’s film.

I used to think of my reporting as a thousand separate stories. But I can see, as I age, that it is in fact one story—a single meandering learning journey with more than a few wrong turns, surprises, and reversals, starting with a dancing bivalve and scribbled death threat in the late 1960s. I didn’t start out wanting to be a journalist; my first fascination, as my childhood was coming to an end, was with biology. Charmed into the undersea world by Jacques Cousteau, I was taken by surprise one summer while snorkeling where a small river meets the sea not far from my Rhode Island home. A bay scallop, trying to evade me, jetted through the sea grass by castanet-clapping its corrugated shells, which were surreally fringed by fleshy curtains flecked with tiny glinting blue eyes.

I quickly moved from embracing nature to defending it. A small patch of woods and fields behind our house remained untouched amid the expanding suburban grid of streets and lawns. Around age fourteen, on one of my regular after-school walks through the trees, I encountered a bulldozer parked in a fresh-cut clearing near my favorite spruce. I placed a scribbled warning on the seat, something like Whoever chops down this tree will suffer a horrible death. (A few decades would pass before I reflected back on that bulldozer encounter and realized I had never considered that a bulldozer, just a few years earlier, had cleared the tract our house occupied.)

In high school, a teacher let me and a friend build and refine a crude wave tank in lieu of writing a paper. I loved reshaping the cardboard baffles I taped over an aquarium until the airflow from a fan blew across the water in the tank just right, forming perfect waves breaking on our artificial beach. The experience helped ignite my interest in science. I thought I might become a scientist, in fact, but biology studies at Brown University taught me that I didn’t have the close-focus temperament to pursue a Ph.D.

Finding my path

I shifted to journalism after winning a traveling fellowship just before graduation. My project was to study man’s relationship to the sea on some small islands, starting in French Polynesia. Three months in, I ended up studying my own relationship to the sea after encountering a Crew Wanted sign on a pier in Auckland, New Zealand, and signing up as first mate on a circumnavigating home-built sailboat, the Wanderlust.

That journey exposed me to the wonders and ills of a fast-changing world, including the sight of dozens of leopard skins piled on a street corner in Djibouti, at the base of the Red Sea, to entice French Foreign Legionnaires stationed there.

I felt a mix of anger and mission as I photographed the remains of those slaughtered cats, determined to tell their story. Where were they being killed? How could this be tolerated? A week or so later, riding a strong southerly wind up the Red Sea, we sheltered for a day or so in the lee of an uninhabited island off the coast of Yemen. Hiking to the windblown south-facing shore, I stumbled upon a random assortment of intact light bulbs—presumably cast from passing ships over many years—piled in drifts just above the tide line. Small inconsequential wounds to the world, building inexorably.

Energized by these experiences, I pursued journalism in graduate school and forged a path into magazines in the early 1980s—the heyday of science writing. At my first stop, Science Digest, I exposed pesticide perils, described the future of the automobile (at the time, the future was the Ford Taurus!), reported on the rise of the supercomputer, and more.

Fairly early on, I began probing what soon appeared to be the ultimate environmental story—our evolving and worrisome relationship with Earth’s atmosphere and climate. Until this point, most human assaults on nature were local—polluting a stream, felling a forest. Now, through booming populations and resource appetites, we were going global.

My first in-depth look at human-driven climate change, starting in 1984, focused on the dark sister of global warming—the Cold War prospect of a “nuclear winter.” This was the scary hypothesis that a nuclear war, by incinerating hundreds of cities, could cloak the planet in sun-blocking particles, disrupting agriculture and ecosystems around the world and thus undercutting the logic of a “Star Wars” missile defense. The authors of the key study, including Carl Sagan, had reached this result using computer models that had been built to study global warming as early as the 1950s but were becoming ever more sophisticated as computing power grew. The researchers called the hypothesized post-war chill the “anti-greenhouse effect.” The cover art for my article was an image of Earth frozen in an ice cube.

Subsequent analysis pointed to a more transitory climatic effect, which two climate scientists, Stephen H. Schneider and Starley L. Thompson, called “nuclear autumn” in one piece. Clearly less of a headline there. That pattern would pop up again and again in weighing environmental perils: newly discovered, they were stark and vivid, but in most cases, more science only led to more nuance and more questions—not a good mix for media thriving on stark drama.

Three years later, at Discover magazine, I was assigned to write a feature on global warming. Though in the mid-1980s the issue was well understood by only a handful of scientists and policymakers, research was revealing that human numbers and technological potency were changing the human-climate relationship in profound ways. Through nearly all of human history, this had been a one-way relationship. Weather patterns changed; ice sheets, coastlines, or deserts advanced or retreated; and communities thrived, suffered, or adjusted how or where they lived. But now, in subtle but measurable ways, the relationship was running in two directions, with enormous potential consequences.

A host of human activities, particularly the burning of fossil fuels and forests, were adding long-lived gases—most important, carbon dioxide—to the atmosphere. These gases prevented some of the energy that arrived as visible sunlight from escaping as radiating heat. The imbalance guaranteed warming and resulting changes in climate, ice sheets, and sea levels, with big implications for humans and other life. It soon became apparent, even back then, that this would be hard to reverse.

Climate change achieved headline status in 1988 because Yellowstone National Park and the Amazon rain forest were ablaze and the eastern United States baked in record heat. After testifying at a high-profile Senate hearing on global warming, James Hansen, the pioneering NASA climate scientist who would later become a climate activist, said, “The greenhouse effect has been detected and is changing our climate now.” Reporters who had been covering the Clean Air Act or endangered species or threats to the ozone layer had a big new story to tell.

It was a heady time. That year, I reported from the first World Conference on the Changing Atmosphere, in Toronto, where a keystone statement spelled out the momentous nature of what was unfolding: “Humanity is conducting an unintended, uncontrolled, globally pervasive experiment whose ultimate consequences could be second only to a global nuclear war.” The attendees recommended a 20 percent cut in emissions by 2005. The portentous cover art for my October 1988 climate article was the sweating Earth melting on a hot plate.

In the end, it is values and instincts and particular circumstances—economic and environmental and cultural—that determine what individuals and societies do.

The Toronto meeting, although relatively obscure, initiated the process leading to the first climate treaty, adopted at the Rio Earth Summit in 1992, and the negotiations that have been under way ever since, most recently in Paris, to try to strengthen responses to the threat. Later in 1992, the American Museum of Natural History staged the first museum exhibition on climate change. Reflecting how much momentum had built around this issue and how mainstream environmentalism had become, the exhibit was co-sponsored by the Environmental Defense Fund and largely funded by the National Science Foundation.

I was invited to write the companion book for the exhibition, and I drew some pretty ominous word pictures to lay out the stakes. In a section on climate history, I described how we were growing potent enough, perhaps, to end the current geological epoch, the Holocene:

Perhaps earth scientists of the future will name this new post-Holocene era for its causative element—for us. We are entering an age that might someday be referred to as, say, the Anthrocene. After all, it is a geological age of our own making. The challenge now is to find a way to act that will make geologists of the future look upon this age as a remarkable time, a time in which a species began to take into account the long-term impact of its actions. The alternative may be to leave a legacy of irresponsibility and neglect of the biosphere that could eventually manifest itself in the fossil record as just one more mass extinction—like the record of bones and footprints left behind by the dinosaurs.

Actually, it was only eight years later, in 2000, that “scientists of the future”—the chemistry Nobelist Paul Crutzen and the biologist Eugene Stoermer—proposed that Earth had entered the Anthropocene (a more etymologically sound neologism). As an environmental writer, I was on a roll, with several awards signaling my skill at communicating environmental science. But flipping through my 1992 book now, I see some signs that I was a bit carried away with a sense of mission and more than a bit naïve about the scale of the global warming challenge. For example, there was no basis for this breathlessly optimistic line about how the adoption of the 1987 Montreal Protocol—designed to phase out chlorofluorocarbons (CFCs) and other chemicals threatening the planet’s protective ozone layer—could be a template for curbing carbon dioxide: “The lesson of CFCs can be applied directly to the looming problem of greenhouse warming.”

I hate finding inconsistencies in my own writing, but this line from elsewhere in the same book strongly implies that, even at the time, such a comparison was unduly optimistic:

Of course, eliminating a class of synthetic chemicals is a relatively simple task, as Pieter Winsemius, a former minister of the environment for the Netherlands, explained to me at one greenhouse-effect meeting. Substitutes for these destructive compounds are already being developed, he said. “There are only thirty-eight companies worldwide that produce CFCs. You can put them all in one room; you can talk to them. But you can’t do that with the producers of carbon dioxide—all the world’s utilities and industries.” Gases such as carbon dioxide and methane are a byproduct of the processes at the heart of modern civilization: industry, transportation, power generation, and agriculture.

Carbon dioxide had little in common with pollutants of old, stray impurities produced during combustion (sulfur compounds, for example), which could be controlled relatively easily and affordably with filters or catalytic converters. In the case of CFCs, industry had swiftly moved ahead to develop more sustainable, and affordable, alternatives. And it turned out, surprise of surprises, that we didn’t really need aerosol hair spray and deodorants.

Carbon dioxide, in contrast, is a fundamental and long-lived byproduct of burning fossil fuels, and, even now, efforts to capture and store this gas permanently—at a scale relevant to the climate system—remain costly drawing-board pipe dreams. Adding to the challenge, billions of people benefit from the actions creating the risk—burning cheap fossil fuels, spreading fertilizer made with fossil energy, cutting down forests—while most of those who stand to suffer the worst predicted impacts haven’t yet been born. Too, unlike other pollutants, carbon dioxide is also a ubiquitous and normal component of the air—not to mention the bubbles in beer and every exhaled breath. Where’s the peril, the villain, in that?

A tougher audience

There was another tough reality I hadn’t fully absorbed yet. Science magazines, books, and museum shows were mainly preaching to the converted. Once I moved to The New York Times in the mid-1990s, the phenomenon of global warming itself became a tougher sell, both to my editors and the public. A stock-market correction, an earthquake, a particular hurricane: that’s news. But you’d never see a banner headline proclaiming Planet Warms, Coasts Flood, Species Vanish, People Flee even though all of these things are sure to happen in a human-heated world—over decades or generations, and always amid a mix of old-fashioned truly natural disasters.

Still, they were great years for reporting. The paper sent me far and wide. In 2003, after several years of trying, I made that North Pole trip, spending three days with scientists camped on the drifting sea ice near the North Pole, studying the shifting ocean conditions for clues to how warming would affect the region. My addictions to science and nature were never more fully fed. Squatting by the edge of an expanding opening in the ice—staring at formations called “frost flowers” that formed where the frigid but steaming water met the twenty-below-zero air—I was so mesmerized that a Russian camp worker had to rush over and physically pull me back.

Some part of me anticipated a hero’s welcome when I returned to the newsroom, towing a huge duffel of Arctic gear. After all, I had literally gone to the ends of the Earth. But the newspaper was embroiled in controversy: a young reporter, Jayson Blair, had been caught serially fabricating details in stories. An ambitious package of climate change articles planned that year was spiked as new management, more focused on core issues, settled in.

The path to the front page was through covering climate politics, not climate science. I think one reason the issue was covered so often through the lens of politics is that doing so made the solution seem easier. After all, the only thing missing was political will, campaigners insisted. Stories that had villains and heroes, the empowered and the powerless—those were (often appropriately) news.

I thrived under this model, too. Advancing in newspaper journalism is mostly about the “get”—publishing the telling piece of evidence, ideally uncovered through wile or enterprise, that lays bare some nefarious activity or portentous threat. I got a series of exclusives on the Bush administration’s meddling at the National Aeronautics and Space Administration (NASA) and other agencies. And in 2006, it was with some pleasure that I saw one of my front-page stories—about a former oil lobbyist who had softened the language in government climate change reports while in the Bush White House—scrolling on Al Gore’s laptop screen in An Inconvenient Truth.

By then, I’d written hundreds of newspaper and magazine stories as well as two books about global warming, burning rain forests, melting glaciers, and the rest. I was hitting the peak of my influence among Earth-loving activists and loving it. After all, I was among my kin, in essence, as a liberal, Ivy League, middle-class Northeasterner. My responsibilities as a journalist were, at least at that moment, aligned with my longtime passion for protecting the environment.

Climate apostasy

But then my underlying hunger for reality spoiled things.

I saw a widening gap between what scientists had been learning about global warming and what advocates were claiming as they pushed ever harder to pass climate legislation or strengthen the faltering 1992 climate change treaty. Mind you, there was usually a much bigger gap between the science and the views of industry supporters defending fossil fuels or fighting environmental regulations or taxes. But to me, the monumental nature of the task facing those trying to move the world away from fossil fuels called for extra attention to detail.

Maybe, as the climate scientist Thomas Crowley later proposed, I was also prone to a kind of “reverse tribalism”—a variant on Groucho Marx’s aversion to being a member of a club that would have him as a member.

Thus it was that I found myself diverting from the pack—and not just environmental campaigners. In early April 2006, Time magazine ran a cover story intoning “Be Worried. Be Very Worried.” That would really kick people into acting, right? Well, no, I learned, as I began interviewing behavioral scientists about what prompts people to act or recoil. On Earth Day weekend that year, I wrote a piece titled “Yelling Fire on a Hot Planet,” which noted how hyperbole not only didn’t fit the science at the time but could even be counterproductive if the hope was to engage a distracted public. I always stressed that uncertainty was not a reason to relax, but warned that downplaying known unknowns simply empowered those seeking no action at all.

In 2006, I was part of a team of reporters at The Times that undertook a multi-year series called “The Energy Challenge” (nytimes.com/energychallenge), examining what it would take to deeply cut reliance on coal, oil, and gas, and move to climate-friendly technologies. The deeper we dug, the more we ran into enormous disconnects between the data and the claims. It was very clear that any transition to clean energy would be neither simple nor quick—and it wasn’t only for lack of political will.

I toured labs at Caltech with Nate Lewis, a chemist focused on improving solar panel performance. He described the challenge of transforming America’s fossil-dominated energy systems this way: “We already have electricity coming out of everybody’s wall socket. This is not a new function we’re seeking. It’s a substitution. It’s not like NASA sending a man to the moon. It’s like finding a new way to send a man to the moon when Southwest Airlines is already flying there every hour, handing out peanuts.”

And then there was the other end of the energy spectrum—areas of the world where electricity wasn’t coming out of a socket for hundreds of millions of people because there was no socket, or light bulb. More than a billion people lacked a way to cook that didn’t produce clouds of toxic indoor smoke. It became vividly clear that the world will need far more clean energy than even fossil fuels are currently providing as the human population heads toward a predicted mid-century peak of nine billion or so and poor nations push to improve well-being. The gap in energy access has contributed to rising tensions in the climate talks between nations that have already prospered burning fossil fuels, with prosperity reducing vulnerability to climate hazards, and those where hundreds of millions of people still cook in the dark on dried dung or firewood, with millions dying young each year from avoidable indoor pollution.

Renewable electricity sources like solar panels could help in many places that probably will never see a conventional power grid. But swelling cities need central power plants, as well. I took some consolation in looking back at my very first climate story, from 1988, and seeing a line that warned this issue would loom: “[E]ven as the developed nations of the world cut back on fossil fuel use, there will be no justifiable way to prevent the Third World from expanding its use of coal and oil.” Great. I had seen it coming. Maybe we all had. But that foresight didn’t make today’s challenge any smaller.

Journalism’s norms also required considering the full range of views on a complex issue like climate change, where science only delineated the risk and societal responses would always involve weighing various tradeoffs. In 2007, I included Bjorn Lomborg’s climate book, Cool It, in a roundup of voices from “the pragmatic center.”

Lomborg, a Danish political scientist, became a widely quoted contrarian pundit after the publication of The Skeptical Environmentalist, a previous book that had challenged—and was vigorously challenged by—the environmental science community.

Given that Lomborg hadn’t resisted having his arguments wielded by factions seeking no action to cut climate change risks, my description of him was not apt.

But the reaction from longtime contacts in environmental science was like a digital sledgehammer. An e-mail string excoriating the story was forwarded to me in hopes I would understand how far I had strayed. In the exchange, one of the country’s top sustainability scientists told the others: “I think I’m going to throw up. I kept trying to believe that Andy was quite good, albeit subject to occasional lapses as well as rightward pressure from NYT higher-ups. But this is really too much. We have all over-rated him.”

The intensity of feelings, the divergent views of data, prompted me to examine old questions in new ways. For twenty years, I’d been reporting on climate change as a mechanistic geophysical problem with biological implications and technical, economic, or regulatory solutions. As a science writer, I was so focused on the puzzle that, I suddenly realized, I had neglected to consider why so little was happening and why so many people found the issue boring or inconsequential.

As I dug deeper into studies of human behavior and risk misperception—a different kind of science—much of what I learned posed potent, nearly existential questions, especially for a journalist. Like many of my friends in environmental sciences and journalism, I had long assumed the solution to global warming was, basically, clearer communication: fresh innovation in mixing pictures and words, video and graphics, different metaphors. If we could just explain the problem more clearly, people would see it more clearly, and then they would change.

There were countless attempts, often relying on metaphor:

Climate is your personality; weather is your mood.

Carbon dioxide added to the atmosphere is like water flowing into a tub faster than the drain can remove it—and the drain is getting clogged. (The bathtub effect!)

The greenhouse effect is building like unpaid credit card debt. Reducing spending doesn’t eliminate the debt.

There was, of course, the simmering frog failing to jump out of the pan.

On my blog, I tried breaking the language down to clear up disputes over which climate science conclusions were established and which remained uncertain. But I came to realize that the answer could not be found in clever slogans. Empirical studies and a batch of surveys pointed to a set of biases, reflexes, and cognitive filters that almost guaranteed failure in trying to galvanize broad action on global warming given the long time scales, enduring uncertainties, geographic spread, and lack of quick fixes.

One finding, by the British climate communication expert George Marshall, obliterated one of my longstanding assumptions—that people with children were more likely to be concerned about climate change because of its impact on their offspring. He found that, in fact, parents often appeared less concerned because they were so fixated on the day-to-day challenges of raising a family. Then there’s status quo bias (we overvalue the way things are), confirmation bias (we select information to reinforce established views), and motivated reasoning (even when we think we’re thinking objectively, we’re not).

I looked into the “cultural cognition” research of Dan Kahan, a professor of law and psychology at Yale, who has the animated mannerisms and wardrobe of Quentin Tarantino. Among a host of sobering findings, he showed that scientific literacy abounded at both ends of the spectrum of beliefs on global warming. So I tried a little experiment: I sifted for Nobel laureates in physics who’d expressed strong views on global warming. It turned out there was one to suit just about anyone’s argument, from deep worry to total unconcern.

For a journalist in my fifties, pondering how to make the most of the rest of my productive years, this was a more profound blow than that stinging e-mail from former fans years earlier. It was even worse than hearing Rush Limbaugh, from the other side, suggest in 2009 that if I really thought people were the worst thing for the planet, I should just kill myself.

A new path

Ultimately, the insights that these findings revealed helped drive my decision late that year to leave full-time reporting for academia. (Of course, journalism itself was going through profound changes at the same time, and my growing conviction to try new paths fortuitously coincided with an attractive buyout offer.)

The job title I concocted for my position at Pace University—Senior Fellow for Environmental Understanding—was meant to reflect that I was exploring how to make information matter, but in a new way. My Dot Earth blog moved to the Opinion side of The Times in 2010, but, as I stated at the time, my opinion was still that reality matters.

I hardly gave up communicating. In fact, I write more than ever, and I teach others how to make the most of the rapidly changing online information environment. It’s changing even faster than humans are changing the biophysical environment.

And the more time I’ve spent focusing on that sobering behavioral research, the more I’m realizing that it points to distinct opportunities to make progress on climate-smart energy steps and policies, which can create more resilient communities. Paradoxically, though, in some instances this would require something odd: not talking about global warming at all. Most powerfully, a recent nationwide analysis by researchers at Yale and Utah State University found that although asking questions about global warming reveals heated passions on both ends of America’s deeply polarized political map, asking different questions can mute the differences. For example, both red and blue voters strongly support investing in more research on renewable energy sources and regulating carbon dioxide as a power plant pollutant.

There are plenty of other examples across the board. There are libertarians who crave the taste of energy independence that comes with a rooftop solar panel. There are liberals who hate the idea that taxpayers should pay the bill when people who build repeatedly in flood zones get reimbursed under federal insurance policies that don’t reflect the real risk.

And as that national survey showed, there is widespread support for invigorating this country’s lagging investments in basic sciences related to better battery technology or solar panels, more efficient vehicles and electrical grids, and possibly even a new generation of nuclear plants. It’s time: American investment in basic research in energy-related sciences has been a dribble for decades compared to the money poured into science in other areas, such as defense and homeland security or the cancer fight.

And yet, it’s important to remember that science doesn’t always lead in directions you might expect.

Take, for example, fracking, shorthand for the hydraulic fracturing technology that has greatly expanded access to oil and gas reserves that were thought to be untappable. The roots of this technology lay in federally funded research that sat dormant until pioneering energy entrepreneurs, spurred by declining gas and oil supplies, adopted it. (I’ve been supportive of tightly regulated fracking but recognize that this leads to a longer tail on the era of gas and oil than those proclaiming Peak Oil foresaw.)

Here’s the other problem: science doesn’t tell you what to do.

The climate scientist Ken Caldeira, who studied philosophy in college, likes to paraphrase the 18th-century philosopher David Hume when describing the line between values and data: “You can’t get an ought from an is.”

In the end, it is values and instincts and particular circumstances—economic and environmental and cultural—that determine what individuals and societies do. In open societies, and in a variegated global discourse on climate vulnerability and energy access, that means there will inevitably be divergent stances and tradeoffs.

Those of us with a science bias expect that proper research will lead us to a menu of objective fixes, but we have to realize that even a passion for investing in science as the source of answers is the result of a value judgment.

It was Pete Seeger who helped me understand this as we sat in the kitchen of his hand-hewn home tucked high on the wooded shoulder of the Hudson Highlands overlooking Newburgh Bay. Pete was a friend and neighbor, with whom I’d been singing and conversing since I moved to the Hudson Valley in 1991.

He recalled how his father, a musicologist, used to prod friends who were scientists: “You think that an infinite increase in empirical information is a good thing. Can you prove it?”

Pete described how his father would then exclaim that faith in science is no different from faith in anything else.

“Face it, it’s a religion,” Pete said.

Numerical goals, for example, are fine as first steps in considering options, and they provide a useful rallying point for activists. But to me, it seems they are being wielded as some hybrid of science and moral authority—Moses’s tablets inscribed with Einstein’s equations. For global warming, the reality remains a sliding scale of interrelated choices and outcomes, as John Holdren, President Obama’s science advisor, has been pointing out for many years, and as each of five reports on climate science from the Intergovernmental Panel on Climate Change has found. There are no clear-cut choices—only a mix of mitigation of emissions, adaptation to impacts, and suffering.

The intensity around numbers and particular strategic goals, like convincing Obama to kill the Keystone XL pipeline extension, has driven wedges between climate and energy factions that might otherwise have been allies. In 2013, grappling with the intertribal tensions over how to end our oil addiction, I did some Web searching for the terms “response . . . diversity . . . environment” to see if anyone had explored how or whether environmental campaigns might tolerate common but differentiated approaches to progress.

I admit it was personal. I was tired of being called a “hippie puncher” and VSP (“very serious person”) by the liberal green blogger David Roberts for arguing that whatever President Obama chose to do about the pipeline, oil demand had to be addressed or despoliation of the environment (whether in Canada or elsewhere) was inevitable.

My Google search turned up a remarkable 2003 paper on the sources of ecosystem resilience by Thomas Elmqvist of Stockholm University and others. It included this line:

The diversity of responses to environmental change among species contributing to the same ecosystem function, which we call response diversity, is critical to resilience. Response diversity is particularly important for ecosystem renewal and reorganization following change.

As I read it, I pondered whether the following slight tweak might also be true:

The diversity of responses to environmental change among people contributing to the same social function, which we call response diversity, is critical to resilience. Response diversity is particularly important for social renewal and reorganization following change.

Can the environmental movement find room for diverse strategies?

I hope so. It’s utterly human to have varied responses to change and challenges—in this case, humanity’s intertwined energy and climate challenges. I see great value, for example, in the work of students and academic colleagues pursuing divestment from fossil fuel companies. To me, there’s particular merit in examining investments and divestment as a path to putting ossified terms and norms under fresh scrutiny. Is a school’s endowment more than its financial investments? Is fiduciary responsibility limited to preserving those assets measured only in dollars and cents? Are trustees of a company, university, or planet responsible only for sustaining values measured that way?

But I also see the value in engaging with—dare I say it, even working for or investing in—big companies as a way to test the possibility of building a different culture from the inside out.

Rather than looking at either strategy as right or wrong, I see both as part of a broadening commitment to a new and durable human relationship with both energy and climate.

One thing that this approach requires is a willingness to accept, even embrace, failure and compromise.

A helpful metaphor came to me in a conversation about a decade ago with Joel E. Cohen, a demographer and development expert affiliated with Columbia and Rockefeller universities. He said that after the sprint of the last couple of centuries, humans would do well to seek a transition to a more comfortable long-distance pace more suited to adulthood than adolescence.

Walking, he reminded me, is basically “a controlled forward fall.” It is a means of locomotion by which one moves steadily ahead, adjusting to bumps or hurdles, even trips and collisions, shifting course as needed but always making progress toward the desired destination.

Essentially, societies need to find a way to fall forward without falling down.

The prismatic complexity of climate change is what makes it so challenging to address, but this also means everyone can have a role in charting a smoother human journey. I’ve come to see the diversity of human temperaments and societal models and environmental circumstances and skills as kind of perfect for the task at hand. We need edge pushers and group huggers, faith and science, and—more than anything—dialogue and effort to find room for agreement even when there are substantial differences.

At the level of nations and cultures, a diversity of approaches is also inevitable, and that’s why the recent shift in climate diplomacy away from a binding top-down model to a flexible but credible and inclusive agreement, although long seen as a failure (including by me in early stories), is a perfectly human version of success.

It’s notable that Pope Francis last year stressed the need for diversity and dialogue in his historic encyclical on equity, climate change, and environmental care. He didn’t hesitate to state his views on consumptive capitalism, but despite being the ultimate top-down leader of a top-down institution—he is il Papa, after all—Francis said that dialogue and compromise between worldviews are key to whatever comes next.

Like a parent confronted by squabbling kids, he was essentially saying, “Work it out.”

By the time the encyclical was released, I had already gotten a feel for what was coming. The foundation for much of Francis’s papal letter was laid at a remarkable Vatican meeting in early May 2014, convened by some of his top advisers at the Pontifical Academy of Sciences. I’d been invited to serve as a respondent after four days of presentations and discussions by several dozen scientists, philosophers, economists, church figures, and human rights activists. The title of the conference perfectly captured the question on the floor: “Sustainable Humanity, Sustainable Nature, Our Responsibility.”

If you had told me a decade or two ago that some of the most valuable reporting I would do on climate, energy, and environmental sustainability would take place at the Vatican, I would have probably chuckled. An agnostic lapsed Reform Jew and science writer inside those high stone walls? Yet there I was.

The opening plenary, delivered by one of Francis’s top advisers, Cardinal Óscar Andrés Rodríguez Maradiaga of Honduras, had a potent line that piercingly captured humanity’s core dilemma: “Nowadays, man finds himself to be a technical giant and an ethical child.” Could our ethics and empathy, our self-awareness and time scales of concern, catch up with our potency?

Sessions ranged from the deeply philosophical (“People and Nature: Antagonism or Concordance?”) to the concrete (“Food Production in the Anthropocene”); from the hopeful (“The Promise of Mega-Cities”) to the apocalyptic (“Existential Risks”). Even the phrase birth control was uttered, albeit briefly.

After dinner on the final evening, in one of the ornate rooms of the Casina Pio IV, built as a summer home for Pope Pius IV in 1561, I turned to Walter Munk, the ninety-eight-year-old Scripps Institution oceanographer who, among other things, played a role in helping Allied amphibious invasions succeed by refining wave forecasts.

“What do you think it will take for humanity to have a smooth journey in this century?” I asked.

Munk didn’t mention science or technology, carbon capture or a carbon tax, fusion power or political will.

“This requires a miracle of love and unselfishness,” he said.

And there I was, a lifelong science writer in the Vatican, smiling and buoyed and chucking my stick-to-the-data “very serious person” persona and embracing the utterly human magic in that reply.

Returning home from heady events like that one, I’m always eager to escape into the woods around our Hudson Valley home, to scuff my feet in the leaf litter and clear my head. With my wife and sons, I occasionally return to the “Chimney Trail” where my left eye’s paisley signals hinted at a tear in the lining of my left internal carotid artery and at the clot and stroke to come. Needless to say, I no longer sprint as we follow the curling path uphill, crossing rocky stream beds and resinous soft carpets of aged pine needles, to the windswept ridge where the fieldstone chimney that gives the trail its name stands, like a sentinel, overlooking the full sweep of the Hudson Highlands.

The cabin built there a century ago by Rhinelander Waldo, a New York City police commissioner, long ago burned to the ground and, like so much of Hudson Valley lore, has vanished into the soil. It’s impossible to sit there without pondering history, from nearby hills once topped by Revolutionary War encampments to Storm King Mountain, where a legal battle over a pumped-storage power plant spawned the modern environmental movement.

For me, it’s a place that also brings to mind climate history. The rounded contours of the Highlands speak of a billion years of erosion, including the grinding passage of the mile-thick ice sheet that advanced toward the Atlantic in the last ice age, which ended 12,000 years ago, leaving the great gravel heap we call Long Island, sixty miles to the south, before retreating.

These days, it’s hard for me to sit there without pondering the climate to come, which scientists tell me is unlikely to include a new ice age given that the long-lasting greenhouse buildup of the Anthropocene will overwhelm the subtle orbital changes that bring back the cold.

After such musings, I get up and start the hike back down toward home, always a bit more careful on the gravelly spots, poised to catch myself—to fall forward instead of falling down.

Andrew C. Revkin is senior fellow for environmental understanding at Pace University’s Pace Academy for Applied Environmental Studies and the author of the Dot Earth blog at The New York Times. A version of this article will also appear in Creative Nonfiction magazine.

The Search for Schizophrenia Genes

To gain insight into the biological basis of disease, President Obama launched the Precision Medicine Initiative in January 2015. A major aspiration of the program is to identify the genetic underpinnings of disease. Some commentators have questioned whether this research agenda has more to do with science or politics. In a New York Times op-ed, Dr. Michael Joyner, an anesthesiologist and physiologist at the Mayo Clinic, pointed out some reasons to be skeptical. In Joyner’s words, “no clear genetic story has emerged for a vast majority of cases.” The title of his piece summed up his conclusion: “‘Moonshot’ Medicine Will Let Us Down.” In other words, we are spending a lot of money on something with questionable utility, and even when we do find genetic variants that contribute to risk, their predictive power is limited by the influence of environment, culture, and behavior.

The idea that mental illness is the result of a genetic predisposition is the foundation for modern-day psychiatry, and has been the driving force for how research money is allocated, how patients are treated, and how society views people diagnosed with conditions identified in the Diagnostic and Statistical Manual of Mental Disorders, 5th Edition (DSM-V). Schizophrenia holds a unique spot in the annals of mental health research because of its perceived genetic underpinnings, and is often cited as evidence in favor of a genetic predisposition to other conditions. The logic at work is that if schizophrenia is genetic, then depression, obsessive compulsive disorder, attention deficit hyperactivity disorder (ADHD), and a host of other DSM-V conditions must also have their roots in dysfunctional genes.

During the pre-molecular era—from about 1970 to 1990—a series of family, twin, and adoption studies was used to estimate the heritability of schizophrenia at between 42 and 87 percent. Although the technology at that time was not advanced enough to identify the specific genes, it was assumed that technological advances would eventually catch up and pinpoint the genetic culprits. Once the genes were discovered, biological markers would be identified, which, in turn, would lead to the development of precision drugs. After the biological roots of schizophrenia were discovered, the other DSM-V conditions would shortly reveal their secrets—so the story went.
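
As a concrete illustration of where such figures come from, here is a minimal sketch, assuming the classical Falconer approximation often taught for twin studies; the twin correlations in it are invented for illustration and are not taken from the studies the author describes.

```python
# A minimal sketch, assuming the classical Falconer approximation from twin
# studies; the correlations below are invented for illustration only and are
# not drawn from the schizophrenia literature discussed in the text.

def falconer_heritability(r_mz: float, r_dz: float) -> float:
    """Estimate broad heritability as twice the gap between identical (MZ)
    and fraternal (DZ) twin correlations for a trait."""
    return 2 * (r_mz - r_dz)

# Two hypothetical sets of twin correlations, chosen only so the resulting
# estimates bracket the 42-87 percent range quoted above.
print(falconer_heritability(r_mz=0.65, r_dz=0.44))  # 0.42 -> "42 percent heritable"
print(falconer_heritability(r_mz=0.88, r_dz=0.45))  # 0.86 -> near the high end
```

The point of the sketch is only that pre-molecular estimates were indirect: they were inferred from patterns of family resemblance under strong assumptions, such as equal environments for identical and fraternal twins, rather than from any measured gene.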

The technology has now caught up, and we are firmly entrenched in the molecular era of behavioral research. Yet, despite the countless hours and millions of dollars that molecular geneticists have spent, a specific schizophrenia gene has never been found. In the 1990s, several scientists reported finding a “schizophrenia gene” only to eventually retract their findings. Decades of research have confirmed that the influence of genetics on psychiatric conditions is relatively minor, and that those earlier studies overstated heritability.

Even for many common physical conditions, such as cancer, cardiovascular disease, and diabetes, all with clear biological pathology, the Human Genome Project has shown that there are hundreds of genetic risk variants, each with a very small effect. As geneticists implicate more and more genes, and the importance of each individual gene decreases, it becomes hard to see how this information can be used in a clinical setting. Compared to these physical conditions, the debate about genetic risk factors for psychological conditions such as schizophrenia, depression, and ADHD, which all lack distinct biological markers, is even more heated.

Genes of small effect and lowered expectations

The current trend in psychiatric genetics is to use enormous samples to find genes of minuscule effect. In May 2014, the Schizophrenia Working Group published “Biological Insights from 108 Schizophrenia-Associated Genetic Loci.” The study, a genome-wide association study (GWAS), looked at 36,989 patients and 113,075 controls and identified 108 loci with genome-wide significant associations. The resulting risk scores explain up to 4 percent of the variance in the diagnosis of schizophrenia. Some might label this a success, but it’s reasonable to ask, “Only four percent?” Is the other 96 percent explained by the environment or by more hidden genes? To complicate matters, these same genes have been implicated in other conditions, such as ADHD and autism. In his book Misbehaving Science, Aaron Panofsky, an associate professor in public policy at the University of California, Los Angeles (UCLA), discusses the strategies that behavior geneticists use to cope with the failure of molecular genetics. In his words: “One of the most basic strategies for dealing with the disappointment of molecular genetics has been to lower expectations.”

These lowered expectations were evident in the news articles about the study. In general, there was a disconnect between what the study actually showed—that nature plays a minor role—and what the news headlines proclaimed—that nature had won the race. An article in Scientific American stated, “This finding lays to rest any argument that genetics plays no role” (italics added). But the author could have stated, “With genetics explaining only 4 percent of the cause, this study lays to rest any argument that genetics plays a major role.” Taking the study’s results at face value, one could conclude that genetics plays a role—but not much of one. It was only in the realm of speculation that the genetic view won.

As another example, an enthusiastic article by a psychiatric geneticist in The Lancet Psychiatry referred to the 108 schizophrenia-variant study as a “game changer” and a “remarkable success.” He declared: “The importance of showing at least some biological validity of the clinical concept that is schizophrenia cannot be overstated.” But he is, in fact, overstating the usefulness of these results, as indicated by the rest of The Lancet article, which concluded that genetics cannot be used to make clinical predictions. Going beyond the actual results, some behavioral geneticists suggest that in the future, researchers may discover more than 8,000 variants for schizophrenia. In USA Today, Steve Hyman, director of the Stanley Center for Psychiatric Research, commented: “Now we have 108 pieces, but maybe it’s a 1,000-piece puzzle…” As more and more variants are implicated, the results become even more watered down. For instance, a recent algorithm to examine the polygenic risk of schizophrenia estimated that there are 20,000 single nucleotide polymorphisms (SNPs), or differences in single DNA components, implicated in schizophrenia. Commenting on these results, Alkes Price from the Harvard School of Public Health noted that because so many regions are implicated, there is the concern that “GWAS will ultimately implicate the entire genome, becoming uninformative.” A clinically useful signal appears impossible to distinguish from the noise.

Even if you completely agree with the 108 loci study’s methodology and all its inherent assumptions, there is no way to conclude that the researchers have discovered “schizophrenia genes.” In fact, they have disproved their existence. For each of the 108 loci there is a very small difference between the frequency of the variant in those diagnosed with schizophrenia and in the control sample. Take the very first one: it’s found in 86.4 percent of the patients, and in 85 percent of the control group. This is a minor difference, and whether or not you have the variant tells you nothing about your risk of being diagnosed with schizophrenia. These genes are neither unique nor specific to people diagnosed with schizophrenia; many of the genes are scattered far and wide, and most of us carry at least some of them. As Kenneth Kendler, a psychiatry professor and geneticist at Virginia Commonwealth University, concluded in a recent paper, “All of us carry schizophrenia risk variants, and the vast majority of us carry quite a lot of them.” It is only by combining all the genetic markers into a single polygenic risk score that researchers can say that an individual has an increased risk of developing schizophrenia. However, even those individuals with a supposedly increased risk were more likely not to develop schizophrenia.
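
To make the arithmetic concrete, here is a rough back-of-the-envelope sketch, not drawn from the study itself, of why a variant carried by 86.4 percent of patients and 85 percent of controls is nearly uninformative on its own, and why even a score that sums many such variants leaves the patient and control distributions heavily overlapping. The 108-variant simulation uses invented carrier frequencies of 55 and 53 percent purely for illustration.

```python
# A back-of-the-envelope sketch, not taken from the 108-loci study: the only
# real numbers here are the 86.4 and 85 percent frequencies quoted in the text;
# everything in the simulation below is an invented assumption.

import random

# Worked arithmetic for the first locus quoted in the text.
p_case, p_control = 0.864, 0.85
odds_ratio = (p_case / (1 - p_case)) / (p_control / (1 - p_control))
print(f"odds ratio for the first locus: {odds_ratio:.2f}")  # roughly 1.12, a tiny effect

# Toy polygenic score: count how many of 108 hypothetical variants a simulated
# person carries. The 55 vs. 53 percent carrier frequencies are assumptions.
random.seed(0)

def toy_score(carrier_freq: float, n_variants: int = 108) -> int:
    """Number of risk variants carried by one simulated person."""
    return sum(random.random() < carrier_freq for _ in range(n_variants))

cases = [toy_score(0.55) for _ in range(10_000)]
controls = [toy_score(0.53) for _ in range(10_000)]

print(f"mean score, cases:    {sum(cases) / len(cases):.1f}")
print(f"mean score, controls: {sum(controls) / len(controls):.1f}")
# The means differ by only a couple of variants out of 108, so the two score
# distributions overlap almost completely, echoing the point that most people
# with an elevated score never develop the condition.
```

Swapping in different invented frequencies changes the exact means, but as long as each per-variant difference stays tiny, the overlap, and hence the weak individual-level prediction the text describes, remains.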

Behavioral geneticists are having an even harder time with other psychological conditions. For major depressive disorder (MDD), the results are much more sobering. A 2013 mega-analysis of genome-wide association studies, published in Molecular Psychiatry, concluded that “we were unable to identify robust and replicable findings.” Even though it is the largest genome-wide analysis yet conducted for MDD, the authors still mention the “missing heritability” theme, and attribute their failure to find the supposed risk genes to the sample being “underpowered to detect genetic effects typical for complex traits.” To explain the failure to find predictive genes, researchers often refer to the idea of “missing heritability.” The thinking is that just because we cannot find the genes doesn’t mean that they are not there—they’re just hiding. We need more time and more money to find them.

Implications for psychiatry

It is impossible to separate genetic theories from the medicalization of psychological stress. The widespread use of psychiatric medications is based on the idea that schizophrenia and other psychological conditions arise, in part, from genetic defects that result in biological alterations, such as reduced levels of neurotransmitters or deficits in neuronal circuits, that need to be fine-tuned with medications. In general, higher genetic contributions to a disease equate to a stronger case for pharmacological treatment, while diseases with a higher environmental component are seen as better candidates for lifestyle changes and therapy. In 1996, with regard to ADHD, Stephen Faraone, a leading psychiatric genetic researcher, stated: “Many parents are reluctant for their children to take psychotropic medication and others find it difficult to maintain prescribed regimes. These problems are mitigated by discussing the genetic etiology of ADHD…” If parents really believe that their child has a measurable chemical imbalance, then just as they would treat their diabetic child, they would surely treat their child diagnosed with ADHD.

In January 2001, Time magazine declared: “Drugs of the Future: Amazing new medicines will be based on DNA.” Although a tremendous amount of money has been spent on the idea that medications can be designed to fit specific genetic profiles, the results have not lived up to the promise. Writing in a 2013 Medscape article titled “Testing of Patients with Schizophrenia and their Families,” Lynn DeLisi, editor-in-chief of Schizophrenia Research, stated that “there is still no currently proven risk factor, consistently replicated in independent studies, that confers risk for schizophrenia, and, even if there were, the risk is likely to be so low that a test using it would not be at all useful. It also is a misuse of the concept of risk to assume that it is synonymous with ‘prediction,’ and thus it is able to determine who will become ill. Risk factors only elevate one’s chances of becoming ill.”

In addition, these studies cost an enormous amount of money. In the debate about how to spend our health care dollars, the general public should be very skeptical about the economics of this research. Although some geneticists have enthusiastically speculated about what genetic research might mean for treatment of DSM-V conditions, it is hard to imagine how to plan a therapeutic program based on genes that are not distinct to the condition in question. The geneticists suggest that the myriad genes involved all point toward specific systems that drug developers can focus on. However, another interpretation is that the discovery of genes-of-small-effect suggests that finding a specific drug with strong efficacy and few side effects is becoming less likely. In contrast to speculations about the development of magic bullets, Richard Bentall, a professor of clinical psychology at the University of Liverpool, has summed up the current state of psychiatric genetic research in very frank terms: “No effective treatments have so far been devised on the basis of genetic information and, given what we now know, it seems very unlikely that further research into the genetics of psychosis will lead to important therapeutic advances in the future. Indeed, from the point of view of patients, there can be few other areas of medical research that have yielded such a dismal return for effort expended.”

Genetics as destiny?

Even if genetics is implicated in a disease, development of the disease is not inevitable. Given the right environment, the disease will not necessarily develop. Diabetes can be prevented by changes in diet, and lung cancer deaths can be drastically reduced by no-smoking campaigns. Psychological conditions are even more dependent on the environment. Post-traumatic stress disorder (PTSD) is seen in veterans and abused children, for instance. Even if there is a genetic component to PTSD, it is still entirely preventable by removing the environmental stressor—not going to war or not growing up in an abusive household. With no biological markers that can be used to identify mental illness, even the diagnosis of these conditions is subject to society’s shifting notions of what is considered abnormal. In America, 9 percent of school-aged children are diagnosed with ADHD, while in France it is less than 0.5 percent. It is unlikely that this is the result of a genetic difference between American and French children.

For most biologists, the nature-versus-nurture debate is not an either/or debate, but is about the relative contributions of each. A growing number of studies have shown that various environmental insults during childhood, such as sexual, physical, or emotional abuse, peer victimization, and parental loss, are risk factors for schizophrenia. A recent study, “Accumulated Environmental Risk Determining Age at Schizophrenia Onset,” looked at both the genetic and the environmental risk factors in a group of 750 male patients. The researchers found that environmental factors, but not genetic factors, were major determinants of schizophrenia onset. They discovered “robust effects of accumulated environmental risk on age-at-onset of schizophrenia” but “non-detectable effect of accumulated genome-wide association study-derived risk variants on lead phenotypes of schizophrenia.” Because early cannabis use—an avoidable risk factor—was an environmental predictor, the authors suggest the need for increased public awareness. At least in terms of prevention, it appears that the focus should be on the environment. Some authors expressed surprise at the fact that the polygenic risk scores had no significant effect on the phenotypes. However, we know these genes play only a small role in the development of schizophrenia, that they are implicated in several other DSM-V diagnoses, and that they are spread far and wide in the general population.

It is ironic that the very body of research that was supposed to validate the most important theory of biological psychiatry is now calling this theory into question. There is nothing wrong, per se, with looking for genes of very small effect, but with no single gene emerging as a culprit, the justification for this research is weak. We now know that biomarkers or specific genes for psychological conditions do not exist, that this research will not lead to magic pharmacological bullets, and that many of our assumptions about mental illness were wrong. If the message for the general public is to be skeptical of how our health care dollars are spent, the message for the psychiatry community is to rethink how it treats patients, how it allocates research money, and its emphasis on the biological treatments of psychological conditions.

But this is wishful thinking. UCLA just announced the investment of $250 million into the “Depression Grand Challenge.” In a statement more reminiscent of a marketing program than of an accurate scientific appraisal of the field, Dr. Nelson Freimer, professor of psychiatry and biobehavioral science and director of the Center for Neurobehavioral Genetics at UCLA, claimed that “advances in technology for genetic research have now made it possible for us to discover the causes of depression. We know a genetics-based strategy will be successful, just as it has been with heart disease, diabetes, and cancer.” Continuing with the familiar “Success is right around the corner, we just need more money” logic, the press release declared that this investment will make it possible for researchers to first discover the causes of depression through the largest-ever genetic study for a single disorder, and then to use these findings to “examine the molecular mechanisms and brain circuitry through which genetic and environmental factors lead to depression.” Can all the scientists involved with this research stand by these statements as accurate portrayals of the science of mental health? What happens ten years and $250 million from now, when we have explained 3 percent of the cause of depression and we don’t have a magic pharmacological bullet? The reality is that this approach hasn’t worked for schizophrenia, or for any other psychological condition, so there is little reason to believe that it will work for clinical depression.

Forum – Winter 2016

Incarceration matters

In “The Effects of Mass Incarceration on Communities of Color” (Issues, Fall 2015), Robert D. Crutchfield and Gregory A. Weeks cite “coercive mobility” as a key driver of increased incarceration rates in communities of color. As a codeveloper of this concept, I think some additional background may be informative.

Back in the late 1990s, when Dina Rose and I were studying the Tallahassee neighborhoods that gave rise to the coercive mobility thesis, we spent many a day talking to residents of neighborhoods where more than 2% of the residents (of all ages) went to prison every year. These places were almost all black, and they included some of the most historic black neighborhoods in the South. Our suspicion was that these high rates of “prison cycling,” moving people in and out of prison, destabilized neighborhood life and disrupted informal social control, ultimately increasing the level of crime. It was not a very popular thesis among criminologists at the time.

So we were shocked to learn how much the people who live in high incarceration places already got it. We heard resonant themes from the residents there. “Put the bad guys in jail, but leave my brother (cousin/nephew) alone; he only needs some help.” Or, “Everyone knows that sending these men to prison doesn’t do anything good for the neighborhood; they all come back anyway, worse off than when they left.” Or—most chilling—“White people would never stand for this if it was going on in their neighborhoods.”

The idea that mass incarceration damages neighborhoods does not seem so controversial today. We know a lot about the negative impact of incarceration—on the children of men sent to prison, on lifetime earnings of people who go to prison, on job prospects for those with prison histories, on beliefs in the legitimacy of the law for people who repeatedly encounter the coercive state, and on and on. It does not take a leap of logic to predict that places where a large number of people go to prison and then come back, persistently over decades, would be places devoid of economic capacity, dominated by broken family relationships, where many people grow up with deep resentment for the state. In fact, to make any other prediction would be the social science equivalent of denying global warming. Why did we do it?

It is hard, then, not to conclude that in the end this is mostly about race. That there are no white neighborhoods where generations of men have been cycled through prison is the most telling point of all, isn’t it? White Americans use drugs as much or more than black Americans, but go to prison at a fraction of the rate. The drug war is the advance guard for the mass incarceration movement, spreading itself across all forms of criminal involvement, increasing both the rate of imprisonment and the length of imprisonment following criminal conviction. The writers who today analyze U.S. penal policy as a natural extension of a history of slavery, Michelle Alexander and Ta-Nehisi Coates as leading examples, shame us. But they also call us to collective action.

Here is the simple truth: We made this policy of mass incarceration and we can unmake it. If by some precious magic every penal law passed since 1976, in the 50 states and federal system, were taken off the books, and we immediately reverted to the laws of 1976, we would in a short time have the incarceration rate of 1976. By world standards, we would become “normal.” It is that simple.

Todd Clear

School of Criminal Justice

Rutgers University – Newark

Robert Crutchfield and Gregory Weeks highlight a critical dimension of the problem with mass incarceration: its disproportionate impact on communities of color. The authors provide a careful overview of the causes and consequences of the massively disproportionate imprisonment of people of color. They bring substantial expertise to the topic, and their case is compelling; I heartily recommend that their proposals be taken seriously.

I do wish to add what I see as a critical piece of context for understanding the problem, which also highlights the major barriers facing real reform. The context is this: the racially disproportionate nature of mass incarceration is no accident. It is the product of a series of public policies designed to privilege whites and disadvantage blacks—from slavery, convict leasing, and legalized segregation and discrimination, to legislative and sentencing policies targeting street and drug crimes (see Loïc Wacquant’s “new peculiar institution” and Michelle Alexander’s “new Jim Crow”).

This context is crucial for understanding the case laid out by the authors. For instance, they distinguish “warranted” disproportionate incarceration (that explained by differential offending) from “unwarranted” (that not based on differential offending). However, the disparities in offending are not the result of fundamental racial differences, but instead are rooted in socioeconomic disparities. So at best this represents a case of punishing racial minorities for being poor, but looks even worse when acknowledging the roots of these inequalities in the historical and contemporary mistreatment of people of color.

In this light, the compounding effects of incarceration on future generations—a phenomenon the authors highlight—are especially pernicious: harms to people of color caused by prior public policies are aggravated and perpetuated by existing policies.

This context also illuminates the potential barriers to reform. The authors review multiple possible solutions, most importantly and simply to “reduce the number of people going to prisons and create a more just society.” However, support for punitive policies and opposition to policies designed to ameliorate inequalities are both rooted in animus toward racial minorities (an association that persists, as I found in recent work on implicit and explicit animus).

It is not surprising, then, that “states have shown little appetite for directly addressing the issue of racial disproportionality,” as noted in another article in this volume, “Reducing Incarceration Rates: When Science Meets Political Realities,” by Tony Fabelo and Michael Thompson. Unfortunately, prior attempts to address racial inequalities with race-neutral policies have not been unambiguously successful—whites are often better positioned to take advantage of such policies. Further, failing to acknowledge the racial history gives cover to those seeking to protect the racial hierarchy by deemphasizing group distinctions but then ignoring historical inequalities that leave members of groups differentially prepared to succeed (a strategy described by Mary Jackman, among others). Addressing the very real problems described by Crutchfield and Weeks, then, will require a fundamental reframing of the problem, one that includes an honest accounting of the nation’s racial history.

Kevin Drakulich

Associate Professor of Criminology and Criminal Justice

Northeastern University

The massive growth in imprisonment during the era of mass incarceration, accompanied by soaring correctional costs, has recently led to bipartisan interest in reforming state and federal prison systems. In “Reducing Incarceration Rates: When Science Meets Political Realities,” Fabelo and Thompson of the Council of State Governments Justice Center (CSGJC) share their experiences in working with states to reduce correctional spending. And they rightly emphasize that despite the consensus for reform, significant challenges remain. While the work being done by the CSGJC and the Pew Charitable Trusts (Pew) to achieve broad correctional reform is both critical and long overdue, there are four main points I think are worth making about their article and, more generally, the decarceration debate.

First, the CSGJC and Pew should be commended for grounding their work in an empirical, data-driven approach, which is a welcome alternative to policy recommendations that are too often based on emotion or “gut instinct.” Conducting extensive, descriptive analyses of multiagency data, however, is not necessarily tantamount to “science.” The scientific method involves forming hypotheses, designing experiments, collecting and analyzing data, rejecting (or not rejecting) hypotheses, and, later on, testing the reproducibility of results. Moreover, the publication of scientific research generally passes through the filter of a “blind” peer-review process. The state-level correctional analyses that the CSGJC and Pew have been conducting are clearly preferable to basing policy decisions on anecdotal evidence, but the degree to which their work has been scientific is debatable.

Second, providing states with technical assistance in analyzing data may be a little like putting the cart before the horse because, as Fabelo and Thompson suggest, most states lack the information technology infrastructure to accommodate data-driven policymaking. Rather than earmarking millions of dollars in appropriations for data analysis assistance, helping states upgrade and modernize their criminal justice information systems may be a far better use of federal funding in the future.

Third, one consideration that often gets lost in the decarceration discussion is that prisons don’t have to be criminogenic. Unfortunately, this is often the case, because many state prisoners are “warehoused” insofar as they don’t participate in any programming during their confinement. Yet, warehousing prisoners is at odds with the evidence on what works with offenders, which shows that there are a number of effective correctional interventions that deliver a favorable return on investment. Limiting use of prisons should remain a focus, but so should efforts to make them more effective when they are needed.

Finally, although many people acknowledge that the increased use of prisons during the late 20th century likely contributed, at least to some degree, to the crime drop that began in the 1990s, there is less consensus on the precise extent of this contribution or the point at which the costs of imprisonment exceed its benefits. As the pendulum continues its swing in favor of decarceration, it is critical that we be able to assess the impact of decarceration strategies on prison beds saved and, more importantly, on public safety in general. After all, if decarceration lowers correctional costs but does not maintain public safety, we will run the risk of dampening enthusiasm for prison reform.

Grant Duwe

Director, Research and Evaluation

Minnesota Department of Corrections

Protecting global diversity

In “Technologies for Conserving Biodiversity in the Anthropocene” (Issues, Fall 2015), John O’Brien provides an engaging overview of the technologies available to address global biodiversity loss.

Selecting appropriate technologies can be overwhelming, particularly for those with little expertise in computational science or engineering. An article titled “Emerging Technologies to Conserve Biodiversity,” published in October 2015 in the journal Trends in Ecology & Evolution, which we coauthored with colleagues from the academic, commercial, and nonprofit sectors, recognizes this and identifies key technological challenge areas that must be addressed.

Beyond our bedazzlement with new technologies, some difficult issues come into focus. For example, to what extent are the Sirens of technology distracting us from the voyage toward solutions for pressing conservation challenges?

Consider the on-going buzz around use of new genetic technologies to bring back extinct species from museum or other preserved specimens. This is, quite simply, an economic and academic dead-end. A few resurrected individuals from a tiny gene pool, in diminishing habitat, and under continued threat of re-extinction, would be, at best, expensive living museum specimens.

New technologies are often fragile, yet if we are to deploy them effectively they must work on a demanding, usually rural and remote front line, often without power, parts or servicing back-up, or technical expertise. Drones have the potential to revolutionize data collection, as do smartphones, but perhaps the biggest challenge in their deployment is making them robust to local conditions and user-friendly for local stakeholders.

The race to engage technology also risks masking the nascent issue of ethics in conservation. For example, the rush to attach instrumentation to animals may cause neglect of ethical considerations. While tags are becoming ever smaller, adoption rates are increasing rapidly. Unfortunately, examination and reporting of negative impacts (including capture mortality, failed transmitters, injuries, reduced animal ranging, and behavioral and physiological changes) are given low priority. These impacts can also skew data and render new technologies unfit for purpose. Investment in noninvasive technologies will perhaps yield more “bang for the buck” for monitoring, and benefits will accrue to local communities that are better able to manage them.

Ultimately, communities and careful consideration will carry the day. At the development level, academia, nongovernmental organizations, technology corporations, and professional societies should forge symbiotic interdisciplinary groups. At the deployment level, professional conservationists must work with local stakeholders to design systems to jointly deploy accessible, cost-effective, and sustainable technologies.

Technology has huge potential to deliver tools that will help us to reduce the rates of biodiversity loss. While baseline ecological data will remain central to the development of effective conservation strategies, rapidly unfolding threats now demand immediate remediation. We must prioritize technologies to ameliorate human-wildlife conflict (3,000 incidents have been reported in Namibia alone in the past 24 months), the decimation of endangered species for products, and rampant habitat destruction, before there is nothing left to monitor.

Interdisciplinary collaboration is essential if we are to quell these growing fires. Let us select, develop, integrate, and deploy the brightest and best technologies for the job, while always keeping our hearts and minds on the pulse of the planet.

Stuart Pimm

Doris Duke Professor of Conservation Ecology

Duke University

Zoe Jewell, Sky Alibhai

Founders and directors

WildTrack

Public role in reviewing gene editing

In “CRISPR Democracy: Gene Editing and the Need for Inclusive Deliberation” (Issues, Fall 2015), Sheila Jasanoff, J. Benjamin Hurlbut, and Krishanu Saha argue that the 1975 Asilomar summit is an unsuitable model for evaluating emerging science and technology. They maintain that although review by researchers and other experts is a necessary part of deliberation about science policy and practice, it is insufficient. In a democracy, members of the public should have a role in such deliberation.

I agree wholeheartedly. Gene editing is just one of many contemporary scientific developments that ought to receive more public consideration than they have. Examples of such developments include gene drives, human-animal chimeras, and dual-use research. To date, policy deliberation and debate on these topics have been conducted primarily by expert groups composed of scientists, bioethicists, and other professionals.

Not surprisingly, scientists tend to favor only narrow limits on research. As Jasanoff, Hurlbut, and Saha observe, participants in the Asilomar summit adopted a restrictive definition of risk that greatly influenced subsequent formal regulation. That approach has contributed to ongoing controversies over recombinant DNA (rDNA) applications, such as genetically modified crops, and to public mistrust of some of those applications.

The challenge is to design and conduct a process that allows meaningful, ongoing public engagement. As a scholar focused on bioethics and policy, I have spent my entire career as an outsider in medicine and science. Having participated as an outsider in many research policy activities, I know how difficult it can be to achieve truly inclusive deliberation.

A number of conditions must be met for successful deliberation among people from widely different backgrounds to occur. Both experts and members of the public need adequate education and preparation for the relevant activity. Participants must represent diverse constituencies and respect the knowledge that each participant brings to the table. Moderators must ensure that all have opportunities to contribute, rather than allowing particular individuals and interest groups to dominate. These are only some of the necessary elements of inclusive deliberation.

I do not mean to suggest that the impediments are insurmountable. For more than a decade, there have been extensive efforts to promote community engagement and public deliberation in decisions about health research and policy. Much has been learned about inclusive deliberation in the process. The National Academies of Sciences, Engineering, and Medicine, as well as other scientific organizations, should take this knowledge into account in structuring their policy activities. The Asilomar summit is an antiquated and inadequate deliberative model for today’s science policy.

Rebecca Dresser

Daniel Noyes Kirby Professor of Law and professor of ethics in medicine

Washington University in St. Louis

What’s the Big Idea?

I am a venture capitalist and have been for 27 years. Trained in nuclear engineering in the 1970s, I worked in that profession until 1988, when I joined Venrock, the private venture partnership of the Rockefeller family. Since then, I’ve learned a fair bit about entrepreneurship, risk taking, and how to build great companies. At Venrock, I led the financing of 53 innovative companies from their beginnings, including a 1991 investment in Spyglass, one of the very first Internet companies. Spyglass went public in 1994 and was acquired for $1.9 billion a few years later. Seven of my companies successfully completed initial public offerings, including Check Point Software, on whose board I still serve. Three dozen of the companies were successfully acquired at a profit to everyone involved. And the others—well, they are somewhere still in development, or didn’t make it. When I was a managing partner at Venrock, the firm launched over 300 companies. Our earliest greatest hits included Intel and Apple, followed more recently by Check Point, DoubleClick, Gilead Sciences, Imperva, Athena Health, Vontu, Anacor, and CloudFlare.

Venture capital (VC) is itself an innovation—a financial innovation—that can trace its roots back to the great inventors of the nineteenth century. Thomas Edison is famous for his pursuit of a practical and economically viable light bulb. After what some say were 6,000 attempts with various materials, gases, and shapes, in October 1879 he found just the right material and a design that worked. Who supported Edison in this risky endeavor? An early venture capitalist, his father, who set him up in a small laboratory in Menlo Park, New Jersey.

Today’s style of institutional venture capital investing began in the 1960s, with roots going back to the 1930s. It grew out of “wealthy family office” operations such as those of the Rockefellers, Whitneys, and Bessemers. Georges Doriot of Harvard Business School is considered the first venture fund founder. His big deal was the Digital Equipment Corporation (DEC), which is still the granddaddy of all VC investments, with a 50,000-to-one return on cash invested at the time of DEC’s initial public stock offering. Venture capital investing was institutionalized in the 1960s, when Congress changed banking laws to allow pension funds and banks to provide capital to new venture firms then being formed. The use of employee stock options to incentivize startups was another important economic innovation for venture investing. But the true magnet for venture capital is a great idea by an ambitious person or team.

Each year in the United States, about 20,000 companies are started, and between 1,000 and 1,200 young startups mature to the point of attracting their first professional venture capital—a number that has held steady for more than a decade. In 2014, venture capitalists invested $51 billion, according to surveys by PricewaterhouseCoopers and the National Venture Capital Association, a level that is likely to be maintained or slightly exceeded for 2015. In 2000, at the height of the dot-com boom, this number spiked to $103 billion.

Venture capital investment accounts for only 0.5% of all private capital investments in the United States. This amount is about 15% of the U.S. government’s annual research and development (R&D) investment, and 7% of all private R&D spending—but it packs a big economic wallop. CB Insights and the National Venture Capital Association estimate that 22% of America’s 2014 gross domestic product resulted from companies that were originally venture-backed. Forty-six of the Fortune 500 companies, with names like Apple, Intel, HP, Genentech/Roche, and Federal Express, were founded with the sponsorship of venture capital. Correspondingly, about 11% of private employment in the United States is by companies that were venture backed. These are astonishing figures at the macro level. In short, the benefits to society from venture capital, in terms of job creation, quality of life, and growth of the tax base, are huge.

Today I’m involved in what might be the riskiest and most significant venture project of my career. It’s an energy development company that, if it succeeds, would change the world in profound ways. The company is Tri Alpha Energy. It is developing a fusion-based technology called Plasma Electric Generator (PEG) that could ultimately deliver commercially competitive base load electric power. Tri Alpha’s approach is compact, carbon-free, and sustainable, with an incredibly clean environmental profile. The Tri Alpha fuel is hydrogen and boron. This fuel source is plentiful worldwide. If successful, this technology would address two of the world’s great challenges: climate change and the need for limitless, cheap electric power.

Why would a venture capitalist pursue a speculative line of technology? Governments have already poured tens of billions into fusion research for decades, with scientific progress, but little to show for it commercially. Aren’t we VCs supposed to be ruthlessly focused on finding ideas that we think can be brought to market with reasonable investments and in reasonable periods of time? Of course. But sometimes an opportunity presents itself that, if it works, can change the future forever and for everyone. Even if it is a long shot, it is worth pursuing. Tri Alpha Energy is a case in point.

Science primer

Historically, efforts to generate electricity from fusion have been hampered by two fundamental challenges: confining the fuel particles for long enough, and heating them to temperatures high enough, to allow their nuclei to combine, or fuse.
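
Physicists compress these twin requirements into a single rule of thumb, the Lawson or “triple product” criterion. Stated loosely, the product n T τ must clear a fuel-dependent threshold, where n is the plasma’s density, T its temperature, and τ the time for which its energy stays confined; only then does the reaction release more power than it loses. That threshold is far higher for the hydrogen-boron fuel discussed below than for deuterium-tritium, which is why the choice of fuel shapes the entire engineering problem.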

Many strategies to accomplish fusion have been attempted over the decades. The two main approaches are tokamak magnetic confinement and inertial confinement with lasers. The tokamak is a large toroidal (doughnut-shaped) machine developed by the Soviets in the 1950s. It is dependent on an extremely complex magnetic containment system. Tokamaks, which use a deuterium-tritium (DT) fuel cycle, have been the dominant design for fusion reactors.

This is the technology employed for the International Thermonuclear Experimental Reactor (ITER) project in France. ITER is funded by a consortium of governments, including the United States, with a price tag that could be upwards of $50 billion. It will take decades to construct, and its “first plasma” is not expected until at least the late 2020s.

The alternative approach, inertial confinement, is being pursued at the National Ignition Facility (NIF) at the Lawrence Livermore National Laboratory. This massive facility produces 500 terawatts of laser power simultaneously directed through 192 beams at a single tiny pellet of frozen DT. The basic idea is to capture heat from very tiny thermonuclear explosions at a high repetition rate. At this point, NIF does not seem like a good candidate for energy generation and is being used for scientific purposes.

Tri Alpha uses yet another approach conceived in the early 1990s by researchers at the University of California (UC), Irvine, led by Norman Rostoker. Applying the same physics principles used in particle accelerators (which scientists had already proven could confine charged particles), he thought that, using a cylindrically shaped fusion reactor, one could create, confine, and heat a football-shaped plasma rotating on its long axis inside a vacuum chamber. Then, with a well-known technology called neutral beam injection (in which a device converts a powerful beam of accelerated ions into neutral atoms), one could inject a beam of atoms at the outer edge of the plasma, imparting momentum and energy and confining it indefinitely. The configuration is not unlike a child’s top spinning on a flat surface. The top is the plasma. Your hands can act like the neutral beam if you use them to impart additional rotational momentum by simultaneously pushing and pulling on both sides of the top, thus keeping it spinning indefinitely.

Rostoker also proposed a hydrogen-boron fuel cycle, or pB11, which fuses a hydrogen nucleus, or proton, with a boron-11 nucleus. The pB11 fuel cycle requires a temperature perhaps 20 to 30 times higher than a DT reaction does, but Rostoker thought this was viable because accelerator technology worked at much higher energies than the tokamak. And pB11 has the great virtue of being aneutronic: it results in X-rays and three alpha particles with no primary reaction neutrons.

While the DT reaction is therefore easier to achieve scientifically, since the temperature required is relatively low, it has the downside of producing a very high-energy (14 MeV) neutron for each fusion reaction, a highly radioactive process. Managing neutrons is a very difficult engineering problem that also results in large amounts of highly radioactive waste. pB11 produces no neutrons and is thus much easier to manage during the fusion process, while yielding no high-level waste to manage after the fusion shuts down. Calculations indicate that the potential radiation load for a Tri Alpha Energy reactor would be no more than that of a modern hospital MRI.
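
For readers who want the nuclear bookkeeping, the two reactions compared here can be written out, with energy releases rounded to commonly cited values:

D + T → helium-4 (about 3.5 MeV) + neutron (about 14.1 MeV)
p + boron-11 → three helium-4 nuclei (about 8.7 MeV in total)

In the DT case, the 14.1 MeV carried by the neutron is precisely the energy that escapes the magnetic bottle and creates the shielding and waste problems described above; in the pB11 case, all of the products are charged particles that magnetic and electric fields can handle.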

There is a trade-off between operating temperature and radiation, but it’s potentially a very smart one resulting in critically better economics. Indeed, from the very beginning, Rostoker had his eyes on a commercially viable approach to fusion-powered electricity. With science that was well understood, and with reduced machine complexity and radioactivity compared to the tokamak, Rostoker’s idea had great appeal. The fusion science community, however, was very skeptical. At least one analysis, published in Science in 1998, “proved” it was not possible.

Cash first

Venture capital typically provides three key elements to a company. First, it provides money to entrepreneurs. Second, it guides entrepreneurs to remain focused on the prize. Third, by being patient, it allows hard problems to be solved—problems too risky for most companies to tackle alone. Let me explain each of these key points.

Cash is everything to a startup. Money pays for employee salaries, capital equipment, rent, and many other essential things that every modern company requires. Really, though, the money buys time. Time for the people in the project to work together to refine and test their ideas, and to progressively reduce the risk of project failure.

Raising all the money you need on day one is very difficult if the science or engineering of the product isn’t yet proven. Often the CEO of the startup won’t know exactly how much money will be needed at the beginning. So she needs to sort out what can be done with a certain amount of money in a certain period of time—typically 18 months or so to start. If successful, the entrepreneur can go to her investors, show what’s been accomplished, and convince them to invest more money, while perhaps even bringing in new investors. Success at this stage generally means that some risk has been reduced. During the venture capital phase of financing, it is generally the case that as risk is reduced, the valuation of a company increases. Therefore, when a company completes a milestone and goes out to raise more capital, that new capital is invested at a higher valuation than the previous round of capital. As valuation rises, the existing owners’ stake in the company will be less diluted, which means they get a progressively greater payback for their efforts and investments. This milestone-based financing, carried out over many rounds, is core to building a product rooted in difficult science, and therefore to ultimately building a company. During the early steps, when the company has no revenues, an incomplete product, and no real proof that it’ll work at all, the money is usually provided by a venture capitalist. There can be several rounds of financing over many years before the company can fund itself by selling products at a profit.
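
A simple illustration of the dilution arithmetic, using purely hypothetical numbers: suppose a startup raises $5 million at a pre-money valuation of $10 million, so the new investors own one-third of the company ($5 million of a $15 million post-money total). If the team hits its milestone and raises the next $5 million at a $45 million pre-money valuation, that round costs the existing owners only 10 percent of the company ($5 million of $50 million). Raising the same dollars at a flat valuation would have cost them another third, which is why hitting milestones before going back to the market matters so much to founders and early investors alike.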

In the case of Tri Alpha, Glenn Seaborg, a former chairman of the Atomic Energy Commission, chancellor of the University of California, Berkeley, and Nobel Laureate, along with a small group of visionary experts including George Sealy, a Bechtel executive, thought Rostoker’s idea had merit. Seaborg and the others set up a small research project at UC Irvine in 1998 with Sealy as the CEO. With only a few million dollars of backing from individuals, Rostoker and his team demonstrated through a series of studies and computer models that the reactor design had promise and might confine a plasma.

Having achieved this theoretical milestone, but still not at a stage to attract professional venture capital, Tri Alpha raised additional seed money from wealthy family offices of smart investors such as Mike Buchanan in San Francisco and Art Samberg in New York. With this capital, the company built a small prototype confinement machine to demonstrate the most basic elements of Rostoker’s approach. In 2005, it worked.

The results of this experiment are what captured my imagination and that of a number of other professional venture investors. Still largely a science project with a long road ahead, the demonstration worked well enough that it led me to make an investment in Tri Alpha on behalf of Venrock. This was a larger capital raise, so I had to reach out to other Silicon Valley venture capitalists, including Dick Kramlich, founder of the venture firm New Enterprise Associates, who now sits on Tri Alpha’s board.

Focus

The deal between a venture capitalist and an entrepreneur is really pretty simple. From the VC point of view, I give you cash. In return, you the entrepreneur give me stock in your company at a negotiated price. But (there’s always a but), you must spend this money on the project you told me you were going to spend it on, mostly the way you told me you would, and in accordance with good operating principles. In short, you must stay focused on the objective you “sold me.”

In addition to cash, venture capitalists provide, through our presence on the board, the tough love necessary for controlling spending and keeping it directed toward the agreed objective. Rookie entrepreneurs often just can’t keep themselves from spending money on things that don’t contribute to the company’s goal. With millions of dollars in the bank, they can seduce themselves into thinking, “I’ll just paint this wall red to spruce up the place,” or “I’ll buy a real office desk for the CEO.” Mostly such expenditures are a waste of precious startup capital. In my experience, the most offensive use of venture capital is presenting your new investors at the first board meeting with coffee cups or sweater vests bearing the company logo—not a good sign of treasury stewardship or wise use of time. The board members can afford cups and vests.

But staying focused doesn’t just mean avoiding frills; it also means keeping your eyes on the prize. In any new endeavor, a lot of learning goes on. Sometimes that learning can push the company in new directions. R&D folks love to solve problems, and sometimes those problems are not closely related to the product being designed. This is the appeal of the R in R&D. Research is important, of course, but in a raw venture-backed startup, it needs to be ruthlessly carried out in the service of the development of the intended product.

Imagine this: You have created a privately financed energy project inventing a new device, the PEG. You have recruited a world-class scientific and engineering team from all over the globe, with hundreds of total years of experience, to work on this incredible science. You have a national lab-class facility in which to work. Can you imagine the temptation to try just one more idea? It’s tempting, for sure, but Tri Alpha has a management team that is singularly focused on bringing PEG to market.

Being privately backed, Tri Alpha also has a board of directors and investor group that is focused on making PEG a reality—the sooner the better. The operational focus and shared decision making between the board and the company management are a model for successful deployment of big dollars over a long time with minimal waste or disagreement. This relationship, experience, and result, in my opinion, make Tri Alpha one of the best cases from the annals of venture investing.

I take away one other significant focus lesson from Tri Alpha. Knowing that this was going to be new science if it worked, in 2005 the company engaged a world-class science panel, which meets twice a year to assess progress. The 10-member panel consists of Nobel laureates, Fermi Prize winners, Maxwell Prize winners, plasma lab directors, and the like. Their experience and perspective about the actual science risks, the public challenges the team might face when they publish results, and the need for theory and experimentation to advance hand-in-hand, were of considerable value in helping the team to maintain focus over the long haul.

Patience

In venture, we always say that it takes longer and costs more. That’s certainly been the case for Tri Alpha. The company is still years away from producing “net power out,” that is, generating more energy than it takes to create the fusion reaction. Continuing funding when it takes longer is a matter of trust, transparency, good operational capability, and commitment by all parties.

With any long-term venture project such as Tri Alpha, capital is always challenging and often viewed as the biggest risk to early investors. In recent years, the task has become more difficult, as many traditional venture capital sources withdrew from making energy investments with long time horizons, and as other private fusion projects began to compete for the few capital resources available for such investments. Nonetheless, from the perspective of milestone financing, Tri Alpha is incredibly successful—both in attracting capital and in developing a fusion reactor.

As of this writing, Tri Alpha has raised hundreds of millions of dollars in the past 17 years in a total of seven investment rounds, based on a milestone-financing strategy that demanded increasingly successful results. The round in which I participated, in 2005, was used to build a 60,000-square-foot facility on par with the national laboratories; to design and construct a large machine, known as the C-2; and to hire the best talent from more than 25 countries. The C-2 has operated for several years, collecting significant data about containment, control, and management of the plasma in the field-reversed configuration (FRC) system. And in August 2015, Science reported the containment breakthrough that Tri Alpha had sought, when the C-2 formed a plasma of about 10 million degrees Celsius and held it steady for 5 milliseconds.

Of course, Tri Alpha’s path isn’t any straighter than Edison’s. There were and will be many things that the Tri Alpha team tries that simply don’t work, or at least not well enough to pursue further. Sometimes these are just technical elements that take time and money to iron out, but you really don’t know unless you try. At the end of the day, Tri Alpha has run over 40,000 experiments over the past several years and has generated a new level of understanding about plasmas and the FRC. Most important, it has shown that Rostoker’s theory was spot-on. That’s what focus can achieve. With the containment problem solved, it’s now time to march up the temperature curve towards pB11 fusion and demonstration scale.

Any new science, to be credible, has to be reproducible and understood by the broader scientific community. Tri Alpha has gone to great lengths on these fronts. With its 40,000 experiments, Tri Alpha’s scientists can reproduce results on demand, easily and reliably. Just as important, the computer codes used to predict experimental results have vastly improved with time. With nearly 150 published papers on the science, the company is well on its way to building a body of vetted science that everyone with some relevant expertise can understand. And like many science-based companies before it—energy and health care alike—Tri Alpha’s science is built on the shoulders of decades of great research by many people around the world, mostly paid for by government R&D investment.

Solving the hard problem

Seaborg’s imprimatur, and the technical understanding provided by Rostoker and the team at Tri Alpha, gave the company the ability to raise that initial seed capital. One day, Rostoker’s containment idea might be recognized as a seminal contribution to the production of electricity from fusion—an innovation at least as important as Enrico Fermi’s controlled fission reaction that led to the development of today’s nuclear power industry. Microsoft cofounder Paul Allen, one of Tri Alpha’s early investors, says that if the PEG works, the scientists will get a Nobel for physics, but the investors will get one for peace. We should be so lucky. For now, my role is to make sure the company’s milestones are reasonable and reachable within the next funding cycle, and to assist in raising those funds.

I’m often asked, how much longer will it take for Tri Alpha to get to market? I don’t know exactly. But a good analogy here would be pharmaceutical development, where the risks, scientific difficulty, and proof required are generally comparable. It can take upwards of two decades and $500 million to bring a new drug to market. Tri Alpha is probably on track for that sort of schedule and investment, maybe even longer, and even more money. And like drug startup companies, partnering with industry—in Tri Alpha’s case, with the energy generation industry—is required to truly scale up the enterprise. Demonstrating PEG is one thing. Deploying it is another.

Based on my long experience helping to shepherd complex projects to market, and on what has been learned to date, what has been tested, and the clarity of the road map to achieve net power out, the PEG is tracking as well as can be expected. Ultimately, Tri Alpha’s technology could drive a multi-hundred-billion-dollar energy market with commercially competitive, emission-free, fusion-powered electricity. That is the prize for Tri Alpha’s visionary inventors and scientists, engineers, and investors, but most important, it is the prize for every citizen of the world, made possible by the innovation called venture capital.

The Fictional Age

Turns out the cute boy from history class is a complete drug-head. Can’t even write a Fictional Age paper. Or won’t. I don’t know which is more nauseating.

“You never got concentric?” he asks.

“Not until I got into Higher.”

“All the way through Lower and no Concentrex?” Drug-Head plays skeptical with me through long blond hair.

“None. I don’t think you deserve to be in Higher Schooling if you couldn’t get through Lower unaided. Not that I’m picking on you, of course.”

“Hmmm.” He attempts to take offense, but fails. No surprise there. “Hah, righto,” he bubbles, relieving himself of the effort. “If I’d have known that, I woulda started you on child’s ‘trex. I have some age 3-6 doses on me right now. So how about we sweeten the deal like this: one A on one paper, and I give you, say, 50 dollars and two 30-mic slides of child ‘trex? ‘Cause I mean, well you get it, you’re smart. You probably have the ‘trex receptors of a toddler anyways. Your face is already pretty red, and I’ve never heard you talk this much.”

“You’ve never heard me talk. Ever.”

But he’s right, I can feel my cheeks glowing. Just breathe in and breathe out. Flushed face and nausea are common for new users. I’m fine. Doing Concentrex is fine. My little sister has been slipping slides under her tongue ever since they started handing out homework, and she’s alright, medically speaking.

What I should be worried about is getting out of here. Zero hour is coming, and I don’t want to spend the night locked in a library with someone who is, judging by the size of his pupils, on something much stronger than Concentrex.

“Righto,” he giggles, getting up to leave. “Trex receptors. Trex-O-Raptor. Heheheh, Tyrannosaurus-Trex. Heheheh.”

“Wait,” I say, standing up to stop him. “Fifty dollars and two slides of Concentrex only buys a B. If you want an A, it has to be 100 bucks.”

“But I already fed you a 30-mic slide of adult ‘trex pro-bah-no,” he whines. “It’s just a Fictional Age paper. And you’re a buy-in, right? So what’s the problem?”

“My mother—” I never shout. I’m shouting right now, but I never shout. Especially in libraries. I look around to see if anyone is listening, but the place is nearly empty. Just a few students scattered across a sea of white monitors. Probably buy-ins like myself, already hustling for a few extra pre-midterm dollars.

“My mother,” I begin again, “works hard for me to be here because she wants her grandchildren to be legacy. Not me. I hate kids, and I don’t give a damn about legacy. I’m here to become one of the most valuable few. Not because I get to wear cool robes at public ceremonies, but because I actually care about preventing a second Fictional Age, a fate that seems more and more likely, the more time I spend with you.”

“C-calm down. Look, I-I’m not just some—”

“I was sober as a goddamn snow pea for every single moment of Lower. That means I know the difference between an A paper and a B paper, and that means I get to write whatever kind of paper I’m paid for. A hundred bucks and two 30-mic slides of Concentrex buys you a Higher Schooling A. That’s the deal. Curfew’s coming down in a few minutes, and I’m on the wrong side of a $5,000 student loan, so I really don’t have time to haggle.”

I reach out my hand for the deal closing shake.

“Fine. But half now, and half later.”

He gives my hand an abrupt chin-to-waist shake before plunging his arms up to his elbows into his pants pockets. The effort sends him stumbling back into a rolling library chair, and for a second I think he’s passed out. The chair is beginning to wobble out from underneath him, and the only sign that he’s still conscious is the determined rustling of his hands in his pockets. I want to warn him that he’s about to fall, but I can’t. His trousers have left me speechless. By the time he realizes his predicament, it’s too late. The chair sails out from beneath him, landing him flat on the floor with a hollow thud.

Is it possible he bought his pants like that? No, he must have modified them. Unless it’s a new trend. Maybe all the legacy students are sporting knee-deep pockets, only I didn’t notice because I’m lame and don’t have any hip friends to let me in on the latest fashions. They are pockets after all, not exactly the most eye catching part of an outfit. Who knows? This whole thing could go way deeper than I’m imagining. Maybe he’s part of an ancient clandestine textile trade. The Society of the Deep Pocket. “Ye shall know them by the depths of their pockets,” Matthew 7:16.

I emerge from my thoughts to find that Drug-Head is still on the ground, thrashing about like a trout, making a spectacle of himself for the entire library. Not that anyone is getting up to help. In fact, our new audience seems rather unimpressed.

“Ha,” an anonymous voice laughs, “the rapture took’m right there, didn’t it?”

I’m starting to fall back into one of those “Oh my God it’s come to this” evaluations of my life, a luxury I usually reserve for 5 a.m. Concentrex crashes, when I remember how late it is. If I don’t hurry up and show this boy how to pay me for writing his essay, I’m going to be taking care of him all night.

I bend down and yank his hands up out of his pockets. Both fists are clutching wads of pocket debris, so I decide to tap the left fist, causing him to reveal, among other things, two twenties and two slides of Concentrex. I grab all of it and position myself behind him. The library is still watching, but I help Drug-Head hoist himself up anyway. Some mushy backwards part of myself won’t spare me the pleasure of wrapping my arms around his body.

“That means 60 bucks when I’m done,” I say, slipping one of the slides under my tongue as I take off. But something papery feeling slips between my fingers as I struggle to shove the rest of his payment into my own toddler-sized pockets. Hoping that it’s overlooked money, I snatch it off the ground and continue speed walking toward the exit. It is not money. It is a miniature origami crane with writing on its wings. When I unfold the bird, I find the following written in bright purple ink:

Daganthony Foocow

Shaman Entrepreneur

(9)-245-3018

Hamilton Dormitories, Room #16

Heading for the dorm, the Concentrex must have me pedaling harder than I thought, because when I squeeze on the brakes to let a group of legacy students pass, my bike continues skidding forward until the front tire is nearly touching a suit-and-tie member of the procession.

“Do you know her?” commands a high-pitched voice from above the crowd. I look up to find a skeletal blonde girl towering over me. She’s perched atop the crowd on a platform of raised hands, flexing her nonexistent biceps as she points to the back of her tiny, cut-up t-shirt with two downturned thumbs. The shirt reads:

GET-OUT BUY-IN

in huge block lettering. Just above this imperative, printed in small lacy cursive are the words,

OTIUM CUM DIGNITATE

LEGACY CLASS ‘94

Do I know her? The question is nonsensical. Her smutty little shirt doesn’t even refer to anyone. Is she trying to imply that I am a buy-in, and thus asking if I know myself? Or rather, if I know that the message on her t-shirt is directed at students like myself? Either way, the fact that I’m ghosting Fictional Age papers for legacy students just so I can afford to stay in a place where I have to endure insults so lazily crafted that they are literally meaningless—yes, that I have to actually perform cognitive work just to be insulted by the very same students who rely on me to write their essays—well, it makes me want to dismount my bicycle and hurl it at blondie’s head like a discus.

“Your question is too stupid to answer,” I shout back. “Get your monkeys out of the way, I’m in a hurry.”

“I’m a monkey?” asks one of the guys in the group.

“Sure, we’re a barrel of monkeys,” says another guy.

The parade of students reaches back some 20 meters and is almost entirely male, all of them wearing suits. Sitting atop the procession are a handful of girls wearing almost nothing at all.

“Life ain’t so hard, is it baby? Why not smile a little, huh? What you need is to come out with us. Ditch the bike and we’ll give you a ride.”

“Ha,” mocks blondie. “Leave her alone. She wants to hide in her room before curfew starts. Look at her face. She’s afraid she might actually have fun.”

“Through or around,” I say. “I need to go through or around.”

“Two minutes to zero,” laughs another suit, making a show of checking his watch, “better put some verve in your vag.”

Blondie bends down to whap him on the head but ends up tumbling forward into the crowd. The confusion allows me to push my bike through the suits unmolested.

But it doesn’t matter. By the time I get to the buy-in dorms the gates are already locked for the night. Curfew is officially in effect.

“Émmmmmmm-eel-leeeeeee,” bellows a familiar voice. “Wuss happen’n, girl?”

It’s Inès, all 100 kilos of her.

“I can’t believe you’re out tonight, too!” she screams, encasing me in her form fitting arms.

“Actually, I just missed getting in,” I mumble, struggling to breathe through a mouthful of breast meat.

“Awwwww,” she sticks out her lower lip and makes a baby pouty face. “Émilie,” she croons, “come sit down and tell Inès all about it.”

“You’re really drunk.”

“Just ‘cause I’m floor counselor doesn’t mean I can’t have fun, right?”

Judging by her stare, the question isn’t rhetorical. “Right. But maybe you can use your floor counselor powers to unlock the dorms for a second. I have this Fictional Age paper, and—”

“Nopey-nope-nope. Sorry. Curfew is enforced for the good of everyone. Students with lots of homework or early classes need peace and quiet. If you have to be out past zero hour, you stay out until six. It’s the least we can do for our hardworking peers.”

“But I have a lot of homework and class in the morning. I am your hardworking peer.”

“Émilie,” she gasps. “I couldn’t open those gates even if I wanted to. Shush now and tell Inès about your day.”

“You want me to both shush and tell you about my day?”

“Shush now and tell me ‘bout yer day,” she confirms, stretching out on a nearby bench.

“And you’re positive you can’t open the gates?”

“Mmmmmmm?” she mumbles, shooting me a coy little shrug as she continues to situate herself on the bench.

“Listen, have you ever considered the possibility that zero-hour curfew was designed to thwart serious study, not encourage it? Like right now for instance. All the good first year buy-ins are supposed to be in their rooms, and all the bad first-year buy-ins are supposed to be out partying, right? But according to that logic, there are only about 12 good first-year buy-ins in all of Higher Schooling. Everyone vacates the dorms just before zero hour. It’s part of first-year culture.

“Curfew doesn’t insulate good students from bad students, it just radicalizes the otherwise moderate majority of students—the sorts who might go out around 22 and come home at 2—into all night bingers. Zero-hour curfew is, in fact, the worst thing to happen to serious study since the advent of co-ed dormitories, and I wouldn’t be surprised if the administration liked it that way.

“Think about it. First-year buy-ins pay more money and use less resources than any other caste of student. Their schedules are full of huge introductory lecture hall classes, so they’re cheap to educate, yet they’re forced to pay the grossly inflated costs of on-campus living because they ‘don’t come from a background of scholastic achievement’ and so ‘need the support of on-campus living to help them adjust to the rigors of Higher Schooling.’ When really, on-campus living is designed to sabotage first-year buy-ins through insidious policies like zero-hour curfew.

“But why?” I ask, reaching the crescendo of my performance. “Why would the administration undermine students? Because a first-year buy-in dropout never collects on her initial investment, that’s why. Her tuition fees and housing costs turn into donations, donations that the administration needs to keep Higher Schooling free for legacy students.”

I glance at Inès, hoping to catch some sign that my speech is working.

“Inès?” I ask.

“What?”

“Did you ever consider that?”

“Consider what?”

“Goddammit Inès, can you open the gates or can’t you?”

“Why would you want me to open the gates?”

“Because I need to research The Fictional Age!”

“Ohhh! The Fictional Age,” she moans, waving her hands in the air, “The Fictional Age. Hey everyone, stop everything, Émilie needs to do some more learning on The Fictional Age.”

“That is why I’m here.”

“The Fictional Age already happened. It’s over. Forget about it.”

“Oh, great. The floor counselor tells the history major to just ‘forget about it’ since it ‘already happened.’”

“Brigham Jackson told truth from lies in two thousand two hundred and forty-five. He—”

“Quit—”

“HE SOUGHT THE WISEST MEN AND CAUGHT,” she yells over me, “THE CLEVEREST THINK-TANK EVER BOUGHT.”

“You’re really—”

“TWO THOUSAND TWO HUNDRED AND FORTY NINE, HIGHER ED WAS BROUGHT IN LINE. NOW THE FACTS ARE KNOWN, SURE AND TRUE, BY OUR VALUABLE, MOST VALUABLE VERY FEW.”

“Is that it? You’re done now?”

She responds by sliding her face off the edge of the bench and exhaling two liters of pulpy, bright orange liquid onto the ground.

“There,” she spits. “That’s a free lesson on The Fictional Age. Go write a paper on it.”

“That’s nothing but a bullshit Lower Schooling nursery rhyme.”

“Just ‘cause you can’t go home doesn’t mean you have to stay here,” she slurs, rotating her body to face the back of the bench.

She’s right. I can’t go home, and thanks to that extra slide of Concentrex I slipped before leaving the library, I can’t sleep either. Dr. K’s 6:30 a.m. class can’t come soon enough.

“You’re going about this all wrong,” says Dr. K. “You don’t research The Fictional Age, you analyze it, as a phenomenon. What can we learn from it? What does it say about us as a people?

“The Fictional Age, everything between 2113 and 2245, is mostly a blank slate. Not so much a dark age as an age of blinding light. We have access to all the data, all the books, websites, and newspaper articles, but as conflicting primary sources approach 2113, they become equally credible, which is to say, equally dubious.

“Take the U.S. annexation of our beloved Quebec, which is supposed to have happened sometime around the 2190s. The president of the National Assembly of Quebec said one thing, the president of the United States said another, some talk show host said a third, and no one ever seems to have gotten to the bottom of it. All we know for sure is that Canada entered The Fictional Age as the second largest country in the world, water area included, and came out 1.542 million square kilometers lighter. It’s as if Quebecois tucked themselves into bed Canadian and woke up American.

“Except it wasn’t like that, it couldn’t have been. Now that The Fictional Age is over, we should be able to look back and sift through the data, right? To learn what really happened?”

Dr. K puffs himself up with a short intake of breath, glances around the empty classroom, then exhales. I always thought he looked young, even for a professor, but now that I’m up close, I can see that only half his face looks young. The other half hangs limp and wrinkled, as if gravity forgot about the one side of his face and made up for it by focusing on the other. He scans the empty lecture hall once more before stepping down from his podium.

“Have you ever talked to anyone that was alive during The Fictional Age?” he asks, lowering his voice. “A grandparent, maybe?”

“My grandmother, but she died a few years ago.”

“Can you remember if she told you anything about The Fictional Age? Was there anything she was certain you should know?”

I look down at my shoes and pretend to think. The Fictional Age was the only topic my grandmother ever condescended to discuss. She would sit at the dinner table all day, waiting to ambush the family with her autobiographical vignettes while we were at our most vulnerable. But was there anything she was certain I should know?

That she and her two sisters were the only set of triplets in all of recorded history to have three different fathers. I heard about that one every night for 16 years.

“All conceived the same night,” she used to say. “Last time mother saw any of those rascals alive, I figure. Unless you count the TV. Colette’s father was none other than the Premier of Quebec, though I can’t say it was much use to her. She and Claudine—that was my other sister—died wrestling each other over an empty watermelon. ‘Course they didn’t know it was empty at the time. That was during the drought, back when the market was awash with phony produce. Used to be you couldn’t tell a basketball from a cantaloupe, those bootleggers got so clever.”

“She said a lot of things,” I say, trying to affect a sort of wizened world weariness. My early morning slide of Concentrex is fizzling out, and I feel it.

“That’s The Fictional Age,” agrees Dr. K. “Everyone was saying a lot of things. Trying to learn about the actual events of The Fictional Age—what ‘really happened’—it’s hackwork. Any looney with access to server archives can dig up support for anything they want.”

“Higher Schooling has archive access?”

Dr. K fixes me with an uneven stare. I don’t feel sleepy, exactly, though I wish I did. I just feel blank.

“Yes. Of course. But as I’ve been patiently trying to tell you, there is no wisp of hay in that needle stack. Anyone could say anything they wanted to on the Internet back then. There were no safeguards, no arbiters of truth. Here.”

He whips out a thick, cream-colored slip of paper from his breast pocket and begins scribbling.

“Read this. J.D. Larson, The Dumb Led the Blind: How Laissez Faire Internet Policy Led to The Fictional Age. It’s a classic treatment. Larson argues that people in the 21st century, Americans especially, had a dialectical view of history—The Techno-Democratic Dialectic of History, she calls it. They thought the Internet was the next step in that dialectic, that it was going to bring people together, keep governments honest, make the free market freer, and provide a cheap, quality education to the masses. This ideological naiveté, Larson argues, enabled the Internet to go unregulated, and thus fill with misinformation. And once the Internet—”

Running ideas through my head is beginning to produce a dry, strained feeling, as though my neurons have run out of lubricant and are grinding up against each other.

“—which leads me to economist Burt Henzel’s Gresham’s Law in the Informational Economy.”

Some more Concentrex and I’ll be fine for the day. And some food. I need to eat some food.

“According to Gresham’s Law, bad money drives out good money. Henzel’s thesis is that the same goes for certain kinds of information. As the Internet began passing on more and more hoax articles and amateur editorializing, people started realizing that bad information was being accepted into common circulation. Dressing up a lie was easier, and maybe even more fun, than working to find the truth. People began passing off more and more lies, hoarding researched truths for those moments when they were really necessary. Thus, an overabundance of worthless information led to The Fictional Age. People were papering their homes in lies, so to speak.”

Drug-Head has the Concentrex, I have his number, stores have food, and Higher Schooling has archive access. It all fits.

“And along the same vein,” he continues, flipping the card over. “Miguel Silano’s The Internet, Degrees of Freedom, and the Entropic Journey into The Fictional Age. Silano gets at the same idea as Larson, only he comes at it from the standpoint of analytic philosophy, information theory, and thermodynamics. Perhaps that’s more up your alley?”

“Not really,” I mumble, as Dr. K continues writing. “Rhetoric and grammar were my strongest subjects.”

“Ok, well, let’s start with this. There are many more false propositions concerning any given state of affairs than there are true propositions, right? True propositions are rare, well organized, and useful. They have low entropy.

“But what the Internet did, see, is it introduced more ways for information to change. It gave information more freedom, and the more ways there are for something to become disorganized, the more quickly it will become disorganized. You understand? And a disorganized proposition is more likely to be false than true.”

“I don’t know anything about entropy. I study history to learn about phenomena at the level of human agency, not all-encompassing physical law. By whose decree did we become Americans? What kinds of people caused The Fictional Age, and how did they get out of it? These are the questions I want to answer.”

“Almost everything said by real people during The Fictional Age was false. Do you understand that?”

“But there was a fact of the matter. Things did happen.”

“I have another class coming in soon—”

“Where are the server archives?”

“Fine. Maybe you need to be dwarfed by the past before you understand. But keeping in mind that I’m talking to a young historian, a most valuable few in training, I feel it’s my duty to warn you that you are on a dangerous track. Academic papers based on questionable sources will earn you a failing grade at this institution, and what’s more, they risk throwing our entire society back into a second Fictional Age.”

“Where are the server archives?”

“Top floor of the Jefferson Building. Consider taking a nap before you go.”

“So the young, foxy, enigmatic historian calls upon Daganthony F. once again.”

Thank God he’s finally here.

“Not once again,” I say, spinning around in my dusty office chair. “First time. It was you who called upon me to do your homework, remember?”

“I remember. So how’s my paper coming?”

“How does it look like it’s coming?” I snap, referencing our surroundings.

We’re peering at each other over a honeycomb of exposed support beams on the top floor of the Jefferson Building, which is really just a small, sweaty garret smelling of untreated wood. What little space remains is filled by a modest computer crowded by column after column of external storage. These are the server archives.

“I hope you’re not taking this paper too seriously,” he says. “It’s just The Fictional Age. You shouldn’t have to do any research.”

“Yet here I am, going above and beyond. Doesn’t that qualify me for some kind of discount?”

“I don’t think we met under, uhh, the best of circumstances. You know my name, but I don’t seem to remember—”

“I don’t know your name.”

“It’s on my card, the one you used to call me.”

“Your name is not Daganthony Foocow.”

“Why not?”

“Because that’s ridiculous. Look, can we please just get this over with? I would like some more Concentrex. You have Concentrex. What happens next?”

“Hold up, am I getting this right? Are you actually trying to pretend that you’re above buying drugs while buying drugs?”

“Our last meeting ended with you rolling around on the ground, hands stuffed down your pants, flapping and squirming like a penguin in a straightjacket. Forgive me if I’m eager to get down to business.”

“Cool then, let’s get down to business. First question: What are you really doing up here? Because if you’re really spending your time looking through websites from The Fictional Age, then I don’t think I can sell you any more ‘trex.”

“I’m doing it for your paper.”

“Do you know how whacked out you look right now, hunched over that little monitor in this weird attic? You’re wearing the same clothes as you were last night.”

“Of course, that would be the one thing you remember.”

“I’m not some slum lord dope pusher, alright? I don’t sell to ‘trex heads. And don’t say you’re doing this for my paper. Anything you learn in here can only hurt my grade.”

“Fine, you’re right. I’m doing it for me. Who else would have any interest in The Fictional Age? No one in Higher Schooling, that’s for sure. I just wish someone would have told me that back in Lower, before I turned myself into a grade grubbing show dog to get here. It would have taken only a minute. Just a quick tap on the shoulder and a, ‘Oh, Émilie, you want to learn about The Fictional Age? I’m so sorry. Yeah, see, we actually kind of frown on that in Higher. It tends to make our drug dealers uncomfortable.’”

“Shaman Entrepreneur.”

“What?”

“I prefer Shaman Entrepreneur.”

“Right, so you’re a shaman and I’m a crazy ‘trex head. Well what if I told you that I’ve been finding blog entries from as late as 2255 describing the Brigham Jackson Interventions of ‘45 in the present tense, like they were just happening then? I bet that would make me a real kook, wouldn’t it? And what if I told you that some articles refer to Brigham Jackson as ‘Brigham-Jackson,’ or ‘Brigham & Jackson,’ like they were two separate people, or a company or something? And here,” I shout, pointing to the monitor, “the Brigham Jackson Interventions entry on Wikipedia is listed under the category ‘Hoax History’ for eight months in 2246, before being listed under ‘Coup d’Etat’ for two months in 2247. Is that interesting, or am I just some sweaty tweaker?”

“It’s interesting, but it’s also only as likely to be true as anything else on the Internet at that time.”

“Exactly, which means it’s also just as likely to be true as the version of history we’ve been taught. We could still be in The Fictional Age.”

“So what, then? That leaves us where we started, like all questions about what ‘really happened’ always do. Maybe The Fictional Age wasn’t the tragedy you think it was, maybe it was the beginning of a realization. There are so many ways to look at things, so many perspectives. And just because something is true for me, doesn’t mean it has to be true for you, right?”

“Everything is either true or false, but not both.”

“How can you be so sure? How do you know there’s only one right answer?”

“Sloppy questions allow for many right answers. But if the question is specific enough, if it’s careful and exhaustive, then there can only be one right answer. Always. Forever.”

Neither of us says anything as we stare at each other over the attic’s wooden support beams. The lull in conversation stretches into an uneasy silence.

“Ok,” he says, breaking the tension, “Well there’s no way I’m selling you any more ‘trex.”

Then, as if in response to his resolution, a burst of laughter and applause fills the attic from somewhere below.

“Then go,” I shout over the tumult. It’s Thursday, and the inaugural whooping of today’s on-campus parties is only going to get louder. “Or is that too ambiguous a message? I really need our truths to match up here. How about this: Get-the-hell-out-of-my-attic. There. What’s your perspective on that?”

He takes the hint, leaving me with the vague impression that I should feel bad. Luckily I’m too tired to care. I lay my chin down on the desk and wonder what would have happened had I laid it down differently. Would the universe have conformed to my action and recorded the change? Or would it have just kept on chugging in the same direction it is now, unaltered?

These are the sorts of thoughts I find in the long, blank hallway that separates a Concentrex crash from real sleep. And though the hallway is already extending, pulling me backwards into unconsciousness, I can still make out the archive monitor at its far end, flashing with defunct advertisements for penis extenders, diet pills, and car insurance. These are advertisements without origin, incapable of selling anything, cut adrift from their creator like the flickering light of a long dead star, and eventually, as I continue receding toward sleep, the dead lights of these advertisements merge with the rest of the world at the end of the hallway, until my eyes are closing and everything has become small and indistinguishable.

Soon I’ll be asleep, and this small, diminishing light will be nothing but an uncorroborated memory. That’s ok. It really happened. And that’s all it takes to end a Fictional Age, one person who can remember something that really happened.

Kelle Dhein is a Ph.D. student in the Biology & Society Program at Arizona State University.

Better Data for Better Mental Health Services

In 1948, Mary Jane Ward’s best-selling semi-autobiographical novel, The Snake Pit, brought widespread attention to the deplorable conditions in state psychiatric hospitals. Subsequently made into an Academy Award-winning movie, the novel’s vivid descriptions of understaffing, overcrowding, and inhumane treatment profoundly affected the general perception of treatment for individuals with serious mental illness (SMI) and prompted many states to begin making significant reforms. Widespread recognition of the need to improve the care of this vulnerable population, which had been so shockingly neglected, served as a major impetus to the development of a policy known as “deinstitutionalization.”

Deinstitutionalization shifted much of mental health care for individuals with SMI (schizophrenia, bipolar disorder, major depression, and other disorders that can result in significant functional impairments) from inpatient state psychiatric hospitals to outpatient community settings. The guiding principle of deinstitutionalization was that individuals with SMI should receive treatment in the “least restrictive setting.” This view emerged from the confluence of many factors, including the history of abuses in state hospitals, the development and widespread availability of new psychotropic medications, and an increasing societal concern for civil liberties. In particular, the advent of new antipsychotic medications in the 1950s and 1960s allowed, for the first time, limited control of delusions and hallucinations, and therefore made life in the community a possibility for persons with serious and chronic mental disorders.

The dramatic changes in the mental health system that have taken place over the past 50 years had their origins in the Community Mental Health Act, signed in October 1963 by President John F. Kennedy and conceived with the noblest of intentions. But in policy makers’ haste to correct the abuses revealed in state hospitals, deinstitutionalization was carried out, in the words of psychiatrists H. Richard Lamb and John Talbott, writing in the Journal of the American Medical Association (JAMA) in 1986, with “much naïveté and many simplistic notions.” In his recent book, American Psychosis, E. Fuller Torrey, a former National Institute of Mental Health psychiatrist, traced such notions to the Interagency Committee on Mental Health, whose 1962 report influenced the subsequent law: “Because no committee member really understood what the hospitals were doing, there was nobody who could explain to the committee that large numbers of the patients in these hospitals had no families to go to if they were released; that large numbers of the patients had a brain impairment that precluded their understanding of their illness and need for medication; and that a small number of the patients had a history of dangerousness and required confinement and treatment.”

Torrey, founder of the Treatment Advocacy Center, a national nonprofit organization dedicated to eliminating barriers to the treatment of severe mental illness, argues that the 1963 law was fatally flawed because it encouraged the closing of state mental hospitals without any realistic plan as to what would happen to the discharged patients, especially those who refused to take medication they needed to remain well. It did not include a plan for the future funding of mental health centers, and it focused on prevention when no one understood enough about mental illnesses to know how to prevent them.

Discharging long-term patients from institutions was a way for states to cut their expenses, since outpatient therapy and drug treatment were less expensive than inpatient care. Increasing attention to the civil liberties of those involuntarily hospitalized also brought the enactment of laws in many states that made it much more difficult to hospitalize the mentally ill against their will.

However, although the number of patients discharged from state hospitals increased and the number of inpatient psychiatric beds declined precipitously after 1960, the planned network of 1,500 community mental health centers, which was intended to assume responsibility for the care of those with SMI, failed to fully materialize because of a chronic lack of funding and shifts in political priorities. Only half of the proposed centers were ever built.

For years following the initial wave of deinstitutionalization, many individuals with SMI—either newly discharged from state hospitals, or in psychiatric crisis—were left to fend for themselves in “board-and-care” homes or group homes with little or no supervision or treatment other than psychotropic medication. These homes were often clustered in certain communities; one of the best-known locations in the 1970s was Long Beach on New York’s Long Island, which housed hundreds of former patients discharged from several very large state hospitals located nearby (some of these hospitals, such as Creedmoor and Pilgrim State, had more than 10,000 beds in the mid-1960s). At the time, the concerns of mental health professionals and advocates focused on the potential for residents of these board-and-care homes to be victims of crime and on quality of life issues raised by a lack of appropriate treatment, lack of daily structure or employment, and isolation and lack of social support.

Deinstitutionalization debacle

Today, deinstitutionalization is viewed by most experts as a policy failure, and the mental health system more broadly is recognized as unable to meet the needs of persons with SMI. Many experts also believe that these failures lie behind increases in homelessness among seriously mentally ill persons, as well as a dramatic rise in the number of persons with SMI seen in hospital emergency departments, whose share of total visits grew from 5.4 percent in 2000 to 12.5 percent in 2007. This increase has also led to the need for “boarding” when no psychiatric beds are available; the average wait time for a psychiatric admission in general hospitals is currently more than 18 hours, compared with just under six hours for non-psychiatric admissions.

One consequence of these problems has especially come to the fore. In the past few years, tragic and violent events such as the mass shootings in Newtown, CT, and Aurora, CO, have given a new impetus to ongoing concerns about the adequacy of mental health treatment. Following the shooting at the Navy Yard in Washington, DC, in September 2013, Jeffrey Lieberman, president of the American Psychiatric Association, issued a statement titled “The U.S. Mental Health Care System is Broken” in which he noted that there had been 21 mass shootings in the country since 2009, and that the perpetrators in over half of these cases were suffering from, or suspected of having, an SMI. He emphasized that the system of mental health care in the United States is “inadequate, where individuals with mental illness too often fall through the cracks.”

The dramatic and continuing reduction in the number of inpatient state psychiatric beds in recent decades has been a source of concern and alarm among many observers in the field. According to “No Room at the Inn,” a 2011 report by the Treatment Advocacy Center, the number of public psychiatric beds in the United States per 100,000 population fell from 340 in 1955 to 17 in 2005.

Some observers have suggested that the decrease in public beds has been at least partially offset by increases in beds in private psychiatric and general hospitals. But as psychiatrists Benjamin Liptzin, Gary Gottlieb, and Paul Summergrad pointed out in a 2007 commentary in the American Journal of Psychiatry, although it is true that there were very modest increases in both types of facilities in the 1980s and 1990s, the numbers have subsequently decreased to near their previous levels. Among the likely reasons for the decline are poor reimbursement for psychiatric hospitalizations from all payer sources and conversion of these beds to medical-surgical beds, which were in greater demand and also contribute much more to hospital margins.

Similarly, the Subcommittee on Acute Care of the New Freedom Commission appointed by President George W. Bush reported in 2004 that from 1990 to 2000, the number of inpatient beds per capita declined 44 percent in state and county mental hospitals, 43 percent in private psychiatric hospitals, and 32 percent in nonfederal general hospitals. And more than three-fourths of psychiatric beds in general hospitals are in private facilities that are often reluctant to admit uninsured individuals or those who are deemed to be “disruptive” or “too violent.” The American Medical Association describes the problem of access to psychiatric beds and overcrowding of emergency departments as “an urgent crisis and a national disgrace.”

The sharp decline in the number of beds and the changing philosophy regarding hospitalization have led to a decrease in the median length of stay (LOS) in state facilities. Historically, the presumed purposes of state mental hospitals were to monitor the course of illness and provide psychiatric treatment, medical care, rehabilitation, short- and long-term asylum, residential care, crisis intervention, and social structure. With a relatively short LOS, however, many of these goals are not attainable, leading to a qualitative change in the type of care these facilities are able to provide, from long-term treatment to acute care with relatively quick discharge. When patients’ conditions are only partially stabilized at discharge, and if they are discharged without adequate attention to transition planning, then outpatient treatment regimens are more likely to fail, resulting in frequent hospital readmissions.

But there has been another alarming, if predictable, consequence of the reduction in national capacity to treat people with SMI. Seventy-five years ago, in a seminal article called “Mental Disease and Crime: Outline of a Comparative Study of European Statistics,” Lionel Penrose, a British psychiatrist, medical geneticist, and mathematician, found an inverse relationship between prison and mental health populations, and theorized that if one of these forms of confinement is reduced, the other will increase. It seems reasonable to assume that many of the individuals with SMI who are seen today in jails and prisons in the United States, particularly those who committed minor crimes, could just as easily have been hospitalized if psychiatric beds had been available. As H. Richard Lamb points out, “Unfortunately, the inadequate and underfunded community treatment of persons who are the most difficult to treat, and the insufficient number of hospital beds (acute, intermediate, and long term) for those who need them, are some of the realities of deinstitutionalization that have set the stage for criminalization.”

A special report by the Bureau of Justice Statistics in 2006 found that at mid-year 2005, more than half of all prison and jail inmates had some type of mental health problem; other studies have found that between 1984 and 2002, the estimated prevalence of SMI among male jail inmates tripled, from 6.4 percent to 17.5 percent. In November 2014, the Treatment Advocacy Center reported that approximately 20 percent of inmates in jails and 15 percent of inmates in state prisons had an SMI. Based on the total number of inmates, this would translate into approximately 356,000 inmates with SMI in jails and state prisons—10 times more than the approximately 35,000 individuals with SMI remaining in state hospitals.
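The arithmetic behind that tenfold disparity is easy to reproduce. The sketch below is illustrative only: the inmate population totals are rounded assumptions (roughly 745,000 people in local jails and 1.35 million in state prisons, figures not given in this article), to which the Treatment Advocacy Center’s prevalence estimates are applied.

```python
# Rough check of the Treatment Advocacy Center figures cited above.
# The population totals are assumptions (rounded national estimates),
# not numbers reported in this article.
jail_population = 745_000             # assumed local jail population
state_prison_population = 1_350_000   # assumed state prison population

smi_in_jails = 0.20 * jail_population               # ~20 percent of jail inmates
smi_in_prisons = 0.15 * state_prison_population     # ~15 percent of state prisoners
total_smi_incarcerated = smi_in_jails + smi_in_prisons

print(f"Estimated inmates with SMI: {total_smi_incarcerated:,.0f}")   # roughly 350,000
print(f"Ratio to ~35,000 state hospital patients: {total_smi_incarcerated / 35_000:.0f}x")
```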

Ethicists Dominic Sisti, Andrea Segal, and Ezekiel Emanuel point out that care for an inmate with mental illness in a correctional institution ranges from $30,000 to $50,000 per year, compared with $22,000 per year for an inmate without mental illness. Moreover, they describe the environment for inmates with mental illness as “anathema to the goals of psychiatric recovery”—often unsafe, violent, and designed to control and punish. They maintain that long-term inpatient settings are a “necessary but not sufficient component of a reformed spectrum of psychiatric services” that will continue to be essential to mental health patients who cannot live alone, cannot care for themselves, or are a danger to themselves or others.

“The New Asylums,” a segment of PBS’s Frontline series that first aired in 2006, provided an in-depth look at Ohio’s state prison system and the complex and growing issue of caring for mentally ill prisoners. The filmmakers had access to prison therapy sessions, mental health treatment meetings, crisis wards, and prison disciplinary tribunals; and the film provides a graphic and disturbing portrait of the new reality for the prison system and for its mentally ill inmates. Scenes of group therapy sessions conducted while the participants are confined to metal enclosures—essentially small cages—are particularly unnerving, as is the sequence when the camera follows a group of prison guards dressed in black riot gear with facemasks and helmets as they enter the cell of an agitated psychotic prisoner who needs to be transported to the prison’s infirmary.

A dearth of data

In a world where both policy and medicine are increasingly expected to be “evidence-based,” the evidentiary basis for addressing SMI in the United States is disturbingly weak. From assessing hospital and residential care capacity to developing consensus on diagnosis and treatment regimens, many important questions remain unanswered.

A recent search of the Department of Health and Human Services’ (HHS) National Registry of Evidence-Based Programs and Practices, which focuses exclusively on mental health and substance abuse services, found that only 30 of the 355 entries in the database mentioned “serious mental illness,” “schizophrenia,” or “psychosis,” suggesting the need for increased efforts at consensus development regarding evidence-based practices for people with SMI. A July 2015 report by the National Academy of Medicine points out that “a considerable gap exists in mental health and substance abuse treatments known as psychosocial interventions between what is known to be effective and those interventions that are commonly delivered . . . [This gap] is due to problems of access, insurance coverage, and fragmentation of care—which include different systems of providers, separation of primary and specialty care, and different entities sponsoring and paying for care.”

Although the prevalence rate of SMI is relatively similar across states, the number of state psychiatric hospital beds per 100,000 civilian population currently varies widely from state to state, from 3.9 beds per 100,000 population in Arizona, to 30.1 beds per 100,000 in Wyoming. The reasons for these differences are not well understood, but they seem to reflect the lack of consensus on the purpose of these beds; many are used for forensic purposes (for example, for defendants found not guilty by reason of insanity) and thus are not available to other persons with SMI living in the community. In 2010, 38.2 percent of state psychiatric hospital budgets nationwide were accounted for by forensics and sex offender services, up from 25.6 percent in 2001. A report by the Treatment Advocacy Center found that nationwide in 2010 about one-third of public psychiatric beds were occupied by forensic patients, but the situation differed dramatically among the states, ranging from 66 percent in Ohio and 57 percent in Oregon, to less than 5 percent in Idaho, Iowa, Mississippi, New Hampshire, North Carolina, North Dakota, and South Dakota.

One possible reason for the apparent differences among the states is that there is neither consensus on the nature of an inpatient psychiatric bed, nor on the number of inpatient psychiatric beds needed in the United States; nor is there a generally accepted or agreed-upon method among policy makers or researchers for projecting or estimating how many beds are needed. Although planners in many states have devoted serious effort to grappling with this issue, as evidenced by recent reports issued by such organizations as the Washington State Institute for Public Policy, the California Hospital Association, and the North Carolina Department of Health and Human Services, a consistent and effective strategy remains elusive. A 2012 survey by the National Association of State Mental Health Program Directors found that only 16 states reported having any method for making such projections; most of those states that did report a method indicated that they relied on previous use data or benchmarking against other states. My own search of the literature did not uncover other documented methods, with the exception of a commercial simulation model for mental health planning called Planning by the Numbers (PBN), which was initially developed over 30 years ago and may still be in use, although I could not find any information on its potential users or their experiences. As with any model, projections using PBN will vary widely, depending on assumptions about the availability of resources in the community and attitudes toward hospitalization. Its value for informing policy is thus unclear, all the more so given the lack of uniformity around definitions of key variables such as “inpatient bed,” as well as other data problems discussed here.
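To make the benchmarking approach concrete, the following minimal sketch shows how such a projection is typically computed; the population and benchmark rates are hypothetical placeholders, and the spread in the results illustrates why the choice of benchmark matters so much.

```python
# Minimal sketch of a benchmark-based bed projection of the kind described
# above: apply a beds-per-100,000 rate (borrowed from another state or a
# target standard) to the local population. All numbers are hypothetical.
def project_beds_needed(population: int, benchmark_rate_per_100k: float) -> int:
    """Inpatient psychiatric beds implied by a benchmark rate."""
    return round(population * benchmark_rate_per_100k / 100_000)

state_population = 6_000_000            # hypothetical state population
for rate in (10.0, 20.0, 30.0):         # candidate benchmark rates
    beds = project_beds_needed(state_population, rate)
    print(f"{rate:.0f} beds per 100,000 -> {beds:,} beds statewide")
```

A threefold difference in the assumed benchmark translates directly into a threefold difference in projected need, which is one reason the lack of consensus on definitions and target rates undermines the usefulness of any such model.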

Complexities in gathering and analyzing data related to psychiatric beds do not end there. Forensic patients may have longer lengths of stay than other patients, which complicates state-by-state comparisons. Many states contract with private psychiatric hospitals or community hospitals for the use of beds, complicating the definition of a “state” bed. Obtaining consistent information regarding the number of psychiatric beds in private psychiatric hospitals and community and general hospitals is also difficult, since beds in designated psychiatric units in general and community hospitals often do not generate as much income as beds for other purposes, and psychiatric patients may be housed instead in “swing” or “scatter” beds in medical and surgical units.

The federal government has no oversight or regulatory role in relation to the number of psychiatric beds or the appropriate ratio to total beds, and experts and stakeholders alike disagree about how many beds there should be—or even if they are needed at all. For example, the “consumer/survivor” movement, which has gained widespread attention over the past two decades, is predicated on the idea that SMI is best dealt with through mutual support from peers with mental illness who have “survived” the interventions of psychiatry. Some individuals within that movement believe that encouraging adherence to medication regimes is “paternalistic,” that inpatient hospitalization has no place in the mental health system, and that “recovery” should be entirely self-directed. Many professionals, however, believe that state hospitals play a crucial role in the continuum of care, and that there will always be some individuals whose disorders cannot be treated solely in the community and who need the structure of a more protected setting. In the words of Howard Goldman, a well-known expert in mental health policy, “Many [researchers] have foundered on the shoals of trying to address and answer the question of how many psychiatric beds are needed.”

If good data on psychiatric hospital capacities are scarce and ambiguous, data on residential care—another important component of the treatment system for people with SMI—are even more problematic. Detailed and reliable information on the number of beds in residential settings is very limited and difficult to interpret. Definitions vary widely across states, leading to inconsistencies in reported numbers. In a 2004 article on this problem, psychiatrist Martin Fleishman observed that residential care facilities (RCFs) are also known as board-and-care homes, adult residential facilities, community care homes, and sheltered care facilities, among many other names, which, in turn, has discouraged national statistical categorizations. Fleishman further notes that “Data collection for RCF patients is complicated by the fact that it is difficult to distinguish the RCF population from the population of other community-based domiciles for long-term patients, such as nursing homes.” Based on data from California, he estimates that almost 160,000 persons who are mentally ill occupy RCF beds in the United States, although he suggests it is likely that the real number is considerably higher.

The lack of good data on psychiatric residential facilities is hardly surprising. A 2007 national survey of regulation and certification for these facilities found the regulatory environment to be very complex: in most states, several separate state agencies with differing missions and functions (state mental health authorities, departments of health, departments of social services, and so on) are involved in the licensing, funding, and oversight of the facilities. According to Fleishman, “Because of the difficulties in obtaining reliable statistics, little research has been done on the population of persons with mental illness who require long-term care, and the most effective modalities of treatment have yet to be determined.”

Comprehensive national data on residential psychiatric facilities are also critical to a complete understanding of treatment for persons with SMI. Such data are not available, however. The most recent national survey of psychiatric residential facilities for adults in the United States was conducted in 1987. And although the U.S. Department of Housing and Urban Development conducted a new national survey of residential facilities in 2010, that survey description states that “residences licensed to serve exclusively persons with mental illness, mental retardation, or developmental disabilities are ineligible,” although no explanation is given. An extensive, multi-stage national survey of psychiatric residential facilities currently being conducted in Italy might serve as a useful model for such an effort in the United States. The Progetto Residenze (PROGRES) residential care project, funded by the Italian Institute of Health, is described as “the first systematic attempt in Italy to fill the gap between psychiatric services planning and evaluation, by setting up a network of investigators throughout the country and evaluating an entire typology of services in a consistent fashion.” Similar studies of residential facilities have recently been conducted in Australia and Denmark.

Needed: policies for evidence

Researchers working to understand the prevalence of behavioral health disorders currently depend on large-scale, federally funded household surveys, such as the National Survey on Drug Use and Health (NSDUH). Such computer-assisted data collection efforts excel in providing self-reported data on trends such as the use of illegal drugs, but are limited in the data they can provide on mental disorders, particularly those of a more serious nature. For example, as household surveys, they capture the “civilian, non-institutionalized population,” but they do not collect data from individuals in correctional and psychiatric institutions, or from the homeless. Although they do provide some information on self-reported SMI, the methodology of these surveys limits their ability to capture data on individuals with the most severe conditions.

How, then, might data collection on SMI be improved? One possibility, suggests William Eaton of Johns Hopkins University, would be to re-examine the potential for the type of “psychosis registries” that existed in certain locations, such as the states of Maryland, Hawaii, and North Carolina; Monroe County (Rochester), New York; and Washington Heights, New York, in the 1950s through 1970s. There are comprehensive psychiatric registries in Denmark, Sweden, Finland, Israel, and Taiwan, among other countries. Comparable comprehensive longitudinal databases do exist today in the United States for other types of illnesses; for example, the Surveillance, Epidemiology, and End Results (SEER) Cancer Registries have been run by the National Cancer Institute since 1973. It is possible that health maintenance organizations such as Kaiser Permanente, Geisinger, and Group Health Cooperative will push their data collection toward these types of comprehensive longitudinal registries. Although confidentiality concerns may present a considerable challenge, the breadth of information that such registries could provide would potentially be extremely valuable to researchers, stakeholders, and policy makers seeking to improve services to the SMI population. Another approach might be to consider altering existing household surveys to inquire specifically about siblings and other household members with SMI who are not currently residing in the home, assuming that this could be done within the limitations of confidentiality and privacy laws.

Improved data from modified survey methods or psychosis registries might help researchers to develop algorithms for estimating the frequency and intensity of episodes that are likely to require crisis intervention, short-term, and longer-term hospitalization among persons with SMI. Such estimates are a necessary foundation for planning new facilities. Increased attention to the collection of data on location and availability of mental health resources in communities, and improved identification of areas with shortages of mental health facilities and providers, is also important; new mapping technologies may prove to be valuable tools for the assessment and redistribution of such resources.

National-level data are also needed on the availability and effectiveness of other services. In addition to a lack of community mental health centers, many communities are unable to provide the “wraparound” services that persons with SMI often need, such as supported housing, vocational education, social and peer support, crisis management teams, and interventions such as “assertive community treatment.” Such services are often costly and not reimbursable, although they are widely believed to be important for individuals with SMI.

Making progress on helping people with SMI will depend not just on new drugs but on good information on which effective policies and treatment regimens can be based. To start with, we need to understand how the current policy regime, with its focus on deinstitutionalization, has influenced the delivery of mental health services over the past five decades. As Richard Frank and Sherry Glied observed in their 2006 book Better But Not Well, a comprehensive, longitudinal database would provide the best foundation for such an assessment. Given that no such single database exists, Frank and Glied instead combined information from multiple sources—administrative data, epidemiological surveys, general health and medical surveys, and research studies on the effectiveness of specific therapies. They concluded that “improved treatment for mental illness, a growing supply of mental health professionals, and enhanced private insurance coverage have contributed to greater use of services by those with less serious conditions,” although it is not clear that people with SMI have benefitted from these improvements to the same degree.

Lack of coordination among federal programs also contributes to the challenge of good data, sound analysis, and effective policies. A 2014 U.S. Government Accountability Office (GAO) report entitled “Mental Health: HHS Leadership Needed to Coordinate Federal Efforts Related to Serious Mental Illness” found that the 112 federal programs that generally supported individuals with SMI were spread across eight federal agencies, and that only 30 of the 112 programs were specifically targeted toward persons with SMI.

The report also found that “agencies completed few evaluations of the programs specifically targeting individuals with serious mental illness.” GAO recommended that HHS establish a mechanism to facilitate interagency coordination across all programs that support individuals with SMI, and also that a coordinated approach to program evaluation should be implemented. In its written comments on the report, HHS disagreed with both recommendations.

The development of a comprehensive and coordinated research agenda for improving delivery of services to persons with SMI is crucial if the situation is to be improved. This agenda, on which federal, state, and local governments should collaborate, must include a focus on the identification and dissemination of evidence-based practices, and should emphasize the development of financial and regulatory incentives, such as pay-for-performance approaches, to encourage high quality care.

Awareness of the urgent need for such efforts is growing. The Helping Families in Mental Health Crisis Act of 2013 (H.R. 3717), re-introduced in the House in June 2015 by Rep. Timothy Murphy (R-PA) and Rep. Eddie Bernice Johnson (D-TX), would create an Assistant Secretary for Mental Health and Substance Use Disorders as well as a National Mental Health Policy Laboratory and an interagency Serious Mental Illness Coordinating Committee. The legislation has strong endorsements from organizations such as the American Psychiatric Association and the National Alliance on Mental Illness, and momentum for mental health reform appears to be building. On November 5, 2015, the Energy and Commerce Health Subcommittee voted on a bipartisan basis to advance the legislation, which would increase funding for additional outpatient and inpatient treatment slots, add new enforcement provisions to the mental health parity law, and ease some privacy restrictions to help parents obtain more information about their adult children’s treatment. The bill would also allocate more money for research into the causes and treatment of mental illness and remove a rule that bars Medicaid from paying for mental health treatment and physical health treatment on the same day. It would also establish a new office at HHS devoted to providing oversight of the federal government’s role in mental health care, headed by the Assistant Secretary for Mental Health and Substance Use Disorders.

Better data are critically needed in order to inform these proposed policy changes. Steps toward this goal should include the kinds of measures discussed above: modifying household surveys, re-examining psychiatric registries, adopting consistent definitions of inpatient and residential capacity, and coordinating the evaluation of federal programs.

As the Affordable Care Act brings positive changes in the health care system as a whole, it will be vitally important to ensure that substantial and widespread improvements in the care of persons with SMI, and increases in appropriate and adequate facilities, are included. More comprehensive data are crucial to assist policy makers in focusing on those parts of the mental health system most in need of attention, and to aid in developing solutions for this most vulnerable population.

Citizen Engineers at the Fenceline

Environmental regulators would do a better job protecting air quality and public health if they worked with local communities.

On August 22, 1994, the Unocal refinery in Rodeo, California, along the north end of the San Francisco Bay, began to leak a solution of Catacarb through a small hole in a processing unit. While the prevailing winds blew the toxic gas (used by the refinery to separate carbon dioxide from other gases) over the neighboring community of Crockett, Unocal workers were instructed to contain the release by hosing down the unit, but to keep operating. Unaware of the ever-expanding leak, Crockett residents began to experience sore throats, nausea, headaches, dizziness, and other problems, their symptoms worsening over the next two weeks. Unocal finally shut down the unit on September 6, when a neighboring industrial facility, the Wickland Oil Terminal, complained that the refinery’s expanding leak was sickening its workers.

The 16-day release made the shortcomings of ambient air monitoring, which residents of Rodeo, Crockett, and other so-called “fenceline communities” had been complaining about for years, suddenly very visible. If there had been regular air monitoring in the area, or even if residents had had the capacity to test the air once they started to suspect that their symptoms stemmed from chemical exposures, the release likely would have been detected sooner, and the damage to workers’ and community members’ health—which in many cases seems to have been permanent—would have been mitigated.

Community-led innovation

The release galvanized residents of Crockett and Rodeo, who until that point had largely been complacent about the refinery’s presence. As a result of their activism, two new community-centered monitoring techniques emerged. Residents’ lawyers commissioned an engineering firm to develop an inexpensive, easy-to-use air sampler to give them a way to quantify chemical levels when air quality seemed particularly bad. The device, known as the “bucket,” was subsequently adapted by engineers and organizers with the nonprofit Communities for a Better Environment (CBE) for widespread dissemination, and it is currently used by fenceline communities around the world. Beyond helping neighborhoods closest to refineries know what they’re breathing, the bucket has become a cornerstone of advocacy for more comprehensive air monitoring: users take each bucket sample as an occasion to point out the lack of information being generated in their communities during potentially dangerous releases and to criticize industry and government agencies for their apparent lack of interest in finding out what fenceline communities are breathing.

The second innovation in air monitoring that arose from the Catacarb release, although less well known, has been an important complement to the bucket. In the wake of the accident, Crockett and Rodeo residents successfully demanded that Unocal’s land-use permit not be renewed unless a real-time, “state-of-the-art” air monitoring system was installed at the refinery’s fenceline. Residents were instrumental in designing “the Fenceline,” as the system has come to be known locally, and for nearly two decades it has been held up by bucket users around the country as the “gold standard” for air monitoring that should be required of all petrochemical facilities.

These two new monitoring technologies partially addressed and amplified fenceline communities’ long-standing criticisms of the way environmental regulators measured air quality. Monitoring sites established by state and regional agencies to assess compliance with the Clean Air Act are set up away from large sources like refineries with the aim of getting data that would represent the airshed as a whole. And the little monitoring that was being done by agencies in fenceline communities was conducted only after residents complained about odors, flaring, or other releases from neighboring facilities—and always, residents charged, many hours after the worst pollution had dissipated. Fenceline communities thus were left without information about what they were breathing.

With the invention of the bucket, community members suddenly had the ability to respond to releases themselves, taking 5-minute samples that represented air quality during the worst periods of pollution. These data helped fill the information gap. When regulators and industry criticized bucket data—raising questions about their credibility or arguing that they painted a skewed picture of air quality—community groups took it as an opportunity to highlight the shortcomings of agency monitoring, pointing out that, in most cases, the supposed experts had no data at all. Simultaneously, the development of the Fenceline not only gave Crockett and Rodeo residents continuous information about their air quality in real time; it also offered a concrete example of what regulators could and, according to residents, should be doing to assess air quality and protect communities.

Yet it still took two decades of sustained community activism around air monitoring to push regulators to change their approach. In 2013, the Bay Area Air Quality Management District (BAAQMD) proposed a new refinery rule that would require monitoring at the fencelines of the five northern California oil refineries it regulates (including the Rodeo refinery, now owned by ConocoPhillips spin-off Phillips 66) and in nearby residential areas. Fenceline air monitoring requirements are also a feature of the U.S. Environmental Protection Agency’s (EPA) new refinery rules, adopted in September 2015. (The BAAQMD’s rule was still in the process of getting final approval at the time this article went to press.)

The specifics of the air monitoring required by the two rules are very different: the BAAQMD rule calls for real-time monitoring of a number of chemicals, whereas the EPA rule requires benzene sampling that triggers remedial action if measured concentrations exceed a specified level. Although the very inclusion of monitoring in these rules is a victory for fenceline communities, arguably neither is up to the task of protecting air quality and, ultimately, residents’ health. Looking back to the story of how the Fenceline was set up and how it has evolved points to two important ways that the rules should be strengthened—if not now, then in subsequent iterations: by creating better mechanisms for presenting, interpreting, and using monitoring data, and by including neighboring communities in the design and operational oversight of fenceline monitors.

Issued by Contra Costa County just a few months after the Catacarb release, Unocal’s 1994 renewal of its land-use permit stipulated that the company would design, install, and test “an improved air pollution monitoring system” that would “include infrared or other state-of-the-art remote sensing technology.” But when Unocal brought its design to the county, residents found that “their [Unocal’s] idea of what state-of-the-art was and our idea of what state-of-the-art was were considerably different,” according to Jay Gunkelman, who lived in Crockett at the time of the release. Unocal wanted to use technology already common in the industry: hydrocarbon monitors at a few points around the refinery’s perimeter. Residents—including Gunkelman, a self-described “geek” who at one time developed brainwave biofeedback instruments and now specializes in computer analyses of electroencephalograms (EEGs), and Howard Adams, a Ph.D. chemist who worked in the research department at Chevron for 20 years before moving to Crockett in the mid-1980s—wanted instead to adopt remote sensing devices used by the military to detect chemical weapons in the first Gulf War. The monitors used infrared or ultraviolet (UV) beams of light in conjunction with advanced sensor technology to measure chemical concentrations along a section of the refinery fenceline, not just at a single point, by analyzing the wavelengths absorbed by chemicals along the light beam’s path. The residents’ proposal also included Tunable Diode Lasers (TDLs) to measure hydrogen sulfide and ammonia and, because “open-path” monitors of this sort don’t function well in inclement weather, hydrocarbon monitors similar to the ones in Unocal’s proposed system.
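In rough terms, open-path instruments of this kind infer gas concentrations from the attenuation of the light beam, a relationship usually described by the Beer-Lambert law. The simplified statement below is an illustrative sketch, not something specified in the residents’ proposal or the permit:

\[
\frac{I(\lambda)}{I_0(\lambda)} = e^{-\varepsilon(\lambda)\, c\, L}
\qquad\Longrightarrow\qquad
c = \frac{1}{\varepsilon(\lambda)\, L}\,\ln\frac{I_0(\lambda)}{I(\lambda)},
\]

where \(I_0\) is the intensity of the light in the absence of the target gas, \(I\) is the intensity actually reaching the detector, \(\varepsilon(\lambda)\) is the gas’s absorption coefficient at wavelength \(\lambda\), \(L\) is the length of the beam’s path, and \(c\) is the average concentration of the gas along that path. The path-averaged nature of \(c\) is what lets a single instrument cover a whole section of fenceline; it also points to a trade-off that surfaces later in the story, since a longer path improves coverage but leaves less light reaching the detector.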

In the disagreement over what counted as “state-of-the-art,” county officials sided with the community, and Unocal was sent back to create a detailed plan that included elements of the community’s proposal. Meanwhile, Crockett residents worked with CBE engineer Julia May to do their own testing of the instruments for which they had advocated. Andy Mechling, who in his job at a camera shop was always the one to set up new equipment when it came in, found that testing out the monitors was right up his alley. Although he had moved away from Crockett with his family when a 1995 tank fire exacerbated chemical sensitivities they had developed in the wake of the Catacarb release, he commuted back to town to operate a borrowed infrared open-path monitor, known as an FTIR, on a neighbor’s roof.

The battle for data access

In the end, residents got the system they wanted. The final Memorandum of Understanding (MOU) that spelled out the monitoring required by Unocal’s permit included FTIRs, comparable UV systems, the TDLs, and organic gas detectors for both the Crockett and Rodeo sides of the refinery. But the residents’ fight for information about what was in the air wasn’t over, because the agreement did not provide for open access to the data. Instead, a video of the monitors’ computer interface was transmitted by modem to a terminal in one resident’s home. That resident, according to the terms of the MOU, could monitor measured chemical levels but was restricted from sharing any of the data until three business days after they were recorded.

The arrangement was cooked up to quell refinery officials’ fears that data—especially data that had not undergone thorough quality assurance—might panic the public. To resident Ed Tannenbaum, it was technologically backwards: “I saw this screen and I said ‘Well, why isn’t this online?’ This was 2000 by now.” An electronic artist who already had experience building websites, Tannenbaum found a way to capture screenshots from the terminal in his neighbor’s house and put them on the web where anyone could look at the air data. Technically, his website violated the terms of the MOU between the refinery and the community. So the county stepped in: the county government was also entitled to the data but not bound by the same rules, and it agreed that the data should be made public. Tannenbaum then developed his website—which posted a .gif file every few minutes—under the auspices of the county. As Tannenbaum remembers it, the refinery was finally forced to allow the site to pull actual data directly from the monitors to make the website compliant with disability access law: his screenshots couldn’t be read by assistive technologies for the visually impaired, whereas the numbers could be. Still under pressure from the county, the refinery contracted with Argos Scientific to build a real-time website as part of satisfying requirements for a new land use permit in 2002. When Argos’s design did not meet with community members’ approval, the company helped Tannenbaum incorporate a real-time feed into a site he had designed.

As residents were trying to improve their access to data from the Fenceline, the limitations of the system were becoming clear. The TDLs could detect only large-scale releases of chemicals, making them good for emergency response but not for giving residents information about the odors they periodically experienced. The FTIRs went offline frequently because they had been set up with a path length that was too long, which resulted in too little light reaching the sensor for the system to quantify the chemicals in the air. And the UV systems couldn’t tell the difference between benzene and ozone.

Community members pushed for upgrades to the system, but—as with the initial installation of the Fenceline—the refinery agreed only when it needed permits for new land uses. The UV monitors were replaced in 2002, in conjunction with a new Ultra Low Sulfur Diesel project at the refinery; and when ConocoPhillips applied for a land-use permit for its Clean Fuel Project in 2006, residents used the company’s eagerness to get on with the project to negotiate an updated MOU: the FTIRs would be replaced, the UV systems upgraded, and all of the monitors would have to be online and operational an average of 95% of the time. Under the terms of the new agreement, Conoco also agreed to pay for a portable open-path monitor to be operated by residents and for laboratory analysis of a fixed number of ad hoc, short-term air samples collected at residents’ discretion—creating the possibility of representing air quality in the community itself and not just at the refinery’s fenceline.

The spread of fenceline monitoring

Crockett and Rodeo residents’ struggle to get and maintain a working fenceline monitoring system had ripple effects in other refinery communities. Even early in the history of the Fenceline, Denny Larson, former CBE organizer and founder of Global Community Monitor, a nonprofit that spreads buckets to communities around the world, was portraying the system in Crockett to communities in Texas, Louisiana, and elsewhere as an example of what the refineries next door to them could—and should—be doing. Communities wanting to emulate Crockett’s example gained an important resource and ally when the contract for operating the Crockett-Rodeo Fenceline was taken over by Cerex Environmental Services. Co-owner Don Gamiles, a physicist and entrepreneur with a passion for open-path monitors like the ones deployed in the Fenceline, saw in fenceline communities an important new market: although community groups could not afford to purchase and maintain the sophisticated instruments, organized communities could still create business for Gamiles by compelling companies to agree to monitoring.

With Cerex—and Gamiles’s next company, Argos Scientific—willing to rent monitors to community groups for demonstration projects, provide technical support to communities, and work with refineries compelled to develop monitoring systems that would satisfy residents, other communities began to advocate for, and win, fenceline monitoring. In Chalmette, Louisiana, residents working with the Louisiana Bucket Brigade, an environmental justice nonprofit that offers technical and organizing assistance to communities, independently operated an open-path UV monitor for several months, documenting a violation of the EPA’s 24-hour sulfur dioxide standard in the process. Their efforts not only prompted an enforcement action against nearby Exxon-Mobil, they helped residents win a real-time fenceline monitoring system that the Louisiana Department of Environmental Quality (LDEQ) operated for the next several years. Activists in Port Arthur, Texas, and Benicia, California, also successfully fought for real-time monitoring.

All of these projects were, however, plagued to some extent by the same issues that Crockett residents faced. How the information would be provided was a frequent bone of contention. In Benicia, for example, monitors operated for two years without data ever becoming available to the public because residents and the Valero refinery could not agree on a data-presentation format. A significant issue, which had also arisen in Crockett, was whether monitoring readings should in some way be flagged as dangerous (“red”) or potentially dangerous (“yellow”) when they reached certain concentrations—and, if so, what concentrations would mark the change from safe to unsafe. The sustainability of the monitoring programs was also a problem: Crockett and Rodeo residents had to remain active on the monitoring issue, at times appealing to the county’s authority to withhold the refinery’s land-use permit, to ensure that the Fenceline was kept up to date and in good repair. In Benicia, residents were not able to convince Valero to extend the monitoring program beyond their initial two-year commitment, nor could they persuade the city government to take it over. And the LDEQ-run monitoring program in Chalmette was pared back dramatically after a “final report” concluded that air quality met all relevant standards.

Imperfect regulations

On August 6, 2012, the Bay area saw another major refinery accident. At the Chevron refinery in Richmond, a corroded pipe ruptured, releasing flammable gas that subsequently ignited and sent up a large smoky cloud over the East Bay. Hundreds of residents went to area emergency rooms for treatment of respiratory problems and vomiting presumably caused by exposure to the large amounts of sulfuric acid and nitrogen dioxide released by the fire. Yet in the immediate aftermath of the accident, residents did not have access to information about the chemicals they were breathing. Contra Costa County officials told reporters and the public that their monitors had detected no hazardous chemicals. No fenceline monitoring was in place at the time of the accident, either: Chevron had been talking for years to residents, City of Richmond officials, and Don Gamiles about establishing a system similar to the one at the Rodeo refinery but, in the absence of a deadline from the city, it had not been completed.

Where the Catacarb release catalyzed innovations in community monitoring technology, the Richmond fire spurred a new wave of efforts to make fenceline monitoring standard at refineries. Chevron moved quickly to establish real-time air monitoring at its perimeter and in the community with the help of Argos Scientific, whose design for the system was informed by the company’s experience with the fenceline at Rodeo. By the spring of 2013, residents could look at real-time data on a website modeled on the one developed for Benicia (which was never made public).

The fire also prompted the BAAQMD to make the development of new refinery rules a priority. In March 2013, the district released a draft rule that, among other things, would require the five refineries in its jurisdiction to establish fenceline and community air monitoring systems. The accompanying guidelines for monitoring, informed by both public comments and an “expert panel” that included Gunkelman and Larson, clearly reflect lessons learned from residents’ experience with the Fenceline in Crockett and Rodeo. Guidelines include requirements for data completeness—a minimum of 75%—and specify that refineries’ monitoring plans must provide for making monitoring data available to the public in real time, via a website or some similar means.

During the same period, the EPA was revising its own refinery regulations. The rule it put out for comment in May 2014 and adopted in September 2015 also calls for ambient air monitoring at refinery fencelines, but its focus is on controlling “fugitive emissions” from leaky valves and seals. Real-time, open-path monitoring is among the approaches the rule considers, but it concludes that fenceline monitoring with open-path UV systems, although technically feasible, is cost prohibitive. The rule opts instead for passive sampling for just one chemical, benzene, with each sample representing a two-week period. Environmental justice activists, including those associated with Bay area refinery communities, have criticized the proposed requirements for their limited scope and their poor temporal resolution, both clear weaknesses of the strategy relative to the kind of fenceline systems the BAAQMD rule calls for.

But the EPA’s monitoring scheme has an additional feature that highlights weaknesses in BAAQMD’s rule. While the Bay area rule focuses exclusively on generating and providing air quality information, the EPA’s rule lays out a plan for assessing and acting on the air data generated. Data from two-week samples are to be compiled into an annual average, which in turn is compared to a “concentration action level.” If the average benzene concentration at a refinery’s fenceline exceeds nine micrograms per cubic meter for any 26-sample (52-week) period, the refinery must take action to reduce its fugitive emissions.
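A minimal sketch of that action-level logic, as summarized above, might look like the following; the sample values are invented, and details such as how a single fenceline concentration is derived from multiple passive samplers are omitted.

```python
# Sketch of the EPA rule's benzene action-level comparison as summarized above:
# two-week passive samples are rolled into a 26-sample (52-week) average and
# compared against 9 micrograms per cubic meter. Sample values are invented.
from collections import deque

ACTION_LEVEL_UG_M3 = 9.0   # annual-average concentration triggering corrective action
WINDOW = 26                # number of two-week samples in a 52-week period

def rolling_exceedances(samples_ug_m3):
    """Yield (sample_index, rolling_average) whenever the 52-week average exceeds the action level."""
    window = deque(maxlen=WINDOW)
    for i, value in enumerate(samples_ug_m3):
        window.append(value)
        if len(window) == WINDOW:
            avg = sum(window) / WINDOW
            if avg > ACTION_LEVEL_UG_M3:
                yield i, round(avg, 2)

# Hypothetical fenceline benzene readings, one per two-week period.
readings = [6.0] * 20 + [14.0] * 10
for idx, avg in rolling_exceedances(readings):
    print(f"Sample {idx}: 52-week average {avg} ug/m3 exceeds the action level")
```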

Despite its other shortcomings, then, the EPA’s rule addresses one problem that has plagued fenceline systems: how the data should be interpreted and presented to the public. Where communities like Crockett and Benicia have struggled with refineries over the levels at which readings should be coded “yellow” or “red,” the EPA’s rule specifies exactly what concentration is considered to be of concern. It remains unclear whether the presence of the Fenceline in Crockett and Rodeo has resulted in any change at the refinery. Although residents maintain that the refinery is more vigilant because it knows it is being watched, there is no evidence that the data are informing refinery practices, or even that refinery officials pay attention to the data. In contrast, the monitoring required by the EPA rule has the potential to trigger corrective action that would measurably reduce the refinery’s fugitive emissions, which, because they occur at or near ground level, are thought to have an especially significant impact on community health.

If government agencies are going to succeed in protecting the health and safety of fenceline and other communities vulnerable to airborne toxic chemical releases, rules for air monitoring at refineries and other large point sources will need to incorporate the best features of both the EPA rule and the BAAQMD’s regulation. As in the BAAQMD’s guidelines, air quality measurements should be taken often enough and be available quickly enough to bring to light releases from accidents and inform emergency response. Residents should also have information about the highest levels of pollutants they are being exposed to and the durations over which they are exposed—information not available from long-term sampling strategies. And monitoring should measure as many as possible of the potentially deleterious chemicals released by refineries, not just a single proxy chemical, no matter how strategically chosen. However, for extensive, temporally fine-grained monitoring programs to be effective in reducing communities’ exposure to toxic chemicals, they need to be implemented in the context of a well-specified framework for understanding the results, one that identifies not only what levels may be a cause for concern—as websites for data from Crockett-Rodeo and Richmond monitors now do—but also what levels require action to be taken by facilities responsible for the pollution.

The still-missing masses

Different as they are in their visions for fenceline monitoring, the BAAQMD and EPA regulations share one feature that misses a larger lesson of the Crockett-Rodeo Fenceline: monitoring design and implementation are left up to experts. The EPA rule prescribes in detail the monitoring approach to be used and lays out the site selection and data analysis tasks to be undertaken by technical people at the refinery. Community groups do not figure into the process of siting, sampling, or evaluation (although the rule-making itself, of course, included a public comment period). The BAAQMD’s rule is less prescriptive, outlining basic components of an acceptable monitoring system and requiring each refinery to make a plan and submit it for the agency’s approval. To the agency’s credit, its expectations for monitoring plans have been informed by residents’ experience designing and trouble-shooting the Fenceline in Crockett and Rodeo. In addition, refineries are expected to invite public comment on their monitoring plans—but BAAQMD’s guidelines suggest that community involvement is to happen after refinery experts have come up with a plan.

BAAQMD and the EPA fail to learn from the crucial role that residents played in setting up the fenceline air monitoring system in Crockett and Rodeo, and, just as importantly, from the contributions that members of the Fenceline’s Community Working Group continue to make in overseeing the system’s operations and understanding its results. Engaged citizens were integral to designing a fast, sensitive monitoring system, at a time when there were no models for fenceline monitoring at refineries. They also catalyzed further improvements to the system by identifying weaknesses in the original design and remedies for them. They created means for data from the monitors to reach the community at large, and they now work with Argos on improvements to the website—most recently by asking the company to show wind direction on a map alongside readings from the monitors. Residents’ on-going engagement with air monitoring issues has even helped improve local emergency managers’ understanding of the effects of chemical exposures on human health: in June 2012, the refinery released hydrogen sulfide from a sour water tank, which stores wastewater during its treatment process. Monitors at Phillips 66’s north fenceline and a monitor in a residential area of Crockett registered high levels of the gas, and many residents felt ill as a result of the incident. Yet the measured levels of about 10 parts per million (ppm) hydrogen sulfide did not trigger a shelter-in-place warning that would have indicated to residents that conditions were potentially dangerous to their health; such a warning would have been triggered at 15 ppm. Because fenceline monitoring enabled residents to make a clear link between the 10 ppm level and adverse health effects, they were subsequently able to convince Phillips 66 to lower the threshold for a shelter-in-place specified in its MOU, thus improving the health benefits of the local Community Warning System.

Neither the BAAQMD nor EPA rules seems designed to encourage the kinds of opportunities to improve the Fenceline that Crockett and Rodeo residents have had as members of the Fenceline Community Working Group, and that they have made for themselves through their attention to local permitting processes. Community groups are not invited into the design of monitoring systems. Opportunities for public comment, where they exist at all, happen after a monitoring plan has already been sketched out. And public involvement ends once a plan is in place, suggesting that the plan can be implemented in its optimal form by experts with no need to learn from subsequent experience.

The history of community involvement in Crockett and Rodeo shows the limits of that approach. Community members’ experimentation with the system and knowledge of its operations were critical in the design phase, not just to ensuring the acceptability of the system to the community, but also to creating a robust system. And once operational, community involvement helped to improve the system, and to highlight data that could feed back into local government policy to improve outcomes. Given the particular geography and chemical profile of every refinery, the lesson from Crockett and Rodeo is as much about the value of community involvement and local learning and innovation as it is about the technical specifications of a monitoring system.

More effective government regulations would set up the design of fenceline and community monitoring systems as a collaborative process from the start—one in which community members, industry, contractors, and regulators have the opportunity to learn from each other. In the Bay area case, for example, this could be accomplished through monitoring guidelines that require community members to sign onto a monitoring plan before it can be approved. Regulators could further facilitate these collaborations by offering to lend their expertise in both air monitoring and community collaborations; both BAAQMD and the EPA can now point to projects that have been quite successful in the latter regard.

Regulations should also make provisions for ongoing community involvement. Oversight, in particular, should be a collaborative process, since community members have different incentives than refineries to keep monitoring systems up-to-date and operating well; their outside-the-fenceline perspective could also offer insight into how adequately the monitoring system, once it is up and running, is representing local conditions. The most recent MOU between Phillips 66 and Crockett and Rodeo residents builds in opportunities for continued involvement by stipulating that the company will meet with the Community Working Group at least quarterly “to review fenceline monitoring system performance.” Agency rules could specify that a similar agreement should be one component of a company’s monitoring plan.

Finally, although making fenceline data available online was an important step in the development of monitoring systems, more work remains to be done to create interfaces that allow members of the public to explore the data, query it, look for trends, and connect it to what they see, hear, smell, and feel when chemical concentrations spike. As part of creating frameworks for interpreting and acting on data, regulatory agencies should attend to this need for better interfaces, whether that means expanding monitoring guidelines to include issues of accessibility and ease-of-use, or finding ways to integrate fenceline monitoring data into the interactive tools for accessing environmental information that the EPA and other agencies already host. In any case, of course, members of fenceline communities will be important contributors to design teams for the interfaces.

In the realm of fenceline air monitoring, the 1994 Catacarb release was a seminal event. It inspired the buckets, which have become a potent symbol of activists’ calls for better air monitoring in fenceline communities. And, equally important, it led to the behind-the-scenes work that produced a “gold standard” fenceline monitoring system that bucket activists could advocate for—and that at least one California regulatory agency is now using as a model for the fenceline monitoring that it hopes to require of all oil refineries. But just as the fenceline monitoring requirements in the EPA’s new refinery rules would benefit from the BAAQMD’s more comprehensive monitoring approach, the BAAQMD rules would be strengthened considerably by the development of predetermined thresholds that trigger action to address toxic releases. And both should make provisions for community participation in all stages of monitoring design, implementation, oversight, improvement, and data interpretation. Not every resident of a fenceline community will have the interest, dedication, or time to delve into the intricacies of ambient air toxics monitoring. But as the history of the toxic releases in Rodeo and Crockett shows, local knowledge and motivation can be powerful sources of technological and policy innovation on behalf of public health and the public good at fencelines that separate everyday life from chemical hazards.

Recommended reading

Christine Overdevest and Brian Mayer, “Harnessing the Power of Information through Community Monitoring: Insights from Social Science,” Texas Law Review 86, no. 7 (2008): 1493-1526.

Gwen Ottinger and Rachel Zurer, “New Voices, New Approaches: Drowning in Data,” Issues in Science and Technology 27, no. 3 (2011).

Gwen Ottinger is an assistant professor in the Department of Politics and the Center for Science, Technology, and Society at Drexel University.

No, Really, Why Are We Waiting?

British economist Nicholas Stern’s Why Are We Waiting? revolves around an ethical argument he made a decade ago on the economics of climate change in his famous “Stern Review.” Stern pushes total intergenerational equity: we should not discount future generations’ welfare simply because they do not yet exist. It’s a bold position that’s unpopular among his economist peers, and it leads Stern to endorse dozens of policy options to address climate change throughout this new book.

Stern’s policy pluralism is appealing in many ways. But the rest of his book stands on a jumbled account of political economy and history—in particular, the history of energy transitions, which are so crucial to his argument about climate action. With this shaky foundation, it’s disappointing but not surprising that Stern doesn’t convincingly answer his own question: Why are we waiting on climate action?

Failed markets

In contrast to many climate authorities, Stern does not idolize one policy approach to climate change at the expense of others. Stern also seeks multiple motivations that policy makers might use to justify climate action. This is a hallmark of democratic decision making in which the different parties have distinct rationales for pursuing a course of action. In addition to a lengthy meditation on ethics, he also includes arguments about conventional pollution, “green growth,” and the co-benefits of zero-carbon technologies.

Unfortunately, his helpful suite of climate solutions emerges from an unhelpful assessment of the root of the climate problem.

Stern agrees with the environmental economic convention that climate change is fundamentally a problem of “market failure.” But where most economists would contend that climate policy is intended to fix one market failure—greenhouse gases—Stern identifies six: greenhouse gases; underinvestment in research, development, and deployment (RD&D) of energy technologies; imperfections in capital and risk markets; lack of coordination among networks; imperfect information; and under-recognized co-benefits. One wonders how a problem that results from six market failures can or should be fixed by marshaling “market-based” policy instruments.

To elaborate, the allure of a market-based policy is in its purported ability to fix something that’s broken within the market. This makes sense in a relatively static system. For instance, increase the cost of something socially undesirable (like smoking) with a tax, and demand for it will decrease. But neither energy infrastructure nor political economies are static; they are, rather, what theorists call “complex adaptive systems.” In these constantly evolving systems, policy and technology, rather than price signals alone, chart the course. Markets work within these systems to deliver information efficiently (prices are essentially information about the relative supply and demand of a good), but if the goal is to change the system, markets are not really up to the task. For instance, a market-based policy like a price on carbon might encourage consumers to buy more fuel-efficient cars, but it will fall well short of revolutionizing global energy infrastructure and technologies.

What makes Stern unique is the way he combines neoclassical economics—the conventional allegiance to rational markets, price signals, and static equilibrium—with evolutionary economics—which looks to institutions and directed technological innovation to help explain things like economic growth or structural change. He invokes foundational evolutionary theory, including Nikolai Kondratieff’s “waves of innovation” and Joseph Schumpeter’s “creative destruction,” arguing as they did that energy transitions are fundamentally shifts to new “techno-economic paradigms.”

By setting the table with Kondratieff and Schumpeter, Stern seems to understand the essential role of proactive states and technological innovation in driving energy transitions. But he immediately follows with an insistence that unlike past transitions, “policy to correct market failure is now central.” Why would we seek to use different tools for this transition than those that have worked in the past? How can a market-based solution be central to a transition that is not fundamentally market-based, but technology- and policy-based?

History reveals that prices and scarcity have rarely, if ever, driven large-scale energy transitions. We transitioned away from whale oil after discovering much more abundant and useful petroleum-derived kerosene, not because we were running out of whales. Electricity and the internal combustion engine emerged after decades of technological toiling and created whole new uses for energy. War, perhaps the most state-directed of all enterprises, accelerated the diffusion of several key energy technologies, most notably the internal combustion engine, jet turbines, and nuclear reactors. More recently, hydraulic fracturing in shale emerged from government-funded labs and has diffused because of its usefulness in cost-effectively accessing previously hard-to-reach natural gas deposits. This is the work that governments have done since Alexander Hamilton invented industrial policy, not as a corrective to proliferating market failures, but as foundational and continuous policy to create and shape markets themselves.

Were we to build a new climate-friendly, techno-economic paradigm, we would recognize that energy systems are not fixed, but dynamic. We would realize that we are living through a moment in a centuries-long process of energy transitions, not one market fix away from perfecting the energy system. We would emphasize tools that have worked in the past: tools like industrial policy, public-private RD&D to make clean energy cheaper, and adaptive regulatory platforms to accommodate new technologies in society. No matter how dramatic the assessment of climate risk, the solution lies in government’s long-established responsibility to accelerate technological innovation and deliver clean, cheap, abundant energy.

Stern endorses many of these policies and deserves credit for venturing outside the neoclassical dogma. But each of Stern’s endorsements responds to one of his numerous market failures, belying an allegiance to a neoclassical framework that has little relevance to the history or future of technology and energy transitions. Stern devotes very little time in his book to actually studying these transitions. (For a better treatment of this history, see the work of Roger Fouquet, Vaclav Smil, or Arnulf Grubler.)

Deniers aren’t the problem, and more science isn’t the solution

Stern is confident that the shift to clean energy will be successful because he sees precedents in the past. But they’re the wrong precedents! I would have thought that in comparing the modern energy transition to history, Stern would examine earlier energy transitions. Instead, he highlights smoking, leaded gasoline, drunk driving, and HIV. This echoes the common comparisons “climate hawks” make between clean energy and marriage equality in the United States, the latter of which was actually successful precisely because it did not require a rapid transformation of global infrastructure, investment patterns, and consumption.

With the wrong historical comparisons in mind, when Stern attempts to answer the question of “why we are waiting” to confront climate change, he disappoints.

In the first pages of the book, Stern insists that “communication of the science is crucial” to convince the public and politicians of the need for action. And in his final chapter, the first answer to his question “why are we waiting” is “the communication deficit.” This could not be further from the truth. In Stern’s own formulation, the science already tells us that global catastrophe is just around the corner, on a spectrum from massive economic losses to the possibility of human extinction, and drastic, decades-long action must commence immediately. What could more science do to deepen that conclusion or make it seem more dire?

To the contrary, mounting evidence over the past two decades confirms that “more science” actually sharpens polarization. There is an abundance of scholarship that would immediately complicate Stern’s position on the usefulness of more science to policy making (see, e.g., Daniel Sarewitz’s “How science makes environmental controversies worse” and Matthew Nisbet’s “Communicating climate change: Why frames matter for public engagement”). It is especially odd that Stern does not heed Dan Kahan’s criticisms of the “deficit model,” where Kahan argues that the public takes positions on issues through cultural and value lenses, not scientific information. Since Stern cites Kahan’s work later in the chapter, it is a wonder he does not realize this.

Stern blames the “deniers/delayers” for society’s climate inaction, writing that “climate deniers, vested interests, and ideologues have attacked both the scientific evidence on climate change and politicians’ attempts to regulate emissions and build strong carbon prices.” Although this is an extremely common excuse for inaction among climate policy advocates, there is little evidence for it, and it is a shame an expert like Stern would fall for it. Climate policy has failed in Europe because of weak economies and public unwillingness to accept higher energy prices. In the United States, the 2010 climate bill was defeated by manufacturing interests and coal-state Democrats. Many congressional Republicans, of course, also opposed the bill for small-government reasons, not primarily because they denied the science of climate change (Senators Lindsey Graham and John McCain being key examples). If there is any influential climate denial lobby driving growth in emissions in emerging economies such as China and India, I have yet to see anyone identify it. Stern does not.

Large majorities in every major nation on the planet accept the scientific consensus that humans are causing climate change. Acceptance (or denial) of the science is not the problem. Readers who had hoped for a real treatment of this subject would be better served by Mike Hulme’s phenomenal Why We Disagree About Climate Change (2009) or John Vogler’s Climate Change in World Politics (2015).

Innovation is key

The real problem is that despite the marketing on display in Stern’s book and elsewhere, clean energy simply hasn’t yet matched fossil fuels on cost, availability, or scalability in all contexts. Progress has been impressive, but we can’t take our eyes off the goal of sustained technological innovation.

Because Stern heaps praise on so many different solutions, he speaks approvingly of a few of my favorites. He should be lauded for championing green industrial policy and ambitious clean energy RD&D. Even better, he demands that we use international institutions to accelerate energy innovation, arguing (as my colleagues and I did in our 2014 report, High-Energy Innovation) that we should emulate the successful agricultural innovations achieved by the organizations at the heart of the Green Revolution. He also advocates advanced nuclear power, agreeing with the International Energy Agency that nuclear power could become “the largest single source of electricity” in the world by 2050.

As such, I don’t want to be overly critical of Stern. Compiling a broad set of approaches to address the changing climate should have been the first step pursued by international negotiators 25 years ago. Instead, they demanded an unworkable framework of legally binding international emissions targets, influenced and supported by climate hawks who demanded a laser focus on increasing renewable deployment and energy efficiency, mainly through market mechanisms such as cap-and-trade and carbon taxes.

But Stern’s lack of historical perspective is unfortunate. For such an exhaustive text, Why Are We Waiting? confuses more than it clarifies.

Questions of Fairness

This is a highly useful and aptly titled book, virtually a one-volume education in the array of normative challenges posed by the various aspects of our global energy regime. That “regime” is the interlocking structures and practices by which energy of all sorts is generated, distributed, and consumed. An “energy-just world,” according to authors Benjamin Sovacool and Michael Dworkin, is one that “equitably shares both the benefits and burdens involved in the production and consumption of energy services, as well as one that is fair in how it treats people and communities in energy decision making.”

Inequity inevitably permeates this regime. Supply, cost, governance, and the hazards of extraction, generation, distribution, and use all create winners and losers. An “energy-justice” perspective attempts to grapple openly with these difficulties, acknowledging that technical and economic analyses alone cannot. A key problem—but one turned to an advantage by this volume’s authors—is that centuries of philosophical endeavor have yielded many ways to slice the apple of “justice.”

Current, comprehensive, and nicely integrative, Global Energy Justice should serve students, scholars, and policy makers well—insofar as they might want an excellent and well-referenced tour of a vast landscape. A book like this, however, will likely frustrate anyone looking for a “best” way to think about and sort through the tangle of policy dilemmas and competing priorities that face us on energy and the environment. After pushing my way through this long but unfailingly engrossing book, I emerged even more firmly convinced that real environmental justice necessitates a strong role for energy justice.

Structure is a major virtue in this work. Eight core chapters offer identical framing questions: What is reality? What is justice? What is to be done? Each chapter launches with a concrete example that effectively propels the reader toward one of eight energy problems linked to eight justice principles. In the process, the reader receives a short introductory course in the various philosophical approaches to justice; for some readers this will be new territory, for others a reunion, via energy policy, with old acquaintances. Enter Aristotelian “virtue” to pave the way for a foray into energy efficiency: the authors invoke Plato and Aristotle to suggest that from such a perspective, the virtuous fulfillment of the energy system’s “essential purpose” would be the minimization of energy waste, a goal far from realization in numerous respects “related to energy supply, conversion, and end-use.”

Next up is the problem of externalities, to be explored with reference to John Stuart Mill, Jeremy Bentham, and “utility.” In similar fashion, Jürgen Habermas matches up with due process and “procedural justice,” John Rawls and Amartya Sen with energy poverty and “welfare and happiness,” and so on. It will surprise no one that foremost anti-statists Robert Nozick and Milton Friedman turn up in the chapter on energy subsidies and “freedom.” The philosophical exposition is clear throughout, though perhaps (this reader suspects anyway) too simplistic to satisfy the most dedicated devotees of these veins of thought. More importantly, the authors promptly follow each of these excursions with a section entitled “What is reality?” detailing the many real-world energy-related departures from the philosophical ideals described.

The book is a two-fold intellectual rescue mission. On the one hand, and most importantly, it attempts to restore (or arguably, to inject fully for the first time) a focus on justice to an energy policy discourse from which it has gone missing. In one early statement that frankly startled me, the authors report that “a series of recent content analyses of the top energy technology and policy journals confirms the perceived unimportance of justice as both a methodological and topical issue.” Only six of 5,318 authors had demonstrable “training in philosophy and/or ethics” and there was only one appearance of the word “justice” in a title or abstract.

The second, and related, component of rescue is the book’s determination to counter the perennially “dominant”—the quotation marks are the authors’—position in climate change and other energy policy matters taken by economic cost-benefit analysis, which naturally privileges readily quantified, often monetized, benefits and costs and, unless aggressively supplemented by normative assessment and deliberation, tends to subordinate matters of ethics and fairness. Indeed, I very much had the sense—perhaps necessary in a project that clearly consumed years of precious scholarly effort—that the authors wrote an up-to-date version of the book they wish they had been able to read while in school.

The book is in line with a recent maturation and deepening of environmental justice scholarship. We have moved past case studies of valiant, distressed communities and crude efforts at quantification (such as the 1987 report “Toxic Wastes and Race”) that served as transparent launching pads for whatever “Not in My Back Yard” (NIMBY) efforts activists believed worth conjuring under the guise of “environmental racism.” However effective this earlier work may have been in raising public awareness, it was much less helpful for deciding what is to be done. These authors understand that the pursuit of justice is complex, contingent on context, and approachable from multiple vantage points with intellectual honesty.

Sovacool and Dworkin’s “divergent strands” approach to energy justice is worth pausing over. A “global energy system that fairly disseminates energy services, and one that has representative and impartial energy decision-making” involves three “key” elements: costs (including “how the hazards and externalities of the energy system are imposed on communities unequally”); benefits (including “how access to modern energy services and systems are highly uneven”); and procedures (or “how many energy projects proceed with exclusionary forms of decision-making that lack due process and representation”).

The hazard for some naïve readers of a book like this is that the proliferation of both ideals (the “happiness, welfare, freedom, equity, and due process for both producers and consumers”) and tools meant to serve them (including various forms of participatory engagement and systematic second-guessing) can appear less problematic than they are likely to prove in practice. This is just to belabor an obvious point: all reforms have unanticipated costs, limitations, and consequences for which we must be ever alert. (On this point see David Konisky’s recent book, Failed Promises: Evaluating the Federal Government’s Response to Environmental Justice.) In the real world, of course, the philosopher’s eye for justice must mesh with the manager’s focus on implementation, the politician’s emphasis on “deals that are do-able,” and, yes, the quantitative analyst’s head for numbers, or so we preach to the students in every public policy graduate program of which I am aware.

Simple strengths of this volume for students and novices include some useful “energy basics,” a dose of technology history, and a cascade of arresting “factoids” laced throughout to great effect. Refrigerators, we learn, consume about 17 percent of the average household’s electricity. The coal and oil/gas markets differ notably in structure. The globe is home to 75,000 power plants, almost 20,000 of them in the United States, and “the country has more electric utilities and power providers than Burger King restaurants.”

For a reader who has himself tilled the fields of “environmental justice,” it is refreshing to read a presentation that deals capably and creatively with both the polluting externalities and the energy-supply deficiencies that bedevil the global poor. Too often in the past we have been offered only the former, analyses spawned by the industrial West and the peculiar varieties of disadvantage and grievance that predominate within it (that is, a focus on ambient emissions and fear of novel technologies more than the ruinous effects of energy and technological inadequacy). Such an environmental justice posture can be of scant comfort to those situated, say, among Paul Collier’s famous “Bottom Billion”—families and communities that must cook over fires stoked in wood and dung, if they have anything to cook at all. This is a major value of reconceiving environmental issues within the global energy justice frame.

I came away from the book with a few reservations, as I expected to. The discussion of nuclear power suggests perhaps less promise than it might if we were not confined to current or historically predominant reactor designs and the waste-management needs that flow from them. As my late University of Maryland colleague John Steinbruner reminded us on several occasions, nuclear power’s future, to be viable, must be considered quite separately from the organization and operation of today’s nuclear industry. I also wonder whether a book like this inherently solidifies, rather than mitigates, the nuclear NIMBY problem. The “chest of tools” for engagement and fair procedure, such as public consultation and environmental evaluation, can potentially morph into an armory of weapons meant to delay rather than refine action in the hands of advocates determined to “just say no.”

The challenge of environmental and energy problem prioritization, already enormous, will only grow as we place on the table even more considerations and means of parsing them. At the end of the day, can any book help make inevitable losses palatable? To frame it in justice terms: when conditions do not admit a win-win strategy, who deserves to lose? This book offers a foundation for grappling with these difficult questions.

Benjamin K. Sovacool and Michael H. Dworkin have produced a fine book that fills a niche in a vitally important literature. I look forward to many years of pressing it on my students and colleagues as a useful aid to their thinking.

From the Hill – Winter 2016

“From the Hill” is adapted from the e-newsletter Policy Alert, published by the Office of Government Relations of the American Association for the Advancement of Science (www.aaas.org) in Washington, DC.

Congressional budget deal eases spending limits

Nearly a month after avoiding a September shutdown, congressional leaders and the White House produced the Bipartisan Budget Act of 2015, a two-year deal to partially roll back the spending caps and to increase discretionary spending in FY 2016 by 5.2%. The deal allows federal agencies to avoid a return to sequestration-level spending and suspends the debt ceiling for over a year. The agreement should be a boon to federal science agencies, which had been operating with relatively modest appropriations during the summer.

To fully understand the contours of the deal, it’s worth taking a short stroll back in time. When the Budget Control Act was signed into law in 2011, it first established an original spending cap baseline and created a joint congressional committee to come up with some kind of trillion-dollar grand bargain to further reduce deficits. When the congressional committee failed to reach an agreement, the Budget Control Act required sequestration to kick in, resulting in across-the-board cuts in FY 2013, and capping federal agencies at a new lower spending baseline for the rest of the decade. This would have resulted in tens of billions of dollars of cumulative cuts in the federal R&D budget.

Fortunately, Congress did not abide by the original law. Every year the sequestration-level spending caps have been in place, Congress has acted to allow for additional spending. Even so, total R&D spending fell by 9.3% in FY 2013, though the reduction would have been much greater under the original caps of the Budget Control Act.

The challenge facing policymakers this year was that the prior deal lifted the caps only in FY 2014 and FY 2015. This meant a return to the sequestration-level baseline in FY 2016. Unsurprisingly, the president’s budget again proposed to roll back the spending caps with a big increase in FY 2016. This would have moved research agency budgets most of the way back to the pre-sequestration spending baseline. But Congress remained unwilling to follow the administration’s lead and in spring 2015 approved a budget resolution that locked in sequestration-level spending and recommended further reductions in future years. The research budget developed by appropriators pointed toward lean times for science.

The agreement reached in October will result in a research budget much closer to the president’s request than to what Congress had developed during the summer. The total discretionary budget will rise 5.2% in FY 2016 and remain flat in FY 2017. Unless Congress acts again to raise the spending ceiling, the budget will revert to the previous sequestration baseline in FY 2018.

A politically important aspect of the deal is how it treats Overseas Contingency Operations (OCO) funding, also known as war funding. The president had proposed a $58-billion OCO budget, which is not subject to the spending caps. Congressional defense hawks initially sought to bulk up the OCO budget as a means to skirt around the spending caps. In the final deal, policymakers did agree to increase the OCO budget over two years, but by only about $15 billion, and this is split between the Department of Defense and the Department of State, ensuring the defense/nondefense spending balance remains unchanged.

To offset this extra spending, the budget deal includes a combination of health savings, reductions in agriculture crop insurance subsidies, and other provisions. Congress would cover some of the costs through a series of changes to the Social Security disability and Medicare programs. Another significant offset calls for the sale of 58 million barrels of crude oil from the Strategic Petroleum Reserve over the next decade. The deal also incorporates a handful of minor tax code adjustments and several other revenue changes.

This all matters for science funding because discretionary spending and R&D tend to move hand-in-hand: Individual agencies may fare better or worse in different years, but overall research funding closely tracks total discretionary spending. Science advocates will make their case for the importance of research to the nation’s well-being, but stakeholders will be doing the same for other components of the discretionary budget.

Now that Congress has reached this agreement, appropriators will still have to hammer out a final spending bill, perhaps in the form of omnibus legislation. Here, appropriations from this summer may provide some clues. For instance, Senate appropriators sought to give the National Institutes of Health (NIH) a $2-billion increase, the largest single-year increase in a decade, and the budget deal improves the odds that NIH will receive it. A bipartisan coalition of more than 100 House members led by Reps. Chris Van Hollen (D-MD), Suzan DelBene (D-WA), and David McKinley (R-WV) sent a letter to House Appropriations Committee leadership supporting the increase.

The administration had sought increases of more than 5% for the National Science Foundation (NSF) and the Department of Energy’s (DOE) Office of Science, and a 5.2% discretionary-spending increase might open the door to these increases (as Rep. John Culberson [R-TX], chair of the NSF appropriations subcommittee, suggested back in May). Supporters of the administration’s manufacturing innovation initiative also hope to see some gains.

Elsewhere, appropriators face difficult decisions about funding for advanced computing and fusion energy research at DOE, and proposed cuts to basic research at the Defense Department. It also remains to be seen how appropriators will cope with major proposed cuts to social sciences and geosciences at NSF. The president’s proposed increases for climate science and renewable energy will remain controversial, but extra fiscal room might temper any push for cuts. Congress has given itself a mid-December deadline to make these decisions.

House to debate energy regulations

The House will debate two Senate-passed resolutions this week to overturn Environmental Protection Agency (EPA) climate regulations under the Congressional Review Act. S.J. Res. 23 would set aside an EPA rule for new and modified generating plants fueled by coal. S.J. Res. 24 would nullify a companion EPA rule limiting greenhouse-gas emissions from existing coal-fired electric plants.

Science Committee and NOAA battle continues

The National Oceanic and Atmospheric Administration (NOAA) responded in late November to a letter sent by the House Science, Space, and Technology Committee to Secretary of Commerce Penny Pritzker requesting that NOAA comply with its previous requests (including a subpoena) for specific communications regarding the NOAA research paper on the global warming “hiatus” published in Science. The response from NOAA Administrator Kathryn Sullivan was firm: “I have not or will not allow anyone to manipulate the science or coerce the scientists who work for me.” The conflict between NOAA and the committee continues to draw attention, including an intersociety letter from eight professional societies led by the American Association for the Advancement of Science arguing that the threat of legal action could have a chilling effect on science. Chairman Lamar Smith (R-TX) recently argued in an op-ed in The Washington Times that the research paper focused on surface temperature data rather than atmospheric satellite data and is therefore flawed.

Bipartisan senators ask GAO to study climate change costs

Senators Susan Collins (R-ME) and Maria Cantwell (D-WA) sent a letter last week asking the Government Accountability Office to study three questions: (1) What is known about how estimates of economic benefits and costs of climate change in the United States are developed?; (2) what is known about the estimated range of economic benefits and costs of climate change in the United States (a) at present and (b) in the near future assuming no change in federal policy?; and (3) based on these estimates, what federal policy actions could have the largest influence in offsetting federal costs associated with climate change?

NSF releases updated data on higher education R&D

The National Science Foundation’s National Center for Science and Engineering Statistics has published updated data for FY 2014 from its Higher Education Research and Development (HERD) Survey. An accompanying InfoBrief reports that federal funding for higher education R&D declined by 5.1% between FY 2013 and FY 2014 and has fallen over 11% since its peak in FY 2011—the longest multiyear decline in federal funding for academic R&D since the beginning of the annual HERD survey in FY 1972.

A Cautionary Tale

People are causing the climate to change, and this poses serious risks to society. Although there are many options for dealing with it—including some that could be tailored to fit smoothly with virtually any political viewpoint—decision makers have largely failed to agree on meaningful climate change risk management. Indeed, policy makers even disagree about whether there are risks to manage at all. The scientific community has been largely ineffective at engaging with the broader society, which contributes to the confusion and dysfunction that characterizes public discussions of climate change.

This is the situation today. Naomi Oreskes and Erik M. Conway, both historians of science, build on it to create The Collapse of Western Civilization, a grim story of catastrophic climate change in the 21st century as viewed retrospectively from the close of the 24th century. From that distant perspective, the narrators look back at this century and analyze why the people of today failed to respond, even as the consequences of climate change became obvious and dire.

In short: scientists proved unable or unwilling to communicate with the public about climate change in a way that non-scientists could understand and that decision makers could act upon. This allowed powerful interests and some politicians to mislead the public and block meaningful policy responses with impunity. Climate changes escalated and disaster ensued.

The story is compelling because it uses a single depiction of the future to illustrate the potential consequences of climate change. This cuts through the ambiguity that comes with a broad range of possible outcomes in an uncertain future. The authors’ approach helps to focus the reader’s attention on risks that are real, but obscure and hard to quantify—risks that might matter most to decision making.

The authors are clear at the outset that their story is fiction and that this is only one possible future. Yet the consequences of climate change in the world Oreskes and Conway create are certainly plausible. Indeed, it is hard to imagine the ongoing large-scale disturbance of the planet’s climate system (a basic life support system, after all) not eventually leading directly to very serious societal consequences.

But is it appropriate to present a single future when a wide range in outcomes remains possible? Even if the odds are dwindling, civilization could still get lucky. Technological advances could cause emissions to decline rapidly without major policy intervention, climate sensitivity could be toward the low end of the anticipated range, and biological resources and social institutions could prove surprisingly robust in the face of climate changes.

The goal for scientists (myself included) is the comprehensive consideration of climate change. That needs to include a discussion of the range of plausible outcomes, impacts, likelihoods, and potential responses, among other policy-relevant topics. In presenting a single future, The Collapse of Western Civilization takes a different approach and explicitly calls scientists out for two characteristics that have made engagement with the public on climate change ineffective.

First, scientists of our day misled the public, the authors argue, by “plac[ing] the burden of proof on novel claims.” That is, scientists require high statistical confidence to attribute impacts to climate change and largely ignore whatever does not achieve that statistical confidence. As a result, scientists downplay the potential of the most serious risks of climate change—singular events for which attribution is difficult, such as more frequent or intense extreme weather events, the widespread loss of biological resources, or the potential for societal breakdown. To the extent that scientists consider the truly awful outcomes, we often say the probabilities of them occurring are low when those probabilities are actually unknown.

Second, scientists tend to focus narrowly within disciplines and conduct reductionist experiments. This prevents understanding climate change in the broad terms that matter most for the public. A focus on narrow insights (e.g., climate sensitivity, degrees of global temperature increase, and centimeters of sea level rise) does not remotely capture the societal implications of climate change.

This explains, in part, why assessments of climate change risks so often depend on the disciplinary background of the experts who conduct them. Economists often conclude that climate change will most likely have small and manageable consequences, while physical and natural scientists see a higher probability for serious disruptions. Experts from different disciplines possess insights (and wear blinders) that are difficult to integrate (or overcome). This is a major problem that scientists need to address.

An additional issue for scientists to consider, which is raised only implicitly in the book, is the difference between scientific discussions and legal advocacy. Scientists, rightly, are expected to present all relevant evidence, even if contradictory. This is critical because science is the search for knowledge and understanding and all information is relevant. In a legal proceeding, lawyers present the strongest possible case for their side. The goal is advocacy on behalf of a client rather than advancing knowledge wherever it may lead.

Policy discussions tend toward the legal advocacy model, but it would be a mistake for scientists to lose our commitment to knowledge and understanding in pursuit of greater policy influence. It would undermine scientific integrity and diminish scientists’ credibility with the public and decision makers. Worse, it would mean giving up scientists’ greatest political asset: the ability to provide useful and trustworthy information. Scientists will never (and shouldn’t aspire to) have the brute political power of wealthy interests or value-driven constituencies.

Scientists can contribute most effectively by helping policy discussions to be a little more scientific while recognizing that science is never the final word in policy. Scientists can communicate what is known and understood as accurately as possible and can acknowledge the ways our understanding might be limited or wrong. We can present a range of potential outcomes, including the worst outcomes and those in which society gets lucky. And scientists can fight very hard against misrepresentations of science in public discussions.

Scientists have a great deal to contribute to the public discussion, as The Collapse of Western Civilization asserts. But there are also challenges and dangers to navigate in going beyond the design and implementation of experiments that generate new understanding. Scientists have no special capability or authority when it comes to balancing choices or determining societal priorities. These are subjective value judgments based on whether you are, for example, more risk averse to changing the energy system or altering the earth’s climate, or the extent to which you trust governments to correct market failures.

Despite decades of intensive scientific research, scientists cannot predict the societal consequences of climate change ahead of time. Those consequences depend on too many complex and nonlinear interactions among the physical characteristics of the earth, the biological resources on which society depends, and the social institutions that people have created. Even with additional decades of intensive study (whether narrowly focused or interdisciplinary), it will almost certainly remain impossible to know the likelihood of catastrophic outcomes such as those depicted in The Collapse of Western Civilization. Of course, even if the future consequences of climate change were known, reasonable people will differ on their preferred policy responses. This makes climate change a complex risk management challenge that combines scientific information from dozens of disciplines and sub-disciplines, deep uncertainty about societal consequences, and value judgments that have little or nothing to do with science.

The Collapse of Western Civilization illustrates the potential dangers from climate change, which can help readers think more clearly about the risk management choices society faces. The book may also encourage scientists to reflect on their role in society. If it helps scientists engage more effectively with the public by focusing on the key strengths of science, the book could help improve a flawed political system and enhance the potential for all branches of science to further benefit society.

A Roadmap for US Nuclear Energy Innovation

The future role of nuclear energy is attracting new attention. Several recent climate policy assessments have concluded that meeting the world’s growing appetite for energy while achieving deep reductions in global greenhouse gas emissions will be impossible without rapid nuclear energy growth, along with massive increases in the deployment of solar, wind, and other low-carbon energy technologies. But if nuclear energy is indeed to play such a role, the United States seems unlikely to be much of a factor at this point.

Once the undisputed global leader, the US nuclear energy industry is now well on its way to second-tier status. The new leaders include China and, notably, Russia, whose aggressive nuclear exporters are one of the few bright spots in that nation’s troubled economy. Meanwhile, the US fleet of 100 nuclear power reactors, still the world’s largest and the source of almost two-thirds of the nation’s low-carbon electricity, is slowly shrinking. Five operating reactors have recently closed, and several more will be retired in the next few years. As the rest of the nuclear fleet ages, many more reactors seem likely to be shuttered over the next couple of decades.

The outlook for new reactors is also grim. Four reactors are under construction in the Southeast, and a fifth is being completed after a long delay. There are no firm plans to build more. High construction costs, an uncertain demand outlook, and the availability of inexpensive natural gas are the main deterrents to new nuclear investment, and today the nuclear part of the government’s climate policy amounts to little more than a hope that additional premature shutdowns can be avoided and that some reactors will be able to stay open for longer than planned. Without a more serious federal policy this may be a vain hope, and it is certainly a strategic weakness. Losing the existing nuclear fleet would wipe out much of the reduction in carbon dioxide (CO2) emissions promised by the Obama administration’s Clean Power Plan.

And yet, mostly below the radar, a new wave of nuclear energy innovation is building. More than 30 advanced reactor development projects have been launched since the 1990s. Most of this activity has been funded privately. According to one estimate, more than $1.3 billion in private investment has already been committed. But public funding and risk-sharing will also be needed if new nuclear technologies are to be brought to market successfully. Today, though, the federal government has no strategy for nuclear innovation, and there is resistance to developing one on both sides of the political aisle. Some influential Democratic lawmakers believe that a combination of renewables and increased energy efficiency will be sufficient to achieve global emission reduction goals. Some also fear that the safety and security risks of an increased nuclear commitment would more than offset the climate benefits it would bring. Among Republicans, many assign far greater importance to reducing government spending than to reducing greenhouse gas emissions.

Against this backdrop, I envision a new roadmap for nuclear innovation in the United States. This roadmap identifies three successive waves of advances: the first breaking during the next decade or so and supporting longer operating lifetimes for at least some of the existing nuclear fleet; the second arriving during the critical period between 2030 and 2040, when rapid scale-up of nuclear energy will be needed to achieve deep emissions reductions just as much of the current reactor fleet is being phased out; and the third wave breaking during the post-2050 period, when further deep cuts in CO2 emissions will be needed even if the world succeeds in meeting the ambitious mid-century mitigation targets to which many countries have signed up.

New work on all three waves will need to begin immediately. The roadmap also calls for significant reform of the Nuclear Regulatory Commission (NRC), a new and unfamiliar role for the national laboratories, and a supporting, rather than directive, role for federal nuclear managers in the Department of Energy, including support for international collaborations in which US innovators are engaged.

This nuclear agenda is ambitious, but attainable. It draws on the deep strengths of the US economy in entrepreneurial risk-taking, as well as on a series of remarkable advances in other scientific fields that can now be applied to the traditionally insular and conservative nuclear industry. It also draws on the still-formidable capabilities of the nation’s nuclear research and security complex. But implementing this innovation agenda will require a new political coalition capable of neutralizing the longstanding opposition of people for whom the biggest dragons to be slain are nuclear energy or the federal government itself. A failure to act will undermine US climate goals. It will also compromise important national security objectives. And it will further disconnect the nation’s industry from a global nuclear marketplace that is likely to be worth many hundreds of billions of dollars in the coming decades.

Uncertain outlook for innovation

The most visible of the new wave of nuclear innovators is TerraPower, an 8-year-old company co-founded by Bill Gates. Its main development effort focuses on an old idea (the so-called “breed-and-burn” reactor design concept), combined with the latest developments in instrumentation and control, materials science, nanotechnology, and computation and simulation—capabilities that were unimaginable 50 years ago when breed-and-burn was first considered. A similar combination of old ideas and forefront science and engineering also characterizes several new ventures in the field of molten-salt-cooled reactors (where TerraPower is also active).

The industry that supplies and operates light-water reactors (LWRs), the dominant nuclear reactor technology around the world, has been slower to adopt new technology. But even here, there are important innovations. NuScale, an early-stage US company, has developed a radically different (and much smaller) LWR configuration that promises major upgrades in safety relative to today’s LWRs. Fluor, a major engineering and construction company with decades of experience in nuclear power, is the majority investor in NuScale. Other developers are pursuing different systems, using different kinds of nuclear fuel and coolant.

The new nuclear agenda has captured the imagination of young researchers at the nation’s universities. NuScale was spun out of Oregon State University. And at my own department of nuclear science and engineering at the Massachusetts Institute of Technology (MIT), one group of faculty and students is developing a new concept for a floating nuclear plant, a second has co-invented and is advancing a new kind of molten-salt-cooled reactor, a third has proposed a new fusion reactor design that it believes has promise of early commercialization, and two new reactor development companies have recently been formed by graduate students.

It is premature at this stage to attempt to identify a winner among all these innovations, or even to say whether there will be one. What their developers have in common is the conviction that nuclear energy has a key role to play worldwide, but to realize its full potential, a technology that is already much safer than it was when the first LWRs were built a half-century ago will need to be made safer still. New reactors will also need to be less expensive, easier and faster to build, less vulnerable to security threats, better suited to the needs of developing countries, and more compatible with the rapidly changing characteristics of electric power grids, which are being transformed by the introduction of advanced grid technologies as well as growing amounts of intermittent wind and solar generating capacity.

The federal government, whose role in the nuclear energy field has long been atypically dirigiste, or centrally controlling, has been taken by surprise by these developments and is scrambling to catch up. In recent years, its support for nuclear innovation has zigzagged from one priority to another. A program to develop improvements to large LWRs was the main priority for a while, but has since been dropped. Another program to build a prototype high temperature gas-cooled reactor jointly with industry failed to attract sufficient industry interest and has also ended. The government then launched a program to assist in the commercialization of small, modular LWRs, but one of the two horses it backed has since dropped out of the race. Support for the other (NuScale) continues. Most recently, the government has announced a new competition to provide a small amount of funding for earlier-stage research and development (R&D) for two advanced reactor concepts, not limited either to small reactors or to light-water technology.

The history of unsuccessful government efforts to commercialize new nuclear power reactor technologies stretches back much further. The best-known example involved the liquid-metal-cooled fast breeder reactor, a costly effort that was abandoned in the 1970s. In fact, the only successful counterexample occurred at the outset of the nuclear energy era, when a government-funded civilian reactor demonstration program enabled the emergence of the LWR technology that subsequently came to dominate the industry worldwide. That outcome was, in turn, enabled by the earlier development of pressurized water reactor technology for naval propulsion. Critical to those developments was the extraordinary leadership of Admiral Hyman Rickover, who headed the naval reactors program and was also the principal driving force for transferring this technology into the civilian power sector. The uniqueness of that early success and the subsequent string of failures suggest that new models of government involvement will be needed if advanced nuclear power technologies are to be commercialized successfully in the future.

The highest priority of nuclear innovation policy should be to promote the availability of an advanced nuclear power system 15 to 20 years from now.

An even bigger deterrent to nuclear innovation today is the licensing and regulatory process administered by the NRC. The current body of technical requirements and procedures was developed with today’s LWR technologies in mind, and it is generally considered to work reasonably well for them. But those regulations are not always well-suited to advanced reactor concepts, which in some cases rely on fundamentally different approaches to achieving acceptable levels of safety. Also, licensing procedures that have evolved over the years to accommodate incremental changes in LWR designs are less suitable for radically different reactor technologies. Would-be developers of such technologies face the prospect of having to spend a billion dollars or more on an open-ended, all-or-nothing licensing process without any certainty of outcomes or even clear milestones along the way.

NRC officials have met the calls for regulatory reform with mixed signals. Some have dismissed the need for reform. Others have acknowledged that different approaches may be needed for new technologies, but have also suggested that they would need to see commercial commitments from prospective customers before embarking on a new regulatory development effort. This is an unrealistic demand, since no new customer would be prepared to make a commitment of that kind in the face of such large regulatory risks. But the NRC also points out that roughly 90 percent of its budget is funded by licensing fees paid by the utility operators of current nuclear power plants. Most of these operators are paying no attention to advanced nuclear technologies and have no interest in seeing their fees applied to a new regulatory development activity.

These obstacles have caused several US nuclear innovators to look to other countries, including Canada and South Korea, in search of a more encouraging regulatory environment. TerraPower declared that because of the regulatory problem it would not build its first prototype reactor in the United States. In September 2015, the company signed an agreement to jointly develop and commercialize its breed-and-burn technology with China National Nuclear Corporation, making it almost certain that the first reactor of this type will be built in China.

China is also setting the pace in other fields of advanced nuclear technology. Indeed, the US government itself, unsure of its domestic agenda, has been helping to boost China’s nuclear innovation efforts. It has encouraged the federal Oak Ridge National Laboratory, which pioneered the development of molten-salt-cooled reactors in the 1950s and 1960s, to share the residual knowledge and technology from that program with China. The Chinese have identified molten-salt reactors as a high development priority and are planning to start up a small prototype device within two years.

Thus the outlook for nuclear innovation in the United States today is uncertain. The upsurge of interest in advanced nuclear technologies is remarkable, and the possibility that these technologies could help solve the world’s climate challenges has attracted the attention of a small, but growing, group of US entrepreneurs and investors. But the combination of an uncertain regulatory environment and anemic federal policies toward nuclear innovation could help drive leadership of the new generation of technologies away from the United States—a dispiriting coda to the ongoing loss of US leadership in today’s LWR industry.

Why innovation matters

Would this matter? Some influential energy and environmental experts both in and out of government say no. These include a group of diehard nuclear opponents who view the risks of nuclear power as outweighing even the risks of climate change. Another group, probably larger, would prefer not to have to rely on nuclear and think the nation will be able to get by without it. These nuclear skeptics, who are well-represented in environmental advocacy organizations, are seriously concerned about the climate threat, but think that other low-carbon technologies such as solar and wind are either already adequate to the need or will soon become so. They point to the recent rapid declines in the cost of solar and wind technologies. They note that US electricity consumption has not increased for a decade. And, given the safety, security, and economic challenges facing the nuclear power industry and the still-unresolved problem of spent fuel management and disposal, they see no contradiction in advocating for strong climate policies while looking forward to a nuclear-free energy policy.

The electric power companies are not much interested in nuclear innovation either. Many are preoccupied with the nearer-term challenges of distributed solar and wind technologies, microgrids, smart residential energy management systems, and the rise of a new class of distributed energy service providers. These new developments are destabilizing traditional utility business models and seem poised to account for a growing share of utility electricity markets. They have captured the attention of utility executives, who have already been forced to adjust to the implications of a decade of zero electricity demand growth.

But even as the challenge of distributed energy resources and a shrinking power market has transfixed the electric power industry, the next major challenge is looming just over the horizon and, paradoxically, it has many of the opposite characteristics. What the power industry has yet to fully recognize is that the most plausible pathway to achieving deep reductions in carbon emissions by mid-century will require major growth of electricity output. Moreover, this increased output will be needed even as the vast bulk of the nation’s baseload generating capacity—comprising all of the coal plants and most of the nuclear plants, which together provide almost 60 percent of the current electricity supply and are the foundation of the reliability of the grid—will be forced to retire over the next 20 to 30 years.

Because it is more difficult to decarbonize liquid and gaseous fuels than electricity, achieving an 80 percent reduction in CO2 emissions by mid-century implies almost complete decarbonization of electricity supplies along with the substitution of electricity or electricity-generated fuels for the direct combustion of fossil fuels in other energy markets. That means more use of electricity. How much more will depend on the future performance of the US economy. This cannot be predicted, but it is to be hoped that recent history is not a reliable guide. Over the past decade, the nation’s annual economic growth rate averaged a dismal 1.3 percent, compared with an average of 3.3 percent during the previous three decades. This is the main reason electricity consumption has not grown since 2005.

A period of stronger growth would be welcome. A careful recent study by the Deep Decarbonization Pathways Project estimated that if the US economy were to grow at a somewhat more robust 2.5 percent per year through the year 2050, the 80 percent emission reduction target could be achieved at relatively modest incremental cost. But this would require aggressive improvements in energy efficiency, combined with a doubling of electricity use and a drastic reduction in the carbon intensity of the electric power system, to just 3-10 percent of its current level. That, in turn, would require eliminating essentially all coal and most natural gas from the electric power system, even as it doubled in size.
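
One way to see why the carbon intensity must fall so far is to treat power-sector emissions as output multiplied by intensity. The snippet below is illustrative arithmetic only, using the round numbers from the scenario just described (a doubling of output, intensity at 3 to 10 percent of today's level); it is not drawn from the study itself.

```python
# Illustrative arithmetic: power-sector CO2 scales as (electricity output) x (carbon intensity).
output_growth = 2.0                  # electricity output roughly doubles by 2050
intensity_fractions = (0.03, 0.10)   # carbon intensity falls to 3-10% of today's level

for fraction in intensity_fractions:
    relative_emissions = output_growth * fraction
    print(f"intensity at {fraction:.0%} of today -> power-sector CO2 at "
          f"{relative_emissions:.0%} of today ({1 - relative_emissions:.0%} reduction)")
```

Even with output doubling, intensity in that range holds the sector's emissions to roughly 6 to 20 percent of today's level, which is the kind of cut needed if electricity is also to displace fossil fuels in transportation, heating, and industry.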

This, by the way, is the main reason natural gas cannot be the “bridge to a low carbon energy future” as suggested by some energy experts. It is true that the displacement of some coal by lower-cost natural gas in the electric power sector has been one of the main contributors, together with weak economic growth, to the recent decline in US CO2 emissions, which have fallen by about 10 percent since 2005. But if the rest of the nation’s coal-fired power plants, which still account for 35 percent of electricity generation, were replaced by natural gas, total CO2 emissions would decline by another 20 percent—an important result to be sure, but only a quarter of the overall reduction needed. And if the nation’s fleet of nuclear power reactors was also replaced by natural gas, the additional CO2 emissions that would result would offset more than half of those savings. (These additional roles for natural gas would also increase total US natural gas usage by nearly 70 percent, inevitably putting strong upward pressure on natural gas prices.)
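
The coal-to-gas portion of that arithmetic can be sketched with round numbers. Every input below (total emissions, total generation, and the emission factors) is an assumption chosen for illustration rather than a figure from the text, so the result lands in the general neighborhood of, not exactly on, the 20 percent cited above.

```python
# Rough, illustrative inputs; none of these values is taken from the article itself.
total_us_co2_mt = 5000.0     # assumed annual US energy-related CO2, million tonnes
generation_twh = 4000.0      # assumed annual US electricity generation, TWh
coal_share = 0.35            # coal's share of generation, per the text
coal_ef, gas_ef = 1.0, 0.4   # assumed emission factors, tonnes CO2 per MWh

coal_mwh = coal_share * generation_twh * 1e6      # TWh -> MWh
savings_mt = coal_mwh * (coal_ef - gas_ef) / 1e6  # tonnes -> million tonnes
print(f"switching remaining coal to gas cuts total CO2 by roughly "
      f"{savings_mt / total_us_co2_mt:.0%}")
```

Shifting the assumed emission factors by a tenth of a tonne per megawatt-hour moves the answer by a few percentage points, which is one reason published estimates of this kind differ; the same template, with the nuclear fleet's generation multiplied by the gas emission factor, gives the offsetting increase described above.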

The truth is that neither the power industry nor the government has a plan for replacing the coal and nuclear plants that will be closed over the next 20 to 30 years. It is instructive to contemplate the vast physical scale of this task. If all the coal that is currently consumed in a single year in the nation’s coal-fired power plants were loaded onto a single coal train, that train would be about 83,000 miles long. And all of that coal will need to be replaced, or the CO2 captured, in order to meet the mid-century emission reduction target.

If the coal plants were replaced by, say, wind turbines, and the turbines were arranged in one long line (of course, they would not be) with the individual turbines spaced optimally, the line would be 135,000 miles long, even longer than the coal train. And if the coal plants were phased out over, say, 20 years, these wind turbines would need to be deployed at a rate of about 30,000 megawatts per year, which would be about five times faster than the average rate of wind turbine installation over the past decade. The requirement would be greater still if wind resources were also contributing to the expansion of the power grid during this period. Incidentally, the same thought experiment that produced the 83,000-mile-long coal train would yield a nuclear train just one mile long—the length of the train that could carry all the nuclear fuel assemblies needed to power all of the nation’s 100 nuclear power reactors for a year.
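
The scale comparisons in the two preceding paragraphs can be roughly reproduced from familiar round numbers. In the sketch below, every input (annual coal burn, railcar size, turbine rating, capacity factor, and spacing) is an assumption chosen for illustration; the outputs land near, though not exactly on, the figures quoted above.

```python
# Back-of-envelope version of the coal-train and wind-turbine thought experiments.
# All inputs are rough assumptions, not figures taken from the article.
MILE_FT = 5280.0

# Coal train: a year's power-sector coal loaded into standard hopper cars.
coal_tons_per_year = 750e6   # assumed annual US power-sector coal burn, short tons
car_capacity_tons = 100.0    # assumed capacity of one hopper car, short tons
car_length_ft = 55.0         # assumed length of one hopper car, feet
train_miles = coal_tons_per_year / car_capacity_tons * car_length_ft / MILE_FT
print(f"coal train: ~{train_miles:,.0f} miles")

# Wind line: replace coal-fired generation with turbines spaced in a single row.
coal_generation_twh = 1400.0   # assumed annual coal-fired generation, TWh
capacity_factor = 0.27         # assumed fleet-average wind capacity factor
turbine_mw = 3.0               # assumed turbine rating, MW
spacing_miles = 0.68           # assumed spacing, roughly ten rotor diameters
needed_mw = coal_generation_twh * 1e6 / (8760 * capacity_factor)  # TWh -> MWh, divided by hours per year
print(f"wind line: ~{needed_mw / turbine_mw * spacing_miles:,.0f} miles of turbines")
print(f"build-out over 20 years: ~{needed_mw / 20:,.0f} MW per year")
```

With these assumptions the train comes out near 80,000 miles, the row of turbines near 135,000 miles, and the build-out near 30,000 megawatts per year; the point of the exercise is the order of magnitude, not the precise figures.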

Indeed, extreme compactness is a major advantage of nuclear over renewable technologies, which must labor under the yoke of very low solar and wind energy densities. In addition, nuclear provides major environmental and public health advantages over coal and other fossil fuels. For example, since the early 1970s, nuclear power is estimated to have saved almost 2 million lives worldwide that would otherwise have been lost due to air pollution from fossil fuel combustion. Of course, nuclear power also has numerous drawbacks. But it is a matter of basic common sense that when faced with a task as vast and challenging as deep decarbonization, the more options that are available, the more likely the nation is to be successful. So, although it is an interesting academic exercise to think about whether a single option—wind or solar, for example—could do the trick, no sensible strategy would advocate this, especially given the potential consequences if that option should fail.

For the world as a whole, the case for keeping nuclear energy in the mix is stronger still. Most of the growth in energy demand over the next several decades will occur in the developing world, where governments and energy firms will face enormous difficulties in satisfying the aspirations of billions of people for higher living standards while meeting stringent carbon emission limits.

According to the International Energy Agency, a two- or three-fold increase in worldwide nuclear generating capacity will be needed by mid-century to achieve the carbon emission reduction consistent with holding the increase in the global average surface temperature to 2°C or less by century’s end. But the world is far from achieving this. A few countries (most notably China, but also Russia, India, and Korea) have ambitious nuclear growth plans, and more reactors are in the construction pipeline around the world today than in many years (24 of the 67 reactors are in China). Several other countries, including the United Arab Emirates, Vietnam, Turkey, and Bangladesh, have also embarked on new nuclear energy programs, and others are seriously considering doing so. But several advanced countries are backing away from nuclear, including Germany, Italy, Switzerland, and Sweden, and the future of the large Japanese nuclear program hangs in the balance.

When all of these national plans are aggregated and combined with the expected retirement of much of the existing global nuclear fleet as those plants reach end of life (they are already about 30 years old, on average), the projected nuclear contribution to global carbon mitigation falls far short of the projected need. Indeed, the share of nuclear in global energy output seems as likely to shrink as to grow.

Closing the gap

This gap between plans and need is an uncomfortable reality for advocates of a larger nuclear role. How could it be closed? Some observers predict that this will happen with today’s nuclear technologies, once the full costs of burning fossil fuels, including the environmental costs of carbon emissions, are charged to energy users. But nuclear innovators believe that nuclear energy technology will itself need to be made more competitive if its potential is to be fully realized.

A key question, then, is what should be the government’s role in future nuclear innovation? For many decades, the main objective of government-funded nuclear R&D around the world was to extend the uranium resource base. And the main focus was on the development of breeder reactors to achieve this. Today, though, other goals are far more important: reducing costs; increasing safety; reducing the burdens of nuclear waste disposal; controlling the threat of nuclear terrorism; making the siting of nuclear facilities easier; and reducing nuclear lead-times, which have become excessive in many parts of the world and are adding cost, reducing flexibility, and exposing investors to greater risk.

Nobody can say for sure which technologies are best suited to solving these problems, or whether the future nuclear power industry will be dominated by descendants of today’s LWRs or the offspring of other reactor technologies now under development. Indeed, nobody knows whether there will be a single, dominant nuclear technology or whether multiple technologies will co-exist, optimized for different segments of a global market that, later in the century, might include a Chinese power grid several times bigger than the US grid today; an East African grid serving, say, Kenya, Tanzania, and Uganda that is currently about 500 times smaller; and a host of non-grid applications, such as water desalination, industrial process heating, and fluid fuels production.

The total cost of commercializing a new nuclear technology could easily exceed $10 billion. Funding even one such effort in the federal budget would almost certainly be too costly in the current fiscal environment. Funding several of them this way would be out of the question. So the focus must shift to private investment. It is easy to dismiss this possibility given the long lead-times and high risks of commercializing nuclear technology. But private investors have already shown more interest in funding early-stage nuclear development than almost anyone expected. Thus, instead of asking which technologies the government should be developing, a better question at this stage is how the government—in pursuit of its climate policy goals—can reduce the risks and increase the returns to private nuclear developers.

In answering this question, there are three different timeframes to consider. The first—what can be called Nuclear 1.0—is the period through about the year 2030, when the focus must be on innovations to reduce the cost of operating and maintaining the existing fleet, making it more likely that plant lifetimes will be extended. The second and even more important timeframe is the period beginning in about 2030, when large-scale retirements of coal and nuclear plants will be well under way (Nuclear 2.0). The focus here must be on commercializing advanced nuclear reactors and fuel-cycle technologies that can meet one or more of three goals: replace the conventional baseload capacity; compete effectively in power grids that by then will have much larger amounts of intermittent renewable capacity, as well as more technology-enabled autonomy for electricity users; and make possible the penetration of nuclear power into a broader range of energy markets, including industrial processing, desalination, and transportation fuels production. The third timeframe (Nuclear 3.0) is after 2050, when more advanced nuclear technologies may be needed to bring down carbon emissions even further.

What would be the government role in each case?

Nuclear 1.0 (2015-2030). Many of the most important actions the government can take to extend the life of the existing nuclear fleet are not innovation-related at all. For example, as long as wholesale electricity markets do not attach a value to the reliability that nuclear plants provide to the grid, which is the case in much of the country, nuclear will be at a competitive disadvantage relative to wind and solar, whose intermittency exacts a cost on the rest of the system. Electricity market rules are mainly determined at the state level, but the federal government can influence outcomes. Another needed policy correction concerns state and federal incentives for investment in low-carbon electricity generation, which also seem skewed against nuclear. The most sensible approach would be to impose a uniform price on all carbon emissions, most likely through a carbon tax.

In the absence of this, there is the Obama administration’s Clean Power Plan. This plan creates financial incentives for investment in new wind and solar capacity to displace fossil fuel generation, but provides no incentives for utilities to invest in extending the life of their existing nuclear plants, even though the investment required per unit of low-carbon electricity will typically be far smaller, and even though the nuclear plants today provide about five times more low-carbon electricity than the contribution of wind and solar combined. What is worse, in some states the Clean Power Plan creates perverse incentives for nuclear plants to be closed and replaced by new natural gas plants, which, of course, will emit additional amounts of CO2.

This is not good public policy, and if the Environmental Protection Agency, which administers the Clean Power Plan, cannot correct it, the government should instead create other incentives for utilities to keep their nuclear plants going, such as a production tax credit for plants whose licenses have been extended, similar to that provided for new wind turbines. Another helpful federal action would be to reactivate the government’s spent fuel management and disposal efforts, which have been largely dormant since the decision several years ago to stop the licensing review for the Yucca Mountain nuclear waste repository.

There are also technical opportunities to reduce the cost of operating and maintaining the nuclear fleet as it ages. These would include, for example, measures to prevent or slow corrosion; networked sensors enabling more efficient monitoring of plant conditions; new, more cost-effective physical security technologies; and business model innovations to reduce costs. Most of the R&D to exploit these opportunities should be done by the utilities and their suppliers, and the most important role of the government would be the policy measures described previously, since their impact would be to increase the returns to these R&D investments. But government laboratories also have a useful role to play in support of this innovation agenda.

Nuclear 2.0 (2030-2050). The highest priority of nuclear innovation policy should be to promote the availability of advanced nuclear power systems 15 to 20 years from now. This is not currently a goal of US policy, and it will seem unrealistic to many, but that is because pathologically long lead-times of all kinds in the nuclear industry have become the norm. The original LWR technology was commercialized in about half the time. Of course, there is no Admiral Rickover today, and the federal government is hamstrung in ways that congressional legislators of the 1950s and ’60s could not have imagined. But in other respects, the environment is more favorable for faster development. Dramatic improvements in data and in modeling and simulation of nuclear power plant behavior, enabled by new generations of supercomputers, are making possible much faster, more efficient, and more accurate approaches to design, new materials development, and analysis, while new modular construction techniques have the potential to shorten project lead-times greatly. There are several development groups, some of them led by private US interests, others based overseas, that could put forward a credible plan for commercializing their technologies on a 15- to 20-year timeframe under the right conditions.

The role of the federal government should be to create an environment in the United States that could attract and encourage such groups. This would involve:

In this new policy environment, a government agency would not do what the Department of Energy and its predecessors have tried so often to do in the past: choose the next nuclear technology. But the government would still need to make choices about which development groups merited its support. It ought to be guided in these choices by an independent advisory board with the knowledge and experience to judge whether developers have credible technical, management, and financial plans in place that would give them a reasonable chance of achieving commercialization in the 2030–2040 timeframe.

Nuclear 3.0 (after 2050). On this timescale, the government must play the lead role. In recent years, budget pressures have narrowed the scope of long-term nuclear energy research, which is now dominated by the US participation in ITER, the international tokamak fusion project that is under construction in France. Congressional support for ITER has wavered in the face of project delays and cost overruns and as promising alternative pathways to commercial fusion have opened up. The estimated lead time of 30 to 40 years for commercialization of tokamak technology has also sapped enthusiasm for this program. From a climate policy perspective, the best way to view the development of such long lead-time technologies is as an insurance policy—an option that may be needed if nearer-term low-carbon technologies lose their viability or fail to materialize at all. As with any other kind of insurance, the best policy in this case is the one that can be purchased at lowest cost with the highest likelihood of being available if needed. A careful assessment of the range of technological pathways to commercial fusion is now needed to design a long-term nuclear energy R&D portfolio that would have these characteristics. This assessment should probably also include promising nuclear fission technologies that could not “make the cut” for Nuclear 2.0.

Time for a clear departure

This new nuclear innovation agenda would be a clear departure from more than three decades of controversy, timidity, and indecision in US nuclear energy policy. During this period, the nation’s nuclear industry has lost ground to its international competitors, and US influence over the international nuclear security regime has waned. It is one of the unfortunate legacies of the years of policy drift that now, at the very moment that climate concerns are building and the need for new sources of low-carbon energy is growing more urgent, the ability of nuclear energy to respond to this need is in doubt.

But a new generation of nuclear technologies holds promise of reversing this decline. The outcome is far from certain, but no worthwhile innovation initiative ever is. Moreover, the need for nuclear innovation is global, since the current generation of nuclear technologies is struggling to compete with fossil fuels in much of the rest of the world, too. The innovation roadmap sketched here has the potential to restore US leadership in a field that, notwithstanding the hopes of many environmental activists and the gloomy prognostications of some pundits, is most likely still in the early stages of development. After all, it is often forgotten that the first practical demonstration of nuclear fission came just 16 years before a similar milestone for the first solar photovoltaic cell, which is still widely considered “new” technology.

So the United States now has a clear choice to make. The nation can decide to be one of the world’s leaders in shaping the next generation of nuclear energy technologies, or it can decide to stay on its current path and become a 21st-century nuclear also-ran. Given the stakes involved—the economic opportunity, the implications for nuclear safety and security, and the climate threat—there is really only one option.

Outlier Thoughts on Climate and Energy

As the Paris climate talks were starting, activist Bill McKibben wrote in Foreign Policy magazine that “The conference is not the game—it’s the scorecard.” He explained that he did not expect the negotiations to produce any significant breakthroughs, but they would consolidate the progress that has been made in recent years through many smaller agreements such as the one negotiated between the United States and China in 2014. Of course, he also argued that these actions are a woefully inadequate response to the threat of global climate change. His hope is that the meeting will set the stage for a “ratchet mechanism” by which nations will continue to adjust upward their commitment to reduce greenhouse gas emissions if the evidence of the damage of climate change becomes more compelling.

Issues in Science and Technology has published a steady stream of articles about climate change since the 1980s, beginning with a piece by Al Gore in 1985 worrying about nuclear winter. We have covered all the standard arguments from across the political spectrum. With more than 3,000 journalists crowding into Paris for the current round of talks, we didn’t see a need to replicate what can be read in newspapers and magazines in every corner of the globe. Instead, we have a somewhat idiosyncratic collection of articles that we hope will stimulate fresh thinking in the climate debate. This discussion will continue for decades, and little will be gained by repeating the familiar ideas.

Andrew Revkin has been reporting on the climate debate for decades, beginning in magazines, continuing for many years at the New York Times, and blogging now at earth.com. A thoughtful and reliable journalist, Revkin reflects on his personal evolution as an informed observer and offers his insights on the current state and likely future of climate discussions.

Many climate scientists and environmental activists are growing increasingly frustrated with the lack of public commitment to fighting climate change and the half-hearted actions of policymakers. Although aware of the gap between expert and public opinion on the urgency to do something about climate change, Nico Stehr worries that frustration is leading many climate experts and activists to lose faith in democratic processes and to advocate for granting more power to technocratic experts.

McKibben and other activists are heartened by the rapid progress in lowering cost and boosting efficiency in renewable energy technologies, which leads them to envision a future in which renewable energy sources meet most of our needs. MIT nuclear engineer Richard Lester supports renewable energy development but sees no chance that renewable sources can replace all fossil fuels anytime soon. He therefore provides a roadmap for rapid expansion of a safer new generation of nuclear power plants.

Virtually all participants in the climate debate see a need for research to develop better energy technology, and most agree that the government should play the key role in early-stage research and development. Venture capitalist Ray Rothrock has a different idea. Observing the high cost and slow progress of the government-funded ITER project to develop fusion energy using tokamak technology, Rothrock and other venture capitalists have launched a privately funded effort to pursue an alternative technological approach.

The Paris climate talks did ultimately produce an international agreement, which should reduce the sense of frustration of those who thought nations incapable of taking any action. But climate activists will be quick to point out that the agreement does not go far enough. The path to the future might look brighter if we widen it by incorporating more of the ideas that follow into the mainstream discussion.

Mapo Tofu with Spicy Cucumber Side

“Brain is like…black orb number eight,” Przemyslaw says.

Orange light streams into the diner. Usually, the reporters just ask a few questions, shoot video, and scatter. Not this time. Przemyslaw fingers the fresh cut on his head, his gaze wandering to the parking lot where he might be allowed to smoke. She sits opposite, palms pressed against her eye sockets, massaging a dull ache.

“All day, you rattle orb, and is making you decisions,” he says, shaking an imaginary sphere in front of her. “Should I take coffee? Yes. Should I wear blue shirt? Asking Again Later.”

“Magic Eight Ball,” Emma says with some effort. Her brown hair is either matted or frizzy depending on the angle, a result of restless contact with the vinyl bench. Slow, ragged thoughts congeal inside her head.

“Magic. This is like…Djed Mraz, yes?” he asks. She stares blankly. “Djed Mraz, is man giving toy to child in Christmas.” His breath causes her eyes to water as if she is staring into a tailpipe. Emma nods in silence. “Yes. So, why does magic orb not choose white shirt today? Then underarms will not be dark with this…” He waves his hands underneath his own armpits. “For this matter, why you bought blue shirt in desert?”

“Fashion advice from a talking cigarette,” Emma says, pulling her arms into her body. She traces her finger down the length of the sugar jar and considers what it might do to a mouth full of yellowed teeth.

The bench squeaks as Przemyslaw leans back with a nervous smile. “I mean no insult. Orb chooses blue shirt for reason. Maybe grudnjak shows through white shirt?” He cups his hands over his breasts. “Or, green shirt is not good with skin. Many reasons. But, you do not anticipate this oven.”

“Look, this is all fascinating…” she says too loud.

“Yes, yes, I explain.” As he touches the sleeve of her shirt, the waitress moves toward their table. Przemyslaw sits up and straightens the wispy blond hair on his scalp. “Thank you…Jor-eesh” he mispronounces the nametag. The waitress looks hard and walks away. “How do you decide shirt color?” he whispers. “Never enough data. Or, too much. So, you go with belly. Yes? Or, very American, you are blasting from hip.” Air bullets shoot from his fingers. “Of course, nowhere is gun or želudac involved. Is brain. Chemical potentials balance, like ledger. If you buy shirt ten times, maybe brain tells you buy ten different. Red shirt with pasta. Yellow shirt with collar. Appears random, but always is reason.”

“Pasta?” she asks. He holds an imaginary string between each thumb and forefinger and runs his hands the length of his torso, up to the neck.

Emma mutters disapproval as the waitress slides a plate of huevos rancheros in front of Przemyslaw. No matter what the wiry mathematician says, something below her neck has decided that she must avoid this abomination. Even though she likes salsa, enjoys eggs on occasion. A river of undeniably visceral impulses has defined her life, led to this exile in the desert.

“What does any of this have to do with the crash?” she asks. Her translucent visor beckons from the table.

Hours before she learned anything about Djed Mraz, Emma walked into Javier’s kitchen wrapped in a sheet. “That’s not fair,” she said. “I was going to sneak out on you this morning.” Instead of playful banter, it came out like a threat. His fingers froze, bunny ears floating above the tongue of his shoe. Like magic, her visor dinged from the living room and she stumbled backward, digging through the clothing next to the sofa. Automobile Accident, US-40. By the time she rushed back, he was gone. There was only the refrigerator, and the diffuse reflection of her pale, naked body.

The visor dinged again and she grabbed her blue, sweat-stained shirt, and raced to the bathroom. Soft blue-green light danced across the device as she placed it over her eyes. The sludge inside her head dissolved as she scanned a recipe from the night before, searched car accidents, and talked Alan off a ledge. Her car hummed through darkness toward the accident site while colors flickered against her forehead and cheekbones.

Place wok on low heat.

Honestly, you sound kind of crazy right now, Alan.

Out with the Old. The manual-drive vehicles looked apologetic as they rode flatbeds toward scrap yards and recycling plants. Automobile Accidents Plummet! They pleaded from the windows of museums that it wasn’t their fault. Add quarter cup of oil.

It had been many years since Emma had seen a crash in person. She remembered passing a particularly bad wreck as a child. Lights flashing in the median, flares burning beside twisted metal. Add peppers to oil, and cook for several minutes. A woman sat in the grass with a cut on her forehead. How silly, she thought now, the image of her mother’s hands on the steering wheel as they drove past the wreck. A woman unable to operate even the most basic of appliances.

We all have exes, Alan.

Her car stopped along the shoulder of a four-lane highway. Sheer rock walls loomed on either side, ink-black against the pre-dawn sky. Portable lamps illuminated the stretch where a vehicle had fused with the rock face. Stir until oil becomes bright red. A small group, mostly police, shuffled near the crumpled front end, shattered glass twinkling in the gravel under their feet. One man stood just outside the circle of light, his head obscured inside a cloud of smoke.

I still talk to Ian.

Emma hopped out and placed her camera on the roof of the car. When she tapped the device on her wrist, the camera whirred to life, spinning off the roof and hovering into position. Terrence West in Fatal Car Crash! Images raced before her eyes: dark highway, twisted vehicle mated with rock wall. Remove oil from heat and set aside.

His wife doesn’t care.

Video popped up on the usual sites. A young boy swiped a piece of cake from his fictional sister. Child Star Dead! A teen marched down the road in a ridiculous green jumpsuit. Glass glittered on the shoulder.

I don’t know—although I did sleep at Javier’s last night. Heat one-quarter cup of oil over medium burner.

Emma tapped her wrist again and the camera light blazed. Add pepper corns and cook for two to three minutes, stirring occasionally. On a small section of screen, she lined up the shot. Smashed vehicle over right shoulder.

I know. He has the most amazing teeth.

She raised her head to the camera. “Emma Clarke, Mojave 3. I am at the site of a tragic car crash outside of Kingman that has claimed the life of Terrence West. Mr. West was best known for his role as Billy Carter on the popular program He Did What. Officials behind me are working on details, but there is no information yet on what may have caused this accident.” Increase to medium-high heat. “We’ll keep you updated on this very sad developing story.”

Mother Gets the News! Emma leaned against her car, watching the video stream. She didn’t recognize the reporter who jogged up the driveway toward the house. As other reporters began arriving, competing video feeds burst onto her display.

We went to this little Sichuan place. Add ginger and garlic.

The jogging reporter knocked on the door and a phalanx of cameras blinded the woman that opened it. Middle-aged, bit of gray at the sides, confused. When garlic becomes fragrant, turn to high heat. A brief exchange followed. The lead reporter held out a tablet and, a moment later, Mrs. West crumpled to the floor. The reporter peppered her the whole way down but, through painful sobs, she managed to close the door.

“What are the car companies doing to make us safe?” an old woman yelled into the camera. 35 Percent of Americans Afraid to Drive! Her hand smoothed the fur between the ears of an orange cat.

You stay away from him, Alan.

A spokesman stood in front of an idealized cityscape. “Driving is safer than it ever has been. In fact, it’s safer than not driving.” Add ground pork and cook through. “Ten years ago, thirty thousand Americans were killed in car crashes each year. This year it will be less than one hundred.” Add spicy bean sauce to the mixture and stir well. “And, with our integrated medical alert technology, we save ten times that number from stroke, heart attack, and other medical emergencies. Driving saves lives. My own uncle…”

That’s not the point, Alan. Why do you care if Justin talks to his ex?

“Terry had one hell of a drug problem,” a man in a bathrobe said. He ran his hands through a tangle of wavy, brown hair. “I don’t know, he must have OD’d a dozen times.” A clip of Terrence West leaving a Los Angeles emergency room earlier that year. Press conference, Kingman Regional Hospital. Emma hopped into her car and sped toward the hospital.

Add two-thirds cup of chicken broth and stir for one minute. A young friend wiped a hand across his mouth. “It got so bad, we started dosing in the car. No need to call anyone with all those sensors and…whatever. If we got bent, straight to the ER.”

A doctor in light blue scrubs stood under the covered entrance to the hospital. “Terrence West died this morning as a result of injuries sustained in an automobile accident,” she said. “Emergency medical services received an automated alert early this morning and found Mr. West in a coma at the accident site. He was quickly transported to the hospital, but did not regain consciousness. He was pronounced dead at 4:03 AM.”

I’m sure I have pictures of Ian somewhere on my computer.

“Were there any drugs in his system?” a man shouted.

“I can’t answer at this time,” she said. Place one-quarter cup of water in a small bowl.

“Well…did he display any symptoms that might indicate the use of illegal drugs?”

I would be upset if you went through my stuff like that.

“Mr. West was brought to the hospital unconscious, with severe head injuries.”

“Unconscious…isn’t that a symptom of excessive drug use?”

The doctor paused, looking up at the awning.

Drugs and Death? “We are witnessing the beginning of a shocking new trend,” a man said in front of a neat bookcase. “Expect to see more ‘suicide-by-car’ in the future.” Add cornstarch to water and dissolve.

“This is smiješan,” a man in a blue windbreaker laughed. The image of his face was sandwiched between the man with the books and an animation of a car repeatedly driving over the side of a cliff. Break up clumps with fork. He disappeared into the bottom left of the frame, emerging with a thin line of smoke trailing from his nostrils. Emma drove back to the accident site for daylight video.

Are you telling me you don’t have any pictures of any of your boyfriends?

Foul Play? A woman’s face slid across, then filled the screen. “The Chinese could be hacking into your car right now!”

“He owed a lot of money around town…” Who Killed Terrence West? “…mostly to drug dealers.”

Emma emerged from her car as the sun rose over a gap in the cliffs. West’s vehicle had already been pulled free from the rock, the front end compacted to half its normal size. “Emma Clarke,” she said to the man in the windbreaker. “You with National Traffic Safety Bureau?”

Smoke swirled above his head. “Przemyslaw,” he grasped her hand. “Transportation Board.” He crushed a cigarette underfoot.

Wait—he’s naked in the pictures? Add cornstarch mixture to sauce and stir until it thickens.

“Naked?” Przemyslaw raised his chin and looked around.

Emma shook her head, pointing at her visor. “Did you say chef transportation…?” Add chili oil and stir.

“Chef? No, no…Przemyslaw,” he said slowly.

That would be a deal-breaker. Add tofu and gently toss in sauce.

Przemyslaw looked down at his hands. “Who is breaking?”

Toss. “Right, National Transportation Safety Bureau.”

“No, is Safety Board. Look, please to take this off?” he reached out to touch the visor but she stepped back quickly.

Ha, I knew it! It’s not about the pictures. Cook for three to five minutes.

“OK, you are taking…” he said, hands outstretched.

How big is it? Add scallions and stir. Serve wi— Her left hand grabbed at his wrist while he pried the visor up and over her head.

“…this off.”

“Ow!” she yelled. Her balance fluttered, right arm flailed. The tablet in her hand came down on his head with a sharp crack.

“Car crash here not like my country,” Przemyslaw says. “There, is human error. Every time. Old man goes around corner, kobasica rolls off seat. He bends to pick up and…” he claps his hands together.

“Kobasica?” Emma asks.

“Sausage. Is like…cylinder for meat.” He mimes the shape with his hands as she waves him off, pain spreading from the front of her head into her neck. When she squeezes her eyes shut, tiny flashes of light wink at the corners.

“Chicken flies into road.” He claps again. “Here, all sensors and computers. Many things have to happen for crash.”

“You don’t think he was killed?” Emma asks, scowling as Lech spoons the soggy egg concoction into his mouth.

“You should try,” he says, pushing the plate toward her. “Helps with…” he circles his hand around his head.

“No thanks.”

“Is possible…” He scoops up more of the egg. “But, so many safeties. You cannot simply loosen screw and wheel comes off. Is like making airplane crash. Not so easy, yes? Most accidents are from nature. Tree falls. Deer runs across road. If this is too fast for sensor…crash.”

“Everyone thinks it wasn’t an accident,” she looks down at her visor.

“Who everyone?”

“Everyone, everyone. The last poll…”

“Which Pole? Maybe I am knowing him.”

“Poll. Survey. Ask lots of people what they think.”

Przemyslaw sniffs. “What kind of thinking this?” he says. “If I tell you one person does not have enough information to decide even white shirt blue shirt, imagine you are asking one hundred people. This is like…crack open all black orbs and pour cubes in pond.”

“Well, I didn’t see any sausages or chickens in the road.”

“This is correct. Is like…we stand on shore of big lake. Wave comes, but you do not see boat. Reason for wave is not…apparel. Yes?”

“For someone wearing a plaid shirt, you have an interesting fixation on…”

He shakes his head and stands up from the table. “Here, I show you.”

Przemyslaw walks out of the diner and crunches through the parking lot. A cigarette appears instantly in his hand, but he decides to wait until they are finished. It rests comfortably between his fingers as he walks into the middle of the highway. Emma follows at a short distance, her hand raised in salute against the glare. The intensity of the sun continues to amaze her in this place. She stops in the shoulder. Doris and a man in a dusty baseball cap peer out from the diner window.

“What are you doing?” Emma asks as a speck of car appears on the horizon.

“Will go around,” he says. He rubs his thin hair vigorously so that it stands on end, wincing as he brushes against the cut. “You never walk in traffic?” he asks above the approaching whine.

“Not at two hundred miles an hour,” she says.

“When first cars drive themselves, is all lasers, sensors. So, accidents stop. Everyone happy.” He jabs his cigarette into the air in front of him. “Then, everyone decides, store all data one place. Use big computer to talk to cars. Traffic is better, save energy. Very good, yes?” In an instant, the car is on him. Tires screech and the vehicle swerves at the last instant. “See…” he raises his palms as the car screams away.

“Can we discuss this back inside? I’m not feeling great.” She cradles her midsection with her free arm.

Another car appears. “Here, I show,” he says.

Emma paces as the second car approaches, but it is already switching lanes. As it flies past, dust swirls wildly around the Slav’s head. He stoops to pick up the cigarette that has blown out of his hand.

“Already it knows I am in road,” he says. “Big computer is telling car there is crazy man in middle of road. Go around.”

“So…ask Big Computer why the car crashed.”

“Ah!” A smile stretches across his face, tightening the skin around his eyes. “This computer is not regular.” He steps out of the road. “Too many decisions too fast, is more like brain. Can you tell me why you buy blue shirt? Is same problem.” Emma squints and begins to walk toward the diner. “There is always reason. Same for blue shirt as car.” He talks quickly as she hurries through the parking lot. “If you are in second car and there is no man in road, you ask, ‘Why did car move?’ Seems like…magic. Second car only sees ripple, we need to find pebble.” He sweeps his hand across the cliffs behind them, “Maybe is something in mountains? Sensors malfunction with certain rock? Certain time? Maybe combination. We only see outcome at edge of pond.”

Memories and thoughts swarm unfocused inside her head and she wants to vomit. “I have to go,” she says, barging into the diner.

“Black orb always has reason,” Przemyslaw shouts, holding the door open. The man in the baseball cap rushes to the other side of the dining room. “Perturbation in water, shape of decision cube. Fluid mechanics.”

Visor in hand, she trots to her car and slides into the seat. “I’m sorry,” she says. “I hope you find your rock.” She closes the door and fits the device snugly over her face. Cut cucumbers in half lengthwise. The nausea begins to release. Strike! A Phoenix man with a handful of grenades has barricaded himself inside a bowling alley. Cut cucumber halves into half-inch sticks. Brightly colored information untwists her brain, her thoughts soften.

Hi, Cynthia.

The car pulls away and she watches Przemyslaw get smaller through the window. The diner disappears as she directs the car to Phoenix, the next story. Place cucumber sticks in bowl, sprinkle with salt, and allow to drain. She buttons her blue shirt against the cold air.

Star’s Mother Talks. “I always thought he would die in a car, just not like this,” Mrs. West says, wiping at her nose and eyes with a tissue. Place a small pan over medium heat. “They were always shooting up, just driving around.” Add oil, garlic, and pepper flakes. “It was like their own personal ambulance service. I don’t know who is in charge of all that stuff, but if it was me,” she looks into the camera with a wry smile. When garlic browns, remove pan from heat. “I’d have been…”

“Stop,” Emma says. In a medium bowl, combine rice vinegar, sesame oil, and…

She removes her visor as the car eases to one side of the street. Ahead, shiny vehicles navigate the intersection like schools of silver fish darting silently between one another. For a moment, she nearly sees it all. The great, black orb controlling traffic from a cool, windowless room. A river of decisions flowing, even now, around her parked car. The temperature in the room rises as the drug addict climbs into his vehicle one more time. On a dark highway, there is an impulse. Glass rains onto the dirt and the room begins to cool. The great orb is still.

But, the area behind her eyes is tender and new swelling brings on a potent headache. She sits very quietly, but is unable to grasp the stone. She lifts the visor back over her face and her dreams of orbs and accidents fade along with the pain. Add oil mixture to bowl, stir, and allow to cool.

“I don’t know why,” Emma mumbles, looking down at her blue shirt.

No, not you. Hey—you remember that guy that I told you about last week?

Toss cucumber sticks in dressing.

Yeah, great smile.

There is a crazy man in Phoenix. Her car merges into traffic and speeds through the intersection. Serve at room temperature.

Chris Merchant ([email protected]) is a physicist and writer. He lives in Alexandria with his wife, three mobiles, and a Honda Fit.

Communicating the Value and Values of Science

Science is communicated to the public, press, and policymakers in various ways by distinguished entities, which I have characterized elsewhere as custodians of knowledge. These include governmental institutions such as the National Aeronautics and Space Administration (NASA); intergovernmental organizations such as the Intergovernmental Panel on Climate Change; associations of scholars such as the National Academy of Sciences (NAS); and the editorial voices of major scientific journals. I regularly come in contact with these groups and the content they produce in my role as the person who signs off on the FactCheck.org postings generated by the SciCheck project, funded by the Stanton Foundation and designed to hold those engaged in public debate accountable for their uses of evidence.

In discharging that function, I’m sometimes struck by worries about what can happen when science communicators violate science’s norms. When they do, they invite the audience to question whether the underlying science has done the same. And in the larger picture, when science communication forsakes the values of science, it increases the likelihood that bad science will affect the public and public policy, feed suspicion about scientists’ self-interests or conspiratorial motives, and confuse the public and muddy the policy debate. By contrast, good science communicated well increases the likelihood of good public policy.

In laying out my concerns, I will focus on four “norms”—sometimes called “values”—that I believe characterize science as a way of knowing: its championing of critique and self-correction; its acknowledgement of the limits of its data and methods; its faithful accounts of evidence; and its exacting definition of key terms. In the process, I’ll also cite some positive examples designed to show that, although sometimes difficult, it is possible to communicate science while also respecting these norms.

Championing a climate characterized by critique and self-correction

Before turning to a case in which science corrected expeditiously and another in which it did not, let me note that some forms of communication signal reporters about the state of knowledge about a specific topic. A consensus statement telegraphs widespread scientific agreement; retraction communicates that the published finding has been decertified. One of the reasons the press cast the false association between vaccination against measles, mumps, and rubella (commonly called MMR vaccinations) and autism as an open question for as long as it did is that despite repeated failures to replicate the bogus findings pushed by the British researcher Andrew Wakefield, and despite a press investigation that exposed his misconduct, it took the journal The Lancet 12 years to disavow that original 1998 article. Had Wakefield’s shoddy pseudoscience been retracted in a timely fashion, reporters would have been less likely to treat it as a certified, although contested, finding. We can’t know whether that change in reporting would have affected the likelihood or extent of the recent measles outbreak. But it wouldn’t have hurt.

By contrast, in the case of the error-ridden study by the Japanese researcher Haruko Obokata and her colleagues, published in January 2014 in the journal Nature, the scientific community’s ethic of self-correction functioned well. The study purportedly showed that stressing adult cells could transform them into pluripotent ones. But shortly after it appeared, scientists writing on the post-publication peer review site PubPeer began to flag problems. Within three months, Obokata’s home institute found her guilty of research misconduct. Within seven months, Nature retracted the paper and, importantly, announced an internal review of its practices.

Acknowledging the limitations in data and methods

When scientists communicate in scholarly outlets, they disclose the limitations in their data and methods. But when communicating to policymakers and the public, the temptation exists to simplify, lest the audience interpret uncertainty as a lack of underlying knowledge or become confused by a complex explanation. However, when science communicators downplay the limits of existing methods and data, the public has more difficulty understanding that knowledge can evolve, as did our understanding of hormone replacement therapy and the ways in which eating foods high in cholesterol affects heart health.

Another price of conveying a false sense of certainty was on display when “Snowmageddon” was a no-show in New York City in January 2015, prompting conservative talk radio host Rush Limbaugh to observe: “Now, the weather guy is apologizing and blaming his models. The same people that tell us their models [that are] 50 to a hundred years out on climate change can be trusted.”

The back story? Led by forecasts of up to three feet of snow, public transportation and roads were shut down in New York City. Flights were cancelled and schools closed. But New York City received under a foot of snow. As criticism of the forecast mounted, the head of the National Weather Service noted that the weather service needed to better communicate uncertainty. I agree.

Major weather events are an opportunity to inform the public about what science knows—and also about how it knows and the limits of that knowledge. In this instance, modeling didn’t fail. Rather, humans failed to communicate that models deal in likelihood, not certainty, and that different models were forecasting different boundaries for the storm. The models correctly forecast that the Northeast was going to be hit. At issue were the boundaries: For example, would New York City be within the area of highest impact or outside the edge? The latter turned out to be the case. As the New York Times reported, “Parts of eastern Nassau County, on Long Island, for example, got as much as 18 inches, while parts of New York City received only four.”

The institutions that act as custodians of knowledge would be well served were they to ensure that error values and other relevant uncertainties are specified in all applicable communication of data. Here the social sciences provide a success story. Because scholars affiliated with the American Association of Public Opinion Research and the National Council on Public Polls promoted polling disclosure standards, most major media polls now report their margins of error. Had comparable standards been adopted in the January 2015 press release issued by NASA and the National Oceanic and Atmospheric Administration (NOAA) on the 2014 global temperature, headlines around the country probably would not have proclaimed that 2014 was the warmest year on record. And perhaps had the margin of error been featured, as it would have been in a report of an election poll, the NOAA report issued in December 2014 on that year’s land and ocean surface temperature wouldn’t have asserted that 2014 was “easily breaking the previous records,” because, as a slide (labeled slide 5) in the January NASA/NOAA press packet confirmed, that was “probably” or “likely” but not incontrovertibly the case. Accepting the NASA/NOAA headline (“2014 was Warmest Year on Record”) at face value and not catching the importance of slide 5 in the press packet, news accounts asserting 2014 was the warmest on record were vulnerable to critics who accurately pointed out the insignificant difference between it and two recent years.

Press skepticism about voices of authority is a healthy thing. But so, too, is press trust in the messaging of the custodians of knowledge. However, I suspect that for some reporters that trust erodes a bit when they need to issue a “clarification,” as the Associated Press did when noting that its January 16, 2015 account had “reported that 2014 was the hottest year on record, according to the National Oceanic and Atmospheric Administration and NASA, but did not include the caveat that other recent years had average temperatures that were almost as high—and they all fall within a margin of error that lessens the certainty that any one of the years was the hottest.”

Contrast the NOAA statement that 2014 was “easily” the warmest year or the NOAA/NASA headline labeling it as the “warmest” with a Yale Climate Connections posting by Zeke Hausfather, a senior researcher at Berkeley Earth. After noting that six groups gather temperature data (NASA; NOAA; Japan’s Meteorological Agency; Berkeley Earth; the Hadley Centre in the United Kingdom; and a team comprising British researcher Kevin Cowtan and Canadian researcher Robert Way), Hausfather concluded that “In all cases, 2014 is effectively tied with 2010 and 2005 within the uncertainty of measurements.”

He went on to note that “in important ways, what matters most is not which specific calendar year—2005, 2010, or 2014—is the warmest, but rather the continued long-term warming trend, particularly given the absence of an El Niño in 2014.” He explained why the six groups’ temperature measurements differ at times, but added, “All have quite similar results over the past 150 years, with differences primarily based on the ocean temperature series used and the method of spatially interpolating data from individual stations to areas with no station coverage.” In effect, this message confirmed the existence and importance of convergent data. He also explained why 2014 is not a safe bet to be the warmest: “In both cases, 2014 is more likely than any other year to be the warmest year on record; but at the same time 2014—and this is counterintuitive—is less than a safe bet to be the warmest year on record.” The reason: “The probability that 2014 is the warmest year is less than 50 percent.”
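
Hausfather’s “less than 50 percent” point is easy to misread, so a toy simulation may help. The sketch below treats each year’s reported global temperature anomaly as a measurement with Gaussian uncertainty and counts how often 2014 comes out on top. The anomaly values and the 0.05°C uncertainty are illustrative stand-ins, not any agency’s published figures.

```python
# Toy Monte Carlo: being the single most likely warmest year is not the same as
# being more than 50 percent likely to be the warmest year.
import numpy as np

rng = np.random.default_rng(0)
anomalies = {1998: 0.63, 2005: 0.65, 2010: 0.66, 2013: 0.65, 2014: 0.69}  # assumed, deg C
sigma = 0.05          # assumed measurement uncertainty, deg C
n_draws = 200_000

years = list(anomalies)
draws = rng.normal([anomalies[y] for y in years], sigma, size=(n_draws, len(years)))
warmest = np.array(years)[draws.argmax(axis=1)]

for year in years:
    print(f"P({year} is warmest) ~ {(warmest == year).mean():.0%}")
```

With these stand-in numbers, 2014 wins more often than any other single year but in fewer than half of the draws, which is exactly the counterintuitive pattern Hausfather describes.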

Although some observers have worried that communicating scientific uncertainty risks undermining public trust or interest in science, doing so at least in some cases may actually increase both. Indeed, the researcher Jakob Jensen of Purdue University found in an experiment reported in 2008 that “both scientists and journalists were viewed as more trustworthy when news coverage of cancer research was hedged (e.g., study limitations were reported)….”

Increasing public understanding of comparative certainty could address the disjuncture between what scientists mean when they cite, for example, a 95 percent confidence level and some people’s belief that if scientific knowledge is reliable, that percentage would be 100. In a survey by the Associated Press in 2013, the answers of major scientists, taken as a whole, illustrate the kind of understanding available through comparative certainty claims: Science is as certain about anthropogenic climate change as it is that smoking is harmful to health, even as it is more certain that if you drop a stone, it will fall to earth. In short, science is certain enough about human-caused climate change to justify changing behavior.

Social scientists have long known that the analogies through which we see such phenomena as electricity can increase or impede people’s understanding of the science. Electricity is understood differently through the analogy of a teeming crowd rather than flowing water. Try this test of whether it is possible to communicate through analogy what it means to say that adding greenhouse gases to the atmosphere increases the likelihood of extreme weather. The analogy comes from Australian climate researcher Steve Sherwood. Think about rolling dice. Two dice. Six sides each. One to six dots. “Adding greenhouse gases to the atmosphere loads the dice, increasing odds of … extreme weather events,” Sherwood explains. In effect, it “paints an extra spot on each face of one of the dice, so that it goes from 2 to 7 instead of 1 to 6. This increases the odds of rolling 11 or 12, but also makes it possible to roll 13.”
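
Sherwood’s loaded-dice analogy can be checked by brute force, which may help readers who want to see the numbers. The short sketch below enumerates all 36 outcomes for an ordinary pair of dice and for a pair in which one die reads 2 through 7; it is illustrative arithmetic, not anything from the original source.

```python
# Enumerate two-dice outcomes to check the "loaded dice" analogy for extreme weather.
from itertools import product

def chance_of_at_least(total, die_a, die_b):
    """Probability that one roll of each die sums to at least `total`."""
    outcomes = list(product(die_a, die_b))
    return sum(a + b >= total for a, b in outcomes) / len(outcomes)

normal = range(1, 7)   # ordinary die: faces 1 through 6
loaded = range(2, 8)   # "an extra spot on each face": faces 2 through 7

print(f"P(sum >= 11), two normal dice: {chance_of_at_least(11, normal, normal):.1%}")
print(f"P(sum >= 11), one loaded die:  {chance_of_at_least(11, normal, loaded):.1%}")
print(f"P(sum == 13), one loaded die:  "
      f"{sum(a + b == 13 for a, b in product(normal, loaded)) / 36:.1%}")
```

The loaded pair doubles the chance of reaching 11 or more (from about 8 percent to about 17 percent) and creates a 1-in-36 chance of a 13 that was impossible before, mirroring the structure of the analogy.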

Analogies can also anchor the notion that the presence of uncertainty is not an indication that we are lacking strong theory. On the relationship between climate change and extreme weather, Danish astrophysicist Peter Thejll explains that “It is like watching a pot boil…. We understand why it boils but cannot predict where the next bubble will be.” If science communication is to adhere to science’s norms, it needs to find optimal ways to communicate what it knows, how it knows it, the limitations of the involved methods, and also the uncertainties surrounding its findings.

Faithfully accounting for evidence

Explaining data that run counter to the dominant scientific narrative is difficult. A case in point occurred in September 2013, when NASA images used in a FoxNews.com posting appeared to say that in 2013 Arctic sea ice had recovered. If compared only to the 2012 historic low point that Fox featured, it had—but not if you track data to 1979 when satellite monitoring began. Instead of resolving the issue, three major 2014 scientific reports compounded the problem: the American Association for the Advancement of Science’s “What We Know: The Reality, Risks and Response to Climate Change”; the U.S. Global Change Research Program’s “National Climate Assessment”; and “Climate Change: Evidence & Causes,” a joint publication from NAS and The Royal Society. Rather than detailing possible reasons for the 2013 increase, each disregarded or downplayed it, while instead highlighting the 2012 data. In so doing, they inadvertently strengthened the hand of critics eager to assume that scientists were baffled by the 2012-13 change. One of the reports, from the NAS/Royal Society, acknowledged the 2013 data, but did so by appending a note to a chart showing the 2012 sea ice extent that stated “in 2013 Arctic summer sea ice extent rebounded somewhat….” Masked by the word “somewhat” was the fact that in mid-September 2012, the extent was 1.32 million square miles, and a year later 1.97 million square miles. To appropriate a phrase used by one of my grandchildren, that’s a lot of “somewhat.”

Contrast that statement with this one from Britain’s Met Office, made on its website by Ann Keen, one of that group’s sea ice scientists: “In 2012 we saw a record low which was likely to have been influenced by a storm which swept through the region in summer, but this year’s (2013) weather conditions appear to have been less conducive to ice loss.

“We know sea ice extent is going to vary from year to year due to weather conditions and that’s not at all inconsistent with the overall decline in extent. You wouldn’t expect to see records broken year after year, so this ‘recovery’ is not unexpected.

“In fact, model simulations of sea ice suggest that as the ice gets thinner you actually get more year to year variability in extent because larger areas of the ice are more vulnerable to melting away completely over the summer.”

Which is least vulnerable—the Met Office or the NAS/Royal Society—to conservative talk radio host Rush Limbaugh’s assertion in March 2014 that “In fact, the Arctic has more ice now than it’s had in a long, long time. It’s not melting. Everything they’re saying is a lie”? The answer, of course, is the statement from the Met Office.

Press skepticism about voices of authority is a healthy thing.

Those who oppose the scientific consensus on climate change are often accused of cherry picking data. When custodians of scientific knowledge seem to do the same, they not only lose the moral high ground but invite an attack on their lack of fidelity to a basic scientific norm. Seeming to downplay or ignore evidence also invites both questions about motive and accusations, including this one from Rush Limbaugh, alleging that climate scientists are trying “to scare people into supporting Big Government.”

Impugning motives is a classic means of undercutting credibility. But the attack is thwarted when science communication makes plain that accounting for seemingly anomalous data, a basic scientific norm, is a driver of new knowledge that leads to refinement of theory and to a better understanding of underlying patterns and regularities. In this case, accounting for the 2013 Arctic sea ice extent is improving our understanding of the factors explaining wide variability in the context of a multidecade downward trend.

Precise, clarifying specification of key terms

Because meaning exists at the intersection of a text, a context, and a receiving audience, on matters of public importance, science communication should not only precisely and clearly define its terms but should do so in ways that make sense to a reasonable nonscientist. In other words, the language in which science is communicated needs to help the public understand the science. In the first two cases I will outline, the words and phrases in question—“eliminated,” “eradicated,” and “genetically modified organisms”—obscure the science. By contrast, the labels “flu associated (or related) deaths” and “global climate change” focus the public in ways that advance understanding.

My first example is an instance of what I call incomprehensible precision, a distinction between eliminate and eradicate. When the Centers for Disease Control and Prevention (CDC) said in 2000 that measles had been “eliminated” from the United States, the agency didn’t mean what you and I mean when we use that word in casual conversation. Instead, it meant that there hadn’t been “continuous disease transmission for 12 months or more in a specific geographic area.” In effect, by “eliminated,” the CDC meant “no longer endemic (constantly present) in the United States.” Presumably sensing that the public might be confused by all of this, a CBS News web headline proclaimed “Measles still poses threat to U.S. despite being ‘eliminated’.”

To complicate matters further, the CDC distinguishes between eradication and elimination, words that thesauruses cast as synonyms, stating that “CDC defines eradication as the permanent reduction to zero of the worldwide incidence of infection caused by a specific agent…” Proclaiming that measles had been eliminated is worrisome because the statement fails to signal the ongoing need for vaccination—a need that exists if, despite its “elimination” in one geographic locale, measles persists in some places that people travel to and from.

Where this measles example shows problematic precision, the instance to which I will turn next—genetically modified organisms (GMOs)—includes one word that is, in many instances, inaccurate and a pair of others that fail to distinguish genetically engineered crops from those that are the by-product of other forms of breeding.

People who wonder how to account for the Pew Research Center’s recent finding that the public is much more wary of GMOs than are members of the American Association for the Advancement of Science might usefully reflect on the 2010 National Science Foundation Science and Engineering Indicators discovery that some members of the public think that ordinary tomatoes do not contain genes but genetically modified ones do. The person who doesn’t think tomatoes have genes presumably doesn’t realize that although comparatively little in the produce aisle has been bioengineered, virtually everything there has been genetically modified. Science has failed to inform the public that it is not modification of genes that distinguishes supposed-GMOs. All types of breeding—including hybridization, cross-breeding, mutagenesis, and recombinant DNA technology—involve modification and exchange of genes. The label is misleading for a second reason as well because, as the Food and Drug Administration (FDA) notes, “most foods do not contain organisms (seeds and foods like yogurt that contain microorganisms are exceptions).”

Finally, not only has virtually all of the produce in the grocery store at some point in time been genetically modified, but so, too, have the ancestors of the grocer and the customer perusing the pears. As a recent article in Genome Biology shows, some (albeit a relatively small number) of the approximately 20,000 genes in humans today were acquired through horizontal gene transfer—in other words, from other species. Among them are genes that produce antioxidants and enhance our innate immune responses. We are all GMOs, as it turns out. Yet some among our society of GMOs remain fearful of anything identified as a genetically modified organism and think that all GMOs should be labeled.

To begin the process of drawing clarity out of confusion, it will be necessary to craft a label that captures what is distinctive about the process and product. “Recombinant deoxyribonucleic acid (rDNA) technology (rDNA-t)” may work in technical discussions among scientists, but probably not with the public. At the risk of using an acronym that confuses crops that are genetically engineered with a golf tournament, my candidate for purpose of starting the discussion is Precise Genetic Adaptation (PGA). “Precise” because, unlike mutagenesis, the modification at issue here does not affect the broader genome. “Genetic” because genes are the focus of the change. “Adaptation” because that is the intended outcome. One might then characterize the newly FDA approved “Innate” potato, a trademarked creation of the Simplot company, as a Precisely Genetically Adapted potato.

Effective scientific labels invite questions whose answers will increase the audience’s understanding of the relevant science. Use of the terms “adaptation” or “adapted” invites the questions: How was it adapted? And adapted to do what? The answers should explain how genes can be added, edited, or suppressed through bioengineering and point out the reasons for doing so, which include increasing resistance (to drought, temperature, salinity, pests, pesticides, or pathogens); enhancing nutritional value (Golden Rice); suppressing a problematic characteristic (the Innate potato, which is claimed to reduce bruising and the production of carcinogens when cooked); or reducing the reproductive capacities of a disease carrier (the transgenic mosquito).

With these distinctions in place, those concerned about the recent report by an agency within the World Health Organization suggesting that the commonly used weed-killing chemical glyphosate may be carcinogenic could engage in debate about the effects of widespread use of glyphosate-resistant corn and soy beans and not have the audience assume that that specific concern applies to crops genetically engineered to be pest resistant or to suppress a possible carcinogenic property.

By contrast to the muddled debate over GMOs, science has moved toward a clearer conceptual phrase by shifting from “global warming” to “global climate change.” The alternative works because saying “global climate change” concentrates attention on phenomena—such as rising sea levels, changed precipitation patterns, and more extreme weather—that probably will have a greater effect on most of us than the increased temperature itself.

Where perceptions of whether global warming is occurring are affected by the temperature on the day of the survey, the tie between the experience of extreme weather and climate change doesn’t suffer the same problem. In other words, a focus on “climate change” can help the public understand why, even as the globe overall warms, some areas may experience cooling. In 2014, for example, although much of the western United States was warmer than average, temperatures in the East were cooler.

Not only does it better signal relevant phenomena, but the “global climate change” frame provides a ready response to the stock line of attack against “global warming,” expressed by Rush Limbaugh and Senator James Inhofe (R-Oklahoma), chairman of the Standing Committee on Environment and Public Works. After the “Snowmageddon” melted down, Limbaugh noted: “You’ve got people out there saying that this major ice and snowstorm has been brought about by global warming, and they apparently have no irony whatsoever that they’re blaming global warming for massive winter storms.” Senator Inhofe visualized the same line of argument with a picture of an igloo and an actual snowball. Where Limbaugh’s major ice and snow storm and Inhofe’s igloo seem to some to undercut the existence of “warming,” each could comfortably raise the questions: “Is such extreme weather now more likely than in the past? If so, is climate change a probable cause?”

My next example draws together all four of the norms on which I’ve focused. First, some background. From 2003 to 2010, the CDC used a 2003 estimate of 36,000 flu deaths as the yearly toll taken by the flu. In 2010, a reporter for National Public Radio recalled, “When we’ve asked the Centers for Disease Control and Prevention for updated figures, they told us 36,000 was the best they had.”

Because it was derived from flu seasons in the 1990s when the particularly lethal H3N2 strain was circulating, that number overestimated the deaths in the first decade of the 21st century. “Why did the CDC exaggerate the number of flu-associated deaths and seem to attribute the deaths to flu itself rather than attendant illnesses?” asked critics. In a polarized political environment, antagonists answer the question “why” with suggestions that scientists are self-interested. In the case of climate science, the ascribed motive involves funding and alleges that scientists who buck the dominant narrative will lose federal grant support and find their work closed out of major scholarly journals. In the case of the flu death numbers, some critics alleged that the CDC was in league with the pharmaceutical companies that profit from the vaccines.

The CDC’s vulnerability to these attacks was reduced when in 2010 it changed its message to read: “Flu seasons are unpredictable and can be severe. Over a period of 30 years, between 1976 and 2006, estimates of flu-associated deaths in the United States range from a low of about 3,000 to a high of about 49,000 people.” The definition is precise. Not “flu-caused” but “flu-associated,” terms the CDC defines by explaining that flu was a “likely” contributor “but not necessarily the primary cause of death.” The word “estimates” is used and a range specified. The limits of the CDC’s knowledge are articulated. The fact that the flu season is unpredictable is reported. The CDC site also accounts for the available data by providing links to the science justifying the answers.

This CDC message honors science’s disclosive, accountability, and definitional norms better than it did when it seemed to say that the flu exacted a quota of approximately 36,000 deaths year in and year out. In this change we see a dramatically altered relationship between the CDC and its intended audience.

As confirmed in a study published in 1982 by John T. Cacioppo, Richard E. Petty, and Joseph A. Sidera, messages are capable of priming a salient self-schema, an identity, in their audience. Specifically, people who are induced to think of themselves as religious evaluate messages differently than do those who self-schematize as legally oriented. In this current CDC message, we see a first persona—the speaker implied by the cues in the message—who not only trusts the audience but respects its intelligence. The voice here is modest but authoritative. It tells the audience not only what science knows but what it doesn’t know and why. The second persona in this message—the implied audience—is interested enough to navigate nuance and committed to understanding what and how the CDC knows. The trust engendered by these two personae should increase the likelihood that the audience will embrace the CDC statement that says: “The best way to prevent the flu is by getting vaccinated each year.”

No hole in ozone hole story

To this point I have suggested that when science communication fails to champion a climate characterized by critique and self-correction, as in the Wakefield case, it increases the public’s vulnerability to flawed science. When science communication fails to acknowledge the limitations in its data and methods, as it did when it failed to communicate the uncertainties in its predictions of “Snowmageddon,” it opens modeling in general and modeling of both weather and climate in particular to critique. When it fails to feature the level of certainty with which it is asserting that 2014 is the warmest year, it risks press trust and fuels the attack that alleges that climate science has an unacknowledged ideologically-driven agenda. When it fails to account for evidence, as it did in the case of the 2013 Arctic sea ice extent increase, it invites questions about motive that can feed suspicions about self-interest or conspiratorial intent. And when science communication fails to carefully select and define key terms, as it did by conventionalizing discussion of “genetically modified organisms” and saying the measles had been “eliminated,” it confuses the public and muddies the policy debate.

Having noted exemplary and problematic instances of science communication, let me end with a story about the value of science both done and communicated well.

Science came to understand the causes of ozone depletion in the atmosphere through a process of critique and self-correction. After discovering that the ozone layer was thinning in the Antarctic, scientists looked for possible causes: Volcanic eruptions. The workings of solar electrons and protons. Supersonic transport. Natural variability. None accounted for the loss. Ultimately, the hypothesis that ozone-destroying chlorine atoms were a key culprit was advanced and chlorofluorocarbons (CFCs) identified as one likely source.

The public and policymakers came to understand the problem through a well-specified analogy (the ozone layer is the Earth’s sunscreen), a metaphor (the “ozone hole”), and an iconic image that capsulized the phenomenon. The notion that the ozone layer is the Earth’s sunscreen is particularly apt because it points to the fact that ozone absorbs ultraviolet rays from the sun. At the same time, the analogy invokes the widely shared experience of suffering sunburn in the absence of sufficient sunscreen. Where the ozone hole metaphor mistakenly suggests a stratospheric space devoid of ozone, this personalized sunscreen analogy lends itself to the accurate conclusion that the ozone layer has thinned, a phenomenon analogous to wearing too fine a coat of sunscreen. Finally, the analogy shows the relationship between humans and the ozone layer. The Earth’s sunscreen is our protector.

In addition to sharing an iconic image and a compelling analogy, scientists offered a carefully specified estimate that tied nicely to the sunscreen analogy: “[F]or each 1 percent decline in ozone levels, humans will suffer as much as a 2 to 3 percent increase in the incidence of certain skin cancers.”

The story of protecting the ozone layer reminds us that within recent memory, well-communicated science resulted in Republicans and Democrats in the United States, as well as nations around the globe, working together for the collective good of the planet and its inhabitants. The Senate unanimously ratified the Montreal Protocol phasing out ozone-depleting substances. The Protocol was adopted by 196 states and the European Union. And when President Ronald Reagan signed it in 1988, he explicitly praised the role science played in this historic achievement.

The narrative is instructive for other reasons as well, among them its caution about unintended consequences. It was, after all, scientists who synthesized CFCs as a seemingly safe replacement for ammonia as a refrigerant. But it is also a story about the accuracy of scientific forecast. With the Montreal Protocol in effect, NASA reported in 2014 that Antarctic “[s]atellite and ground-based measurements show that chlorine levels are declining….” And in 2015, the Environmental Protection Agency estimated that the fully implemented Montreal Protocol “is expected to avoid… approximately 1.6 million skin cancer deaths… in the United States for cohort groups in birth years 1890-2100.”

When science embodies its integrity-protective norms, scientists increase the likelihood that the knowledge is durable. The results have changed our understanding of ourselves and our world. Whereas some ancients envisioned the sky as a roof supported by giant pillars, we see it now as billions of expanding galaxies, discoveries built on the ingenuity and engineering required to imagine and create the telescope. When science communicators hew to science’s norms, they not only signal that the underlying science is sound but also increase the likelihood that the public will embrace science’s findings. Without public trust in science, we would not have tap water that we trust or widespread smallpox, measles, and pneumonia vaccination. The confidence that institutional leaders place in science has improved jurisprudence in the form of cautions to juries about the unreliability of eyewitness testimony. Millions will experience greater economic security in their later years because, acting on the findings of behavioral economists, their employers initiated “opt out” retirement savings structures for their employees. Put simply, trust in science matters.

In sum, let me note that the million-plus people who otherwise would have died of skin cancer may not know that they were the ones saved by implementation of the Montreal Protocol. But if science communication does its job in telling the story of science as a way of knowing, they—and other members of the public—will be more likely to realize that implementing sound science has not only enhanced understanding of ourselves and our world but has also improved our lives. Science’s capacity to do so is magnified when scientists and science communicators honor science’s norms by specifying intended meanings, engaging seemingly uncongenial evidence, remaining acutely conscious of the limits of their methods and data, and championing a culture of critique and self-correction. The resulting sound science, communicated well, increases the likelihood of good public policy.

From the Hill – Fall 2015

White House budget guidance

In early July, the White House Office of Science and Technology Policy (OSTP) and the Office of Management and Budget (OMB) issued their annual joint memo identifying science and technology priorities for the FY 2017 budget. That memo provides guidance to federal agencies as they prepare their FY 2017 budget plans, which must be submitted to OMB for review in September before they’re sent to Congress by the president in February.

“From the Hill” is adapted from the e-newsletter Policy Alert, published by the Office of Government Relations of the American Association for the Advancement of Science (www.aaas.org) in Washington, DC.

The FY 2017 guidance largely reiterates research and development (R&D) budget priorities from years past. A partial list includes:

Climate change. The memo highlights the need for “actionable data, information, and related tools” to assist with climate resilience and adaptation. Since the Republicans took back the House of Representatives in 2010, few areas of science and technology funding have been as controversial as climate science, with the possible exception of clean energy.

Clean energy. As in past years, the memo casts a wide net related to low-carbon energy technology, calling for grid modernization and innovation in renewable energy, transportation, and efficiency in homes and industry. But again, Congress tends to have very different ideas about where to put these dollars.

Advanced manufacturing, including enabling technologies such as nanotechnology and cyber-physical systems. Attempts to establish a National Network of Manufacturing Innovation have been a centerpiece of recent administration budgets. The network was authorized by law in December 2014, but so far funding has not been forthcoming.

Life sciences and neuroscience. This is one of the few areas on which both parties seem to agree, as reflected in the willingness of appropriators to embrace the BRAIN Initiative.

The memo also cites antimicrobial resistance, which received special focus in last year’s budget submission; biosurveillance; and mental health access. In addition, the memo directs agencies to prioritize resources for commercialization and technology transfer, requests agencies’ evaluation strategies for their R&D programs, and cites the Maker Movement as an important potential collaborator.

Bipartisan energy bill passes Senate committee

The Senate Energy and Natural Resources Committee passed a broad, bipartisan bill aimed at modernizing the nation’s energy system—from infrastructure, to workforce, to R&D. The bill, a compromise worked out over several months by the committee’s chair, Sen. Lisa Murkowski (R-AK), and ranking member, Sen. Maria Cantwell (D-WA), was intentionally kept free of the most controversial issues, such as lifting the existing U.S. ban on exporting oil, though they are all but guaranteed to be raised when the bill reaches the Senate floor for debate. Tucked inside the broad energy policy legislation is the bipartisan “E-Competes Act” championed by Sen. Lamar Alexander (R-TN), which would authorize five years of 4 percent annual funding increases, beginning with FY 2015 levels, for the Department of Energy’s Office of Science and Advanced Research Projects Agency-Energy (ARPA-E). Though the bill is bipartisan, not everyone is supportive, with environmental groups notably expressing their displeasure with several of the bill’s provisions. No timetable has been set for the bill to reach the Senate floor.

House passes GMO labeling bill

The House has passed the Safe and Affordable Food Labeling Act (H.R. 1599) to regulate the labeling of genetically modified organisms (GMOs) in food. The legislation would prevent states and other localities from enacting mandatory labeling of food that contains GMOs, while at the same time setting up a voluntary non-GMO certification to be run by the U.S. Department of Agriculture. H.R. 1599 would block states that have already passed mandatory labeling laws from enforcing those regulations. The bill would also require that the Food and Drug Administration be consulted before a GMO is brought to market. This consultation is currently voluntary.

Supreme Court nixes EPA action

The Supreme Court ruled 5-4 that the Environmental Protection Agency (EPA) should have considered the costs of regulating emissions from power plants before deciding that such regulation is “appropriate and necessary.” The EPA sought to regulate power plant emissions of mercury and other hazardous pollutants by using authority granted to the agency by the Clean Air Act to control air pollution. Justice Scalia, who wrote the majority opinion for Michigan v. EPA, concluded that the EPA acted “unreasonably” in its interpretation of the Clean Air Act. Scalia was joined by Justices Thomas, Alito, Roberts, and Kennedy. The majority opinion holds that the EPA acted irresponsibly in declining to weigh the costs and benefits of its regulation: “[i]t is not rational, never mind ‘appropriate,’ to impose billions of dollars in economic costs in return for a few dollars in health or environmental benefits.”

Senators create competitiveness caucus

In a recent op-ed in Roll Call, Sens. Chris Coons (D-DE) and Jerry Moran (R-KS) call attention to policy areas where they say the United States is losing its competitive edge, and note that the nation is at what they term a “competitive inflection point.” In response, Coons and Moran are launching a new bipartisan Competitiveness Caucus in partnership with the Council on Competitiveness as “a forum to bring together Democrats and Republicans to address the most pressing issues facing our economy.” They plan to tackle issues ranging from transportation infrastructure, to tax policy, to federal support for R&D.

NASA plans human travel to Mars

The National Aeronautics and Space Administration (NASA) is seeking input at an October workshop from planetary scientists, space technologists, and human spaceflight experts on where to land humans on Mars. The event will be held October 27-30 at the Lunar and Planetary Institute in Houston, Texas.

Census Bureau launches annual entrepreneur survey

The Census Bureau is planning to supplement the Survey of Business Owners and Self-Employed Persons, which is conducted every five years, with an annual survey of entrepreneurs. Once approved by OMB, the survey is expected to launch this fall and will include questions on innovation, R&D activities, and access to capital. The survey is funded jointly by the Ewing Marion Kauffman Foundation, the Minority Business Development Agency, and the Census Bureau.

Forensics labs discuss reforms

The International Symposium on Forensic Science Error Management, organized by the National Institute of Standards and Technology, involved some 500 scientists, managers, and practitioners across a range of disciplines discussing the many factors that have contributed to the growing number of reports of flawed forensic science practices. A 2009 National Academies report highlighting the lack of scientific rigor underlying certain forensic science practices is largely credited with kicking off the ongoing process of reforms. A major focus of the recent meeting was the lack of standard blinding procedures for most practitioners. In many labs, practitioners are privy to information that is irrelevant to a given forensic test but has the potential to bias interpretation of the results. Participants at the meeting discussed the potential for implementing or adapting safeguards like blinding at diverse labs across the country.

Modernizing biotech regulatory system

On July 2, the administration issued a memorandum to the Food and Drug Administration, EPA, and the U.S. Department of Agriculture outlining plans for modernizing the U.S. regulatory system for biotechnology products. According to the memo, a first task will be to clarify the responsibilities of each of the three agencies, and how each may overlap depending on the product. A second task will be to develop a long-term strategy to minimize the risks of biotechnology products. Finally, the administration will support an “independent analysis of the future landscape of the products of biotechnology” and has tasked the National Academies to lead the project. To maximize public engagement in these efforts, the administration plans to conduct a series of public meetings, the first of which will be held this fall in Washington, D.C.

Updated national HIV/AIDS strategy

The Obama administration has updated its National HIV/AIDS strategy for 2015-2020 to build on the foundation of the first comprehensive HIV/AIDS strategy that was released in 2010. This updated strategy will prioritize efforts to support groups that are most affected by HIV, and will focus on the following four actions: widespread testing and linkage to care; broad support for people living with HIV to enable them to remain in comprehensive care; universal viral suppression for people affected by HIV; and full access to comprehensive pre-exposure prophylaxis services within certain demographic groups.

Correctional Health Is Community Health

The recent and dramatic expansion of the criminal justice system in the United States has been described by legal scholars as hyperincarceration, or “mass incarceration.” Much of the increase in the size of the prisoner population is a result of the “War on Drugs” and associated federal reforms such as mandatory minimum sentencing laws. Over the past 40 years, “tough on crime” rhetoric and federal grants for law enforcement agencies produced an unprecedented increase in arrests for drug possession. Concurrently, severe mandatory minimum sentences were imposed en masse on people arrested for drug-related charges, resulting in an expanded population of prisoners who would serve longer sentences. Disproportionately, the burden of mass incarceration landed on the backs of the nation’s most vulnerable populations, namely low-income and undereducated people of color.

While the socioeconomic disparities between incarcerated and nonincarcerated populations are stark, the health disparities encountered in incarcerated populations are among the most dramatic. Over half of state prisoners and up to 90% of jail detainees suffer from drug dependence, compared with only 2% of the general population. Hepatitis C is nine to 10 times more prevalent in correctional facilities than in communities. Chronic health conditions, such as asthma and hypertension, and mental health disorders also affect prisoner populations at rates that far exceed their prevalence in the general population. Often, the health care and health status of prisoners are regarded as something insular, something of no concern to, and uniquely disjointed from, the general population. But over 95% of incarcerated individuals will eventually return to their communities, and their health problems and needs will often follow along.

Adding to the challenges, the communities to which inmates return tend overwhelmingly to be low-income communities of color, and they often lack adequate health care resources. For many members of the justice-involved population, emergency rooms serve as their primary care providers, and these services are sought only once symptoms of a health condition or injury have become sufficiently acute.

Although incarceration is often counter-productive to the health and well-being of the affected population, it does create a public health opportunity: providing screening, diagnosis, treatment, and post-release linkage to care for members of a vulnerable population who may not seek or have access to services otherwise. In fact, correctional health care, if it capitalizes on this opportunity, can reduce the burden of disease for communities that carry the greatest burden.

Population health profile

In examining these issues, it is useful to start by examining the demographic and epidemiological features of the incarcerated population. A number of social determinants are strongly associated with poor health. In the United States, being non-white, low-income, undereducated, homeless, and uninsured are among the strongest predictors. When compared with the general population, individuals in jails and prisons exhibit these predictors of poor health disproportionately. As a result, the population of inmates typically shares a number of health profile characteristics, including mental health disorders, drug dependence, infectious disease, and chronic conditions. Moreover, some groups pose unique challenges to correctional health care. Examining these factors in order:

Mental health disorders. In the 1970s, psychiatric hospitals across the nation began to be deinstitutionalized with the intention of shifting patients to more humane care within their communities. However, insufficient funding for community-based mental health programs left many patients without access to care altogether. As a consequence, people with undiagnosed, untreated, or inadequately treated mental health disorders experienced heightened risks of incarceration. Indeed, there are now more people with serious mental health disorders in Chicago’s Cook County Jail, New York’s Rikers Island, or the Los Angeles County Jail than there are in any single psychiatric hospital in the nation.

Estimates of the number of inmates who have symptoms of a psychiatric disorder—as specified by the Diagnostic and Statistical Manual of Mental Disorders, Fourth Edition (DSM-IV)—vary widely but often exceed half of the incarcerated population. In contrast, approximately one in 10 people in the general population has symptoms of a psychiatric disorder by the same criteria. Additionally, an estimated 10 to 25% of inmates suffer from serious mental health problems, such as schizophrenia or major affective disorders; by comparison, an estimated 5% of the general population suffers from a serious mental illness.

Drug dependence. Given the role of the war on drugs in mass incarceration, high rates of drug dependence among inmates are not surprising. Over 50% of all inmates meet the diagnostic criteria for drug dependence or abuse, and one in five state prisoners has a history of injection drug use. Up to a third of all heroin users—an estimated 200,000 people—pass through U.S. prisons and jails each year. The co-morbidity of substance abuse and mental illness among inmates is strikingly high. Among those who have a serious mental illness, over 70% also have a co-occurring substance abuse disorder; in the general population, the corresponding percentage is about 25%.

Infectious disease. Contagious diseases such as tuberculosis (TB), sexually transmitted infections (STIs), HIV, and hepatitis C (HCV) are prevalent in correctional facilities. In 2002, it was estimated that jails and prisons, respectively, had a 17 and 4 times greater prevalence of TB than the general population. Although the prevalence of TB in correctional facilities appears to have declined in more recent years, outbreaks are still possible, as poorly ventilated, enclosed, densely populated dwellings are highly conducive to the spread of TB.

Although the true prevalence of STIs in correctional facilities is difficult to estimate due to differences in screening procedures (most specifically, universal opt-out vs. opt-in screening), studies consistently report that chlamydia, gonorrhea, and syphilis are more prevalent in correctional populations, particularly jails, than in the general population. The prevalence of STIs is also especially high among female inmates, who are more likely to have a history of sex work than their male counterparts.

The prevalence of diagnosed HIV in correctional facilities has recently declined, but remains four to five times higher among inmates than in the general population. Correctional facilities, which are increasingly adopting routine screening procedures, have played an important role in diagnosing those who would not otherwise seek testing. Because injection drug use is a common route of transmission for both HIV and HCV, coinfections of these diseases are common. HCV is estimated to be nine to 10 times more prevalent among inmates than in the general population, and over half of prisoners with HIV are estimated to also have HCV.

Chronic conditions. Chronic health conditions, such as diabetes, hypertension, and asthma, now comprise a growing proportion of correctional health care needs. This increasing prevalence comes as the result of two trends: the aging prison population and the nation’s general obesity epidemic. About 40% of all inmates are estimated to have at least one chronic health condition. With a few exceptions, nearly all chronic health conditions are more prevalent among inmates than in the general population.

Special populations. Certain populations pose a unique challenge to correctional health care; these include women, juveniles, and aging populations. Female inmates, while comprising only 10% of the incarcerated population, have a greater burden of disease than their male counterparts. Post-traumatic stress disorder is particularly common among incarcerated women, about a third of whom experienced physical abuse and a third of whom experienced sexual abuse prior to incarceration. An estimated 5 to 6% of incarcerated women are pregnant upon entry to jail or prison, and the prevalence of STIs among female inmates is at least twice that of the incarcerated male population.

Incarcerated juveniles also have a higher burden of disease than their nonincarcerated peers. Dental decay, injury, and substance use are common, and many were subject to abuse prior to incarceration. Twenty percent of incarcerated juveniles are parents or expecting, and STIs are highly prevalent among incarcerated juveniles. Although incarcerated juveniles are typically held in facilities separate from adults, about 10% are held in adult prisons; in both settings, this population is highly vulnerable to disease and abuse.

The imposition of longer sentences in the 1980s and 1990s produced a dramatic increase in the number of older adults in corrections. From 1990 to 2012, the number of inmates aged 55 or older increased by 550% as the prison population doubled. Older inmates, as in the general population, have higher rates of chronic health conditions; geriatric syndromes, such as cognitive impairment or dementia; and disabilities, compared with younger inmates. Given the aging incarcerated population, prisons and jails—which were designed to hold young and healthy inmates—are increasingly becoming a site for nursing home-level care and treatment for chronic conditions.

Challenges in correctional health care

In sum, correctional institutions are the sole health care providers for some of the nation’s sickest people. Yet the quality and quantity of health care that is provided across correctional institutions remains unclear. Several factors contribute to this problem.

There are various legal issues. In 1976, the Supreme Court codified what it called the “evolving standard of decency” for the provision of health care in correctional institutions. In the case of Estelle v. Gamble, the court found that “deliberate indifference to serious medical needs” was the “unnecessary and wanton infliction of pain,” and therefore a violation of the Eighth Amendment prohibition of cruel and unusual punishment. The Estelle decision and a series of subsequent litigations have led to expanded health care services for inmates.

Lawmakers should amend the Prison Litigation Reform Act to provide increased pressure for improved correctional health care.

Although the introduction of constitutionally mandated standards of care for inmates represents progress, many observers have argued that these standards of care are quite low. In lawsuits alleging inadequate care, inmates must prove not only that they received substandard care but also that correctional providers demonstrated “deliberate indifference.” That is, they must prove that a correctional facility staff member or health care professional knew of and disregarded the risk to the inmate—a tremendously difficult standard to meet. The Prison Litigation Reform Act of 1996 imposed additional limitations on litigating for better medical care, including the requirement that prisoners pay fees to file a suit and that inmates adhere to the “exhaustion rule,” which requires inmates to exhaust all administrative appeal options prior to filing a case, a process that can often take years.

There are also cost issues. Correctional institutions are a key component of public safety, yet many critics have noted that the costs associated with the unprecedented expansion of the criminal justice system now far outweigh the benefits. In five states, correctional spending now exceeds spending on higher education. Since 1980, state correctional spending has increased by 300% to $50 billion per year, and after Medicaid, correctional spending is now the fastest-growing area of government spending. In Rhode Island, the average cost of incarcerating one inmate in minimum security is $53,462 per year; for an inmate in high security, the cost jumps to $182,396 per year. Many observers have noted that addiction and mental health treatment programs, as alternatives to incarceration, would be more cost-effective and would better address the underlying problems.

Varying standards add another confounding factor. There are many international guidelines for correctional health care—the most notable being those framed by the World Health Organization and the United Nations High Commissioner for Human Rights—but the United States has neither regularly monitored nor enforced these guidelines. Within the United States, standards for inmate care have been outlined by the American Public Health Association, the American Correctional Association, and the National Commission on Correctional Health Care. The latter group offers voluntary accreditation to facilities that demonstrate adoption of the commission’s health standards, but only a fraction of facilities have pursued accreditation, and no systematic study has been conducted to evince improved conditions following the adoption of these standards. Uniform quality-of-care standards that are monitored and enforced would allow for meaningful comparisons across facilities and with community populations; more timely identification of underperformance; and a framework to guide improvements in care delivery.

Screening protocols and procedures, especially for infectious disease, also vary widely across states and institutions. Although all facilities offer some screening, particularly for TB, syphilis, and HIV, a much smaller number screen routinely. As of 2012, only about 19% of prison systems and 35% of jails provided routine opt-out HIV testing. Traditionally, HIV screening in correctional institutions has been opt-in; that is, it occurs only at the request of the inmate. Increasingly, however, facilities are adopting opt-out procedures, whereby HIV screening is automatic, though still optional, for all inmates. At individual facilities, barriers to improving screening procedures include reluctant administrators; logistical challenges, such as insufficient staffing; and, in the case of jails, where many people taken in are released within 48 hours, high turnover that makes screening and subsequent receipt of test results especially challenging.

Overall, screening procedures and policies are inconsistent across the nation, and this inconsistency can be attributed, in part, to the absence of national screening procedures, as well as to the disconnect between correctional health care and local health departments. Despite these challenges, however, a few facilities have served as model public health collaborators in screening for infectious disease. Notably, at correctional facilities in Rhode Island, routine HIV testing led to a diagnosis of one-third of all HIV cases in the state in the 1990s.

Differences in how correctional health care is administered comprise another variable. Health care is typically provided in one of three ways: public correctional care, private industries, or academic medical centers. Occasionally, medical services are contracted to multiple types of care providers within a single facility. The largest correctional facilities generally represent public correctional care and are equipped to provide a full range of in-house medical services, whereas smaller municipal and local jails contract medical care to local providers. As of 2004, 32 states contracted with private correctional care industries for some or all of their medical services, accounting for about $3 billion of the estimated $7.5 billion allocated for correctional health care. In 2005, 40% of all correctional health care was administered by for-profit, private correctional care industries. Findings from state audits and anecdotal evidence suggest that some private correctional care industries may administer substandard care. However, no comprehensive studies have been conducted on which type of correctional health provider (public, private, or academic) is associated with the best quality of care or health outcomes.

Drug treatment adds to the complexity. Well over half of all inmates in jails and prisons suffer from drug dependence and have a substantial need for evidence-based drug treatment. Correctional health care systems have taken a variety of approaches to administering drug treatment: referral to drug courts where treatment is provided with judicial oversight, assignments to interventions within the community, treatment provided while incarcerated, and participation in reentry programs. Drug treatments offered to those who are incarcerated have included individual drug and alcohol education, group counseling, relapse prevention, case management, cognitive behavioral therapy, medication-assisted therapy (MAT), and others. Although MAT with methadone or buprenorphine is among the most promising treatment options for opioid dependence, this approach remains dramatically underutilized in correctional care due to concerns for cost, stigma, and limited awareness of the social, medical, and economic benefits of providing such therapy in corrections.

Similarly, approaches to regulating drug availability in prisons vary. Although it is difficult to assess the quantity of illicit drugs that are available in prisons, ample anecdotal evidence suggests that, in some cases, illicit drugs can be highly available. From 2001 to 2011, the New York State Department of Corrections reported that the annual rate of positive drug tests among inmates ranged between 2.9% and 3.8%. Illicit drugs can enter correctional facilities through a variety of routes: via mail, by visiting relatives, through prison staff, and by other means. Prisons have attempted to regulate drug availability through supply-reduction strategies, demand-reduction strategies, or both. Supply-reduction strategies include the use of drug dogs, random searches, random urine tests, and incentivizing noncontact visits. Demand-reduction strategies include providing medication-assisted therapy and drug-free units, the latter of which have been used in Australia and aim to allow inmates to maintain distance from a prison drug scene. No comprehensive studies have been conducted on which strategy to regulate drug availability is most effective, although MAT appears promising.

To address and reduce these challenges facing correctional health care, we offer a series of recommendations:

Conditions of confinement

As noted, many people who are confined to jails and prisons enter these facilities with serious health conditions, including mental health disorders, drug dependence, infectious disease, and chronic conditions. Importantly, inmates’ health is also known to change over the course of their confinement in correctional facilities—and the conditions of confinement may improve health outcomes for some but exacerbate health conditions for others.

For inmates whose lives on the outside are particularly chaotic, incarceration can offer stabilization. In addition to providing access to health care, prisons and jails provide guaranteed meals, stable housing, clean clothes, showers, structured days, and reduced access to substances to those who were previously dependent. For those inmates who had inconsistent access to food, shelter, and other basic needs, incarceration can dramatically improve physical health.

Although conditions of confinement have significant implications for correctional health, it is important to note that environments within facilities—and their corresponding impacts on health—may vary dramatically across institutions. Some of the key dimensions determining this variability include health care budget, staffing, facility layout, resources, correctional philosophy, and correctional leadership, among many other things. Moreover, facilities at differing custody levels (minimum, medium, maximum, and high) operate differently from one another. Here, the focus will primarily be on the impact of conditions of confinement for those who are confined to medium- and maximum-security prisons.

The effects of incarceration on the transmission of infectious disease are complex. Although the prevalence of infectious disease among inmates is relatively high, the incidence (or acquisition) of infectious disease within correctional facilities is low compared with many other areas of the world. In particular, the incidence of infectious diseases that require blood-to-blood transmission, such as HIV or HCV, is fairly low in correctional institutions; one explanation is that the primary routes of transmission for these diseases—sex and injection drug use, although potentially riskier when evaluated per event—occur more frequently outside than within correctional facilities. The overwhelming majority of HIV and HCV infections among incarcerated populations occur prior to incarceration or shortly following release. Conversely, however, the incidence of airborne infections, such as TB and influenza, can increase quickly in crowded conditions.

Incarceration can exacerbate some chronic conditions, such as asthma, because of poor ventilation, overcrowding, and stress. The impact of incarceration on general health is fairly difficult to evaluate. Findings on inmates’ physical activity are conflicting, and likely vary across institutions. Meals in corrections are often energy-dense, with high fat and calorie content, but may be better than those normally consumed by a large subset of the incarcerated population prior to incarceration.

Many key characteristics of daily life in correctional facilities—including restricted liberty, material deprivations, limited movement, the absence of meaningful endeavors, lack of privacy, and risks of interpersonal danger—expose inmates to stressors that can incite (short- or long-term) or exacerbate symptoms of mental health disorders. Although many of these facets are characteristic of correctional institutions, many of their negative impacts on emotional well-being can be negated through the reduction of idleness and the increased availability of meaningful programming. The availability of programming varies across institutions, but general trends emerge: while vocational training programs have increased across state and federal prisons over the past 30 years, the number of facilities offering college courses has declined dramatically since 1990, corresponding with the elimination of Pell grant funding for inmates. In addition to promoting emotional well-being, meaningful programming can be highly rehabilitative, increasing inmates’ employment opportunities upon release from prison and reducing the likelihood of recidivism.

Correctional system administrators should update current systems so that Medicaid coverage can be suspended rather than terminated to reduce interruptions to coverage for people who are justice-involved.

Extreme conditions of confinement, such as overcrowding and long-term isolation, can have strong deleterious effects on prisoner health. Overcrowding, defined as a facility operating near or above its capacity, aids the spread of communicable diseases and places undue additional stress on inmates and facility staff. Overcrowding has also been associated with increased risk of suicide, as overcrowding reduces the availability of meaningful programming.

Segregation, or “solitary confinement,” is often used for protective custody and as punishment for disciplinary infractions. Increasingly, correctional systems are relying on long-term isolation in “supermax” facilities for punishment—a practice that, unlike traditional solitary confinement, enforces near-total isolation. Long-term isolation in supermax, however, has been shown to elicit a range of adverse psychological responses, including anxiety, rage, hallucinations, and self-mutilation in as little as 10 days; prisoners with preexisting mental health conditions are particularly vulnerable to the deleterious effects of this isolation. Many supermax prisoners are subject to these conditions for several years. Some critics have equated long-term confinement in supermax facilities with psychological torture.

To improve conditions of confinement, we offer the following recommendations:

Continuity of care

The time when an inmate transitions from incarceration back into society poses some special risks and opportunities. During the two weeks that follow release from prison, people are 13 times more likely to die than members of the general population. Drug overdose, cardiovascular disease, homicide, and suicide are among the most common causes of death during the weeks that follow release from prison. Risk of fatal drug overdose during this period is particularly staggering, with recently released prisoners being 129 times more likely to die from drug overdose than members of the general population. MAT in corrections with continuation into the community, paired with overdose education and naloxone distribution programs delivered prior to release, could dramatically reduce inmates’ risk of fatal drug overdose following release.

The next weeks or months that follow release often bring additional stresses. During this time, many individuals struggle to secure gainful employment and stable housing while also laboring to reestablish support networks and relationships within the community. This process of securing basic needs and rebuilding a life requires a focus of energy and effort, and as a result, health care access and continuity of care quickly become low priorities for many recently released inmates.

As of 2010, as many as 90% of people who were released from jails and prisons had no health insurance, which substantially limited their access to health care services. Because securing gainful employment and employer-provided health insurance can take considerable time following release from prison, Medicaid was and continues to be an important source of health care coverage for people who are justice-involved. The recent implementation of the Affordable Care Act, which expands Medicaid eligibility to childless individuals whose incomes fall below 138% of the federal poverty level, has tremendous implications for health care access for people who were previously incarcerated. As many as 2.86 million, or 22%, of the estimated 13 million people who will now be eligible for Medicaid will be members of the justice-involved population.

Although the Medicaid expansion is certainly cause for optimism relating to continuity of care, collaborations between correctional systems and Medicaid to facilitate enrollment are lacking. Federal guidelines urge that states only suspend Medicaid coverage during a period of incarceration, but most states terminate inmates’ Medicaid altogether and take no action to reenroll inmates upon release. As a result, many justice-involved individuals experience a lapse in medical coverage during their transition from correctional facilities into their communities.

There is substantial need for re-entry programs that address employment, housing, and other transitional needs that ultimately affect health. Indeed, successful linkage to care should be understood and addressed within the context of individuals’ survival priorities and re-entry needs. A few innovative programs may serve as models for retaining justice-involved people in post-release medical care; these programs include the Transitions Clinic Network that currently operates in 10 cities across the nation; Project Bridge in Rhode Island; Community Partnerships and Supportive Services for HIV-Infected People Leaving Jail (COMPASS) in Rhode Island; and the Hampden County Model, which focuses on rehabilitation and re-integration and was developed and implemented at the Hampden County jail in Ludlow, Massachusetts.

To improve continuity of care, we offer the following recommendations:

Challenges and benefits

The recommendations we have offered were developed to primarily target administrators of correctional facilities, people and groups who can influence standards and practices, and policymakers in positions to propose and adopt needed legislation. On another level, we hope these suggestions might guide the efforts of various other people, including staff members within correctional facilities, activists involved in prison reform, and other key stakeholders. It will take collective action to speed change.

And change is clearly needed. More than 2.2 million adults are incarcerated in U.S. jails or prisons, and over 95% of them will eventually return to their communities. On their return, they will carry with them any mental health disorders, drug dependence, or chronic conditions that were not diagnosed or treated through correctional health care systems or managed through continued care upon release. Any untreated infectious disease, such as HIV, HCV, or TB, will also join them on their journeys home. Addressing the challenges that face correctional health care, improving inmates’ conditions of confinement, and ensuring that justice-involved people receive continuity of care not only will reduce the burden of disease for the nation’s sickest but also will improve health conditions for the underprivileged communities to which the incarcerated will return.

People of color are disproportionately represented in the U.S. criminal justice system, and as a result, communities of color feel the strongest effects, good or bad, of incarceration. Although many of the community-level impacts of incarceration are overwhelmingly negative—such as exacerbating social, economic, and political inequalities for vulnerable populations—correctional health care offers a unique public health opportunity. By addressing the health care needs of people in corrections through routine screening, diagnosis, treatment, and linkage to care, the disproportionate burden of disease that is borne by communities of color can be somewhat mitigated.

Although the obstacles that lie ahead are towering, public interest and investment in resolving these issues are also mounting. On July 14, 2015, President Barack Obama delivered an impassioned speech on criminal justice reform at the NAACP annual convention, outlining his case for sweeping changes to policing, drug prosecutions, sentencing, and the conditions of confinement and release. It is our hope that this article will shed light on the challenges at hand and offer guidance to those who wish to enact change. Through passionate advocacy and informed policy, it is possible to dramatically improve correctional health—and ultimately to improve community health.

Unwinding Mass Incarceration

Consensus is now emerging that the United States should move away from its heavy reliance on mass incarceration, which has ramped up over the past 40 years, resulting in more people being locked in jails and prisons than ever before. A variety of policies have been offered that may well begin to reduce the nation’s excessive incarceration. But even as these steps reverse some of the most egregious causes of the prison buildup, there is no assurance that they will unwind the burden of incarceration for the generation of those already extensively involved with the criminal justice system.

This is the focus of our concern. In particular, we have been interested in identifying the challenges facing local jurisdictions (states and counties) that take up the charge to reduce their reliance on incarceration. Our views are informed by research and by experience in corrections in multiple jurisdictions. From this, we argue that unwinding mass incarceration will be neither cheap nor easy, and that doing it responsibly will require a new infrastructure of coordinated community-based facilities and services that can meet evidence-based incarceration needs while also ensuring public safety.

Much recent reform-oriented rhetoric portrays most prisoners as nonviolent drug offenders who pose little danger to their communities. The reality is that the majority of those in state prison are serving sentences for violent crimes. And offenders do not neatly sort themselves as “nonviolent” or “violent,” but have marbleized offense histories that include some of both types of crimes. Drug dealing is often accompanied by firearm possession, and labeling individuals as nonviolent drug offenders may understate the seriousness of the crime. Further, three national decennial studies that examined the success rates of prisoners released in the 1980s, 1990s, and 2000s show a stubborn consistency in the high rates of re-arrest and re-incarceration after release. In each study, roughly two-thirds or more of those released were re-arrested within three years, and this figure has remained invariant to significant changes in the economy, social mores, and the political landscape. In the current conversation, the widespread availability of criminal records is often pointed to as the scarlet letter that serves as a barrier to released prisoners, yet even in pre-Internet times, when criminal history information was not so widely available, recidivism rates were as high as they are today.

The United States Sentencing Commission and a growing number of states have taken steps to reduce the disproportionate and ineffective sentences adopted during the excesses of the “war on drugs” at the end of the past century. The commission has applied some of these reforms retroactively, and in July 2015 President Barack Obama extended commutations to 46 federal prisoners whose prison terms would have been completed had they been arrested under the new regime. We applaud these steps. But there are many more incarcerated—2.2 million federal, state, and local prisoners. What would it take to unwind mass incarceration on a broader scale?

Any reform effort must sort out the dangerous from those who do not pose much risk to the community. As noted, this is easier said than done. One challenge is distinguishing between the addict who may have a high risk of recidivating for low-level offenses (for example, probation violations for positive drug tests) and a released prisoner with a lower risk to recidivate but whose offenses have a greater potential for lethality. The dramatic reductions in criminal victimization over the past 20 years have led to substantial improvements in quality of life across the nation. These gains must be maintained and improved on, as some neighborhoods continue to suffer from high rates of homicide and chaos due to the threat of violence. This requires that policymakers carefully target how prison populations are reduced. We worry that if there is substantial failure in the form of a spike in crime, homelessness, or other social ill, there could be political blowback—familiar to all involved in criminal justice for more than a decade or two. An increase in crime could put the whole reform agenda at risk.

Criminal history trap

It is a general truism that policy problems cannot be solved simply by stopping the action that yielded the problem. This is the case for the environmental degradation caused by toxic waste—and it is the case in mass incarceration. Due to changes in public policies and practices in recent decades, millions of people in the United States have criminal convictions, arrests, citations, and detentions in their histories, and this is not easily undone, stopped, or reversed. (It is a particular frustration that existing data systems do not allow the calculation of reliable estimates of just how many are in each of these categories.) And, as it currently operates, the criminal justice system frequently bases decisions not only on current conduct, but on one’s past criminal history. As a result, the likelihood of detention while a case is resolved and the degree of punishment depend upon the official record of past encounters with the criminal justice system. This system has logic, as there is a significant body of evidence that demonstrates that the greatest predictor of future crimes is one’s past criminal history. At the same time, without additional policy action, any events that resulted from the overly punitive enforcement environment of the past will long influence future levels of incarceration, stymying efforts to achieve proportionality or parsimony, much less the broader goals of social justice and citizenship that are being increasingly expressed in various quarters.

Employers, too, have begun to depend on criminal records as cheap and easy personnel screens. Often the decision is binary: application processes stop if a criminal history is present, regardless of the nature of the specific offenses, the overall extent of criminal involvement, or how long ago it took place. Given the difficulty of interpreting criminal histories, the questionable accuracy of records in the country’s disaggregated criminal justice record systems, and the inconsistent and unreliable practices of the third-party companies that provide employers with criminal record information, employers can almost be forgiven for skirting Equal Employment Opportunity regulations and “ban the box” legislation that requires fair review of criminal offenses as they apply to the duties of a vacant position. The widespread use of criminal background checks has reduced the employment opportunities available to people with criminal records, particularly with larger companies that have more robust human resource and legal departments. It is no surprise, then, that small employers are generally the ones who hire released prisoners, but at lower salaries, with fewer benefits, and in positions with less growth opportunity.

Even if the country begins to punish with more parsimony going forward, there will still be several generations of people with criminal records accumulated during the era of mass incarceration. Massachusetts has adopted a “sunset clause” that limits the criminal history revealed to potential employers to the past 10 years for felony convictions and five years for misdemeanors. (Convictions for homicide and some sex-related crimes are not subject to the provision.) Again, this is an important policy innovation, based on research showing that the risk of committing another crime decreases strongly with age and with longer-term abstinence from criminal activity. But it leaves much more work to be done to regulate how information in private hands is used, and how the system itself bases decisions on prior criminal justice outcomes.

And we are concerned that regardless of whether someone’s past punishment history was just or not, many of the individuals in the generations already deeply connected with the criminal justice system will fail without a substantial support system. Their health, educational, and employment-related deficits have been well documented in a number of reports, perhaps most notably by the Urban Institute. But there are also a number of lesser-known conditions that affect someone’s prospects following release from prison.

Complications of everyday life

Our experience has taught us that it is not easy for former inmates to disentangle themselves from the system or to extricate themselves from the relationships that the system will be monitoring. Many of them resume relationships with individuals in communities with high crime rates whose actions attract police attention. Even for those with the resolve to change behavior, past relationships can haunt them well into the future. This might work through the predictable mechanism of being drawn back to old friends and old behaviors. But sometimes it works in surprising ways. Perpetrators of domestic violence typically do not merit sympathy, but those in law enforcement and corrections bear witness to situations in which a woman’s jealousy of her former abuser’s new relationship, for example, will lead her to report a violation of a protective order that she may have initiated. And, like it or not, criminal justice agencies are then involved. Once this happens, those involved who have criminal records frequently end up incarcerated, charged with probation or parole violations, or detained awaiting resolution of new charges.

The level of chaos and social disorganization of the extended families of released prisoners can also pull them off a seemingly successful path. The following is just one case with which we are familiar. This released prisoner, after a successful internship with a construction company and placement in a heavy-equipment training class arranged by a mentor, disappeared for several weeks and did not respond to emails or messages. After a search by correctional staff and program administrators, he finally surfaced and indicated that a cousin had been killed in Florida and that he felt the need to leave immediately to attend to the needs of his family. His urgent response to his family situation, jeopardizing valuable opportunities that he had spent months developing, is one that is repeated by many released prisoners. The frequency of violent and premature deaths, family medical problems, home foreclosures, job losses, criminal involvement of other family members, and other financial and social setbacks is orders of magnitude greater than in middle-class communities. These demands entangle many released prisoners, setting back their own prospects for success.

These external family and relationship factors are not the primary barriers for so many released prisoners, however. Many of those deeply connected to the criminal justice system exhibit behaviors and personalities that both explain their criminal history and prove so hard to accommodate in the workplace and civil society. An individual’s lack of impulse control that leads to an assault charge also raises legitimate questions for an employer about that person’s ability to take orders from a supervisor and to provide good customer service on the job. Of all the factors that drive recidivism, criminologists have identified poor “attitudes” and “orientation” as more predictive of failure than the availability of family support, employment, and housing, and as the factors in a person’s life hardest to change. Poor decision-making skills can and do cascade from the trivial to the tragic. In another case known to us, someone on work release stole candy from his employer, and then assaulted a fellow employee who he erroneously thought had “snitched” on him. The decision to steal candy and then to assault the coworker now has him sitting in jail, revoked from the program, having lost his job and income, facing new criminal charges, and awaiting a possible return to a federal penitentiary.

Some correctional systems have responded to this research by offering cognitive behavioral programming. Through a dissection of past decisions and role-playing scenarios involving criminal activity, participants learn how to slow down impulsive tendencies and to develop more reflective thinking processes. Evidence shows that cognitive behavioral programs are generally effective at reducing recidivism. That said, thinking processes are not easily changed, and program effects are usually adjustments rather than transformations, particularly for those with mental health conditions.

Indeed, inmates with mental health problems pose some especially thorny challenges for correctional systems. The nation has long incarcerated a disproportionate number of individuals with serious and persistent mental health issues, including personality disorders (such as narcissism and lack of empathy), depression, bipolar disorder, and schizophrenia. Post-traumatic stress disorder (PTSD) is also commonly noted. Practitioners often comment on how matter-of-factly prisoners may describe some aspects of their background: placement in dozens of foster homes; victimization by emotional and physical abuse; witnessing the stabbing and death of a family member or the abuse of a mother; the absence of a father in their lives due to incarceration. Many have no recognition that such an upbringing is abnormal compared with that of others in civil society. Of those with mental illness or PTSD, a large fraction have co-occurring substance abuse issues that bring them to the attention of law enforcement through criminal behaviors that range from public nuisance crimes to the most serious violent offenses.

It is generally recognized that the deinstitutionalization of state mental health hospitals in the 1970s without the concomitant development of community housing options, coupled with the simultaneous disappearance of sheltered workshops and day programs, has driven the mentally ill onto the streets, into poverty, and then—following a journey through a justice system that often treats them unfavorably—into jails. But the extent to which this social condition affects the prospects for reducing jail and prison populations receives little attention. Now that prison populations have stabilized, perhaps the attention previously devoted to overcrowding can be shifted to this issue.

But from a larger social perspective, this issue should not be the responsibility of corrections administrators. No one believes that jails are the best place for providing mental health services. Several recent efforts are demonstrating alternatives. For example, the Stepping Up Initiative—run by a nationwide coalition of organizations from the mental health, substance abuse prevention, legal, and law enforcement communities—uses a variety of tools to divert people with mental illness from jails and into treatment. Among its efforts, the initiative connects communities that are successfully reducing the number of people with mental illness in their jails with other communities seeking such change. In addition, a number of states and communities are developing mental health courts, often with support from the federal Bureau of Justice Assistance. More than 150 of these courts are now in operation, and more are being planned. Their goal is to divert appropriate individuals from incarceration and instead link them to employment, housing, treatment, and support services. Such efforts offer some hope that correctional leaders, policymakers, and public health advocates can join forces to develop better alternatives for mentally ill individuals when and if they can be managed safely and legally in the community.

Need for local infrastructure

No one enters prison directly from the community, yet upon release most return straight from the prison gates to their neighborhoods with little more than a token amount of money, a bus ticket, and a mesh bag holding their few possessions. As courts and jails serve as the gateway to the correctional system, their role in preparing individuals for release has emerged as a promising model of reentry. After all, states fund prisons, but services are provided at the county level, and jails should be in a much better position to be at the center of reentry for all released prisoners returning to communities. However, few jails have the infrastructure and “correctional” culture to perform this mission. Rather, the energy, talents, and resources of most local corrections systems are consumed by meeting the constitutional requirements of due process and humane care for a predominantly pre-trial population characterized by short stays and frequent movements. Some jail systems do not incarcerate sentenced individuals at all, or do so for a very short time. Layering on the additional responsibility of preparing and assisting individuals as they return to their communities will require different staff, programs, facilities, and, often underappreciated, the assumption of increased risk and liability for the actions of released offenders in settings beyond the full span of control of the agencies. This explains why relatively few local correctional systems have adopted pre-trial community-based supervision programs, despite clear evidence that this is a safe and cost-effective way to reduce jail populations.

Studies have found that many inmates in federal, state, and local correctional systems are over-classified—that is, they are occupying prison and jail beds at security levels higher than warranted—often due to the lack of available beds in community correctional facilities or community supervision programs. This is a costly policy problem. If one considers medium- and maximum-security prison cells (with their accompanying high staffing ratios) as scarce resources, good correctional practice would reserve these beds for the truly dangerous. Increasing the number of community correctional pre-release beds and programs would make all prisons safer for staff and inmates by giving inmates an incentive to comply with rules while in custody so that they have a greater chance of being “stepped down” to a community program. The operational benefits of these programs and beds for correctional institutions are as important as the reductions in recidivism rates found by some studies.

Even as the number of prisons vastly expanded in recent decades, however, there has been no proportional increase in community correction facilities. Many nonprofit and religious organizations that had operated such centers have lacked the financial capital required to bring them up to higher building codes and correctional accreditation standards, and some beds and facilities have been taken offline. Community correctional beds are not necessarily cheaper than institutional beds, and in tight budget times, state and local correctional agencies often cut these programs first. As a sign of other hurdles, in Rhode Island and California, unions have opposed community corrections for fear of reduced correctional officer jobs.

There is a paradox here: the infrastructure of community correctional beds is inadequate, yet many beds go unused on a daily basis due to poor coordination among the different correctional agencies that contract for them. Probably the primary reason for the low usage rates is the lack of incentives for state and local correctional agencies to fully engage in a reentry mission rather than retreat to the traditional goals of running clean, safe, and orderly institutions that meet correctional and constitutional standards. Simply put, correctional agencies bear the costs and risks of reentry while the benefits accrue to individuals and the general community in ways that are hard to measure. Officials running correctional agencies understand that they will be held fully accountable for the misdeeds of inmates in their custody—especially those in highly publicized cases—and will bear no responsibility for those released and no longer on their watch. By definition, reentry extends the reach of corrections into the community and beyond the safe confines of the prison and jail walls. This is what makes it feel risky to many correctional practitioners. The hardening of the function of probation and parole into one of supervision and away from services speaks to this incentive problem, as well as to the high caseloads and chronic underinvestment in these community correctional agencies. As one possible step to make parole and probation both more effective and less burdensome for administrators of prisons and jails, policymakers can eliminate supervision requirements that interfere with released inmates’ employment and other desired pro-social activities.

To their credit, some jurisdictions have invested in pre-release community correctional beds. The Federal Bureau of Prisons contracts with more than 200 facilities, and previous agency directors have made it a goal to release all federal prisoners through these programs. Similarly, the state correctional agencies in Ohio and Pennsylvania use an extensive network of halfway houses to transition soon-to-be released prisoners back into the community. One of us (LoBuglio) has spent 10 years managing a community-based pre-release center run by a local county correctional system. The Montgomery County Pre-Release Center (PRC) in Maryland has served over 18,000 individuals during its 43-year history, and it is unique in having received and transitioned soon-to-be released inmates from all three levels of corrections: the local jail and the state and federal prisons.

In general, these community correctional residential facilities aim to help soon-to-be released inmates find and secure private-sector employment, reengage with their families, and develop individualized reentry plans that address treatment, housing, finances, and other areas of need. They also require the inmates to pay program fees, taxes, restitution, and child support orders. As most of these facilities are small and privately run by nonprofit or for-profit agencies, the scope and quality of the services vary widely. Often, the contracting correctional agency and the facility itself will have restrictions on the type of offenders that can be served in these settings, and the offenders most commonly excluded include those convicted of violent, sex, firearm/ammunition, and gang-affiliated crimes. The contracting relationship also serves to de-couple the full responsibility for the success of the clients from the agency and residential facility.

Although there are a number of high-quality models, the Montgomery County Pre-Release Center enjoys several large advantages over most community-based residential correctional facilities. As part of the county’s Department of Correction and Rehabilitation, the PRC is better resourced and has a smaller staff-to-client ratio, and employees are better trained, credentialed, and paid than their counterparts in private halfway houses. As a consequence, the PRC receives offenders of all types—from murderers who have served 25 years in the federal system to those serving months for petty theft—excluding only those who have past escape convictions. Also, the integration with the county jail allows the PRC to sanction noncompliance with the rules more swiftly and proportionally than other programs can, and to reward those who fulfill the program’s requirements. Participants who test positive for drugs can immediately be suspended to the jail for a period of days, following which they return to the program. Conversely, those who find jobs are eligible for home visits. Using these tools, the program helps those in custody change their thinking and behavior in ways that will help them succeed post-release. Finally, an advantage of this government-run model is that the goals and responsibilities of the larger correctional department and the pre-release center are fully aligned. The PRC improves the overall safe and orderly flow of inmates through the jail into the center, and its excellent performance metrics of high employment rates and low recidivism rates reflect well on the entire agency.

One truth is that providing high-quality correctional services is expensive. And we have documented several reasons that most agencies have underinvested in this program model. But history teaches that state and local governments respond to federal incentives. In 1994, the Truth-in-Sentencing legislation tied federal subsidies for corrections to sentencing reforms and helped spur a boom in the construction of prisons. Our experience convinces us that it makes sense to use the same strategy to incentivize states and localities interested in building PRC-type facilities.

The bottom line

As we have learned from our experiences—and as others have observed as well—unwinding mass incarceration requires much more than stopping current practices or reversing course by mass commutations and early release programs. Those most heavily involved in the criminal justice system will not succeed without the assistance of programs that provide services, discipline, and structure to guide their reintegration into society prior to and after their release. This will require a large, expensive, and politically challenging investment in an infrastructure of community-based correctional facilities throughout the country and especially near communities that receive a disproportionate share of returning prisoners. Ideally, the centers will be located near job and transportation centers, and be run by local correctional and public safety agencies.

No matter the policies introduced, the key to success will be strong leadership and public commitment. And we are particularly concerned that new policies be pragmatic, established in ways that account for how the affected populations interact with other sectors of the criminal justice system as well as with the larger social environment. Moving individuals from incarceration to community liberty without proper support and accountability can jeopardize not only the entire reform agenda, but also individuals and communities that are already fragile.

Forum – Fall 2015

Reevaluating educational credentials

Mark Schneider’s work on the wide variation in the economic value of postsecondary educational programs, as described in “The Value of Sub-baccalaureate Credentials” (Issues, Summer 2015), is of great importance because it reflects new labor market realities that affect nearly everyone in the United States. Before the 1980s, high school was enough to provide middle-class earnings for most people. In the 1970s, for example, nearly three in four workers had a high school education or less, and the majority of these workers were still in the middle class. But that high school economy is gone and not coming back. Nowadays, you go nowhere after high school unless you get at least some college first. The only career strategy more expensive than paying for college is not going to college.

As the relationship between postsecondary programs and labor markets has become stronger, it has also become more complex. The economic value of postsecondary degrees and other credentials has less and less to do with institutional brands and more and more to do with an expanding array of programs in particular fields of study. Degrees and other postsecondary credentials have multiplied and diversified to include traditional degrees measured in years of seat time; bite-sized credentials that take a few months; boot camps, badges, stackable certificates, and massive open online courses (MOOCs) that take a few weeks; and test-based certifications and licenses based on proven competencies completely unmoored from traditional classroom training.

The new relationship between postsecondary education and the economy comes with new rules that require much more detailed information on the connection between individual postsecondary programs and career pathways:

Rule No. 1. On average, more education still pays. Over a career, high school graduates earn $1.3 million, a B.A. gets $2.3 million, a Ph.D. gets $3.3 million, and a professional degree gets $3.7 million.

Rule No. 2. What you make depends a lot less on where you go for your education and a lot more on what you study. A major in early childhood education pays $3.3 million less over a career compared with a major in petroleum engineering.

Rule No. 3. Sometimes less education is worth more. A one-year computer certificate earns up to $72,000 a year compared with $54,000 for the average B.A.

As program value spawns new credentials and training venues, the market signaling from postsecondary programs to students, workers, and employers becomes a Tower of Babel. Today, there is a need for clear, comprehensive, and actionable information that connects postsecondary education options with labor market demand. The nation has built a vast postsecondary network of institutions and programs with no common operating system that links programs to careers. To get a better handle on the big black box that postsecondary education and training has become and address the inefficient and inequitable use of education and workforce information, we need a new approach.

Anthony P. Carnevale

Research Professor and Director

McCourt School of Public Policy

Georgetown University Center on Education and the Workforce

Revisiting genetic engineering

In “Regulating Genetic Engineering: The Limits and Politics of Knowledge” (Issues, Summer 2015), Erik Millstone, Andy Stirling, and Dominic Glover accurately criticize some of the arguments set forth by Drew L. Kershen and Henry I. Miller in “Give Genetic Engineering Some Breathing Room” (Issues, Winter 2015). Millstone et al. correctly point out that Kershen and Miller oversimplify when they say that genetic engineering (GE) does not need government oversight. However, Millstone et al. also mislead their readers by asserting their own generalities and biases about GE crops. Both articles fail to provide evidence about current GE crops, or acknowledge that the real question is not whether GE technology is safe or unsafe, but whether particular applications are safe and beneficial when assessed on an individual basis.

The Center for Science in the Public Interest (CSPI) is a nongovernmental organization whose Biotechnology Project takes a nuanced, fact-based approach that falls into neither the “for” nor the “against” camp. CSPI has stated that the current GE crops grown in the United States are safe, which is consistent with a growing international consensus. That same conclusion has been reached by the National Academy of Sciences, the U.S. Food and Drug Administration, the European Food Safety Authority, and numerous other scientific organizations and government regulatory bodies. That says nothing, however, about future GE products, the safety of which will need to be assessed on a case-by-case basis.

There is ample evidence that GE crops grown in the United States and around the world provide tremendous benefits to farmers and the environment. For example, Bt cotton has significantly reduced the use of chemical insecticides in the United States, India, and China. Although GE crops are not a panacea for solving food insecurity or world hunger, GE is a powerful tool scientists can use to create crop varieties helpful to farmers in developing countries.

Although current GE crops are safe and beneficial, government oversight is essential, and the current U.S. regulatory system needs improvement. In particular, the Food and Drug Administration has a voluntary consultation process rather than a mandatory premarket government oversight system similar to what is found in the European Union, Canada, and other countries. Congress should enact a premarket approval process to ensure the safety of GE crops and instill confidence in consumers.

CSPI acknowledges the negative effects on agriculture and the environment from the use of some current GE crops. Glyphosate-resistant weeds and resistant corn rootworms are a direct result of overuse and misuse of GE seeds with unsustainable farming practices. Resistant weeds and insects force farmers to revert to pesticides and farming practices, such as tillage, that are more environmentally harmful. The solution, however, is not taking away GE seeds but requiring better industry and farmer stewardship. Farmers using GE crops must introduce them into integrated weed- and pest-management systems in which rotation of crops and herbicides is required.

As with other technologies, society’s goal should be to reap the benefits and minimize the risks. The future of GE crops should be led by facts and case-by-case assessments, not general arguments from proponents or opponents.

Gregory Jaffe

Biotechnology Project Director

Center for Science in the Public Interest

Keeping fusion flexible

Robert L. Hirsch’s article “Fusion Research: Time to Set a New Path” (Issues, Summer 2015) is informative and thought-provoking. The issues he addresses, including economic viability, operational safety, and regulatory concerns, are important and require closer examination. His analysis makes a convincing, fact-based case for the need to examine the merits of current fusion efforts supported by public funds. As of June 2015, the United States has invested $751 million in the International Thermonuclear Experimental Reactor (ITER) tokamak project. As such, the public should be able to access the ITER team’s findings regarding the issues Hirsch pointed out. Greater disclosure will allow a more meaningful dialogue regarding the merits of the current publicly funded fusion research and development (R&D) path.

Based on my experience in managing a private fusion company, the current fusion funding landscape will be an important factor in the education of next-generation fusion scientists and engineers. Over the past decade, several privately funded startup companies have sprung up in the United States and elsewhere in pursuit of practical fusion power based on radically different approaches from the tokamak. The emergence of these startups is largely due to the past technical progress in fusion research stemming from a diverse portfolio of approaches supported by the government. These companies have generated a significant number of jobs despite the fact that their combined budget is only about 10% of government-funded fusion programs. However, they face a common challenge of filling critical technical roles as the talent pool of young scientists and engineers with a diverse background in fusion research is dwindling.

In the federal fusion energy science budget for fiscal year 2015, the lion’s share of funding is directed toward a single fusion concept—the tokamak. Combined with the $150 million allocated to the ITER tokamak program, the total funding for tokamak-specific R&D amounts to $361 million. In comparison, only $10.4 million goes toward R&D on high-pressure compact fusion devices. This type of approach is pursued by all but one private fusion company because of its compact size, low-cost development path, and potential for economic viability. In mid-2015, the Advanced Research Projects Agency-Energy announced a one-time program that will provide $10 million per year over three years for this work. This will provide some relief to support innovation in fusion, but it is far from sufficient. This lopsided federal fusion spending creates a huge mismatch between the needs of the nascent, but growing, private fusion industry and the focus of government-supported fusion R&D. Although the tokamak has provided the best-performing results to date, ITER has projected that the widespread deployment of practical fusion power based on the tokamak will nevertheless begin only in 2075. This timetable suggests that the nation must continue to support diverse approaches to improve the odds for success.

Over the past couple of years, I have had the opportunity to share our own results and progress with the public. It has been encouraging to me that the public, on balance, views fusion research as a worthy endeavor that can one day address the world’s need for sustainable and economical sources of power. People also understand the challenges of developing practical fusion power—yet by and large, the public is willing to remain a key stakeholder in support of fusion research. It is thus imperative for the fusion research community to keep its focus on innovations, while being judicious in its spending of public dollars. In that regard, I think Hirsch’s article is very timely, and deserves the attention of the fusion research community and the public at large.

Jaeyoung Park

President

Energy Matter Conversion Corporation

Since leaving the federal government’s magnetic confinement fusion program and the field in the mid-1970s, Robert Hirsch has contributed a series of diatribes against the most successful concept being developed worldwide in that program. What is surprising is not the familiar content of this latest installment, but that it was published in Issues, a journal seeking to present knowledgeable opinion in this area.

As for the article, Hirsch complains that the tokamak uses technologies that have been known to fail sometimes in other applications, notes that the ITER tokamak presently under construction is more expensive than a conventional light-water nuclear reactor that can be bought today, and concludes with a clarion call for setting a new path in fusion research (without any specifics except that it lead to an economical reactor).

Components do fail, particularly in the early stages of development of a technology. Hirsch mentions, for example, superconducting magnets failing in accelerators, causing long downtimes for repair, and plasma-disruptive shutdown in tokamaks. This argument ignores the learning curve of technology improvement. Bridges have collapsed and airplanes have crashed, with much more disastrous consequences than a tokamak shutting down unexpectedly would have, but improvements in technology have now made these events acceptably unlikely. Why can the same technology learning curve not be expected for magnetic fusion technologies?

Hirsch’s economic arguments based on comparison of the estimated cost of ITER and of a Westinghouse AP-600 light-water nuclear reactor are disingenuous (at best) and completely ignore both the learning curve and the difference in purpose of ITER and an AP-600. ITER is an international collaboration entered into by the seven parties (the United States, the European Union, Japan, Russia, China, South Korea, and India) for sharing the expense of gaining the industrial and scientific experience of building and operating an experimental fusion reactor, most of the components of which are first-of-a-kind and therefore require the development of new manufacturing procedures and a large and continuing amount of R&D. Each of the parties wants to share in this experience for as many of the technologies as possible. To initially achieve the ITER collaboration, an extremely awkward management arrangement was devised, including in-kind contribution of the components and the requirement of unanimity among all parties on all major decisions. By contrast, the AP-600 benefits from a half-century learning curve in which hundreds of light-water reactors have been built and operated, many of them by the single industrial firm (Westinghouse) that offers the AP-600. A more meaningful comparison would be to cost an AP-600 to be built in the 1950s (escalated to today’s dollars) by the same type of consortium as ITER, involving the same parties with the same purpose, and requiring the development in 1950 of what would be first-of-a-kind components of the present AP-600.

Weston M. Stacey

Regents’ Professor of Nuclear Engineering

Georgia Institute of Technology

Climate clubs and free-riding

“Climate Clubs to Overcome Free-Riding” (Issues, Summer 2015), by William Nordhaus, falls short by several measures, and in the end is unworkable.

First, its author claims that climate change agreements such as the Kyoto Protocol suffer from the free-rider dilemma of collective goods. However, the protocol had problems almost from the beginning. The United States defected before its full definition, implementation, and ratification. A cap-and-trade system covering all of the participants was never implemented. And the European trading mechanism, which was supposed to prepare the way for it, was flawed and never worked properly. Therefore, it is likely that it was not free-riding that destroyed the Kyoto Protocol, but rather the failure to set up properly functioning institutions.

Second, the modeling framework set up by Nordhaus, even though commendable as a theoretical tool, remains a blunt instrument. To reach workable results, a number of simplifying assumptions are included: Discount rates are the same for all countries. The trade sectors are rudimentary and do not account for exchange rate fluctuations, which are often more important than tariff barriers. Retaliatory trade measures and the institutional rules of the international trade system (under the World Trade Organization) forbidding tariff hikes are ignored. All this would not matter if parameters representing these aspects did not play a major role. But they do influence results, as has been widely noticed. Moreover, the Dynamic Integrated Climate-Economy model, or Dice model, uses the standard assumption that countries constitute unitary agents. If we were facing a world of homogeneous countries, this would hardly matter. But the international system is composed of a relatively small set of very big powers that have a disproportionate influence on the evolution of world politics. For them, domestic considerations are as important as, if not more important than, international ones, and internal coalitions strongly constrain their policies.

Third, it appears that just as in the international trade regime, where domestic lobbies hurt by liberalization will try to oppose and defeat it politically, the same dynamic pertains to environmental agreements. The United States exited the Kyoto Protocol because of the influence of the fossil-fuel lobby and its stranglehold on the Republican Party, a situation that persists today. Similar but more hidden influences exist in other powers (the European Union and China). Is there a way out of this situation? The useful analogy is the Montreal Protocol to eliminate ozone-destroying gases, a successful agreement. The protocol was made possible because a relatively cheap substitution technology existed for refrigeration gases. This ensured that manufacturers had trouble coalescing to fight the treaty. If a cheap alternative to fossil fuels were found, a similar outcome could be obtained, because the substitution technology would spread and the financial back of the fossil-fuel lobby could be broken. Is there some hope for this? Yes: renewable energy technologies are getting ever cheaper, and combined with more efficient energy storage facilities, they could threaten the supremacy of fossil fuels.

Urs Luterbacher

Professor emeritus

Centre for Environmental Studies

Centre for Finance and Development

Graduate Institute of International and Development Studies

Geneva, Switzerland

Technology governance alternatives

In “Coordinating Technology Governance” (Issues, Summer 2015), Gary E. Marchant and Wendell Wallach present a compelling argument for the need for a Governance Coordination Council (GCC) to correct a key deficiency in oversight of emerging technologies in the United States. Specifically, the authors say that the GCC would “give particular attention to underscoring the gaps in the existing regulatory regime that pose serious risks. It would search, in concert with the various stakeholders, methods to address those gaps and risks.” In light of the incredible recent advances in technologies that are changing the physical and natural world and life itself, I fully agree with their call to better synchronize funding, regulation, and other policy actions to make more reflective and deliberate decisions about emerging technologies. To date, these decisions have been piecemeal and delayed. Current approaches have left interest groups, academics, practitioners, and product developers frustrated, at best.

Visions for changing governance are as varied as the scholars who have written about them. Coordinating mechanisms such as a GCC have been proposed by others, including me and my colleagues. In particular, in 2011, we reported on a four-year project, funded by the National Science Foundation, that analyzed five case studies of governance and resulted in our calling for “an overall coordinating entity to capture the dimensions of risk and societal issues…as well as provide oversight throughout the life-cycle of the technology or product.” As with Marchant and Wallach’s GCC, we suggested that a coordinating group use foresight to anticipate and prepare for future decision-making by funding risk- and policy-relevant research and elucidating authorities well before regulatory submission of products.

However, there are some key differences between the GCC model and ours: 1) the authority that the coordinating group would have, 2) the role of the public(s), and 3) the overarching institutional structure. In our model, we proposed that the Office of Science and Technology Policy should take the lead on convergent and emerging technological products, and we stressed that it should have the authority to mandate interagency interactions and to ensure that stakeholder and public deliberations are incorporated into agency decision-making. I fail to see how a coordinating group such as the GCC, without access to government resources and legal mechanisms, could add more to what already exists in the form of think tanks, academic centers, and other advisory groups that convene diverse stakeholders and provide input into policies. A coordinating mechanism needs sharp political teeth, as well as independence from undue influence—hence the dilemma.

We also suggested having three groups working together with equal power: an interagency group, an advisory stakeholder committee, and a citizen group that would speak for the results of wide-scale public deliberation. Thus, our model rests on a central role for the public(s) that are often marginalized from discussion and decisions. The three groups would help to focus national resources toward technologies that are most desired by the taxpayers who fund them. The Marchant-Wallach GCC model takes on a more hierarchical structure, with staff of the GCC and the stakeholders they convene holding a significant amount of top-down power. In contrast, our model would be more networked and bottom-up in structure, with information and viewpoints from citizens feeding into a process that has legal authority. It is debatable which approach is best, and like Marchant and Wallach, I believe it is a crucial time to test some options.

Perhaps most important, the nation will be stuck with past models until we acknowledge and challenge the elephant in the room: the lack of political will to make a change. Currently, the vast majority of power lies in the hands of technology developers who can fund political campaigns. Until high-level policymakers are willing to consider alternatives to a largely neoliberal approach in which technological development and progress take precedence over all else, the rest of us will be resigned to watching the world change in ways that we do not necessarily want.

Jennifer Kuzma

Goodnight-NCGSK Foundation Distinguished Professor

School of Public and International Affairs

Co-Director, Genetic Engineering and Society Center

North Carolina State University

Bipartisan Science

It’s not enough for scientists to clearly communicate their findings to policy makers; they need to be politically smart, too. This means highlighting evidence and options that can appeal to opposing ideologies.

One evening more than 30 years ago, when working as a legislative assistant in the U.S. House of Representatives, I attended a dinner at which I was the target of an onslaught of attacks from scientists who believed Congress was clueless about the value of the National Institutes of Health and the need for more appropriations. I made my rebuttal in a commentary in The Journal of the American Medical Association (JAMA), saying: “Government and the scientific community must work together because two constants exist that are unlikely to change in the years ahead. First, government rightfully continues to demand accountability for the taxpayers’ money. Second, research will go on, if for no other reason than mysteries remain unanswered.”

Those constants still hold. And as one way to help reconcile these two very different worlds, I have collaborated with hundreds of researchers to help them tell their stories so that policymakers might listen. They have been experts on many subjects—malaria vaccines, teen sex, stem cell research, crop science, college drinking, homelessness, health in jails, community college reform, violence reduction, land rights in the Amazon, and many more.

Since then, there has been a seemingly endless chorus of concern, one that shows no sign of dying down, calling for scientists to be better communicators. I am a member of that chorus. The theory is that better communication will lead to public support for needed scientific research, greater acceptance of validated scientific findings, and greater funding for research across the scientific spectrum. The “science of science communication” has itself become a subject for research championed by the National Academies. Programs such as The Aldo Leopold Leadership Program (started in 1998 by Oregon State University ecologist and former National Oceanic and Atmospheric Administration director Jane Lubchenco) have been created to train scientists to communicate more effectively to decision makers. Duke and SUNY-Stony Brook offer workshops to train faculty and students, and scientific societies sponsor countless “Hill visit days” to bring scientists to Washington to speak to legislators.

But, by and large, scientists don’t appear much further along as a “cohort” (a word they would use) than they were three decades ago. Bitter disagreements over climate change, stem cell research, health care policy, education policy, and regulating air pollution and toxic chemicals often seem remarkably insensitive to scientific voices, however articulate and compelling. Meanwhile, social science research has revealed that attitudes about such issues are more a matter of whom one trusts and what core values are at stake than what “the evidence” says.

So, could it be that those of us who are proponents of science communication might be asking the wrong question all these years later? I suspect we’re focusing disproportionately on the imperative to communicate as an end in itself and neglecting the underlying political stakes. In these hyper-partisan times, a more strategic and effective approach might instead focus on identifying research findings that can actually help to skirt politics, seeking opportunities where effective communication of science might draw bipartisan support that actually leads to policy change.

It turns out that there is a history of successes, even recent ones, where science actually informed and even inspired policy agreements across the political aisles. These are successes where, without the science, government or institutional policies in the public interest would never have been enacted. We should pay close attention to these, because learning why some science communication translates to policy change would move us beyond communication for its own sake to a more strategic purpose.

Enabling strange bedfellows

In 2007, Randall Brown and colleagues at Mathematica Policy Research published results of a highly technical evaluation of a Medicaid program for people who needed help with basic activities of daily living. Rather than having a state agency decide which services participants would receive, who would provide them, and how and when, the agency provided them with a monthly allowance and let them decide how best to use it. Brown’s research found that under this “consumer-directed care” approach, more participant needs were met and quality of life improved dramatically compared to the traditional approach of having state-hired aides provide the care. And this happened for the same amount of money an agency would have spent.

But Brown, working with Kevin Mahoney, a long-term care innovator from Boston College, didn’t stop there. Together they engaged in a significant effort to communicate these findings not only to health professionals who read the policy journals, but also to the news media and to policymakers. Their research-based advocacy changed the thinking of state officials across the country. Three states took the lead in trying out this “Cash and Counseling” approach—blue state New Jersey, purple state Florida, and red state Arkansas. There was, at most, token political opposition in these demonstration states, and the most vocal (and effective) proponent of the research-driven policy was former Arkansas Governor Mike Huckabee. The federal government now encourages state Medicaid programs to adopt this approach. Four different versions of this Medicaid program are now available in 49 states, and 23 states now offer this benefit through non-Medicaid programs, with 33 states expanding the program to veterans. Self-direction programs for life insurance policyholders are now also available nationwide.

Exhibit Two: Oxford University Professor Peter Tufano has also been interested in how state and federal policy can affect financing for the public good, specifically in what might motivate low-income Americans to save for their futures. This is not a new problem, but one that is becoming more acute in the United States. Between a quarter and a third of Americans have no retirement savings, and baby boomers are aging.

When researchers asked low-income Americans whether they would be more likely to accumulate $500,000 by saving or by playing the lottery, more of them chose the lottery. And Americans of all income levels spent some $78 billion in 2012 on lottery tickets. Research reports from the Harvard Business School, the University of Maryland, and the National Bureau of Economic Research have shown that a new savings model can appeal to current non-savers in the way lotteries do. And the research was promoted to policymaker audiences.

The research contributed to state credit union reforms, enabling people in a handful of states to play out their desire to gamble by opening prize-linked savings accounts through a program called Save-to-Win. Participants experience the thrill of playing the lottery but don’t have to incur the associated risks. They receive one raffle ticket just for opening a special Save-to-Win account and another for every $25 deposited. These raffle tickets each count as a chance to win a cash prize. Prizes can range from a $25 monthly prize for 150 winners to an annual Grand Prize of $10,000 for three winners. However, those not lucky enough to win are not losers; the money they deposited is still theirs in their savings accounts and remains eligible for entry into subsequent drawings. The amazing finding is that although participants are free to “cash out” at any time, winners often choose to deposit their winnings back into that same account, earning more raffle tickets for the next drawing. More important, they are increasing their savings and making saving a habit.
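
To make the ticket arithmetic concrete, here is a minimal sketch, in Python, of the rule described above (one ticket for opening an account, plus one for every $25 deposited). The function name and the example deposit total are illustrative assumptions, not features of any actual Save-to-Win implementation.

```python
# Minimal sketch of the Save-to-Win ticket rule described in the text:
# one raffle ticket for opening the account, plus one per $25 deposited.
# The function name and the example figure below are hypothetical.

def raffle_tickets(total_deposited_dollars: int) -> int:
    """Tickets earned: 1 for opening the account, plus 1 per full $25 saved."""
    return 1 + total_deposited_dollars // 25

# Example: a saver who deposits $175 over the year holds 8 raffle tickets,
# and keeps the $175 in savings regardless of whether any ticket wins a prize.
print(raffle_tickets(175))  # -> 8
```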

Media coverage of this counterintuitive innovation attracted congressional attention, and bipartisan legislation was introduced by Reps. Derek Kilmer (D-WA) and Tom Cotton (R-AR) in the House, and by Sens. Jerry Moran (R-KS), Sherrod Brown (D-OH), and Elizabeth Warren (D-MA) in the Senate. This coalition of strange bedfellows sought to expand Save-to-Win programs in state- and federally-regulated financial institutions—credit unions and banks alike. Federal racketeering, insurance, and gambling laws had previously prevented the program from moving forward. And to the amazement of Congress-watchers who know paralysis as the norm, President Obama signed the American Savings Promotion Act into law last December.

Exhibit Three: Sometimes research can lead to policy changes without requiring legislative action. Columbia University researcher David Rothman has compiled reams of previous research on physician conflict of interest. One piece of his work, a 2006 JAMA article co-authored with nine other experts in the medical field, focused on the modest swag pressed on physicians by representatives from pharmaceutical companies—the pads, pens, paperweights and other tchotchkes that seemed too insignificant to influence medical decisions. Few of us freely admit, even to ourselves, that we can “be bought,” but Rothman’s compendium of research showed that doctors’ prescribing behavior is indeed influenced by gifts of all kinds.

Then Rothman did something only the rare academic does: he made the effort to communicate clearly to the media and others influencing medical education and medical practice, such as physician associations, leaders of academic health centers, and reform-minded foundations, so that policymakers could learn, take notice, and act. Media coverage of the research caught the attention (and subsequent support) of the Pew Charitable Trusts, and university administrators followed up as well. Some, such as Stanford’s Phil Pizzo, then dean of the School of Medicine, quickly realized patients should worry when their doctors scribble orders for a specific drug not because it’s best for the patient, but because they’ve been subtly nudged in that direction by the logo on their pen. Although the American Medical Association guidelines already prohibited doctors from accepting “substantial” gifts valued at $100 or more, Pizzo took it one step further and enacted a policy that disallowed physicians and scientists from accepting any and all gifts from companies, however small.

Tchotchkes were the tip of the iceberg. Republican Senator Charles Grassley undertook an investigation that uncovered the sinews of pharmaceutical gifting and its impact on physician conflict of interest. The result was the passage of the Physician Payments Sunshine Act in 2010, which required drug and medical device companies to disclose any payments made to doctors, ranging from research grants to any item worth at least $10. This law put pressure on pharma and its reps to stop the giveaways and, in turn, on doctors to forgo gifts, vacations, and paid speaking engagements—all a lot more costly than writing pads.

The upshot was a decline in “gifting” to doctors, which has the important benefit of giving patients increased confidence that doctors are prescribing the right medication, free of bias planted by product salesmen. Rothman has directly affected the practice of medicine, drug company marketing practices, and, inadvertently, the livelihoods of advertising specialties salespeople. And the solution had bipartisan appeal: to liberals, it helped limit the power of the pharmaceutical industry, and to conservatives, it improved the public’s health without resorting to a highly regulated government-financed program.

None of the specific policy solutions spotlighted here have fallen prey to divisive political warfare. The Cash and Counseling initiatives had near-unanimous support in state legislatures. And, Save-to-Win had polar opposites Elizabeth Warren and Jerry Moran as Senate co-sponsors. “Politicians from opposite ends of the ideological spectrum may come to support this idea for different reasons, but, in the end, that’s beside the point,” U.S. Rep. Derek Kilmer told me. He’s the former Washington state senator who pushed Save-to-Win in the state legislature and was essential to its recent passage in the House. “A message that really appealed to Democrats is that it helps people build assets. But, when I approached one of my Republican colleagues about becoming a co-sponsor, his response was: ‘So what we’re talking about is eliminating an unnecessary regulation to allow private financial institutions to offer an innovative product?’ I said, ‘Absolutely!’”

Overcoming partisanship

Kilmer is on to something. It doesn’t matter why political opponents arrive at the same conclusion; what matters is that, in the end, they agree. Ideological purity is not the goal here. Consensus is the end game, regardless of the reasoning that led to it.

This principle can help identify opportunities for research to build political consensus on important social policies. Public concern about the deaths of black men at the hands of local police in North Charleston, Ferguson, New York, and Baltimore has accelerated a national conversation about their excessive incarceration in this country, but it has also spotlighted urban violence more generally. One research-based initiative that is capturing media and policymaker interest is the Chicago-based Cure Violence model, which is premised on a public health solution to gun violence. Cure Violence approaches gun violence as an epidemic, where violence begets violence, as opposed to a criminal justice problem that can be solved by incarcerating more offenders. The approach involves placing former felons back on the street as “credible messengers” who are uniquely qualified to interrupt the cycles of shootings before they occur.

The model is working. The Chicago and Baltimore programs have been rigorously evaluated by the U.S. Department of Justice and Johns Hopkins University, respectively, with glowing results. Gun-related activity is markedly lower in neighborhoods where Cure Violence is being implemented than in those without the intervention. And the U.S. Conference of Mayors has taken notice, endorsing an approach that has spread to Kansas City, New York City, and New Orleans, with other cities considering how to implement the model. Support has been bipartisan, often involving law enforcement officials, because policymakers are exasperated at the scope of the problem, and solid research has given them an alternative that works.

At this particular moment in our history, the shared aim of reducing the prison population seems to trump disagreements about the means of getting there. The conservative Right on Crime initiative had been partnering with the American Civil Liberties Union and other politically progressive groups before the recent spate of shootings. Now, we see that most of the presidential candidates are calling for reducing the number of incarcerated adults in the United States, albeit for different reasons.

John Malcolm, the director of the Edwin Meese III Center for Legal and Judicial Studies at the Heritage Foundation, was quoted in an April New York Times article saying that Republican support comes from different perspectives. He notes that fiscal conservatives see an easing of the drain on public resources, social conservatives are focused on faith-based redemption, where sinners are given a second chance, and others see policy reform as part of scaling back the reach of government. And on the Democratic side, Hillary Clinton frames the issue as one of social justice and has said that we must avoid another “incarceration generation.” All seem to agree that “we need a true national debate about how to reduce our current prison population while keeping our communities safe,” and that a reevaluation of “current draconian mandatory minimum sentences” is necessary to ensure people are not sentenced to prison terms that they don’t deserve, as Clinton and Ted Cruz have said, respectively. When else have Cruz and Clinton agreed on anything? The growing bipartisan concern about the safety of black men in this country, as well as about an overpopulated prison system, creates an important opportunity for researchers to contribute knowledge and ideas that hasten a political convergence around evidence-based solutions.

There is also promising new evidence that peer-based solutions might have a greater impact on childhood aggression than treating children under the traditional psychotherapeutic model wholly managed by clinical professionals. Marc Atkins, a research psychologist at the University of Illinois-Chicago and director of the Institute for Juvenile Research, is working with kids as young as grade-schoolers. His research makes the case for “context-based peer mentoring”—a model that integrates youth mentoring into wherever children learn, live, and play. Such a broad expansion of the mentoring concept could have huge implications for the mental health of low-income children in urban settings and, if expanded, could offer low-cost, community-based approaches that appeal to left and right alike.

In these polarized times, research that has the best chance of attaining significance in the policy world is research on a topic that can somehow sidestep politics or point toward solutions that can appeal to competing ideologies. Even today, there are areas about which left and right can agree, and it’s in these areas that conducting and communicating research is especially crucial: improving patient safety, increasing clinically responsible care choices in our dying days, promoting personal savings for emergencies and retirement, incentivizing safe sex, reducing jail populations, ensuring economic competitiveness in the growing renewable energy market, and mitigating the risks of natural disasters are some examples.

More examples are on the horizon. An emerging national movement, spurred in part by the Federal Reserve as an extremely nontraditional partner, argues that poverty is a principal driver of poor health and that improving the health of the U.S. population depends as much or more on combating unemployment, substandard housing, and poorly planned communities as it does on doctor visits. In 2013, then-Federal Reserve Board Chair Ben Bernanke named the health care sector as “one of the most promising new partners” in the Fed’s poverty-focused community development work. When the San Francisco Fed’s David Erickson wrote this in 2009, it was groundbreaking:

“The reality is that people who live in supportive, connected, and economically-thriving communities tend to be healthier. Therefore, perhaps the most important contribution that community development finance provides—more than the affordable apartments, more than the startup capital for small businesses, more than the funding for a grocery store, charter school, or day care center—is the larger contribution of a more vibrant and healthier community. In the end, the most important contribution of community development finance may be something we don’t focus on or measure: the billions of dollars of social savings from fewer visits to the emergency room, fewer chronic diseases, and a population more capable of making a contribution as healthy productive citizens.”

Six years later, Erickson is no longer a lone voice or pioneer. A mountain of social science research on this topic informed the Robert Wood Johnson Foundation’s Commission to Build a Healthier America, led not by doctors and health experts, but by economists from both political parties. Commissioners from a range of political backgrounds, co-chaired by Republican Mark McClellan and Democrat Alice Rivlin, noted that “despite our differences” we must create a seismic shift in how we approach health, focusing on how to stay healthy in the first place. Social impact bonds and high-quality early childhood education are two examples of bipartisan convergence on Commission-backed policy initiatives that can improve health.

The New York Times reflected this growing consensus in a December 2014 editorial, citing social determinants of health as a “big idea” for social change: “Health is undercut by substandard housing, air pollution, food deserts, dangerous streets, trauma and toxic stress. Being poor can make you sick, and doctors can’t always help.”

Perhaps research is needed to flag those policy areas most amenable to research. But the common ground appears to be linked in some way to existing momentum in the political world, the salience of the issue, the avoidance of locked-in issues such as abortion and gun rights, a focus on common goals over common rationale, and a conscious effort to use research results to help identify and explain counterintuitive interventions such as encouraging retirement savings with lotteries and improving doctors’ judgment by taking product logos off of pens and pads.

But the focus on communicating life-altering research that holds the greatest promise for skirting politics does not absolve most researchers from the basic sin of communicating too infrequently with the media, advocates, policymakers, nongovernmental policy experts, and a variety of vested interests. That is where the action happens. Whether the research proves the profound risks associated with complacency around climate change or the merits of vaccination or the most promising alternatives to over-incarceration, the imperative of basic communication is as valid today as it has ever been.

“I used to have this argument with our provost,” said Kevin Mahoney, the Boston College professor who was a lead architect of the Cash and Counseling model. “He believed a researcher is never an advocate and would say ‘You’re always supposed to be finding flaws with our own work and moving on to the next iteration.’ And, I would say, ‘When you have really strong research results, it’s basically an obligation to get them out. It’s an obligation!’”

The “obligation” here is not only to communicate research, but to do so effectively by understanding the political contexts of the work and learning how to navigate them. Otherwise, really, really important research is destined for the proverbial shelf—or worse, to be captured by partisan politics. Social scientists need to realize that their conclusive findings might have implications, great and small, for policy, and ultimately for the people whom the research is intended to benefit in the first place.

In today’s world, failure to communicate online, with the commercial media, with those positioned to influence the policy process, and with policymakers themselves—in and out of government—is research malpractice. There is now plenty of evidence that well-communicated social science and clinical research, on issues that are ripe for action, can dissolve partisanship to the great benefit of our individual and collective lives.

Andy Burness ([email protected]) is the founder and president of Burness, a global communications firm in Bethesda, Maryland.

Technologies for Conserving Biodiversity in the Anthropocene

Conservation biologists have endeavored to preserve biodiversity from the most extreme excesses of human environmental destruction. Most of these efforts to reverse, halt, and even slow biodiversity decline have proven ineffective, with the downward trends in most biotic groups showing no signs of abating. Human pressure on remaining tracts of natural habitat has not eased and will likely intensify because of climate change. Although the quest for ever-increasing standards of living by an ever-growing human population is the cause of the biodiversity crisis, it can also be the source of its mitigation, if the technological innovation that drives economic development is harnessed to stem biodiversity loss. Such an effort will require much more invasive intervention in biological processes, thereby further blurring the line between nature and humans that conservation biologists have long sought to preserve. But perhaps it is time to embark on a more explicitly symbiotic relationship with our environment and the biota that it harbors. As a species, humans are distinguished by their ambition and capacity to control natural phenomena through technological innovation. This innovativeness is now needed by conservation biologists to combat the threats to biodiversity that technology itself has helped to create.

Competition for space remains an almost insurmountable challenge for biological conservation, and future efforts to provide habitat to secure species and biological processes will focus on maintaining and managing (rather than consolidating) protected areas. Globally, the protected status of established terrestrial and marine parks may be eroded if it becomes clear that their boundaries no longer preserve intact habitats or trophic webs due to mismanagement or other threats such as climate change and pollution, or even if their location is seen to impede economic development. For example, the plans for development along the northeast Australian coast are threatening the United Nations Educational, Scientific and Cultural Organization (UNESCO) World Heritage status of the Great Barrier Reef.

The value of our global complement of national parks and marine reserves will best be assured by having real-time data on the health of the habitats, biota, and biological processes they harbor, allowing us to better mitigate the threats they face. Hyperspectral imagery of landscapes can provide detailed information on a host of chemical and geological parameters and biological processes in both terrestrial and aquatic systems, and huge strides have been made in recent years in terms of imaging techniques, data analysis, and modes of deployment. Aerial and aquatic drones are increasingly being used to routinely monitor tracts of habitat and even individual animals. These types of remote sensing can help ensure that habitats remain healthy and protect the biota they are refuges for, while offering the possibility of rapid alert systems for failing food webs or trophic systems or excessive human interference.

Restoration ecology can play a significant role in augmenting the conservation value of marginal and degraded lands. Indeed, the growing land bank of damaged habitats around the world due to over-exploitation presents an opportunity for conservationists. Bioremediation techniques—for example, the use of plants and microbes to extract metal contaminants—have advanced to the point that we can use natural processes to help “re-wild” damaged habitats. But habitat recovery can also occur naturally in the most surprising locations if humans can be excluded. For example, the area surrounding Chernobyl, Ukraine, has recovered remarkably following the nuclear disaster in 1986, with native fauna taking advantage of the dearth of human activities to re-wild the exclusion zone, suggesting that even the most damaged landscapes are not beyond hope of recovery if technology can be employed to limit human incursions.

Such technologies are now within reach. Robots or perhaps even cyborg animals (remotely controlled by humans using microchips linked to the animal’s brain) could be used to enter areas that either cannot or should not be accessed by humans, and to limit unwanted contact between humans and a species targeted for protection, although there are ethical issues to be considered with this latter approach. With increasing affordability and improving technology, camera-trapping (the deployment of motion-detection cameras that trigger when an animal passes by) is rapidly gaining popularity. This technique can be used to non-invasively detect or monitor both vulnerable species and human presence in largely inaccessible areas. Advances in bioacoustics can greatly facilitate our ability to track enigmatic species such as marine mammals, and they could also provide a simple means of detecting human encroachment on protected areas. Monitoring reproductive status and other physiological parameters in the wild can be facilitated by broader deployment of biotelemetry devices and the use of mobile communication networks. Advances in brain mapping may eventually be applied to technologies that can determine how species perceive their environment. Such information could help identify and ameliorate stressors that could be impediments to reproduction or survival and considerably improve animal welfare, although the resources and effort needed to develop these capabilities probably mean they will be applied to charismatic keystone species, at least initially.

From climate engineering to cloning

Climate change may adversely affect habitats and species in many ways, but perhaps the greatest threat is altered weather patterns. The rapidly evolving science of weather modification may provide an option for counteracting its local effects. For example, reduced rainfall in certain areas could cause extended droughts, drastically affecting the local water cycle and vegetation structure of protected habitats. Considerable progress has been made in precipitation-inducing technologies, and cloud-seeding may need to be employed in particularly high-risk or vulnerable habitats. In other areas, park managers may be able to pump groundwater to supply existing surface water bodies to preserve vegetation and fauna, although, of course, these groundwater reserves may themselves be depleted. Advances in solar-powered and flow electrode technologies for large-scale desalination of seawater may also provide a crucial solution to water shortages, at least in coastal areas. With better predictive models of climate behavior, other geoengineering interventions, from mitigating extreme weather to protecting against excessive ultraviolet (UV) exposure, may eventually become feasible; for now they remain controversial and highly uncertain.

Although climate change mitigation technologies will largely involve broad-scale interventions in habitats and biological processes, a complementary and more targeted approach will be necessary to restore species and populations that have been exterminated, or to support those so diminished in size as to be unviable without human help. We must look to rescuing, reinforcing, restoring, and recovering threatened and extinct populations.

The techniques that have been developed for captive populations in zoos, aquaria, and botanic gardens will increasingly be employed in the wild, where fragmented and isolated populations will mirror the scenario of ex situ conservation. Cloning technology has the potential to remedy the ignominious extinction of the bucardo, or Pyrenean ibex, the last specimen of which was killed by a falling tree in January 2000. Indeed, proof of concept has already been achieved with the birth of a cloned bucardo in 2003, reported in 2009. Plant and animal germplasm can be preserved through the current biodiversity crisis by cryogenics, thereby providing a safety net for species by securing material for future cloning work if needed. Once competence with this technology has been achieved, future efforts should focus on recently extinct keystone species rather than mammoths and other long-extinct fauna and flora. Resurrecting an extinct species is pointless if the habitat in which it lived has disappeared and the factors that caused its demise have not been resolved; otherwise such efforts are nothing more than a conveyor belt for scientific curiosities.

Cloning could also be used to reintroduce genetic diversity into genetically depauperate populations using DNA taken from museum or other preserved specimens, as has already been attempted for the mouflon, a species of wild sheep. Plant propagation technology for endangered species has developed rapidly, with notable successes in preventing probable extinctions, for example of the large-flowered fiddleneck. Assisted migration is likely to become increasingly employed for threatened plants under climate change scenarios. Similarly, it is clear that current gene flow between isolated populations, such as those of the giant panda, is insufficient, and the number of individuals in some habitat fragments is inadequate for long-term persistence. Thus, humans may have to mediate the necessary gene flow either by relocating individuals or, more logically, by transplanting their gametes or embryos by means of artificial reproductive techniques (ART). Although huge strides have been made in ART for humans and domestic animals, and some success has been achieved for captive animals, including the panda, applicability to wildlife management has received less attention. The San Diego Zoo Institute for Conservation Research (United States) and Kew Royal Botanic Gardens (United Kingdom) are pioneering efforts to make ART a more feasible option for the conservation of the most seriously threatened species in the wild.

A significant barrier to advancement of ART in wildlife is our limited knowledge of the mechanisms of mate choice in animals, particularly vertebrates. Greater deployment of biometric devices in targeted species will allow us to remotely monitor the physiology of both plants and animals, thereby facilitating real-time and targeted human interventions. For example, biometric monitoring of estrus in an endangered mammal would tell park managers when to deploy ART. And although nowadays biometric devices most often need to be implanted in individuals, the development of bioinks could one day allow biometric circuits to be printed directly onto the skin of animals or the leaves of plants to relay real-time warning signals of stress or other relevant data.

Repelling invaders

Conservation biologists widely regard invasive species as the second-most significant threat to biodiversity after habitat destruction. The ubiquity of the most successful invasive species could lead to a reduction and homogenization of biodiversity across broad ecotones, with only those species adapted to or tolerant of human disturbance persisting. Perhaps this process will ultimately be deemed an evolution of the global biosphere in response to human activities in the Anthropocene, and efforts to preserve faunal assemblages according to some historical census will be viewed as unattainable idealism or romanticism. However, invasive species cannot be allowed unfettered access to new territories as they clearly perturb food webs by competing with indigenous species for resources, modifying habitats, and altering the localized predator-prey balance, amongst other impacts. As trade barriers across the globe continue to fall, opportunities for invasive species will increase, and this needs to be counteracted by improved monitoring and targeted interceptions. For example, ballast water discharged by ships has historically been a major route for aquatic invasive species. Rapid advances in technology, such as pre-discharge UV and chemical treatment, should allow implementation of routine control procedures.

Significant advances can be made in our use of bioindicators to signal environmental threats and to assess damage from invasives. Environmental DNA (eDNA; the residual DNA left in the ambient environment by plants and animals) could be used to quickly identify invasive species. Automated sequencing stations could routinely sample eDNA from air, soil, or water to continuously monitor for biological invasions in critical habitats. Remote sensing from satellites and drones can be used to detect biological invasions and to track the progress of biological control agents, for example by mapping the resulting changes to vegetation.
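
To illustrate how such a station might flag an invasion, here is a deliberately simplified Python sketch. The barcodes, species, and station name are hypothetical, and a real pipeline would rely on curated barcode reference databases, quality filtering, and fuzzy sequence matching rather than exact string lookups.

```python
# Hypothetical reference barcodes for two invasive species; a real system
# would query a curated DNA barcode reference library.
INVASIVE_BARCODES = {
    "ACGTACGTGGCA": "Dreissena polymorpha (zebra mussel)",
    "TTGACCGTAACG": "Lissachatina fulica (giant African snail)",
}

def screen_edna_sample(reads, station_id):
    """Compare sequence reads from an automated eDNA sampling station
    against known invasive-species barcodes and flag any matches."""
    detections = []
    for read in reads:
        for barcode, species in INVASIVE_BARCODES.items():
            if barcode in read:
                detections.append((station_id, species))
    return detections

# Example: one read from a water sample contains a zebra mussel barcode.
sample = ["GGGACGTACGTGGCATTT", "CCCCTTTTAAAAGGGG"]
for station, species in screen_edna_sample(sample, "river-mouth-03"):
    print(f"{station}: possible detection of {species}")
```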

Although biological controls can cause more harm than good (such as the introduction of cane toads into Australia to control beetle infestations), improving awareness of ecological interactions and demographics should allow us to develop more successful eradication programs for invasive plants and animals. One potential novel approach is to use the individualized volatile organic chemical signatures of invasive plants to attract targeted biocontrols. But given that we may never be able to prevent all human-derived biological invasions, new technological approaches can help manage those that do occur. The potential for hybridization with native fauna or for zoonotic disease transmission from invasive species can be monitored by using technology for recording animal movements and interactions. Genetic sequencing of a honey bee subspecies (Apis mellifera syriaca) has revealed genes that confer resistance to the Varroa mite, a major driver of colony collapse, highlighting the practical applicability of genetic profiling and the potential for technology to help reverse some of the damage caused by biological invasions.

Promoting peaceful cohabitation

As human activities increasingly pervade all ecosystems and habitats, human-wildlife conflict grows. The recent public and media furor over the illegal killing of Cecil the lion in Zimbabwe refocused attention on the continued problem of poaching and unlicensed trophy hunting. Strong international policy tools have been unable to counter consumer demand in some markets, ignorance of conservation issues in others, and weak enforcement in certain jurisdictions. Wildlife forensics is now a legitimate scientific field in its own right, facilitated by the rapid adoption of genetic and stable isotopic analyses to determine the source of illegally obtained biotic material. Furthermore, an array of tracking devices and drones are already being deployed to protect the most vulnerable animals, such as rhinos and elephants. Real-time data on animal movements can help with policing of the wildlife-human interface. A combination of global navigation satellite system (GNSS) technology (such as GPS) and rapidly diminishing transponder sizes will greatly assist in these efforts. In fact, GNSS transponders that transmit to multiple satellites are about to be routinely used by some militaries to track soldiers and their equipment to millimeter accuracy—these transponders will be so small that they can be sewn into soldiers’ uniforms, raising the future prospect of implantable tracking devices for many species of threatened wildlife.

Technology has the potential to mediate wildlife-human conflict in other ways. For example, crop-raiding animals can threaten the livelihoods of subsistence farmers and, in the case of some animals like elephants, also kill people. GNSS-tagging of herds or rogue individuals could allow park managers to intervene or to alert farmers when threatening animals are nearby, and allow the farmers to take preventative action. Enhanced knowledge of animal behavior through advanced monitoring technologies can further improve management techniques. Elephants, for example, dislike the sound of bees. Wildlife managers could use proximity loggers to trigger playback of recorded bee swarm sounds when elephants approach human settlements. Brain-mapping projects on wildlife could identify other interventions for mediating human-wildlife conflicts by clarifying the driving forces behind encroachment, which is only likely to increase in the future.
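
A minimal Python sketch of how such a proximity-triggered deterrent might work is shown below. It uses GNSS fixes rather than dedicated proximity loggers; the coordinates, alert radius, and callback functions are hypothetical placeholders, and a field system would of course handle telemetry, audio playback, and farmer alerts through dedicated infrastructure.

```python
import math

# Placeholder coordinates and threshold, for illustration only.
SETTLEMENT = (-1.2921, 36.8219)   # (latitude, longitude) of a village
ALERT_RADIUS_M = 500              # trigger distance in meters

def distance_m(a, b):
    """Approximate ground distance in meters between two (lat, lon) points
    (equirectangular approximation, adequate at village scale)."""
    lat1, lon1, lat2, lon2 = map(math.radians, (*a, *b))
    x = (lon2 - lon1) * math.cos((lat1 + lat2) / 2)
    y = lat2 - lat1
    return math.hypot(x, y) * 6_371_000

def check_elephant_fix(tag_id, position, play_bee_sounds, notify_farmers):
    """Called for each incoming GNSS fix from a tagged elephant."""
    d = distance_m(position, SETTLEMENT)
    if d < ALERT_RADIUS_M:
        play_bee_sounds()                 # acoustic deterrent
        notify_farmers(tag_id, round(d))  # e.g., SMS to nearby farms

# Example wiring with stand-in callbacks:
check_elephant_fix(
    "elephant-07", (-1.2950, 36.8230),
    play_bee_sounds=lambda: print("Playing recorded bee-swarm audio"),
    notify_farmers=lambda tag, d: print(f"Alert: {tag} is {d} m from the village"),
)
```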

Technological advances in animal husbandry and plant propagation for highly marketable biological products could reduce the incentives for illegal trade (for example, crocodile farming has reduced poaching of wild populations for skins). It is even possible that future technology may bring about synthetic substitutes for some of the most sought after animal products, for example by 3D printing rhino horn tissue.

Rapid advances in monitoring wildlife movements and behavior must be matched by advances in the statistical and analytical methods necessary for making sense of the vast quantities of data that will be generated. Yet the field of conservation biology has not typically attracted the types of technical experts necessary to take advantage of emerging opportunities in “big data.” This problem is partly being addressed through the multi-disciplinary training that most biologists receive today and through the ever-expanding communication network between scientists in different fields. Conservation biologists must increasingly look beyond their typical network of biological specialists to experts in engineering, food science, information technology, agriculture, medicine, robotics, mathematics, architecture, and other fields to find technological solutions to aid them in their efforts to curb the threats to biodiversity and ecosystem health.

Though perhaps not in itself a technology, the emerging arena of citizen science can become an enormous asset in a variety of efforts aimed at conserving biodiversity in the Anthropocene. Amateur nature-lovers have a long history of contributing valuable observations and data to conservation efforts, for example with birdwatchers documenting rare species occurrences and long-term migration patterns. Social media and the phenomenal growth and ease of access to communications technology worldwide, combined with the growth in cloud computing, can facilitate collection, analysis, and dissemination of vast quantities of complex data by interested citizens. Horizon scanning, hackathons, and other such participatory techniques that are widely used in economic and communication sciences could easily be practiced by conservation biologists, and could realize rapid benefits in terms of identifying emerging issues and testing potential solutions.

The continued pursuit of higher standards of living and the material benefits of technological innovation by all societies will ensure a constant, if not increasing, pressure on the Earth’s habitats and biodiversity. And with most national economic policies founded on the notion of continuous economic growth it will take a remarkable change in economic ideology and social organization for the current trend in resource exploitation to be arrested. However, if we accept this inexorable trajectory based on current political, cultural, and economic realities—that is, if we accept the reality of the Anthropocene—it can pave the way for more widespread adoption of direct technological interventions aimed at reversing the most negative impacts of biodiversity decline, and greatly enhance the tools and strategies available to support and conserve existing biological processes. Indeed, the willing and even aggressive adoption by conservation biologists of novel tools to tackle the multiple threats faced by habitats and the biota they harbor will be crucial to counteract widespread species extinction and ecosystem collapse.

John O’Brien is a zoologist with a Ph.D. in conservation genetics from University College Dublin, Ireland, and now works at the Institute of Molecular Biology in Taiwan.

A New Social Science

Envious of several hundred years’ worth of advances in human understanding of the physical universe, some French thinkers, led by Auguste Comte, started clamoring in the early 1800s for pursuit of similarly rigorous mathematical insights into human behavior. They called these pursuits “social physics.”

Two centuries later, the days of social physics are drawing nigh. Arguably, the Isaac Newton of social physics is Alex “Sandy” Pentland, director of the Massachusetts Institute of Technology’s (MIT) Human Dynamics Laboratory and author of the first installment of his principia, Social Physics: How Good Ideas Spread–The Lessons from a New Science. This approachable and fascinating book pulls together years of Pentland’s work in the furtherance of Comte’s dream. Pentland, as did his forebears, believes that researchers can derive “a causal theory of social structure” leading to “a mathematical explanation for why society reacts as it does and how these reactions may (or may not) solve human problems.”

The age of social physics is enabled by the digitization of information and the ease of capturing unimaginably large amounts of data about people’s activities, revealed by tech gadgetry, credit-card and now m-money transactions, geolocation data captured by mobile devices, search engine queries, online browsing, social media activity, health records, and government data. For Pentland, social physics “is a deceptively simple family of mathematical models that can be explained in plain English and gives a reasonably accurate account for the dozens of real-world examples … including financial decision making…, ‘tipping point’-style cascades of behavior change, recruiting millions of people to help in a search, to save energy, or to get out and vote; as well as social influence and its role in shaping political views, purchasing behavior, and health choices.”

The most important research described here takes data-capture to another level. Pentland and colleagues have done a variety of “reality mining” experiments by giving people gizmos called “sociometers” that hang on a cord around their necks and track their movements, record their voices, and clock their interactions with others who are wearing similar badges. The millions (sometimes billions) of data points generated by these encounters yield something akin to a “god’s eye view” of the interplay of a particular bunch of actors.

Pentland is especially focused on two things: idea flow within groups, which entails both exploration (finding new ideas) and engagement (getting everyone to coordinate their behavior), and social learning, or “how new ideas become habits and how learning can be accelerated and shaped by social pressure.” And he provides abundant evidence that the data generated in this work illuminate such things as how ideas emerge in groups, who has influence, what social and personal traits are most helpful to groups, and how people behave in echo chambers of ideas.

One exploration of “collective intelligence” in groups found that the factors often extolled by experts, such as cohesion, motivation, and satisfaction, were not statistically significant in predicting how smart a group is. Instead, the largest factor in predicting group intelligence was the equality of conversational turn-taking; that is, groups where a few people dominated the conversation were less collectively intelligent than those with a more equal distribution of conversational contributors. The second most important factor was the social intelligence of group members, measured by their ability to read each other’s social signals. He concludes, “Women tend to do better at reading social signals, so groups with more women tended to do better” on measures of collective intelligence.
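
To make the turn-taking idea concrete, here is a minimal Python sketch of one way to score how evenly conversational turns are distributed. It uses normalized entropy as an illustrative equality measure; it is not Pentland’s actual metric, and the example speaker log is invented.

```python
import math
from collections import Counter

def turn_taking_equality(speaker_log):
    """Score how evenly conversational turns are spread across group members:
    1.0 means perfectly equal turn-taking, values near 0 mean one person
    dominates. Illustrative only; not Pentland's actual metric."""
    counts = Counter(speaker_log)
    n = len(counts)
    if n <= 1:
        return 0.0
    total = sum(counts.values())
    shares = [c / total for c in counts.values()]
    # Normalized Shannon entropy of the turn shares.
    entropy = -sum(p * math.log(p) for p in shares)
    return entropy / math.log(n)

# Example: a four-person meeting in which one member takes most of the turns,
# as might be reconstructed from sociometer audio timestamps.
log = ["A"] * 12 + ["B"] * 3 + ["C"] * 3 + ["D"] * 2
print(round(turn_taking_equality(log), 2))  # about 0.8; an even split gives 1.0
```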

These kinds of empirical insights really push the boundaries of much of traditional social science, which often frames research around social class, race, ethnicity, political groupings, and gender. This can be a good thing because contemporary life is often powerfully shaped by networked peer groups, which Pentland appropriately notes are organized around “shared norms” rather than just “standard features such as income” or “their relationship to the means of production.” I would refine this by noting that people’s personal networks are critical, too. Knowing about the size, diversity, and composition of a person’s network is a strong predictor of all kinds of things, including his or her political views, civic activities, consumer choices, spiritual passions, and hobby interests. My guess is that this personal network structure would also be a useful indicator in the kinds of things Pentland probes in his examination of idea flow and social learning.

At the same time, I read the wealth of Pentland case studies and wondered about demographic differences and whether the models might need to adjust for differences that clearly still show up when younger people are compared with their elders, whites are compared with African Americans, Latinos, or Asian Americans, and those in more upscale classes are compared with those in less-well-off classes. It is easy to imagine that the “laws of social physics” are not actually uniformly distributed and applicable.

Then there is the dicey question of how insights gained from social physics might be used by policymakers or corporate managers. Pentland openly hopes the models will help produce “sensing cities” and “data-driven societies” and will inspire the reconfiguration of organizations that are hamstrung by their hierarchical structures. A small, mostly virtuous example: One Pentland study of a bank’s call center (which generated “tens of gigabytes” of data) showed that a modest change in the coffee-break schedule to give team members a chance to be together informally as a group yielded greater productivity to the tune of $15 million per year. Some skeptics, however, fret that formulaic implementation of social physics principles will reduce human agency and autonomy, producing a kind of hyper-“nudge” society that steers people as much as it liberates them.

It is clear that some kind of analytical structure of social physics is an inevitable part of the future. Society is entering the age of data and all the algorithms that go with the data. Pentland knows that his high hopes will work only if there is sufficient trust among the public that the data being collected are being used appropriately. He has championed a “New Deal on Data” that provides “workable guarantees that the data needed for public goods are readily available while at the same time protecting the citizenry.” The three planks that undergird this deal: You have the right to possess data about you. You have the right to full control over the use of your data. You have the right to dispose of or distribute your data.

He would implement this arrangement by sharing data through “trust networks”—computer networks that keep track of user permissions for personal data, combined with legal contracts that specify what can and cannot be done with the data. There would be enforceable penalties if a contract were breached. In effect, this New Deal on Data is Pentland’s “Bill of Rights” for citizens in a world where they are “outsourcing” some major elements of decision making to the masters of “big data.”
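
The technical core of such a trust network is a machine-readable ledger of who may use which data for what purpose. The Python sketch below is a minimal illustration of that idea only, not Pentland’s actual system; the owner, data categories, and purposes are hypothetical.

```python
from dataclasses import dataclass, field

@dataclass
class DataPermissions:
    """One user's entry in a hypothetical trust-network ledger: the
    machine-readable permissions for each category of personal data."""
    owner: str
    grants: dict = field(default_factory=dict)  # category -> set of allowed purposes

    def allow(self, category, purpose):
        self.grants.setdefault(category, set()).add(purpose)

    def is_permitted(self, category, purpose):
        return purpose in self.grants.get(category, set())

# The network checks every request against the owner's grants before releasing
# data; a denied request would fall back to the legal contract's enforcement terms.
perms = DataPermissions(owner="alice")
perms.allow("location", "public-health-research")
print(perms.is_permitted("location", "public-health-research"))  # True
print(perms.is_permitted("location", "targeted-advertising"))    # False
```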

Pentland’s work is persuasive that a new science really is emerging in this data-drenched world. Moreover, the case he makes for evidence-based policymaking is pretty compelling. It might not solve every problem—or even a healthy share of them. Still, it is nice to think that a more rigorous scientific method is being created to test theories and explore data about what makes us tick.

Economics Humanized

In the 1999 film The Matrix, there is a wonderful scene between Morpheus, the leader of a rebel group, and Neo, who is destined to be the movie’s hero. Neo is dissatisfied, sensing something wrong with the world. Morpheus offers Neo a red pill that will allow him to experience the real world. As Neo reaches for the pill, Morpheus intones: “Remember: All I am offering you is the truth.” The red pill wakes Neo to the reality that the world he has “lived in” is merely a computer simulation, with humans actually sustained in liquid vats tied into a vast system harvesting their heat and bioelectricity. The truth revealed, Neo joins the rebellion to try to liberate humans from their machine overlords.

Karl Polanyi’s The Great Transformation is a kind of red pill for anyone interested in seeing more clearly the nature of contemporary American society. Once you read his book, or the very fine recasting of Polanyi in The Power of Market Fundamentalism: Karl Polanyi’s Critique, written by prominent social theorists Fred Block and Margaret Somers, you can never see the social world in the same way.

Karl Polanyi is the most important social thinker of the 20th century that you have (probably) never heard of. The Great Transformation was published near the end of World War II and was intended to influence grand policy on reworking the social compact among labor, capital, and the state. The reach of Polanyi’s argument accounted for the rise of industrial capitalism, the disruption of traditional social protections and the creation of new ones, the emergence of colonization, World War I, fascism, and World War II. The weakest element of his book is Polanyi’s projection, contained in the last chapter, of where democracy was headed. He anticipated the end of utopian beliefs in free markets, or what Block and Somers refer to as “market fundamentalism.” He definitely got that wrong!

Polanyi’s work is the earliest coherent statement of how free market ideology came to dominate the Anglo-American policy landscape. His contemporaries argued over economic models or advocated for free markets as the path to individual liberty. They were trapped within market-oriented worldviews that privileged Homo economicus as the purest expression of human nature. Polanyi stands out for tackling the illusion of Homo economicus and the centrality of markets to life, society, and freedom.

What does the world look like through Polanyi’s eyes? His own words form the best summary:

Our thesis is that the idea of a self-adjusting market implied a stark utopia. Such an institution could not exist for any length of time without annihilating the human and natural substance of society; it would have physically destroyed man and transformed his surroundings into a wilderness.

Polanyi’s was a moral perspective. He clearly saw that humans are not merely “labor” and that nature is not merely “land.” Yet free markets require such commodification, and that puts all of us on a path that, if followed to its logical conclusion, leads to a wasteland. This tension between markets and society creates a “double movement” whereby the state acts to expand the reach of the market while society, often through state intervention, works to limit the destructive powers unleashed by the market.

Think of this as a kind of social plate boundary, to draw an analogy from the geosciences, where the tensions between the demands of the market and the host society produce disruptive events. Geology finds earthquakes, mountain ranges, and volcanic activity at these boundaries. What human society has experienced, according to Polanyi, includes authoritarianism, global war, and economic depressions.

Polanyi’s analytical approach to explaining the unfolding of the Industrial Revolution in England and subsequent countermovements makes clear that states set all of the boundary conditions for markets. States enforce contracts, property rights, competition policy, innovation policy, trade, money supply, and on and on. In Polanyi’s view, the belief of extreme free market advocates—think of Friedrich Hayek and Milton Friedman—that free markets can lead to the end of politics is a destructive fiction. Without a state, a market cannot exist.

If The Great Transformation is a great book—and it is a great book—what is the point of the volume produced by Block and Somers? Polanyi wrote 70 years ago amid the disruption of World War II as an émigré from Austria to England and then the United States. He had little time to rework the book, leaving some elements half-formed, others somewhat contradictory, and some of his history and predictions just plain wrong. The Power of Market Fundamentalism: Karl Polanyi’s Critique represents a successful effort to redress these elements of The Great Transformation. The authors also update the work to make it more clearly relevant to American society in the 21st century. They do an excellent job of laying out Polanyi’s key arguments.

Block and Somers work to clarify Polanyi in, for example, their discussion of “embeddedness.” The authors point to ambiguities in Polanyi’s writing about the degree to which markets are, or can ever be, “disembedded” from society. Reviewing the evolution of Polanyi’s treatment of the relationship of market to society, they convincingly conclude that Polanyi sketched a theory of an “always-embedded market economy,” in which politics and social forces inevitably structure the organization of the market. If the market could ever achieve “escape velocity” in exploiting the fruits of society, both institutions would probably disappear.

The authors also address Polanyi’s treatment of the Speenhamland relief system that emerged in parts of England during the late 18th century. In that system, workers were provided an assured minimum income. Thomas Malthus argued that, by breaking the “natural” link between misery and effort, the system placed rural workers on a perverse path where reproduction would outstrip production. These criticisms, embraced by a Parliamentary Committee, were used to justify the move to the state-sponsored workhouse system of the 1834 New Poor Law.

Polanyi discovered a different Speenhamland perversion, emphasizing how the assured income reduced the incentive of employers to pay decent wages, since the parish would make up the difference. He speculated that if the anti-union Combination Laws had not prohibited worker organization, wages might have gone up. In this historical detail, Polanyi found an important reason to avoid generalizing that attempts to alleviate the suffering caused by a market economy will inevitably leave the intended beneficiaries worse off.

Block and Somers find both approaches to Speenhamland to be based on bad history. In their careful account, Speenhamland becomes an example of how free market ideology, wielded by figures such as Malthus and the economist David Ricardo, suppressed complex realities to weave a simple fiction blaming the poor for their own condition. Instead, the struggle to propagandize Speenhamland in the early 19th century sheds light on the shifting power between the gentry and emerging capital. Although the authors correct Polanyi, they do so in a way consistent with Polanyi’s approach.

The final chapters of Block and Somers’ work push Polanyi into the 21st century. The authors compose an exceptional chapter on modern American discourse around poverty and relief. By the time of the “Great Society,” Americans had largely abandoned the “perversity thesis” that policy intervention in “natural,” self-adjusting markets—in order to, say, ensure a minimum wage for workers—will inevitably result in bad outcomes. Block and Somers demonstrate how Malthus’s deductive arguments about human nature and perverse incentives have been revived since the Great Society programs. The writings of Charles Murray were highly influential in spurring welfare “reforms” embraced by both political parties in the 1990s. Americans now overwhelmingly believe (again) that welfare creates dependency in recipients, despite the total lack of empirical evidence. The power of such zombie ideas is astonishing. The claim remains a touchstone of conservative commentary any time there is social unrest in America’s “inner cities.”

The authors’ second “applications” chapter on the rise of conservative-movement politics is less successful. They lump all business interests together when there are plain differences between sectors. It is true that business is broadly aligned with the Republican Party, but core support comes from extractive and polluting industries (such as oil, gas, coal, and chemical manufacturers). Other powerful sectors, such as Silicon Valley and some parts of the financial industry, tend to line up behind Democratic causes. The authors also treat conservative-movement politics as separate from conservative business interests. Carefully disaggregating the business community opens the door to exploring connections to conservative-movement organizations. The Tea Party movement of the summer of 2010 was not entirely spontaneous but received money and organizational assistance from the Koch brothers (and probably others).

The final chapter is the least convincing. It attempts to sketch a path forward for democracy, as conceived by Polanyi. Although Block and Somers struggle with what democracy could be, the exercise feels removed from U.S. political realities. They offer no reason to believe that there will be a shattering of the visceral American belief that individual freedom lies in the market. Despite wrestling with why “market fundamentalism” retains such power in the United States, contrary to Polanyi’s original prediction, they are unable to persuasively show how a reimagining of democracy could occur.

The Power of Market Fundamentalism is an important book. It comes at a time when the rising concentration of wealth in the top 1% of American society is combined with an end to effective limits on campaign contributions. In the wake of the Citizens United v. FEC ruling, which removed restrictions on political spending, we are witnessing the spectacle of presidential candidates actively trying to line up their “own” billionaire to support primary campaigns. If democracies could have warning systems, that of the United States would be ringing very, very loudly. By reintroducing Polanyi’s analytical style and critique of market fundamentalism, Block and Somers have crafted a red pill for the 21st century.

Advice to My Smart Phone

Until recently our friends were the ones who knew us best, and perhaps even knew what was best for us. But nowadays that role is being claimed by our smart phones: our new best friends. Loaded with sensors and apps, they know which shoes, books, and music we might like. And a new generation of apps is also helping us to lead physically and mentally healthier lives.

We’ve grown familiar with joggers being motivated by their running apps. But now there are even apps to assist you in things like raising your newborn child. And for people who aren’t motivated by kind words and inspirational messages, apps working with wearables like the Pavlok bracelet use electric shocks to get you off the couch.

Over the past two years I’ve studied the rise of apps aimed at monitoring and improving our personal lives. They signal the emergence of a new type of coach: the digital lifestyle coach, or e-coach. The e-coach differs in many ways from its analogue predecessor: its approach is data-driven, its monitoring regime is continuous, and its feedback is real-time and (hopefully) right on time. The promise of the e-coach is that with its objective data, clever algorithmic analysis, and personalized feedback it will help me change my behavior for the better.

So I’ve been asking myself: would I listen to an app that tells me to change my behavior? Would I trust the advice of a digital, data-driven coach coming to me from the smart phone in my pocket? Within the stressful environment of a modern society constantly bombarding me with information and options, it might even be nice to have a companion app that tells me what choices to make. But before we move on to such a future, there are a couple of things that I feel the e-coach seriously needs to improve. So for now, let’s switch roles, and let me give the emerging digital coach a few words of advice.

Be honest about your imperfections

A smart phone is a wonder of modern technology. We carry around more computing power in our pockets than NASA used to put a man on the moon. But still, not everything our phones calculate is correct and accurate. This goes for apps that monitor our behavior as well. What they measure isn’t always correct, and how they analyze and translate this into advice isn’t either.

Getting accurate measurements of human behavior is tricky. For example, apps and wearables have a hard time interpreting some movements and activities. Popular activity trackers tend to underestimate the activity levels (such as calories burned or distance walked) of people who walk slowly, like pregnant women or obese people. Activities with more subtle movements, like yoga, are also tough to measure. One user we interviewed during our research said that his heart rate monitoring wristwatch would only work when it was strapped so tightly onto his wrist that it was uncomfortable and left a mark on his skin—and even then the data it provided was incomplete and inaccurate. So he stopped using it.

My own experience with sleep tracking ended in a similar way. I had set out to create a beautiful dataset on my sleeping behavior, but setting the tracker to the appropriate tracking mode required a series of taps on a wristband that for me turned out to be difficult to remember or execute properly in the dark right before dozing off. Some nights I failed to set the tracker correctly. On other nights, the data didn’t seem to capture what actually happened, for example, when I was awakened by the roar of a passing motorcycle. So I wasn’t getting the clean and complete dataset I had imagined, and I didn’t know whether I could trust the data enough to make decisions about altering my sleeping behavior. The tracker eventually ended up in a drawer—and that, according to a survey by Endeavour Partners, is the fate of more than 50 percent of trackers.

Regardless of improvements in technology, it seems likely that the apps and gadgets designed to help me improve myself will continue to make plenty of errors for the foreseeable future. But then again, so do my human friends. So how can I best take advantage of what my imperfect smart phone has to offer?

Research in robotics might provide us with an answer. Studies have shown that people are more inclined to trust technology if it communicates clearly and honestly about its limitations. In flight-simulator experiments, for example, pilots will trust and collaborate with a system more effectively if it informs them when it is unsure about its own judgment.

But being honest about imperfection is one thing; if they want us to trust them, apps are also going to have to do a better job explaining why they give the advice that they do. When your best friend tells you to take it easy, you can ask him why he thinks you should do so, and based on what he says you can decide whether or not to follow his advice. Most of the time technology is unable to provide such an explanation. If your stress monitor thinks you are stressed out, but you don’t feel stressed at all, you can’t ask your stress monitor how it came to its conclusion. Maybe it misinterpreted the data? Or maybe you are not accurately sensing your own stress.

At Delft Technical University, researchers are working on the design of self-explaining agents: software that is able to provide users with the reasons for its actions. If applied to digital coaches, such a design could inform users about how advice is constructed. For instance, the app could display the measurements that led it to conclude that the user was stressed, and the reasoning behind its recommendation to take a walk to help ease that stress. For me, honesty is the way to go, even if that means that a “smart” app must admit that it doesn’t know everything.
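
What such a self-explaining coach might look like in code can be sketched in a few lines. The example below is purely illustrative: the sensor names, thresholds, rules, and confidence values are invented, and it is not based on the Delft researchers’ actual software.

```python
# A hypothetical sketch of a "self-explaining" stress coach: every piece of
# advice is returned together with the measurements and the rule that
# produced it, so the user can see why the app reached its conclusion.
# All sensor names, thresholds, and rules below are invented for illustration.

from dataclasses import dataclass


@dataclass
class Advice:
    recommendation: str
    evidence: dict       # the raw measurements the conclusion rests on
    reasoning: str       # the rule that connected evidence to advice
    confidence: float    # how sure the app is, from 0.0 to 1.0


def stress_advice(heart_rate: float, sleep_hours: float) -> Advice:
    """Return advice together with the explanation behind it."""
    evidence = {"resting_heart_rate_bpm": heart_rate, "sleep_hours": sleep_hours}
    if heart_rate > 85 and sleep_hours < 6:
        return Advice(
            recommendation="Consider taking a short walk to unwind.",
            evidence=evidence,
            reasoning="Elevated resting heart rate combined with short sleep "
                      "matched the app's 'acute stress' rule.",
            confidence=0.7,
        )
    return Advice(
        recommendation="No action suggested.",
        evidence=evidence,
        reasoning="Measurements fell inside the app's normal range.",
        confidence=0.9,
    )


if __name__ == "__main__":
    advice = stress_advice(heart_rate=92, sleep_hours=5.5)
    print(advice.recommendation)
    print("Because:", advice.reasoning)
    print("Based on:", advice.evidence)
    print(f"Confidence: {advice.confidence:.0%}")
```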

Stop talking behind my back

The digital coach is data-driven. In the process of monitoring and giving feedback, a continuous stream of data is collected about our behavior. This data can be intimately personal, as was shown when Fitbit users’ sexual activity showed up in Google searches. When you trust an app to collect and analyze intimate data, you want to be sure that it is handled with appropriate care and confidentiality. Doctors and coaches are bound to confidentiality by law or professional codes. But our apps and data-gathering smart phones are not.

Not surprisingly, many people worry about how wearables and health apps handle their personal data. And those worries seem to be justified, because software usually is not a neutral advisor: your app might appear to have your best interests in mind, even as it is sneaking around the back door to sell your personal data.

Evidon Research (now Ghostery Enterprise) found that 12 well-known health apps distributed data to as many as 76 different third parties. Research by the Federal Trade Commission showed that the types of data health apps and wearables are spreading include not just anonymized activity patterns, but usernames, email addresses, unique device identifiers, and other highly personal data.

A patent by wearables manufacturer Jawbone offers some insight into where all this data might eventually end up. The company has developed what it calls a lifeotype, a master profile that combines data from different apps, wearables, and external sources to create a complete picture of someone’s lifestyle. The patent describes how simple data points (life bits), such as data from an activity tracker, can be analyzed to conclude that someone leads a sedentary lifestyle (life bytes). By combining this information with other data about eating patterns, and perhaps medical history, a lifestyle profile can be created. This lifeotype can tell us that this person eats too much sugar, is slightly obese, exercises little, and is at risk of developing diabetes in the coming years.
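
The general shape of that pipeline (raw “life bits” rolled up into “life bytes,” and those interpretations merged into a profile) can be sketched in a few lines of code. The sketch below is only an illustration of the idea; the field names, thresholds, and inference rules are invented and are not taken from Jawbone’s patent.

```python
# An illustrative sketch of the kind of aggregation described above:
# raw "life bits" (individual data points) are summarized into "life bytes"
# (interpretations such as "sedentary"), which are then merged with other
# sources into a lifestyle profile. All names, thresholds, and rules here
# are invented for illustration only.

from statistics import mean


def to_life_bytes(daily_steps, daily_sugar_grams):
    """Summarize raw data points into coarse lifestyle interpretations."""
    return {
        "sedentary": mean(daily_steps) < 5000,
        "high_sugar_diet": mean(daily_sugar_grams) > 50,
    }


def build_profile(life_bytes, medical_history):
    """Merge interpretations with other sources into a master profile."""
    profile = dict(life_bytes)
    profile["family_history_diabetes"] = medical_history.get("diabetes_in_family", False)
    # A crude illustrative risk flag, not a medical model.
    profile["elevated_diabetes_risk"] = (
        profile["sedentary"]
        and profile["high_sugar_diet"]
        and profile["family_history_diabetes"]
    )
    return profile


if __name__ == "__main__":
    life_bytes = to_life_bytes([3200, 4100, 2800], [65.0, 80.0, 70.0])
    print(build_profile(life_bytes, {"diabetes_in_family": True}))
```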

This might be okay if we had total control over our own data, but the ways in which data from digital coaching apps are being traded, sold, and re-used remain largely opaque. A colleague of mine, who is a diabetic, tried to find out what happens with the data from her wearable insulin pump when she uploads it into the cloud. By studying the fine print of her privacy policy and making several calls with the service provider she learned that data she thought were used only for telecare purposes were actually also used (after being anonymized) for research and profiling. But she was unable to find out exactly how the data were being analyzed and who was doing the analysis. This worries her because the cloud service, which she used to pay for but is now free, encourages users to also upload their Fitbit data, suggesting to her that the costs of the service are now being covered by monetizing her data.

These sorts of concerns are not a solid basis for a trusting relationship between human and app. If we really want to benefit from a digital coach, we have to be able to trust it with our personal data. Giving users clear, transparent choices about how their data will and will not be used can pave the way for a healthier and more trusting relationship.

I recently talked to someone at an insurance startup that uses data from driving behavior to establish personalized premiums. The use of personal data in insurance is always a touchy subject, but this company is managing to make it work. They give their users clear information about what data are being collected and how the information will and will not be used, as well as the controls to manage their data and even delete the information after the premiums are calculated. Their customers are very supportive of this type of transparent data use, and both sides benefit from the openness. I think the same approach would work for a digital coach.

Just let me be me

My final word of advice to future digital coaches would be to respect that people are different, that there isn’t a one-size-fits-all approach to being healthy and living well. Health apps promote a certain image of health and well-being. Usually that image is based on some set of guidelines about how much exercise you should get, and how much fruit and how many vegetables you should eat. But for some, a good life might not entail strict compliance with some app’s exercise or dietary standards; it would instead allow for a looser interpretation of such general rules, leaving more room for the social aspects of dining with friends, baking cookies with your kids, or the enjoyment of sloth.

Time magazine reports on an app aimed at kids that helps them manage their eating habits using a simple traffic light feedback system. High-calorie foods such as ice cream get a red-light classification; foods that should be consumed in moderation such as pasta and whole wheat bread are yellow; and things that you can eat as much of as you want, such as broccoli, are green (which of course raises the question of whether kids want to eat any of the green-light foods at all). The article reports on a young girl who, since she started using the app, is now seeing the world in red, yellow, and green. “Everything,” she says, “corresponds to how many red lights you eat.” While providing a useful tool for managing a diet, the app also instills a very specific perspective on food—a perspective informed by calories rather than other qualities of food, a perspective that focuses on the evaluation of food as “good” or “bad” rather than its social aspects, and a perspective that makes eating something you succeed or fail at.
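
The mechanism itself is simple enough to sketch: map each food’s calorie density to a color. The thresholds and example values below are invented for illustration and are not the app’s actual rules.

```python
# A minimal sketch of a traffic-light food classifier based on calorie
# density. The thresholds and example values are invented for illustration.

def traffic_light(calories_per_100g: float) -> str:
    """Classify a food by calorie density into a traffic-light color."""
    if calories_per_100g >= 200:
        return "red"      # eat rarely
    if calories_per_100g >= 100:
        return "yellow"   # eat in moderation
    return "green"        # eat freely


if __name__ == "__main__":
    for food, kcal in [("ice cream", 207), ("cooked pasta", 160), ("broccoli", 35)]:
        print(f"{food}: {traffic_light(kcal)}")
```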

By promoting certain actions and discouraging others, a digital coach presents a view of what is good and what parameters such judgments ought to be based on. Can a digital coach—or the tech company or government agency behind it—determine that for me? What is good for one person doesn’t automatically work for another. A digital coach should value personal autonomy, but the current generation of digital coaches doesn’t seem to recognize that there is no one formula for well-being, and they still have a long way to go in terms of allowing users to define their own goals.

One interesting exception is the Dutch Foodzy app, which tracks what you eat but refrains from telling you what you should eat or not eat. Foodzy users can earn badges for healthy as well as unhealthy behavior. You can get awards for stuffing away fruit and vegetables, but you can also become the King of the BBQ, or claim a hangover badge by consuming a certain amount of alcohol. Foodzy encourages healthful eating, but it doesn’t try to be a dictator about it.

Samsung called one of its smart phones a “Life Companion,” an appropriate description of a device that assists us in almost everything we do. But a real companion has to be reliable and honest, it must have integrity, and it should respect my personal values. These attributes, by the way, are part of the professional code that human coaches must live up to. Our pocket-companions still have a long way to go before they can earn our trust.

Jelte Timmer ([email protected]) is a researcher for the technology assessment department of the Rathenau Institute in the Netherlands. This article is based on the report Sincere Support: The Rise of the E-Coach.

CRISPR Democracy: Gene Editing and the Need for Inclusive Deliberation

Not since the early, heady days of recombinant DNA (rDNA) has a technique of molecular biology so gripped the scientific imagination as the CRISPR-Cas9 method of gene editing. Its promises are similar to those of rDNA, which radically transformed the economic and social practices of biotechnology in the mid-1970s. Ivory tower rDNA science morphed into a multibillion dollar technological enterprise built on individual entrepreneurship, venture capital, start-ups, and wide-ranging university-industry collaborations. But gene editing seems even more immediate and exciting in its promises. If rDNA techniques rewrote the book of life, making entire genomes readable, then CRISPR applies an editorial eye to the resulting book, searching for typos and other infelicities that mar the basic text. Gene editing shows many signs of being cheaper, faster, more accurate, and more widely applicable than older rDNA techniques because of its ability to cut and alter the DNA of any species at almost any genomic site with ease and precision.

Since their development, gene editing techniques have been used for many purposes: improving bacterial strains used in dairy products, making new animals for research, and experimenting with knocking out disease-inducing mutations in human genes. Some of these uses are already producing commercial benefits while others remain distinctly futuristic. Uncertainty, however, has not deterred speculation or hope. To many it appears all but certain that so precise and powerful a technique will revolutionize the treatment of genetically transmitted human disease, correcting defective genes within diseased bodies, and potentially banishing genetic errors from the germ-line by editing the DNA of human gametes and embryos. Some researchers have already initiated experiments on human gametes and embryos to develop techniques for this purpose.

Hope is understandable. Up to 10% of the U.S. population is estimated to carry traits for one or another rare genetic disease. The consequences for individuals and families may be tragic, as well as economically and psychologically devastating. Our moral intuition rebels against pointless suffering. Any discovery that serves medicine’s ethical mandate to help the sick therefore generates immense pressure to move quickly from labs into bodies.

These established, socially approved ways of thinking explain the air of inevitability surrounding CRISPR’s application to germline gene editing. In Craig Venter’s words “the question is when, not if.” Human curiosity and ingenuity have discovered a simple, effective means to snip out nature’s mistakes from the grammar of the human genome, and to substitute correct sequences for incorrect ones. It seems only logical, then, that the technique should be applied as soon as possible to those dealt losing hands in life’s lottery. Yet, as with all narratives of progress through science and technology, this one carries provisos and reservations. On closer inspection, it turns out to be anything but simple to decide how far we should go in researching and applying CRISPR to the human germline. CRISPR raises basic questions about the rightful place of science in governing the future in democratic societies.

Recapitulating the rDNA story, prominent biologists have been among the first to call for restraint. In March 2015, a group, including such luminaries as Nobel laureates David Baltimore of Caltech and Paul Berg of Stanford, proposed a worldwide moratorium on altering the genome to produce changes that could be passed on to future generations. In May, the U.S. National Academy of Sciences (NAS) and National Academy of Medicine (NAM) announced their intention to hold an “international summit” later this year “to convene researchers and other experts to explore the scientific, ethical, legal, and policy issues associated with human gene-editing research.” The NAS-NAM plan also calls for a “multidisciplinary, international committee” to undertake a comprehensive study of gene editing’s scientific underpinnings and its ethical, legal, and social implications.

That leading scientists should call for responsible research is wholly laudable. But the human genome is not the property of any particular culture, nation, or region; still less is it the property of science alone. It belongs equally to every member of our species, and decisions about how far we should go in tinkering with it have to be accountable to humanity as a whole. How might a U.S. or international summit on gene editing attempt to meet that heavy responsibility?

Thus far, one historical experience has dominated scientists’ imaginations about the right way to proceed, an experience that takes its name, like many ground-breaking diplomatic accords, from a meeting place. The place is Asilomar, the famed California conference center where in 1975 some of the same biologists now proposing a moratorium on germline gene editing met to recommend guidelines for rDNA experimentation. In the eyes of Paul Berg, one of its chief organizers, this too was a meeting that changed the world. Writing in Nature in 2008, he portrayed Asilomar as a brilliant success that paved the way for “geneticists to push research to its limits without endangering public health.”

That description, however, points to the dangers of using Asilomar as a model for dealing with CRISPR. It implies that geneticists have a right to “push research to its limits” and that restraint is warranted only where the research entails technically defined risks like “endangering public health.” But both notions are flawed. We argue here that an uncritical application of the Asilomar model to CRISPR would do a disservice to history as well as democracy.

Asilomar shows how under the guise of responsible self-regulation science steps in to shape the forms of governance that societies are allowed to consider. As a first step, questions are narrowed to the risks that scientists know best, thereby demanding that wider publics defer to scientists’ understandings of what is at stake. Even where there are calls for “broad public dialogue,” these are constrained by expert accounts of what is proper (and not proper) to talk about in ensuing deliberations. When larger questions arise, as they often do, dissent is dismissed as evidence that publics just do not get the science. But studies of technical controversies have repeatedly shown that public opposition reflects not technical misunderstanding but different ideas from those of experts about how to live well with emerging technologies. The impulse to dismiss public views as simply ill-informed is not only itself ill-informed, but is problematic because it deprives society of the freedom to decide what forms of progress are culturally and morally acceptable. Instead of looking backward to a mythic construct that we would call “Asilomar-in-memory,” future deliberations on CRISPR should actively rethink the relationship between science and democracy. That reflection, we suggest, should take note of four themes that would help steer study and deliberation in more democratic directions: envisioning futures, distribution, trust, and provisionality.

Whose futures?

Science and technology not only improve lives but shape our expectations, and eventually our experiences, of how lives ought to be lived. In these respects, science and technology govern lives as surely as law does, empowering some forms of life and making them natural while others, by comparison, come to seem deficient or unnatural. For example, contraception and assisted reproduction liberated women from the natural cycles of childbirth and enabled a degree of economic and social independence unthinkable just a half-century ago. But increased autonomy in these domains necessarily changed the meaning and even the economic viability of some previously normal choices, such as decisions to have many children or simply “stay home.” Similarly, the digital era vastly increased the number of “friends” one can call one’s own, but it curtailed leisure and privacy in ways that brought new demands for protection, such as employee rights not to answer email after hours, for instance in France and Germany, and the rights of individuals now recognized in European law to demand the erasure of their outdated digital footprints in search engines like Google. Prenatal genetic testing enabled parents to prevent the birth of seriously ill children but made disability rights groups anxious that members would be stigmatized as accidents who should never have been born.

As in moments of lawmaking or constitutional change, the emergence of a far-reaching technology like CRISPR is a time when society takes stock of alternative imaginable futures and decides which ones are worth pursuing and which ones should be regulated, or even prevented. Asilomar represented for the molecular biology community just such a moment of envisioning. The eminent scientists who organized the meeting rightly recognized that at stake was the governance of genetic engineering. How should the balance be struck between science’s desire to push research to the limits on a new set of techniques with extraordinary potential, and society’s possibly countervailing interests in protecting public health, safety, and social values? Intelligence, expertise, a strong sense of social responsibility—all were amply represented at Asilomar. What was in shorter supply, however, was a diversity of viewpoints, both lay and expert.

To molecular biologists flushed with the excitement of snipping and splicing DNA, it seemed obvious that rDNA research should continue without what they saw as ill-advised political restrictions. Many scientists regarded this as “academic freedom,” a constitutionally guaranteed right to pursue research so long as inquiry harms no one. The primary risk, Asilomar participants believed, was that dangerous organisms might be accidentally released from the lab environment, injuring humans or ecosystems. What would happen, they asked, if a genetically engineered bacterium containing a cancer-causing gene escaped and colonized the human gut? To prevent such unwanted and potentially grave errors, the scientists adopted the principle of containment, a system of physical and biological controls to keep harmful organisms safely enclosed inside the experimental spaces where they were being made. Public health would not be risked and research would continue. The Reagan administration’s subsequent decision to use a coordinated framework of existing laws to regulate the products, but not the process, of genetic engineering reflected this end-of-pipe framing of risks. Upstream research remained virtually free from oversight beyond the narrow parameters of laboratory containment. This is the science-friendly settlement that Paul Berg celebrated in his Nature article and that the National Academies have invoked as a guiding precedent for the upcoming summit on gene editing.

A full accounting of the Asilomar rDNA conference, however, highlights not the prescience of the scientists but the narrow imagination of risk that their “summit” adopted. The focus on containment within the lab failed to foresee the breadth and intensity of the debates that would erupt, especially outside the United States, when genetically modified (GM) crops were released for commercial use. U.S. policymakers came to accept as an article of faith that GM crops are safe, as proved by decades of widespread use in food and feed. Ecologists and farmers around the world, however, observed that Asilomar did not even consider the question of deliberate release of GM organisms outside the lab because the assembled scientists felt they could not reliably assess or manage those risks. As a result, when agricultural introductions were approved in the United States, with little further deliberation or public notice, activists had to sue to secure compliance with existing legal mandates, such as the need for an environmental impact statement.

If the Asilomar scientists’ imagination of risk was circumscribed, so too were their views of the forms and modes of deliberation that are appropriate for the democratic governance of technology. Understandably, given the United States’ lead in rDNA work, American voices dominated at the scientists’ meeting, with a sprinkling of representatives from Europe and none from the developing world. Questions about biosecurity and ethics were explicitly excluded from the agenda. Ecological questions, such as long-term effects on biodiversity or non-target species, received barely a nod. The differences between research at the lab scale and development at industrial scales did not enter the discussion, let alone questions about intellectual property or eventual impacts on farmers, consumers, crop diversity, and food security around the world. Yet, those emerged as points of bitter contestation, turning GM crops into a paradigm case of how not to handle the introduction of a revolutionary new technology. In retrospect, one can see the long, at times tragic, controversy over GM crops—marked by research plot destructions, boycotts and consumer rebellion, import restrictions against U.S. crops, a World Trade Organization case, a global movement against Monsanto—as a reopening by global citizens of all the dimensions of genetic engineering that Asilomar had excluded.

Biomedicine achieved greater political acceptance in the intervening decades than agricultural biotechnology, but even here the record is ambiguous. As we will discuss, the political economy of drug development, an issue that even scientists with substantial commercial interests typically regard as lying outside their remit, remains highly controversial. Specific public worries include the ethics of transnational clinical trials, access to essential medicines, and intellectual property laws that discriminate against generic drugs produced in developing countries.

Given these demonstrable gaps between what scientists deliberated in 1975 and what the world has seen fit to deliberate in the 40 years since, it is the myth of Asilomar as the “meeting that changed the world” that warrants revisiting.

Whose risks?

In biomedical research, the notion that scientists should “push research to its limits” reflects not only the desire to satisfy curiosity but the hope that progress in knowledge will produce victories against disease. Given its power and versatility, there is plenty of speculation that CRISPR might be not just any therapy, with hit or miss qualities, but a magic bullet for generating customized gene and cell therapies, more targeted treatments, and, most provocatively, direct editing out of disease-causing genes in human embryos. These visions are not unlike several that preceded them, for instance with embryonic stem cell research, gene therapy, rDNA, and others. As with these precursors, imaginations of the technique’s therapeutic potential—and thus the imperative to proceed with research—eclipsed the complexities of biomedicine in practice. Although CRISPR might produce treatments, people will benefit from them only if their ailments are the ones treated and only if they have adequate access to therapies. Access, in turn, depends in important respects upon the political economy of innovation. Thirty-five years after Genentech produced recombinant insulin, the first major biomedical payoff of rDNA, insulin remains an expensive drug. Its cost keeps it out of reach for some Americans, with disastrous implications for their health. A therapeutic as complex as CRISPR gene therapy with multiple macromolecular components (protein, RNA, and delivery agents) is likely to be engineered and reformulated for decades to come to maximize safety and efficacy. That process, in turn, may generate a succession of “evergreening” patents and limit the immediate benefits to those with the resources to afford them.

The research community acknowledges the unfair distribution of health resources but tends to shrug it off as someone else’s business. Science, after all, should not be burdened with solving complex political and economic problems. The social contract between society and science, as encapsulated in Vannevar Bush’s metaphor of the endless frontier, calls on science only to deliver new knowledge. Yet the commercial aspirations of twenty-first century normal science play no small part in sustaining the very political economy of invention that gives rise to distributive inequity. These days it is expected (and indeed required by law) that publicly funded discoveries with economic potential should be commercialized: science, in this view, best serves the public good by bringing goods to market. CRISPR is no exception. A patent battle is taking shape between the University of California, Berkeley and the Broad Institute, with predictions that upward of a billion dollars in royalties are at stake. With such forces in play, “pushing research to its limits” easily translates into pushing biomedicine’s commercial potential to its limits, meaning, in practice, that urgent needs of poor patients and overall public health may get sidelined in favor of developing non-essential treatments for affluent patients. Under these circumstances, it is hard not to read defenses of scientific autonomy and academic freedom as defenses of the freedom of the marketplace. Both freedoms are rooted in the same disparities of wealth and resources that separate the health expectations of the poor from those of the rich.

The apparent inevitability of CRISPR applications to editing embryos takes for granted the entire economics of biomedical innovation, with the assumption that the push to commercialize is by definition a universal good. These arrangements, however, are not natural expressions of the market’s invisible hand. They grow out of specific political and legal choices whose consequences have typically not been revisited in the decades since they were made, even where mechanisms exist to do so. The National Institutes of Health (NIH), for instance, retains march-in rights for intellectual property produced with its support, but it has never seen fit to exercise them, even where pushing profits to the limit has compromised access to therapeutics with detrimental effects on public health. In contrast, many developing countries initially exempted pharmaceutical drugs from patent protection on the belief that access to health should not be limited by commercial interests—an exemption eliminated by the 1994 Agreement on Trade-Related Aspects of Intellectual Property Rights (TRIPS).

Good governance in a complex world does require accommodation of private interests, and democracies have struggled to insulate governance from undue influence by the power of money. CRISPR and its biotechnological predecessors exemplify cases where it is especially hard for democratic processes to strike the balance between public good and private benefit. For here, as already noted, delegating to experts the right to assess risk strips away many features of the social context that shape technologies and eventually give rise to disparities in health and health care access. Scientists at the frontiers of invention do not see it as their responsibility to address even the most obvious equity issues, such as whose illnesses are targeted for intervention or when money should be directed from high-cost individualized treatment to lower-cost public health interventions. As technologies come to market without prior collective assessment of their distributive implications, it is the potential users of those technologies who will have to confront these questions. Limiting early deliberation to narrowly technical constructions of risk permits science to define the harms and benefits of interest, leaving little opportunity for publics to deliberate on which imaginations need widening, and which patterns of winning and losing must be brought into view.

Trust

The leaders of the research community recognize that trust is essential in securing public support for any recommendations on how to handle CRISPR, including rules for the manipulation of germline cells. The NAS-NAM proposal seeks to build trust on three levels: (1) by invoking the National Academies’, and more generally science’s, prior achievements in consensus-building; (2) by reaching out to stakeholders in accordance with principles of pluralist democracy; and (3) by constructing a multilayered institutional structure for decision making. In important ways, however, these proposals misremember history, misconceive the role of participation, and misunderstand the relationship between expertise and democracy.

Looking back on the history of rDNA policy, it is crucial to remember that public trust was not cemented at Asilomar. It took years, even decades, to build anything like a consensus on how genetic and genomic developments affecting biomedicine should be governed, even in the United States. Indeed, many would say that trust-building is still a work in progress. Democratic demands soon forced the scientific community to open up its deliberations on rDNA to a wider public than had been invited to Asilomar. Publics and policymakers responded to Asilomar with skepticism, criticizing it for having neglected their concerns. As Senator Edward M. Kennedy put it, the Asilomar scientists “were making public policy. And they were making it in private.” Not only were the recommendations produced by those who stood to gain the most from a permissive regime, but the conference failed to entertain questions that mattered most to the wider public. Facing the threat of legislation, the scientific community sought to appease such criticisms, for instance, by adding a handful of public interest representatives to the Recombinant DNA Advisory Committee (RAC) of the NIH. Whether such token representation had effects on policy remains questionable.

For many U.S. biomedical scientists, demonstration of successful self-regulation was a tactic for avoiding premature legislative intervention—and in this they were consistently and eminently successful. The absence of national legislation, however, is not a good measure of Asilomar’s success or, more broadly, of trust in science. Indeed, it has proved necessary to add layers of institutional oversight at critical junctures in the development of genetic sciences and technologies, showing that the laissez-faire approach did not produce sufficient trust. One of these occurred at the start of the Human Genome Project (HGP), when James Watson, the HGP’s first director, set aside funds for research on the ethical, legal, and social implications (ELSI) of genomic research. Regardless of how one draws up the balance sheet with respect to ELSI (and it is not straightforward), the point is that the program was conceived as a defensive move by big biology to demonstrate enough ethical and social responsibility to deserve public funding and the trust that such funding implies. As Watson himself explained, “My not forming a genome ethics program quickly might be falsely used as evidence that I was a closet eugenicist.”

Similarly, debates around human embryonic stem cell (hESC) research at the turn of the century show that claims of self-regulation were not alone enough to satisfy public concerns and silence politics. U.S. biomedical science had to publicly demonstrate its commitment to ethical norms. The National Academies issued guidelines for work with stem cells, in conformity with the congressional mandate not to use public funds for deriving hESCs, but going well beyond that minimum requirement. These included a new layer of formal supervision, comprising (Embryonic) Stem Cell Research Oversight (SCRO or ESCRO) committees, established at each institution working with these potentially controversial materials. In practice, therefore, the price of avoiding congressional oversight was a new, more visible, display of self-regulation that stem cell scientists accepted to shore up their claim on public trust.

Challenges to trust and legitimacy, moreover, may resurface at any moment, as NIH learned in 2010-11 through a protracted, though ultimately unsuccessful, legal challenge to its authority to fund downstream research on lawfully derived stem cell lines. The point is not so much that federally funded stem cell research survived the attack. It is that, in a robust, decentralized democracy, there is no one-shot silver bullet for building trust. Political power, as every citizen knows, demands continual regeneration at the polls and elsewhere to maintain its legitimacy. Trust in science is just as fragile and just as much in need of regeneration when science, in effect, takes on the tasks of governance by shaping society’s visions of the future. Decades of experience with the genetic revolution make it clear that narrowing the debatable questions, as at Asilomar, is not a strategy for maintaining trust over the long haul or for living up to the forms of responsibility that democracy rightfully demands from science.

Provisionality

Revolutionary moments do not reveal the future with map-like clarity. Far more, they are moments of provisionality, in which new horizons and previously foreclosed pathways become visible. The challenge for democracy and governance is to confront the unscripted future presented by technological advances and to guide it in ways that synchronize with democratically articulated visions of the good. This demands thoughtful conversations about alternatives for as long as it takes to build new norms for the new futures in view. Conversations are compromised if they are limited to narrow constructions of near-term risk, thereby foreclosing opportunities to build such norms.

Worldwide controversies about the limits of genetic modification, whether in agriculture or biomedicine, signal that Asilomar’s framing of the risks, the stakes, and the scope of deliberation was too narrow to encompass the wide range of ethical, legal, and social issues that accompany a scientific revolution and the forms of collective deliberation they demand. The history of half-measures and repeated eruptions of public distrust around rDNA reveals weaknesses in the NAS-NAM conception of an expert summit as the right instrument of democratic deliberation on gene editing. The very notion of a summit suggests that a view from the mountaintop will provide an authoritative image of the lay of the land, to be charted once and for all through ethics or regulation. Past experiences indicate, however, that good deliberative processes need to be recursive as well as inclusive. The initial framing of an issue shapes the analysis of alternatives, whether scientific, ethical, or political. This is one reason inclusivity at the agenda-setting table is so valuable: it helps to ensure that important perspectives are not left out at the start, only to surface after possibly unjust judgments and decisions have been taken.

The Asilomar meeting on rDNA framed the risks to society in terms of physical hazards to people and, to a limited extent, ecosystems. The solution provided was equally narrow: four levels of physical and three of biological containment of engineered organisms. But as noted above, limiting risk to accidental releases of pathogens left untouched the economic, social, and political implications of biotechnology, and consensus has not yet been achieved on those initially excluded issues. By treating risks as resolvable by technical experts and the responsibilities of governance as settled, Asilomar failed to recognize the virtues of social ambivalence as a resource for building and rebuilding solidarity between science and society by continually rearticulating norms and aspirations to guide an unfolding technological future.

Many experiments have been tried in recent years to involve publics in deliberating on emerging sciences and technologies before their course is set in stone. These “public engagement” exercises include focus groups, citizen juries, consensus panels, public consultations, and technology assessment processes. Initially such efforts presumed that the main reason for public hostility to technological innovation was lack of information. Although public engagement efforts have grown more sophisticated, they remain one-shot consultations whose agenda and terms of debate are still narrowly defined.

Approaching public engagement in this manner misses the point that living well with technology involves more than reacting to information about it. Changes in social interactions and relationships with technology are unpredictable and emerge only through long-term experiences in varied settings. The stakes cannot be assessed, let alone addressed, in highly scripted deliberations that “engage” a limited range of citizens in terms that are defined in advance. Though such exercises purport to satisfy the need for public engagement, they fail to reach the poor, the marginal, and the socially excluded in meaningful ways. They afford little opportunity for the emergence of dissenting voices and perspectives that challenge experts’ imaginations. Consequently, they are more likely to perpetuate than correct Asilomar’s legacy of exclusion. They are, at best, ineffectual in assessing ambivalence and doubt, and still worse at inviting sustained deliberation on humanity’s collective ownership of its technological future.

A 1996 report of the National Research Council proposed an alternative approach to understanding risk that would build in mechanisms for taking the provisionality of people’s judgments into account. This was the analytic-deliberative model, a recursive decision-making paradigm aimed at revisiting early framing choices in light of later experience. In this model, the movement from fact-finding to incorporating value judgments is not linear, as in the conventional risk assessment-risk management approach. Instead, the analytic-deliberative model presumes that, in democracies, the process of understanding risk requires constant revisiting, through deliberation, of the risks framed and the questions asked. Reframed questions in turn lay the ground for meaningful further analysis and keep publics engaged in the process of governance.

Ongoing debates on privacy and civility in the era of digital communication and social media illustrate this need to revisit apparently settled issues in light of lived experience. Facebook users only gradually discovered the need to filter their postings so that messages intended for friends would not be unintentionally disclosed to parents or prospective employers. Twitter users learned the devastating effects of casual messaging and careless jokes only after many episodes of such postings going destructively viral. In a celebrated and still not fully resolved development, European law has diverged from that of the United States in asking Google and other Internet search engines to remove links to excessive or irrelevant information. This controversial “right to be forgotten” emerged only after 20 years of rising information traffic on the Internet. Users could not have foreseen the potentially perverse consequences of a permanent digital memory bank, recording the most trivial aspects of daily lives, when they discovered the informational wealth of the Internet in the 1990s.

Provisionality in the face of new technologies includes, at the limit, the choice to say no to particular visions of progress. In 2011, Germany’s national Ethics Council issued a report on preimplantation genetic diagnosis (PGD) with a substantial minority of 11 members recommending that the procedure should not be permitted in Germany under any circumstances. Even the 13-member majority, followed by the German Parliament, only approved PGD under highly restrictive conditions, including prior ethical review and informed consent by the mother-to-be. These arguments and actions deserve attention as an affirmation that technology’s unimpeded progress is not the only collective good recognized by free societies: as the minority opinion stated, “an enlightened and emancipated relationship to technology is the decision not to use it if it violates fundamental norms or rights.” A regime of assessment that forecloses in advance the very possibility of rendering such enlightened and emancipated judgments opens the way to a politics of dissent and frustration rather than to shared democratic custodianship of the technological future. Perhaps this is Asilomar’s true legacy.

Coming down from the summit

CRISPR-Cas9 offers, at first sight, a technological turn that seems too good for humankind to refuse. It is a quick, cheap, and surprisingly precise way to get at nature’s genetic mistakes and make sure that the accidentally afflicted will get a fair deal, with medical interventions specifically tailored to their conditions. Not surprisingly, these are exhilarating prospects for science and they bring promises of salvation to patients suffering from incurable conditions. But excitement should not overwhelm society’s need to deliberate well on intervening into some of nature’s most basic functions. That deliberation, in our view, demands a more sophisticated model than “Asilomar-in-memory,” a flawed and simplistic approach to evaluating alternative technological futures in a global society.

Summitry organized by science, in particular, needs to be handled with care. Such events, as we have seen, start with the almost unquestionable presumptions that scientists should “push research to its limits,” and that risks worth considering are typically reduced to those foreseeable by science. Physical and biological risks therefore receive more attention than risks to social relationships or cultural values. Such narrowing is inconsistent with democratic ideals and has proved counterproductive in societal debates about genetic engineering. The planned NAS-NAM event would better serve science and society by moving down from the “summit” to engage with wider, more inclusive framings of what is at stake. Good governance depends on visions of progress that are collectively defined, drawing on the full richness of the democratic imagination. Opportunities for deliberation should not be reduced, in our view, to choreographed conversations on issues experts have predetermined to warrant debate. Confining public engagement exercises to such constrained parameters too easily presumes that the entry card for engendering deliberative democracy is speaking the right language, that of scientific rationality.

In the musical My Fair Lady, based on George Bernard Shaw’s Pygmalion, Eliza Doolittle, a Cockney flower girl, takes speech lessons from Professor Henry Higgins, a phoneticist, so that she may pass as a lady. Having transformed Eliza, the professor wishes to control not just how she speaks, but how she thinks. The authors of the NAS-NAM proposal run the risk of acting like Henry Higginses of CRISPR democracy. Having taught the Eliza Doolittles of the world how to articulate their concerns properly, they may be inclined to think that judgment should follow suit, because right language must lead to right reason about the need for research. Yet, the audience’s sympathy rests with Eliza, not Henry, when he sings, “Why can’t a woman be like me?” The rarefied reasons of science are essential to any good deliberation on gene editing, but it is to be hoped that the deliberative processes we design will be expansive enough to let the unbridled Cockney in the rest of humanity also sing and speak.

Jailhouse Rot

Americans seem to have a thing for prisons. Not only do we have the world’s largest prison population, we have a rich and incongruous pop culture heritage of films and songs about prison life. On film from Cool Hand Luke to Jailhouse Rock, from Shawshank Redemption to Orange Is the New Black. In song from the traditional “Midnight Special” to Snoop Dogg’s “Murder Was the Case,” with side trips to Merle Haggard’s “Life in Prison,” Sam Cooke’s “Chain Gang,” and Johnny Cash’s “Folsom Prison Blues.” The result is that many of us have vivid but completely inaccurate images of prison. Perhaps it’s not surprising, then, that we also have incarceration policies founded on myths and misunderstanding.

A recent National Academies report, The Growth of Incarceration in the United States, seeks to establish the facts about how incarceration policies have evolved in recent decades and what social science research can tell us about the effectiveness of these policies in deterring crime, rehabilitating prisoners, and making our neighborhoods safer and more livable. The extent of the changes in recent years is shocking. Even more disturbing is the absence of social science research or clearly stated normative principles to justify the new incarceration policies.

The report comes at an opportune time. After a long period during which politicians from both parties eagerly presented themselves as “tough on crime,” a recent bipartisan groundswell has begun to reconsider incarceration policies. The push for reform emerges from a diverse mix of rationales and a variety of ideological perspectives, which makes for a shaky coalition. This report provides the social science research and guiding principles that could unite these varied perspectives and create a foundation for sensible bipartisan incarceration reform.

From 1973 to 2009, the combined U.S. state and federal prison population rose from about 200,000 to 1.5 million; it declined slightly in the following four years, largely because of reductions in state prison populations. An additional 700,000 men and women are being held in local jails. With only 5% of the world’s population, the United States has close to 25% of the world’s prisoners. Its incarceration rate is five to 10 times higher than rates in Western Europe and other democracies.

And of course there are further disparities within the U.S. system. Long and often mandatory prison sentences, as well as intensified enforcement of drug laws, contributed not only to overall high rates of incarceration, but also especially to extraordinary rates of incarceration of African Americans and Hispanics, who now comprise more than half of our prisoners. In 2010, the incarceration rate for African Americans was six times and for Hispanics three times that of non-Hispanic whites. And although there is no significant difference in the prevalence of illegal drug use in the white and minority communities, African Americans and Hispanics are far more likely to be arrested and to serve prison time for drug offenses.

The growth in the U.S. prison population is not a result of an increase in crime, but of a change in incarceration policy. A wave of concern about preserving social order swept the country in the late 1960s and early 1970s. One manifestation of this anxiety was that officials at all levels of government began implementing new policies, such as requiring prison time for lesser offenses, increasing the recommended sentences for violent crimes and for repeat offenders, and taking a much more aggressive approach to the sale and use of illegal drugs, particularly in urban areas. The trend continued into the 1980s. Federal and state legislatures enacted “three strikes and you’re out” laws and “truth in sentencing” provisions.

As the impact of changes in incarceration policy became apparent, social scientists began studies to determine if new policies were achieving their desired effect. The Growth of Incarceration study committee reviewed this research and reached the following consensus: “The incremental deterrent effect of increases in lengthy prison sentences is modest at best. Because recidivism rates decline markedly with age, lengthy prison sentences, unless they specifically target very high-rate or extremely dangerous offenders, are an inefficient approach to preventing crime by incapacitation.”

Social science and health researchers also examined the effects of incarceration on the physical and mental health of prisoners and on the stability and well-being of the communities from which prisoners came and to which they usually returned. For those who are imprisoned, “Research has found overcrowding, particularly when it persists at high levels, to be associated with a range of poor consequences for health and behavior and an increased risk of suicide. In many cases, prison provides far less medical care and rehabilitative programming than is needed.” The detrimental effects for families and children can be deduced from one shocking and tragic statistic: “From 1980 to 2000, the number of children with incarcerated fathers increased from about 350,000 to 2.1 million—about 3% of all U.S. children.”

Although the report emphasized the importance of heeding social science research and the need for more study, it also noted that scientific evidence cannot be the only factor guiding incarceration policy. It concluded: “The decision to deprive another human being of his or her liberty is, at root, anchored in beliefs about the relationship between the individual and society and the role of criminal sanctions in preserving the social compact. Thus, sound policies on crime and incarceration will reflect a combination of science and fundamental principles.” The committee proposed four principles that could light the way to a more humane and effective incarceration policy: “1) proportionality of offense to criminal sentences; 2) parsimony in sentence length to minimize the overuse of prison time; 3) citizenship so that the conditions and severity of punishment should not violate fundamental civil rights; and 4) social justice in which prisons do not undermine society’s aspirations for fairness.”

The committee did not presume to propose a detailed blueprint for a new incarceration policy. The system of federal, state, and local policies is too complex for any cookie-cutter remedy. Instead, it urged all responsible officials to reconsider the human, social, and economic costs of their incarceration policies in light of their modest crime-prevention effects and to consider reforms that are informed by social science research and guided by clearly stated principles.

The four articles that follow build on the findings and recommendations of The Growth of Incarceration, but in each case the authors go further in understanding particular aspects of incarceration and proposing ways to improve the performance of the system. These articles should serve as a catalyst for a local, state, and national effort to act on the report’s recommendations. Policymakers are recognizing the need for reform of a justice system that is often unjust, and these articles can help them identify the most pressing problems and most promising solutions.

Reducing Incarceration Rates: When Science Meets Political Realities

As the United States considers ways to improve how it incarcerates people in prisons and jails—and particularly how to reduce the number of people incarcerated—it is first necessary to recognize that there is no single national incarceration policy, but instead 50 distinct state policies and one federal policy. Accordingly, pursuing improvements will require the development and adoption of reforms in 51 separate jurisdictions. An undertaking of this scale may seem overwhelming. But over roughly the past decade, researchers have conducted comprehensive analyses in approximately half of the states to identify and promote significant changes to corrections and public safety policy.

As part of this story, the Pew Charitable Trusts (Pew) in 2006 launched the Public Safety Performance Project. The project, which continues today, aims to “help states advance fiscally sound, data-driven policies and practices in the criminal and juvenile justice systems that protect public safety, hold offenders accountable, and control corrections costs.” The project drew on the experiences of some states, including lessons learned from a comprehensive analysis that the Council of State Governments Justice Center (CSGJC) conducted for Connecticut state leaders in 2003 and 2004.

Around the same time, sensing a growing demand among states for the types of analyses that the CSGJC provided, the Bureau of Justice Assistance (BJA) of the U.S. Department of Justice made funding available to complement Pew’s investment to support the delivery of intensive technical assistance to a limited number of states. Then in 2010, the BJA established the Justice Reinvestment Initiative, a public-private partnership between BJA and Pew, with $10 million in funding from Congress for that fiscal year. The Obama administration subsequently recommended a major increase in funding for the initiative, and by fiscal year 2014, with congressional bipartisan support, the annual appropriation had grown to $25 million.

Pew and BJA have described their goal for justice reinvestment as supporting state leaders who demonstrate a willingness to work across party lines to slow, or even reduce, corrections spending and to reinvest some of those savings in strategies that will increase public safety. The CSGJC has played a central role in assessing conditions in the states and delivering technical assistance to them, starting before and continuing after the launch of the Justice Reinvestment Initiative. As of 2015, with support from Pew and BJA, the CSGJC has delivered intensive technical assistance to more than 20 states. In addition, Pew has funded, and in some cases provided, technical assistance to an additional 12 states as part of its Public Safety Performance Project.

In a typical CSGJC study, a team of experts, working over nine to 18 months, conducts exhaustive analyses, in conjunction with local elected officials and stakeholders, to identify areas for policy changes and develop a political consensus to achieve these changes through the legislative process. In almost every case, the level of analysis of the state’s data is more extensive and thorough than anything the state has previously used to inform criminal justice policy. The analyses must also be timed to coincide with legislative sessions, and findings and recommendations must be distilled into concise, actionable reports for policymakers’ consideration.

Here, we summarize CSGJC’s work in four states—Texas, Pennsylvania, North Carolina, and Michigan—and then explore what we have learned from these efforts, including the potential and limits of science in this area.

State portraits

Based on our experiences in these states and others, we believe that scientific findings supporting the case for “less incarceration” will be insufficient to achieve dramatic shifts in the use of prison and jail. Such an outcome depends on overhauling 50 sets of state laws (to say nothing of the federal system); fundamentally changing the perspective and approach of thousands of locally elected judges, prosecutors, and sheriffs who operate in independent counties and cities; dismantling an industry that supports hundreds of thousands of state and local personnel; and effectively transforming how the public thinks about punishment, particularly for people convicted of violent crimes. It takes, and will continue to take, a lot of hard work by many people at the state level to accomplish these changes.

Understanding our efforts in the four states we have singled out first requires a basic grasp of what factors drive increases and decreases in state prison populations, and what types of data policymakers need to inform decisions designed to decrease the number of people who are in prison.

An increase or a reduction in the number of people in prison results from a change in one of two factors: how many people are admitted to prison, or how long people stay in prison once admitted. Over the past several years, the average length of stay for people incarcerated in prison has increased considerably; according to a 2012 study conducted by Pew, people released from prison in 2009 had served 36% more time than those released in 1990. This increase is the result of a combination of factors, including sentencing laws that allow or require longer prison terms for certain types of offenses, requirements that a larger percentage of a sentence be served behind bars, prosecutorial and judicial discretion in how defendants are charged and how their cases are disposed, and a decline in the rate at which parole is granted by many state parole boards. For these reasons, the number of people leaving prison may decline, and states that have sentenced fewer people to prison have not necessarily seen a reduction in the overall prison population. Consider the simple analogy of water in a bathtub. Even if the water coming out of the faucet slows, the water level will still rise if the bathtub is not draining at an equal or faster rate. So as sentence lengths have increased and release rates have decreased, even a significant decline in prison admissions due to lower crime is unlikely to produce the drop in a state’s prison population that might otherwise be expected.

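The stock-and-flow logic behind the bathtub analogy can be made concrete with a toy calculation. The sketch below is purely illustrative: the starting population, admission counts, and average lengths of stay are assumed for the example (the longer stay roughly echoing the 36% increase in time served reported by Pew) and are not drawn from any state’s data.

```python
# Toy stock-and-flow model of a prison population (illustrative numbers only).
# Each year: population change = admissions - releases, where releases are
# approximated as population / average length of stay, so a longer average
# stay drains the "bathtub" more slowly.

def project_population(population, admissions, avg_stay_years, years):
    """Return the year-by-year population under constant admissions and stay."""
    trajectory = [round(population)]
    for _ in range(years):
        releases = population / avg_stay_years
        population = population + admissions - releases
        trajectory.append(round(population))
    return trajectory

# Baseline: admissions and releases balance, so the population holds steady.
baseline = project_population(50_000, admissions=20_000, avg_stay_years=2.5, years=5)

# Admissions fall 10%, but average stay rises from 2.5 to 3.4 years
# (roughly a 36% increase, echoing the Pew finding cited above).
longer_stays = project_population(50_000, admissions=18_000, avg_stay_years=3.4, years=5)

print(baseline)      # [50000, 50000, 50000, 50000, 50000, 50000]
print(longer_stays)  # climbs to roughly 59,000: about 18% higher despite fewer admissions
```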
State-level data are available that can help policymakers understand changes in each of these areas (sentencing, probation, parole); however, the data come from different agencies that do not necessarily coordinate or work with one another toward the common purpose of providing a cohesive “diagram.” State agencies may have researchers, but they usually are stretched too thin and, more importantly, have no incentive or authority to analyze and review aspects of the system outside their agency’s umbrella. Finally, given the usual political alliances in a state, researchers and their sponsors may be seen as having an agenda, which limits their credibility across the political spectrum.

The technical assistance provided on a justice reinvestment project is purposely designed to compile a multiagency system analysis, developed through a transparent process with all key stakeholders, and to create a diagram of the “bathtub and its plumbing.” A rigorous evaluation of corrections trends in a state typically requires the analysis of hundreds of thousands, and often millions, of individual case records, including those related to sentencing, corrections, parole, probation, and criminal history. Analysts conduct data matches across stand-alone databases that maintain information about arrest histories, prison admissions, participation in treatment programs, and community supervision, work that translates into thousands of hours of research labor. Even with such efforts, the analytical investment does not by itself translate into findings and recommendations that key decision makers in a state will find credible.

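For readers curious what the record-matching step might look like, a minimal sketch follows. Every file name, column name, and identifier in it is a hypothetical stand-in, not the CSGJC’s actual workflow; real analyses involve far messier data, probabilistic matching, and strict data-use agreements.

```python
# Hypothetical sketch of a cross-agency record match; file names, column
# names, and the shared person_id are assumptions for illustration only.
import pandas as pd

# Stand-alone agency extracts, each (ideally) keyed to a common person identifier.
sentencing = pd.read_csv("sentencing_records.csv")      # offense, sentence length, county
admissions = pd.read_csv("prison_admissions.csv")       # admission type and dates
supervision = pd.read_csv("community_supervision.csv")  # probation/parole terms, revocations

# Link the files so that one row traces a person's path through the system.
linked = (
    sentencing
    .merge(admissions, on="person_id", how="left", suffixes=("", "_adm"))
    .merge(supervision, on="person_id", how="left", suffixes=("", "_sup"))
)

# Example diagnostic: among matched prison admissions, what share stem from
# probation or parole revocations rather than new court commitments?
matched = linked.dropna(subset=["admission_type"])
revocation_share = (matched["admission_type"] == "revocation").mean()
print(f"Share of admissions resulting from revocations: {revocation_share:.1%}")
```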
While the data gathering and analysis is taking place, a bipartisan group of policymakers and stakeholders representing all three branches of government is convened to discuss the information, identify areas for policy reform, and develop proposals for the governor and legislature to consider. Hundreds of meetings with combinations of prosecutors, judges, law enforcement officials, defense counsel, community corrections administrators, treatment providers, and victim advocates are necessary to present interpretations of the data that incorporate the perspectives of these important stakeholders. Similarly, elected officials will not seriously consider policy recommendations unless they believe the recommendations reflect the input of these key constituencies. This process often has to be repeated, because turnover among state officials after elections and shifts in constituencies require continued engagement.

Depending on the state’s complexity and the expected timelines for collecting data, conducting the research, engaging stakeholders and legislators, and developing the communications strategy and proposal, the cost of providing technical assistance can range from $500,000 to $950,000 per site, funded from the sources described above. If the policy changes are adopted, a “Phase II” commences, in which technical assistance helps the state implement the adopted policies effectively.

Highlights from the experiences in the four target states include:

Texas. At the request of the chairman of the Texas House Corrections Committee (a Republican) and the chairman of the Texas Senate Criminal Justice Committee (a Democrat), the CSGJC in 2007 conducted an extensive analysis of the state’s criminal justice data and identified three important trends driving the growth in the state’s prison population. First, the number of people whose probation had been revoked and who had been sent to state prison increased 18% between 1997 and 2006, even as the number of people on probation supervision declined by 3% over the same period. Second, reductions in funding and the resulting closure of various community-based programs and facilities caused the number of people awaiting release from prison to the community to balloon (by 2007, more than 2,000 people remained incarcerated while awaiting placement in substance use and mental health treatment programs). Third, parole grant rates were much lower than the parole board’s own guidelines required (among low-risk individuals, the board fell short of its minimum approval rate by 2,252 releases).

Drawing on these findings, the state legislature approved policies, effective in the 2008-09 biennial budget, that increased treatment capacity in the prison system by 3,700 program slots for substance use treatment (outpatient, in-prison, and post-release) and mental health treatment, and expanded diversion options in the probation and parole systems by 3,000 slots for people who had committed technical violations of the conditions of their supervision and for transitional and substance use treatment. The funding for these policies was $241 million in the 2008-09 budget, based on the assumption that the policy changes would make building new prisons unnecessary, avoiding $443 million in new construction and operating costs.

Indeed, thanks to these policies, the state did not need the approximately 9,000 additional prison beds that were forecast in the budget plan first presented to the legislature in 2007. In fact, since then, there has been an unprecedented development in Texas: three prisons have been closed, one in 2011 and two in 2013, for a total reduction of about 5,000 beds. The prison population has declined 4%, even though the state’s resident population increased by 20% over the previous decade. The state’s incarceration rate has dropped by 10% since 2007, as measured by the U.S. Bureau of Justice Statistics (part of the Department of Justice). Finally, between 2006 and 2013, the crime rate decreased by 20%.

North Carolina. In 2009, the governor and leaders of the North Carolina House and Senate (all Democrats) were faced with projections showing that the state’s prison population would grow 10% over the next decade. Probation revocations accounted for more than half of prison admissions, and only about 15% of people released from prison received supervision. Over a two-year period, during which a Republican was elected governor and Republicans won majorities in both chambers of the state legislature, the CSGJC analyzed hundreds of thousands of files, conducted extensive policy reviews, and met with hundreds of state and local government officials.

In 2011, the state adopted policies addressing these problems, including mandatory post-release supervision; increased funding for community-based treatment and changes to the way this treatment is delivered; and a comprehensive set of progressive sanctions to help probationers avoid revocation to prison. These policies are projected to save the state up to an estimated $560 million over six years in reduced spending and averted costs. The legislature added more than $8 million in treatment funding to its budget for fiscal year 2012 to improve existing community-based treatment resources. When the policies were adopted in June 2011, the state prison population was 41,032; by June 2014, it had dropped to 37,665. Between fiscal years 2011 and 2014, there was a 41% decline in releases without supervision and a 50% decline in probation revocations. By 2013, the crime rate had decreased by 8%, and 10 prisons have been closed to date.

Pennsylvania. Between 2000 and 2011, state spending on corrections increased 76%, from $1.1 billion to $1.9 billion, while the prison population increased 40%, from 36,602 to 51,312. The CSGJC worked with state leaders in 2011 and 2012 to produce a set of policies that redesigned the state’s residential community corrections programs to serve as parole transition and violation centers, and provided a more comprehensive set of sanctioning responses to reduce the number of people revoked to prison for technical violations of the conditions of their parole supervision. The policies also addressed various inefficiencies in the release process by seeking to reduce the number of days between parole approval and actual release and to increase the number of parole-eligible cases heard per month. Finally, the state pursued a performance-based contracting system with many of its service providers, requiring that the recidivism outcomes for participants in those programs meet or improve upon the baseline set by the state’s department of corrections.

These policies are projected to save the state up to an estimated $253 million over five years. Under a statutory formula, the state will reinvest a portion of realized savings into local law enforcement, county probation and parole, and victim services. The prison population began to fall in 2014, the first decline in that state in over 20 years; as of June 2015 the population was 50,366. There has been a sharp decline in the number of people returned to prison for technical violations of the conditions of their parole, along with aggressive use of local alternative sanctions to address such violations. Between 2011 and 2013, the crime rate in the state decreased by 7%. Based on the success of its first justice reinvestment effort, the state is preparing for another round of analysis to explore whether additional policy or sentencing changes can further reduce the criminal justice population.

Michigan. Statewide, one out of every three state employees works for the department of corrections, and one out of every five general fund dollars goes toward the operation of the state prison system. Michigan has analyzed this situation in recent years and implemented a range of strategies, including statewide reentry programs to reduce recidivism and law enforcement efforts to deter crime in cities plagued by violence. Arrests for violent crime, parolee re-arrest rates, and the number of people in prison have all declined since 2008.

In 2013, legislative leaders and the governor expressed concern that corrections spending continued to make up a large portion of the state budget, that the public remained concerned about high levels of crime in parts of the state, and that the prison population showed signs of increasing again. At the state’s request, the CSGJC conducted an unprecedented study of the state’s felony sentencing system, examining its impacts on public safety, recidivism trends, and state and local spending.

After analyzing 7.5 million individual data records and conducting over 100 in-person meetings and 200 conference calls with stakeholders, we found that the state could improve its sentencing system to achieve more consistency and predictability in sentencing outcomes. The recommended sentencing changes would stabilize and lower costs for the state and counties, and allow some resources to be redirected toward reducing recidivism and improving public safety. We presented our analyses and recommendations in a report to the Michigan Law Revision Commission, a bipartisan group of legislators and members of the general public. With the support of local prosecutors and other law enforcement groups, state legislators introduced four bills in November 2014 to implement the recommended changes to sentencing laws and to enact policies designed to increase the effectiveness of community supervision and improve the collection of victim restitution. The bills made their way through the Republican-controlled legislature, received laudatory coverage in newspapers across the state, and neared passage, until the state attorney general launched an aggressive attempt to thwart them. The attorney general argued that the state had already shrunk the prison population effectively, that the bills put public safety at risk in the name of cost savings, and that the sponsor of the legislation intended to rush the bills through during a lame duck session without sufficient consideration for the life-and-death issues at stake.

Legislative support for the bills subsequently dissipated, and the governor, in the midst of a reelection campaign, stayed silent. In the lame duck session that followed, with the bills’ legislative champion nearing the end of his term, two bills passed: one creating a Criminal Justice Policy Commission for four years, and another updating the objectives and funding goals of the state-operated community corrections program. The more comprehensive sentencing policies died, although they may be reintroduced at a later time.

Lessons learned

Based on more than 10 years of experience conducting analyses of state criminal justice systems and the results of the CSGJC’s work in the states, we offer here nine lessons for consideration by the growing number of policymakers and others demanding the review of incarceration policies.

Lesson One. Conducting the type of “systematic reviews of penal policies” that we have described is a time-consuming, resource-intensive, state-by-state undertaking. Academic institutions typically are not positioned to conduct these types of time-sensitive quantitative analyses, coupled with hundreds of meetings over the course of just a few months. Nor are academic institutions usually accustomed to the quick distillation of extensive research into practical terms that elected officials can easily convert into policy. Few state agencies anywhere have the capacity (or independence or standing among all three branches of state government) to lead this type of process. Therefore, absent an effort similar to the justice reinvestment process, it is unlikely that the states themselves will undertake a systematic review of penal policies.

Lesson Two. Research-based recommendations that appear to have the singular goal of significantly reducing the prison population, regardless of the financial savings that would be generated, rarely attract broad, bipartisan support in the states. Similarly, a policy reform package presented to reduce the prison population purely for social justice reasons is typically insufficient to motivate elected officials across the political spectrum. Broad bipartisan support for policy changes that will reduce the prison population is generally contingent on the endorsement of (or at least the absence of concerted opposition from) a cross-section of stakeholders in the criminal justice system, including law enforcement, victim advocates, and other local officials. These constituencies are most inclined to lend such support when they are assured that the policy reforms recommended will strengthen the criminal justice system.

In the states where the CSGJC has worked, we have found that policy strategies to reduce a state’s use of prison are most likely to rally this level of support when they clearly meet one or more of three criteria: targeting limited resources on the people most likely to reoffend, reinvesting in programs that are effective in changing the behaviors of people involved in the justice system, or strengthening systems of supervision and treatment. In each of these cases, extensive research shows that policies applying these principles can reduce recidivism, which may in turn lower prison admissions. But such success is contingent on an adequately resourced system. It is well established, for example, that caseloads for many community corrections officers run into the hundreds and that waiting lists for substance use treatment slots are hopelessly long, a picture that is consistent across most states. If mental health issues are added to the mix, the picture is even more challenging: most states cannot meet the basic demand for public mental health services among the general population. In Texas, for example, only one-third of adults with the most severe mental illnesses are served by state-funded community mental health services.

Lesson Three. In general, states have been reluctant to reduce minimum sentence terms or to walk back efforts to achieve truth in sentencing, particularly for violent offenders. Without a major reduction in sentences and time served in prison for violent offenders, it is virtually impossible to achieve the 50% reduction in incarceration that some observers have advocated. Because of their long lengths of stay, violent offenders consume a disproportionate share of prison space. A recent examination by the Marshall Project showed that if “100% of all people convicted of drug, public order, and property crimes were released early or sentenced to punishments other than prison time, you would still need to free, say, 30% of robbery offenders to achieve a 50% reduction in the prison population.”

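The arithmetic behind that observation can be checked with a back-of-envelope calculation. The offense-category shares below are assumptions chosen for illustration rather than official statistics, but they show why a 50% cut cannot come from nonviolent categories alone once violent offenses account for roughly half of a prison population.

```python
# Back-of-envelope check of the composition argument above; the shares are
# illustrative assumptions, not official data.
prison_shares = {
    "robbery": 0.13,        # violent (assumed share of the prison population)
    "other_violent": 0.40,  # violent (assumed)
    "property": 0.19,
    "drug": 0.16,
    "public_order": 0.12,
}
violent_categories = {"robbery", "other_violent"}

target_reduction = 0.50
nonviolent_share = sum(s for k, s in prison_shares.items() if k not in violent_categories)

# Releasing every person held for a nonviolent offense still leaves a gap...
gap = target_reduction - nonviolent_share
# ...which, if closed from the robbery category alone, requires releasing:
robbery_release_rate = gap / prison_shares["robbery"]

print(f"Nonviolent share of the population: {nonviolent_share:.0%}")      # 47%
print(f"Shortfall against a 50% target:     {gap:.0%}")                   # 3%
print(f"Share of robbery offenders to free: {robbery_release_rate:.0%}")  # ~23%, same ballpark as the quoted "say, 30%"
```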
Lesson Four. Efforts to reduce the prison population by focusing on people who have committed nonviolent offenses will confront the reality that legislative discussions quickly devolve into disagreement over how to operationalize “non-violent” and how any definition would apply in practice to particular persons. In state legislatures, advocates for reducing high incarceration rates have to contend with law enforcement groups, prosecutors, and victim advocates who may agree that certain offenses can be considered non-violent but do not necessarily agree on which persons should qualify as non-violent under sentencing or parole release policies.

Lesson Five. States have shown little appetite for directly addressing the issue of racial disproportionality in prison. Individual policymakers and stakeholders in the state task forces that the CSGJC has worked with have raised the issue of racial disproportionality and have asked for analyses of it. But in most states where the CSGJC has worked, the governor, leaders in the legislature, and the judiciary have not worked together to make reducing racial disproportionality in the corrections system an explicit goal of their efforts. However, during the justice reinvestment process, some states have created incentives for people in prison or under community supervision to earn time off their sentences, or have allowed parole or probation violations to be met with sanctions short of a return to prison. These changes have reduced the number of people on probation or parole supervision who are returned to prison for minor violations of the conditions of their release, which appears to have had, either directly or indirectly, a positive impact on reducing racial disproportionality in prison populations.

Lesson Six. Experts and advocates pushing for reductions in the numbers of people in prison have not paid enough attention to the importance of indigent defense systems. In a 2009 report, the National Right to Counsel Committee of The Constitution Project extensively documented the neglect of the constitutional right to counsel. Most defendants who end up incarcerated are poor and, in general, receive indigent defense services from systems that are overloaded, short of investigators, and part of a low-paying plea-bargain machine. Significant improvements and increases in funding are needed in the states’ indigent defense systems, yet in most of the states where the CSGJC has worked, this issue has not been addressed in any salient way as part of the reform agenda. Compared with a person without effective counsel, a defendant who is represented effectively is more likely, following his or her arrest, to have the charges dismissed, to be released on pre-trial supervision, or to receive a sentence of probation instead of prison. Similarly, a person who is effectively represented and convicted of a crime that carries a prison sentence is more likely to receive a shorter sentence than someone with a similar conviction who does not receive effective representation.

Lesson Seven. States must have the capacity to collect, analyze, and report data on a routine basis, so that policymakers can monitor trends, hold administrators accountable for outcomes, and adjust policies and funding as necessary. Some states have these capabilities (such as Texas and Pennsylvania); most do not. Without a consistent and credible way to measure the factors that drive the prison population, the impact of alternatives to incarceration on recidivism, and the cost-effectiveness of policies, it is difficult to sustain policy reform over time.

Lesson Eight. Adopted policies need to be implemented effectively. As complicated as it is to change state statutes or budgets, modifications to policy will not, in and of themselves, have a sustained impact on a state’s incarceration rate. Prosecutors, judges, and the defense bar must embrace the new policies and apply them to everyday decisions regarding arraignment, sentencing, responses to violations of conditions of supervision, and release from prison. Transforming the culture of a community corrections agency, for example, requires, among other things, extensive training and redefined personnel policies that use new metrics to evaluate staff performance. Requests for proposals and subsequent contracts with treatment providers must recognize the skills, workforce, and general capacity needed to deliver the types of services being promoted. This is why our justice reinvestment efforts are followed, where possible, by Phase II technical assistance to the states, to provide momentum for effective implementation. Ultimately, though, no technical assistance can make up for lack of local interest, intense ground-level opposition, or deficient operational leadership. Addressing and overcoming such challenges is the responsibility of state and local policymakers.

Lesson Nine. When legislation contemplates significant systems change and carries political risk, it must enjoy the unqualified, focused, and determined commitment of at least the governor, an especially powerful legislative leader, or an extraordinary judicial leader willing to insert himself or herself into legislative affairs in order to overcome the inevitable objections of particular constituencies. The example of Michigan illustrates how, even following exhaustive analysis, debate, media coverage, and painstaking negotiation of bipartisan agreements, legislation can die when concerted opposition materializes at the eleventh hour and key elected officials decide that they must dedicate their limited political capital to other legislative priorities.

As part of these lessons, it is worth noting that three large states that have achieved significant declines in incarceration rates did so without a data-driven, consensus-based approach to lawmaking and without the benefit of the justice reinvestment process. In California, historic declines in the prison population came only after decades of neglect of the state prison system and severe overcrowding, which the state confronted fully only once it had exhausted its appeals before the U.S. Supreme Court. In 2011, in Brown v. Plata, the court declared: “This case arises from serious constitutional violations in California’s prison system. The violations have persisted for years. They remain uncorrected.” In New York, dramatic reductions in the state prison population resulted not from particular legislative changes but from the fact that far fewer people in New York City were arrested for felony offenses. And in New Jersey, the decline in the prison population is attributed to a combination of falling crime, which lowered prison admissions; a change in parole policies prompted by a court order; and new guidelines issued by the attorney general exempting the lowest-level drug offenders from the minimum sentences required under the state’s “drug-free zone” laws.

Progress, but challenges remain

Advocates seeking to reduce incarceration rates and eliminate mass incarceration view the developments we have described as progress, but hardly as success. The incarceration rate of sentenced prisoners under state jurisdiction decreased by 6% between 2006 and 2013, but during the same period the national violent crime rate decreased by 30% and the property crime rate decreased by 18%. And as documented here, a lot of hard work by many people lies behind the progress that has been made. Therefore, elected officials, especially at the state and federal level, must now revisit how incarceration fits into their vision of how to provide public safety, a vision that has become deeply entrenched over the past 30 years.

The majority of people who are in prison are there because they committed a violent crime. Reducing incarceration rates to the levels that preceded the historic investment in prison construction and operations over the past three decades means that policymakers must reject their long-held notion that lengthy prison sentences deter violent crime. They must similarly recognize publicly that incapacitating people convicted of such crimes for long periods does little to protect the public. Achieving that sort of shift in mindset requires nothing less than a fundamental cultural change. President Barack Obama’s speech to the NAACP in Philadelphia in July 2015, in which he discussed sentencing changes for non-violent offenders, and his subsequent visit to a federal prison represent unprecedented steps toward such a change. For some observers, however, this is not enough; they argue that significantly reducing incarceration rates will require addressing the sentence lengths of those convicted of violent offenses.

On a broader level, the public must change how it thinks about punishment, and in particular its desire to see people convicted of violent crimes and sex offenses, as well as people who repeatedly commit property crimes, locked up for lengthy periods. Such a conversion in thinking requires a state-by-state (and in many cases county-by-county) conversation, because each state, and the communities within it, has a distinct culture that predisposes residents to different punishment thresholds. For complex historical, geographical, and cultural reasons, for example, states in the South have shared more punitive attitudes than other parts of the country. What will be perceived in some quarters as minor changes in how people are punished will therefore require wholesale changes in thinking in other parts of the nation.

There is no question that the research community is more unified than ever regarding the negative consequences of high incarceration rates in the United States. The political conversation on incarceration has changed in recent years, with the “right” and the “left” finding more common ground on the need to reduce incarceration rates. Whether this consensus is sufficiently deep to translate into a historic reduction of incarceration rates, of a magnitude on the same order as the massive buildup of the past 30 years, remains to be seen. Much will depend on the values that candidates in upcoming state and national elections articulate, along with their willingness to propose specific policy changes for reducing punishments for violent offenders.