Forum – Spring 2016

Purposeful science

In “Fact Check: Scientific Research in the National Interest Act” (Issues, Winter 2016), Congressman Lamar Smith critiques the concerns presented by Democrats, scientists, and a number of social science associations about his bill, H.R. 3293. As proposed, the bill confuses the nature of basic science and adds bureaucracy that would impose a layer of political review on the National Science Foundation’s (NSF) gold-standard merit review system. This bill will hurt the nation’s premier basic research agency, and ultimately leave America less competitive.

Many in the legislative majority have been clear in their belief that, according to their own subjective definitions of “worthy,” numerous grants that have successfully passed merit review are not worthy of federal funding.

As the ranking Democratic member of the Committee on Science, Space, and Technology, I feel it is neither our job nor my intent to defend every NSF grant. Most members of Congress lack the relevant expertise to fairly evaluate the merits or value of any particular grant. If we do not trust the nation’s scientific experts to judge whether a scientific grant is worthy of funding, then whom should we trust to make those judgments? The clear intent of this bill is to change how NSF makes funding decisions, according to what some majority members believe should or shouldn’t be funded.

H.R. 3293 restricts scientists and students from asking questions within science and technology fields for which a direct application may not be known. However, as Maria Zuber, vice president for research at the Massachusetts Institute of Technology, said in a tweet, “Outstanding science in any field is in the national interest.” I fear that an unintended consequence of this bill will be to inhibit high-risk, high-reward research in all fields. We’ve heard from many scientists who are concerned that NSF, because of political pressures and budgetary constraints, is already pushing scientists to justify everything according to short-term return. If not corrected, this will inevitably reduce the ability of NSF and U.S. scientists to conduct truly transformative research.

NSF’s mission “to promote the progress of science; to advance the national health, prosperity, and welfare; to secure the national defense; and for other purposes” is applied throughout the merit review process for each award. The notion that every research project must, by itself, be justified by a congressionally mandated “national interest” criterion is antithetical to how basic research works.

I simply cannot support an effort that politicizes NSF-funded science and undermines the very notion of basic research. I sincerely hope that this bill will never become law.

Rep. Eddie Bernice Johnson

Democrat of Texas

The ongoing debate over what science is worthy of federal funding has the potential to result in a general loss of support for scientific research. Of note, passage of the Scientific Research in the National Interest Act, by a vote of 236-178, closely followed party lines.

The United States economy, and all that it supports, depends on scientific research. A number of studies, including one that formed the basis of a Nobel Prize, indicated that 50-85% of the growth in the nation’s gross domestic product can be attributed to advancements in science and technology. But the debate over a tiny fraction of research is endangering the whole.

Is it unreasonable that taxpayers should expect that their funds be devoted to endeavors that serve their interests? It would seem not. Is it in the interest of science—and therefore the taxpayers—that the choice of what science should be pursued be politically influenced? It would seem not. Researchers in the Soviet Union and China, not to mention Galileo, might suggest that such influences hinder, not promote, scientific progress.

The problem, of course, is how to determine what science is in the public interest—and who should make that determination. Past studies of butterfly wings, Weddell seals, and the Pacific yew tree produced important medical discoveries. Roentgen was not searching for a way to see inside the human body; he was studying streams of electrons. Or in Fleming’s words, “When I woke up just after dawn on September 28, 1928, I certainly didn’t plan to revolutionize medicine by discovering the world’s first antibiotic…”

The problem is that there is no bright line between science that produces “useful” outcomes and science that does not. The utility spectrum spans from the sequencing of the human genome, undertaken partly in support of cancer research, to the study of black holes, to the study of early human-set fires in New Zealand. The more difficult questions seemingly arise in the social sciences. Even there, arguably the greatest challenge in the fight against Ebola was not the remarkable biomedical research, but the cultural resistance to inoculations that had to be overcome among local tribes. Similarly, U.S. troops in Afghanistan were well served by the understanding of local cultures they were provided, an understanding derived from earlier research.

The greatness of U.S. science and its research universities, the latter generally agreed to represent about 18 of the world’s top 25, is heavily dependent upon freedom of inquiry and peer review. It would seem to be time to seek common ground rather than stark differences. Perhaps there is a tiny segment of research that is better funded by foundations. Perhaps peer review committees should include one or two highly regarded nonscientist members, as is not uncommonly the case when ethical issues are addressed. Or perhaps a commission is needed to address conflicting views on the above issues held by well-meaning individuals, as was done in guiding stem cell research. But to continue on the present course seems to assure that a bad outcome will not be left to chance.

Norman R. Augustine

Chair of the National Academies committee that produced Rising Above the Gathering Storm

The Scientific Research in the National Interest Act is a bad idea, but not for exactly the reasons its opponents most often cite.

Critics of the legislation contend that it would interfere with peer review or, more broadly, that it represents inappropriate congressional interference with scientific grant making. Yet the measure leaves the peer review system intact, and there’s nothing inherently out of bounds about Congress determining what categories of research can be supported with federal funds.

The problem with the bill is that it provides poor direction to the National Science Foundation (NSF) on what sort of research to back, in two ways: many of the bill’s clearest provisions seem wrong-headed and unnecessary, and the parts that are muddy leave the agency in a dangerous kind of limbo.

The requirement that research use “established and widely accepted scientific methods,” for example, is clear enough. But why put that prescription in law when it could create a bias against the most innovative research? No one has accused NSF of funding work that isn’t well grounded in scientific methodology.

But the bigger problem is that most of the bill is quite unclear in indicating what Congress wants—except that it wants the agency to walk on eggshells. The bill’s backers may have objections to social science research or to proposals related to climate change or to something else, but they apparently lacked the political courage, the focus, or enough votes to have an open debate on those issues and to draft specific language. As a result, the measure effectively codifies a vague threat—Congress saying you’d better be good at divining what we think is a proper grant—and leaves it to NSF to guess what battle will come next, perhaps in the hope that the agency will timidly avoid it.

This might be justifiable if there were some new, systemic problems at NSF, but there aren’t. Rather, there’s the age-old, periodic tussling over this or that grant. Congress has all of the oversight authority it needs (and then some) to expose questionable or controversial grants and, if need be, to bring the agency to heel. Yet this bill puts in place the kind of sign-off requirement Congress imposed on corporate CEOs to address the falsification of financial statements—an effort to get a handle on a real and systemic and sometimes criminal problem. And no one was left guessing what constituted a proper financial statement.

The Scientific Research in the National Interest Act is overkill, and dangerous overkill at that. Congress should, and regularly does, debate what kinds of research to fund. This bill is not a healthy addition to that ongoing discussion. It should not become law.

Sherwood Boehlert

Former Republican member of Congress from New York (1983 until 2007) and former chair of the House Science and Technology Committee

When scientists on February 11, 2016, played the chirping song of gravitational waves—ripples in space-time that confirmed a key prediction of Einstein’s theory of general relativity—listeners worldwide leaned toward the sound, amazed by the music of our universe. Scientist Gabriela González choked up a day later, when she again played the sound of two black holes colliding 1.3 billion years ago. “Isn’t it amazing?” she asked a spellbound audience at the annual meeting of the American Association for the Advancement of Science. “I can’t stop playing it.”

The discovery of gravitational waves—through research supported by the National Science Foundation (NSF) via the Laser Interferometer Gravitational-Wave Observatory (LIGO) project—will inspire a generation of scientists and engineers. Gravity’s “chirp” may well be every millennial’s Sputnik moment, powerful enough to trigger an explosion of creative thinking.

Yet, except for more accurate scientific measurements, the potential practical applications of the LIGO team’s discovery remain unknown for now. Gravitational waves will not carry voices or data securely between continents. They will not sharpen human medical images. Colliding black holes will not provide power at any person’s electric meter. Still, the chirps were not the frivolous amusements of scientists. Einstein wasn’t thinking about everyday uses for general relativity in 1916, either, but his ideas have profoundly guided our understanding of the natural world. The Scientific Research in the National Interest Act would limit discovery by requiring all NSF grants to pass a “national interest” test. The bill’s sponsor, Rep. Lamar Smith (R-TX), has said that the proposal would not undermine basic science, but his emphasis on “improving cybersecurity, discovering new energy sources, and creating new advanced materials” to “create millions of new jobs” seems to overlook the real value of long-term, fundamental scientific investigation.

Basic science expands human knowledge. Often, attempts to understand curious phenomena fail, and sometimes they pay off in unpredictable ways: In 1887, one scientist who was trying to confirm the predictions of another scientist accidentally produced radio waves. Similarly, an experiment with cathode ray tubes unexpectedly gave rise to X-rays, which are now an indispensable diagnostic tool. Important breakthroughs have emerged from many basic research projects that may have sounded silly at first. The “marshmallow test,” in which children are given a chance to eat one marshmallow right away, or two later, ultimately revealed that self-control can be learned—an insight that can improve education, human health, and even retirement savings. There is a long list of studies in the social sciences, physical sciences, environmental sciences, and mathematics that at first may have seemed esoteric and hardly in the public interest, but ultimately advanced human quality of life. I suspect many would not have passed a strict “national interest” test.

Tremendous, unforeseen benefits spring from basic science. If past research grants had only supported “national interests,” would an investigation of glowing jellyfish have given rise to medical advances that won the Nobel Prize for Chemistry in 2008? Would Google founders Larry Page and Sergey Brin have used NSF grants to follow their curiosity about an algorithm for ranking Web pages? Grants focused exclusively on national interests could also discourage international research collaboration, which builds bridges between nations and stimulates new ideas by applying many different perspectives to a shared problem.

The United States is already underinvesting in science and technology. As a share of our economy, our investments in research and development put the nation in 10th place among developed countries. The recent omnibus spending bill provided a much-needed boost for U.S. science, after a decade of declining investments. We are poised to seize the “LIGO moment” and unleash a renaissance in American innovation. The National Interest Act would work against that goal, by devaluing the potentially huge future benefits of sustained, long-term investment in basic science.

Maybe the authors of the “national interest” legislation would say they do not intend for it to be applied in a strict way, hampering research that would actually benefit people in the future. Why have it at all, then? The possible damage is apparent; any possible positive effect is very hard to see.

Rush D. Holt

Chief Executive Officer

American Association for the Advancement of Science

Most of the attention in the science community to the Scientific Research in the National Interest Act has focused on whether it is torquing peer review by questioning the way decisions to fund science are made. I want to address a different but related point, agreeing with the bill’s sponsor, Lamar Smith, that it is incumbent on the publicly funded science community—just as it is on elected and appointed government officials—to say and convey to the nonscience public: “I work for you—and I look forward to telling you how, and to answering your questions.” Smith writes that “…researchers should embrace the opportunity to better explain to the American people the potential value of their work.” (I do quibble with his use of the word “potential,” as there is current value to the public, who, surveys tell us, agree in high percentages that basic research is important and should be funded by the federal government.)

But let’s face it. For all that decades of surveys tell us about how our fellow citizens trust and admire scientists, and want them to succeed, scientists are essentially invisible in this country. It’s a problem when a member of Congress rarely if ever hears from his or her constituents that federally funded science is at risk; it’s a problem when scientists in that state or district don’t engage in the political conversation and even proudly say they wouldn’t recognize their own member of Congress. Embedded in the culture of science is the tacit understanding that scientists must eschew the public eye and disdain the political; this has led to lack of interest in public engagement. My observation, however, is that young scientists are different; they want to run toward, not away from, engaging the public and political actors; they rightly believe that what they are doing is making a difference for the nation right now, not just when or if they receive a Nobel Prize. They want to tell their story and convey their excitement. But they don’t know how to engage, and, worse, they are continually discouraged from doing so. There is a constant stream of anecdotes about how “department chair X” or “mentor Y” actively discourages any activity other than science by scientists. Since the academic reward structure defines “community service” as service to the science community, we shouldn’t expect change from that quarter any time soon. More’s the pity.

It’s time for the science community to meet congressional and public expectations by making public outreach and engagement a part of the culture and expectation of all scientists. In a few hardy places, such training and experience are being incorporated into graduate science education, but this is far from the norm. Perhaps all training grants awarded by federal science agencies should include this expectation. Scientists who are equipped during their training to effectively explain their work will do themselves, science at large, and the general public a great service over the course of their careers.

Mary Woolley

President

Research!America

Climate and democracy

It is understandable that climate scientists are disheartened, distressed, even alarmed in the face of climate change. Scientists predicted more than half a century ago that increased atmospheric greenhouse gases could disrupt Earth’s climate, and that prediction has sadly come true. As an empirical matter, none of our current forms of governance—neither democratic nor authoritarian—have made sufficient progress in controlling the greenhouse gas emissions that drive climate change. But in their alarm, some scientists are making questionable assertions about both technology and democracy, grasping at utopian visions of miracle technologies and benevolent autocracy. In “Exceptional Circumstances: Does Climate Change Trump Democracy?” (Issues, Winter 2016), Nico Stehr is right to critique this.

The worst impacts of climate change can be avoided if we pursue solutions grounded in technological and political realities. Several recent studies by credible researchers suggest that even allowing for growth, the United States can produce most, if not all, of its electricity from renewables, provided we do a few key things. These include putting a price on carbon, adopting demand-response pricing, and integrating the electricity grid. (Similar results have been found for other countries.) None of these requires a “breakthrough” technology or form of governance that does not already exist, but all of them do require governance. A price on carbon is the most obvious example: it takes a government to set and collect a carbon tax, or to establish emissions trading as a legal requirement. Rethinking regulation is another example. As we saw in the recent U.S. Supreme Court case Federal Energy Regulatory Commission v. Electric Power Supply Association, public utility commissions must be empowered (no pun intended) to adapt to new conditions. Demand-response pricing requires changes in the ways utilities operate, which requires reform of our regulatory structures or at least changes in our interpretation and implementation of them. An integrated grid could in theory be built by the private sector, but it took the federal government to build a nationwide system of electricity delivery and it will likely take the same to update that system to maximize renewable utilization.

However, 30 years of antigovernment rhetoric have persuaded many citizens that government agencies are necessarily ineffective (if not inept) and erased our collective memory of the many domains in which democratic governance has worked well. The demonization of government, coupled with the opposing romanticization of the “magic of the marketplace,” has so dominated our discourse that many people now find it difficult to imagine an alternative analysis. Yet history offers many refutations of the rhetoric of government incapacity, and gives us grounds for reasoned belief in the capacity of democratic governance to address climate change.

Placing the demands of democracy at the center of our thought also helps us to sort through the various options available to address climate change. One potent argument for carbon pricing is that market-based mechanisms help to preserve democracy by maximizing individual choice. The political right wing has said many untrue things about climate change, but conservatives are correct when they stress that properly functioning markets are intrinsically democratic, and top-down decision making is not. To the extent that we can address climate change in bottom-up rather than top-down ways, we should make every effort to do so. Where we can’t, democratically elected government can (and should) step in to build infrastructure, to foster research and development, to create reasonable incentives and eliminate perverse ones, and to adopt appropriate regulatory structures. This will require leadership, but of a kind that is completely compatible with democracy. Professor Stehr is correct: now is not the time to abandon democracy; it is the time to recommit to it.

Naomi Oreskes

Professor of the History of Science

Harvard University

Nico Stehr has done a service in bringing out the tensions between what passes for democracy in much of the world and the need to take resolute, concerted, and sustained action on climate change. Stehr overdoes the contrast a little, perhaps in an effort to bring the tension forcefully to our attention. Most of those whom he cites (including me) articulate the tensions between democracy and addressing climate change, but they do not actively advocate the overthrow of democracy. Stehr’s solution is also a bit of a letdown. Science studies as a field was born in the fear of antidemocratic technocracies, which were seen as a specter haunting North America, having already consolidated their grip on much of Europe. With populism on the rise and expertise in retreat, Stehr’s call to “enhance” democracy sounds like a one-size-fits-all solution drawn from an earlier playbook. The tension that he highlights is deep and real, and threatens to explode old concepts and disrupt existing divisions. It is worthy of our most serious thought.

Climate change is the vanguard of a new kind of problem. What is characteristic of such problems of the Anthropocene is that they are caused by humans who are largely engaged in acts that would traditionally be viewed as innocent and inconsequential. When amplified by technology and indefinite iteration, they produce outcomes that can be devastating, though no one desires them nor intends to bring them about. This contributes to the crisis of agency that is salient in the contemporary world. Never have people been so powerful that they can actually transform the entire planet. Yet never have people individually felt so powerless to obtain collective outcomes regarding nature (at least since the rise of modernity, but that is another story). For these reasons, the Anthropocene threatens our sense of agency, erodes the public/private distinction that is foundational to liberalism (since many of the acts that contribute to climate change would traditionally have been construed as private), and drains legitimacy from existing states (which are increasingly seen as unconsented to, failing to deliver beneficial consequences, and not in accord with Kantian or Rawlsian public reason). Climate change is not a one-off event, such as a war; in so many ways it is “the new normal,” and we’re going to have to figure out how to live with it and whatever else the Anthropocene will bring.

The democracies that Stehr thinks can cope with such problems find their own foundations threatened by them. Nor is it clear (if it ever was) exactly what democracy is supposed to consist of. What is clear is that this concept does not simplistically apply to countries such as the United States and the United Kingdom. Regarding the United States, it is enough to gesture at the fact that the popular vote is often at serious variance with the make-up of Congress and even the person of the presidency (Al Gore won the popular vote in 2000, not George W. Bush). Add to this the power of money in political campaigns and the fact that voter suppression in its various forms is a major political tactic employed by at least one major political party. As for the United Kingdom, it is enough to observe that in the elections of 2015 there was a swing toward Labour of 1.5% and toward the Conservatives of 0.8%, which resulted in Labour losing 26 seats and the Conservatives gaining 24 seats. The Conservatives won less than 37% of the popular vote but more than half of the parliamentary seats, and the resulting media storm soon hardened into the dogma that the Conservatives had won an overwhelming mandate. Democracy? I don’t think so.

Yes, climate change puts pressure on democracy. But before we pick sides, let’s try to better understand the kind of problem that climate change is and what we mean by democracy. Even more radically, we might even try democracy before giving up on it.

Dale Jamieson

Professor of Environmental Studies, Philosophy

New York University

Nico Stehr raises many valid objections to those climate change activists who think a shortcut to the political process is needed to avoid global disaster. Stehr’s argument is that not less, but more democracy will make us better off and help us in confronting existential threats to society. I agree with this general argument in principle, above all because democracy keeps our options open in case we have taken the wrong path to salvation, so to speak. Democracy is better at incorporating new insights, as it allows the election of new governments over time. The learning potential is better compared with nondemocratic political systems, and the danger of irreversibility of decisions is lower.

Having said this, I wonder if Stehr has not granted the skeptics of democracy too much. Talk of democracy, whether in a positive or a skeptical mood, seems to sidestep at least two important questions: what does democracy mean, and how can we achieve climate policies that are effective?

Regarding the first question, there is a range of political constellations that count as “forms of democracy.” Nearly every sovereign state that follows some principles of democracy has a different political system, with a different constitution, different rules of the political game, different rights for specific social groups, different legal systems, and so on. There is not one democracy, but many, and these different forms may have different potentials with regard to effective environmental governance. We have systems of proportional representation and of majority rule, federal and unitary states, and more or less centralized states. We have systems with checks and balances and an independent judiciary, but also systems of parliamentary monarchy and systems where the separation of powers is less pronounced.

It is an open question which of these yields the most in terms of political effectiveness. On paper, two of the alleged frontrunners in climate policy, Germany and the United Kingdom, have contrasting political systems. Arguably, the German political system allows for the representation of environmental interests in a more effective way than does the British system. It makes it easier for small parties to get representation in the system (through seats in parliament, and other administrative positions that follow), provided the party has at least 5% of the vote. The German system also allows for the formation of a grand coalition, an arrangement in which the two largest parties govern together. This nearly eclipses the opposition, and the government has much more leverage to devise and implement policies, including progressive climate policies. We do not see this possibility in countries such as the United Kingdom or the United States. But the United Kingdom, in contrast to the United States, has used its majoritarian government and centralized system to impose a climate change policy, through the Climate Change Act of 2008, that is celebrated as peerless in the world. It is another matter how successful the implementation of this legal instrument has been.

This leads to the second point, which is really about political instruments. We can distinguish between markets, hierarchies, and voluntary associations. The antidemocratic impetus of climate activists seems to narrowly focus on hierarchies as tools for change. A critique of the climate activist argument should not throw the baby out with the bath water. There is a place for all three modes of social coordination, and it is an open question which constellation, or mix of them, is most promising in which political system.

Reiner Grundmann

Professor of Science and Technology Studies

University of Nottingham

United Kingdom

Reporting on climate change

As a longtime climate policy wonk, I have for many years viewed the New York Times environmental writer Andrew C. Revkin as the gold standard for insightful reporting on climate change. His presence became even more important with the shrinking commitment to environmental reporting by the mainstream media. What I therefore found most affecting in reading Revkin’s journey recounted in “My Climate Change” (Issues, Winter 2016) is the small chance that it can be repeated. As the economics of newspapers continues to decline, there are fewer and fewer dedicated environmental writers who will have a similar opportunity to learn over time, to assess the merits of different opinions, and to contribute significantly to public discourse. This is a loss to be mourned by all of those who view an informed electorate as key to good environmental policy—particularly for an issue as complex and subject to partisan debate as climate change.

The challenge we face is highlighted by a recent survey reported in the journal Science on how climate change is being taught in high school science classes. A major finding was that among teachers addressing the subject, “31% report sending explicitly contradictory messages, emphasizing both the scientific consensus that recent global warming is due to human activity and that many scientists believe recent increases in temperature are due to natural causes.” And even among teachers who agree that human activities are the main cause of global warming, a bare majority know that a high percentage of scientists share their view. The basic problem with the discourse on climate change in the United States is thus not that the issues are complex but that even the basics are not widely understood—and that for a significant minority, the facts may be unwelcome.

At a more fundamental level, I also find Revkin’s equanimity in contemplating climate change (“change what can be changed, accept what can’t”) astonishing. As the National Research Council warned in a 2013 report, “…the scientific community has been paying increasing attention to the possibility that at least some changes will be abrupt, perhaps crossing a threshold or ‘tipping point’ to change so quickly that there will be little time to react.” Although a completely workable—and politically acceptable—solution has yet to be identified, many respectable sources, including the International Energy Agency, have concluded that (contrary to Revkin’s assertion) practical solutions are technically feasible—if we act soon. Yet Revkin nevertheless seems to be saying what else can we do but roll the dice and hope the outcome isn’t as bad as (or worse than!) peer-reviewed science predicts. Not “my” climate change!

Finally, Revkin’s “existential” search for rational explanations of irrational behavior seems to avoid the obvious. While evaluating academic research on status quo bias, confirmation bias, and motivated reasoning, he neglects the power of massive industry lobbying and calculated obfuscation—the strategy made famous by the tobacco industry but arguably elevated to new heights by fossil fuel interests. Such efforts have arguably become even more effective at a time of diverse news sources and selective receipt of information. No fancy theory required.

Alan Miller

Climate Change Policy and Finance Officer (retired)

International Finance Corporation

The science journalist Andrew Revkin has been narrating the climate change story since it first came to public notice in the mid-1980s, and his wide-ranging and insightful article here offers an insider’s perspective on the art of shaping coherent narratives out of complex and uncertain science. In this context, imagery is everything. Revkin recalls that his first published climate feature in 1984 (on the Cold War prospect of a “nuclear winter”) was illustrated by a graphic image of Earth frozen in an ice-cube, although, tellingly, only four years later, his report from Toronto’s World Conference on the Changing Atmosphere was illustrated by an image of Earth melting on a hot plate. Scientists tend to be suspicious of such emotive editorializing, but if there’s one thing we’ve learned about policy makers (and the publics they serve), it’s that they respond more readily to images than to evidence.

This is well illustrated by the Montreal Protocol of 1987, which secured a global ban on the manufacture of ozone-depleting chlorofluorocarbons (CFCs). The ban represented humanity’s first victory over a major environmental threat, but Revkin warns us against seeing the episode as a template for curbing atmospheric carbon dioxide. As he rightly points out, eliminating CFCs was a relatively simple matter, given their niche industrial usage, whereas the reduction of carbon dioxide emissions requires root-and-branch amendments to every industrial process on the planet. But I think Revkin tells only part of the story here: the victory over CFCs was, in large part, won by imagery, in the form of the striking, false-color graphics that showed the ever-widening “hole” in the ozone layer above Antarctica. First published in 1985 by the National Aeronautics and Space Administration’s Scientific Visualization Studio, these now-iconic artifacts succeeded in visualizing an otherwise invisible atmospheric process, bringing a looming environmental crisis to the world’s attention. Response to those images was swift and decisive, and only two years after their publication, effective legislation was in place.

Atmospheric warming is, of course, a different process than ozone depletion, with different environmental and economic implications, but it is just as invisible. Yet so far, no equivalent to an ozone hole visualization has been found for global warming. We are stuck with polar bears and melting ice caps, aging poster children who have lost any impact they may once have had. “I had long assumed the solution to global warming was, basically, clearer communication,” writes Revkin, who goes on to list some of the failed climate metaphors that he has put to rhetorical work over the years, including “carbon dioxide added to the atmosphere is like water flowing into a tub faster than the drain can remove it,” or “the greenhouse effect is building like unpaid credit card debt.” To write about climate change is to be in the metaphor business, but so far—with the possible exception of the Keeling curve, with its dramatically rising, saw-toothed blade—no clinching image has been found. But then how do you visualize something that has become too big to see?

Richard Hamblyn

Lecturer in Creative Writing

Birkbeck, University of London

Author of The Invention of Clouds

As someone who has greatly admired Andrew Revkin’s work over the years, I very much enjoyed reading his story about his life’s journey in the world of journalism and science communication. However, I took issue with one of the claims he makes about science.

Revkin claims, as if it were self-evident, that a major hurdle in our response to climate change is that “science doesn’t tell you what to do.” He then invokes the “is-ought” problem articulated by the eighteenth-century philosopher David Hume, which states that no description of the way the world is (facts) can tell us what we ought to do (values). I would argue, however, that this separation between facts and values is a myth. Values are reducible to specific kinds of facts: facts related to the experience and well-being of conscious creatures. There are, in fact, scientific truths to be known about human values (a view defended most notably by the philosopher and neuroscientist Sam Harris in his book, The Moral Landscape: How Science Can Determine Human Values).

I agree with Revkin that environmental, economic, and cultural forces influence the values adopted by individuals and societies, but they do so because they change our brains and influence our experience of the world. These changes can be understood in the context of psychology, neuroscience, and other domains related to the science of the mind. Human well-being is ultimately related, at some level, to the human brain.

Similarly, climate change is so worrying to us because of the consequences it will ultimately have on our well-being. Whether we realize it or not, our concerns for the environment are ultimately reducible to its impact on the conscious creatures in it (both human and non-human).

Revkin is by no means alone on this. Most people, scientists included, seem to agree not only that ethics is a domain that lies outside the purview of science, but that it is taboo to even suggest otherwise. But perpetuating this myth has consequences. Our failure to recognize the relationship between facts and values will have wider implications for public policy related to many rapidly emerging technologies and systems, from artificial intelligence to agricultural technology to stem cell research to driverless cars.

It’s important to note that in this context, “science” isn’t merely synonymous with data, models, and experiments; these are merely its tools. We must recognize that science is actually more comprehensive than this. The boundaries between science, philosophy, and the rest of rational thought cannot be easily drawn. When considered in this way, it’s clear that science can answer moral questions, at least in principle. And, as Sam Harris puts it, “Just admitting this will change the way we talk about morality, and will change our expectations of human cooperation in the future.”

Mark Bessoudo

Licensed Professional Engineer

Toronto, Canada

Reviving nuclear power

In “A Roadmap for U.S. Nuclear Energy Innovation” (Issues, Winter 2016), Richard K. Lester outlines in a thought-provoking manner the significant obstacles to and absolute necessity of innovation in the nuclear industry in the United States, and he provides well-founded recommendations for how the federal government can be more supportive of nuclear innovation. That being said, we need to think more creatively about policies to support nuclear energy, based on the federal and state policies that are currently leading to a boom in both natural gas and renewable energy across the nation.

Natural gas has benefited from 30 years of federal support, not just through research and development, but through public-private partnerships and a 20-year production tax credit for unconventional gas exploration and hydraulic fracturing in shale. These investments have made shale gas so cheap today that it is disrupting the energy market, producing more electricity than coal for the first time ever. Similarly, a suite of federal and state policies has been implemented to both drive down the cost of renewable energy and incentivize deployment.

Federal support for nuclear energy could level the playing field and help expand all clean energy options as the United States tries to meet its agreements under the 2015 United Nations Climate Change Conference, or COP 21, regarding reducing greenhouse gas emissions. Lester is correct that a carbon price could help make existing nuclear plants more profitable in merchant markets subject to unstable wholesale prices, but more direct and tactical support for low-carbon baseload power is needed. Such policies could include capacity payments, priority grid access for low-carbon baseload, a production tax credit for relicensed plants, inclusion of nuclear in state low-carbon power mandates, or an investment tax credit for plants that perform upgrades or uprates.

For the second phase of nuclear development—what Lester calls Nuclear 2.0, which is projected to stretch from 2030 to 2050—he is correct that we should focus on innovation for advanced nuclear reactors. But we also need to ensure there is market demand for these significantly safer, cheaper, and more sustainable reactors. As we learned from the aircraft industry, Boeing needs to sell over 1,000 of its innovative 787 aircraft before the company breaks even on the significant research and development costs. It is hard to see how a single nuclear reactor design will even reach those economies of scale. Federal policies that could stimulate demand include procurement of reactors for federal sites such as military bases or national laboratories, an investment tax credit for new builds, or fast-tracked licensing for new reactors at existing sites.

For what Lester calls Nuclear 3.0, from 2050 onward, international collaboration will be crucial for large-scale projects such as fusion. But the government should be more supportive of advanced reactor development collaborations in emerging economies today. Russia, China, and South Korea are currently investing heavily not only in their domestic reactor fleets but also in building advanced reactors around the world. Rather than compete with them, it is more feasible and advantageous to develop partnerships that leverage our long-standing experience in the nuclear industry with the rapidly growing demand for energy in these countries.

Jessica R. Lovering

Director of Energy Research

The Breakthrough Institute

Oakland, California

Fusion entrepreneurship

Ray Rothrock makes several critical points in “What’s the Big Idea?”(Issues, Winter 2016). The first is about venture capital and its role in providing patient capital to promising ventures. Rothrock describes the birth of the modern professional venture capital industry, with pioneers such as Rockefeller, Doriot, Schmidt, and Kleiner, but the fact is that venture capital has existed from the beginning of purposeful economic activity. What changed in the 1960s and beyond is the development of a profession in which men and women made their living investing in and helping early-stage companies.

I believe that two of the greatest inventions of the twentieth century were the startup and venture capital. At accelerating rates through the second half of the century, men and women started new high-potential ventures. These companies created new products or services or displaced incumbent players (or both) by out-executing them or by using novel business models. That is the essence of a dynamic, competitive economy.

These companies—think Intel, Federal Express, Genentech, Apple, Starbucks, Amazon, Google, Facebook, Tesla, and Uber—needed capital. Professional venture capitalists invested in these enterprises in a highly structured way. They gave modest amounts of money to enable the entrepreneurial team to perform experiments and test hypotheses. If the tests yielded positive results, they invested more. If the team couldn’t execute, they replaced members of the team. If the original idea wasn’t quite right, they changed strategy to reflect market needs. Jointly, the entrepreneurs and the investors created enduring companies—or they shut them down.

Indeed, a critical element of this process is failure. By some estimates, almost two-thirds of the investments that venture capitalists make fail to return capital. Of the remaining investments, returns more than offset losses, which enables venture capital funds to gain access to more money. In the United States, nonmoral failure is an acceptable outcome. Team members get redeployed in the economy and capital gets reallocated without enormous social, economic, legal, or political consequences.

Rothrock’s description of Tri Alpha Energy is a perfect example of the model. People have ideas that if proved correct can result in valuable companies. In the earliest stage of Tri Alpha, experienced angels invested to support various experiments. When those tests worked, the team was able to attract more financial and human capital. That process will be repeated many times before we know if Tri Alpha will succeed or fail.

What is distinctive about Tri Alpha is how audacious the plan is, how much time will pass before commercialization, and the large amounts of capital that will inevitably be needed. On the other hand, if the plan works, the implications for the world will be stunning. We need low-cost, safe, reliable, secure, zero-carbon sources of power everywhere. More pointedly, if leaders keep deploying coal-fired plants in countries such as India and China, the implications for global warming and air pollution are alarming.

From a societal perspective, the public returns from success at ventures such as Tri Alpha can be far greater than the private returns. The same would be true of ventures focused on curing diabetes or Alzheimer’s disease. What is remarkable is that people are willing to devote their lives to pursuing seemingly impossible goals, and investors are willing to invest even when confronted with a high likelihood of failure.

In the domain of energy, there are currently hundreds, if not thousands, of well-financed teams working on new technologies and business models, in such diverse areas as solar power and energy conservation. But for many years, particularly after the incident at Three Mile Island, essentially no private nuclear ventures were started. Even the largest players in the industry were unprepared to build new capacity given public safety concerns and costs. Universities scaled back research and teaching efforts as the nuclear power industry atrophied.

Today, there are at least 20 significant nuclear ventures trying to do for nuclear power what teams have accomplished in semiconductors, communications, and life sciences. They are focused on addressing each of the concerns that have plagued the industry (cost, safety, waste, and so on down the list). A company such as TerraPower intends to create a “traveling wave” reactor that uses spent fuel rods as its energy source. ThorCon has devised a “walk-away-safe” molten salt reactor. UPower is designing small reactors that can effectively be mass-produced and deployed off-grid. Transatomic has a different design for turning nuclear waste into safe, low-cost, zero-emissions power.

No one can predict which, if any, of these teams will crack the nut—and indeed, many will fail. Investors such as Rothrock deal with the risk of failure for a single venture by investing in many different companies simultaneously. The same is true of institutional investors who give money to firms such as Venrock but also diversify across many asset classes. Team members are less diversified but know that they can find jobs at successful ventures in the same industry or can apply their skills elsewhere.

In summary, Rothrock has shed light on a remarkably efficient and effective model for promoting economic progress. It’s a messy model that often results in failure. Without failure, as Edison discovered, you can’t find success. Although Tri Alpha hopes to create a large, valuable company, society is a major beneficiary of the process. We all need smart people and patient capital to address major global challenges in almost every domain, from water to education to health. That is the promise and reality of the modern startup and venture capital.

William A. Sahlman

Dimitri V. D’Arbeloff – MBA Class of 1955 Professor of Business Administration

Senior Associate Dean for External Relations

Harvard Business School

Ray Rothrock clearly describes one of the great “inventions” of the twentieth century, namely the modern venture capital process. Tri Alpha’s pursuit of fusion is a perfect example of the venture capital process in action. If the reward is seen as great enough, it is remarkable to see the challenges that entrepreneurs and private capital will undertake.

The venture capital process couples the scientific method’s idea of staged experiments with finance’s idea of the “option value” of milestone-based review. As the results of each experiment are reviewed at the milestone, each of the parties that make up a new venture—the employees, the investors, the essential partners, and even the potential customers—has the real option to make a clear, informed decision to continue to pursue the venture, to abandon it, or to renegotiate the various “contracts” that hold the parties together. In the latter regard, such renegotiation might include division of ownership, changes in future milestone dates or objectives, and changes in direction based on the need to attract new investors, new partners, or even new employees. The twists and turns of an evolving venture are nail-biting, and the failure rate along the way is known to be high. But as long as there is shared belief in the reward and in the venture’s ability to capture the reward, it will hold together to attempt the next milestone.

Rothrock also points out another tremendous virtue of the venture capital process. It allows “outlier” ideas to be pursued as long as the entrepreneur leading the charge can attract that necessary coalition of employees, investors, essential partners, and forward-looking customers. The idea conceived by university researcher Norman Rostoker was such an outlier, passed over by most and even dismissed by a prominent few in the nuclear field. Fortunately, Glenn Seaborg and George Sealy thought Rostoker’s idea was worth the bet, putting their reputations and their endorsement into the creation of Tri Alpha. The company launched and is now on the hunt for a challenging but potentially stunning target: limitless, low-cost energy.

At this point, Tri Alpha still might fail technically or economically or even competitively. There are a number of privately funded fusion and fission ventures on the horizon that promise limitless, low-cost energy. And at least some of those have been launched because of Tri Alpha’s own success.

Rothrock reminds us all that the “Big Idea” is encouraging the world’s entrepreneurs to imagine the world as they would have it be and then to do everything they can to make it so. Imagine the unimaginable: a world with limitless, low-cost energy. Let’s all hope that Tri Alpha makes it so.

Joseph B. Lassiter III

Senator John Heinz Professor of Management Practice in Environmental Management, Retired

Harvard Business School

Ray Rothrock provides a valuable description of how venture capitalists evaluate investment proposals, as well as how his organization decided to invest in the Tri Alpha fusion research and development project. As he noted, his firm’s interest was influenced at the outset by the fact that the company had secured encouraging evaluations from a number of world-class research luminaries.

With significant investment forthcoming, Tri Alpha recruited a team of talented physicists and technologists, and according to friends who visited the company, impressive experiments were built, yielding promising early results.

My understanding of the Tri Alpha effort is limited to Internet information and an impressive presentation by the company’s chief technology officer at the December 2015 annual meeting of Fusion Power Associates. Accordingly, I am in no position to evaluate the company’s status or outlook. However, on the basis of my two recent publications on fusion research, I can comment as follows.

First, the mainline international fusion research program is focused on the tokamak plasma confinement concept, which is the basis of the extremely expensive International Thermonuclear Experimental Reactor (ITER)—with a price tag on the order of $50 billion—that is currently under construction in France, with international funding. As I noted in “Fusion Research: Time to Set a New Path” in the Summer 2015 issue of this journal, a fusion power plant based on the ITER-tokamak concept will be totally unacceptable commercially, for reasons explicitly enumerated there. And as I noted in a follow-on article titled “Revamping Fusion Research” in the April 2016 issue of the Journal of Fusion Energy, the lessons learned from the extrapolated ITER-tokamak commercial failure, along with other factors, point to important considerations for successful future fusion research. A cursory look at the Tri Alpha concept indicates that its approach seems to be on a better track for success. Two important Tri Alpha positives are the goal of using the proton-boron-11 fuel cycle, with its extremely low neutron production, and the targeting of high plasma beta (the ratio of plasma pressure to magnetic-field pressure).
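For context, plasma beta is a standard figure of merit in magnetic confinement; the definition below is a textbook gloss added for orientation, not language from Hirsch’s letter:

$$\beta \;=\; \frac{p_{\mathrm{plasma}}}{B^{2}/(2\mu_{0})},$$

where $p_{\mathrm{plasma}}$ is the plasma’s thermal pressure, $B$ is the confining magnetic field strength, and $\mu_{0}$ is the vacuum permeability. A high-beta concept confines more plasma pressure per unit of magnetic-field energy, which generally means smaller, less expensive magnets and a more compact reactor.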

Second, it is readily apparent that any new fusion power concept will involve a large number of engineering questions that must be addressed on a timely basis. Does the concept appear to extrapolate to an attractive power plant, meeting established utility criteria? How close to acceptable is it likely to be? What potential commercial problems must researchers address during project development, and are these issues being properly addressed on a timely basis? Have any showstoppers been identified, and if so, how will they be addressed?

In this regard, it is imperative that commercially experienced engineers who are independently managed (this is essential) be given free rein to design a power plant based on a proposed fusion power concept, and that those engineers provide guidance gleaned from their analyses to the researchers. Without that continuing, independent input and related interactions, physicists can veer off the track that is necessary for ultimate project success. The absence of such evaluations and guidance is one major reason why the ITER-tokamak debacle occurred and billions of dollars have been and are being squandered. It is hoped that Tri Alpha has benefitted from that very painful experience and is acting accordingly.

Robert L. Hirsch

Senior Energy Adviser, Management Information Systems Inc.

Former Head of the U.S. Fusion Program, 1972 to 1976

Genetic goose chase

In “The Search for Schizophrenia Genes” (Issues, Winter 2016), Jonathan Leo notes that genetic searches at best have revealed a set of small-effect genomic variants that together explain less than 4% of schizophrenia liability. He adds that these susceptibility variants are close to equally common in the general population, with the differences appearing significant only when enormous sample sizes are assembled.

Although a 4% explained variance in schizophrenia liability has no practical utility, as Leo asserts, it is also a question whether even this small number is reliable. In a study central to the field—the 108 single nucleotide polymorphism (SNP) study, conducted by the Schizophrenia Working Group—results from the replication sample are telling, if one takes the effort to find them in the extensive online supplementary material. In the replication sample, as compared with the discovery sample, 13 of the 108 SNPs differed in the opposite direction between cases and controls; 87 failed to reach an uncorrected significance level of p ≤ 0.05; and only three reached the adequate Bonferroni-corrected significance level. Replication failures plague psychiatric genetics in general and indicate that the null hypothesis of no effect remains to be adequately rejected.
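For orientation, the Bonferroni threshold implied here is simple arithmetic (my illustration, assuming the correction is taken across the 108 replication tests rather than some other family of comparisons):

$$\alpha_{\mathrm{per\ SNP}} \;=\; \frac{0.05}{108} \;\approx\; 4.6 \times 10^{-4}.$$

In other words, a replication p-value would need to fall below roughly 0.00046 to count as significant after correction, a bar that, as reported above, only three of the 108 SNPs cleared.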

The validity of genomic sequence alterations in schizophrenia would be strengthened by demonstrating associations with the well-documented volumetric changes (usually shrinkage) that are seen in prefrontal cortical and subcortical brain regions in this disorder. In the hitherto largest attempt to do this, the Schizophrenia Working Group recently published a mega-analysis (Franke et al., Nature Neuroscience) of SNP associations to brain volumes in 11,840 patients and controls from 35 countries. No single SNPs or “polygenic risk scores” from the 108 study were significantly associated with size of the amygdala, hippocampus, thalamus, nucleus accumbens, or any other subcortical region. Hence, this study comes close to falsifying the hypothesis that SNPs contribute to the most commonly observed neurobiological changes in schizophrenia. Instead, the results strengthen the hypothesis that any schizophrenia risk alleles have been eliminated from the population by natural selection, because people with the disorder have reduced fecundity, reproductive disadvantages, and increased mortality at early ages.

Studies of person-environment interactions suggest that the genetic hypothesis is redundant. As noted by Leo, child psychosocial adversity is highly prevalent in patients with psychosis, with repeated demonstrations of dose-response associations. The dynamic from stress exposure to psychotic symptoms is likely to include the impact of stress on epigenetic processes, such as cytosine methylation, that alter gene transcription rates and neural architecture. Accordingly, child psychosocial adversities lead to the same neurobiological changes as those seen in schizophrenia, including volume loss in subcortical and prefrontal cortical brain regions, changed morphology and functioning in interneurons and pyramidal cells, and altered functional connectivity patterns in neural networks. The behavioral consequences of early life adversities include increased stress reactions to daily life events, cognitive difficulties, and suspiciousness of others—all of which are also characteristic of psychosis.

As Leo indicates, future preventive and treatment efforts for schizophrenia will benefit from embracing perspectives beyond genetics alone.

Roar Fosse

Division of Mental Health and Addiction

Vestre Viken Hospital Trust, Norway

Seeing through the smoke

As Lynn Kozlowski correctly observes in “A Policy Experiment Is Worth a Million Lives” (Issues, Winter 2016), not all tobacco products present equal health risks, and the nation’s regulatory response should take account of these differences.

The Family Smoking Prevention and Tobacco Control Act of 2009 empowers the Food and Drug Administration (FDA) to permit companies to market novel tobacco products as presenting less risk than traditional products when doing so “is appropriate for the protection of the public health,” and FDA is currently reviewing an application by Swedish Match seeking permission to claim that its “snus” products are less harmful than other tobacco products. Moreover, FDA has indicated its intention to assert regulatory jurisdiction over electronic nicotine delivery systems (ENDS), and in so doing has acknowledged that “[e]merging technologies, such as the e-cigarette, may have the potential to reduce the death and disease toll from overall tobacco product use . . .” The key regulatory challenge for FDA—and for the states within their own residual authority—is to craft policies that reduce initiation, promote cessation, and shift tobacco users toward less harmful patterns of use.

However, a “harm reduction” strategy of product regulation entails a serious risk of regulatory mistake if use patterns diverge significantly from those projected by policy analysts. As FDA observed in its ENDS proposal, if use of these products results “in minimal initiation by children and adolescents while significant numbers of smokers quit,” the net impact at the population level could be positive. If, on the other hand, “there is significant initiation by young people, minimal quitting, or significant dual use of combustible and non-combustible products, then the public health impact could be negative.” In such a delicate behavioral context, a cautious approach is imperative, especially in relation to initiation. The public health gains from shifting currently addicted smokers to ENDS could be completely offset by a new wave of tobacco-related morbidity and mortality attributable to a significant increase in initiation of ENDS by young people who would not otherwise have used tobacco.

This is why Kozlowski’s proposal that states “consider selectively and differentially raising the purchase age of tobacco and nicotine products” is worrisome. Risk-based policy making is a potentially sensible strategy for shifting users from combustible products to non-combustible ones, but it is not a sensible way of thinking about tobacco use initiation, which occurs almost entirely during adolescence and young adulthood. The marked decline in initiation over the past 15 years is attributable to the emergence of a strong tobacco-free social norm among adolescents and young adults. It would be a huge mistake to choose, as a matter of policy, to send young people an equivocal message (e.g., “you’re not old enough to use alcohol or cigarettes, but it’s OK to use e-cigarettes”). This is especially dangerous when promotional expenditures for these products are escalating exponentially, largely targeted at young people; a significant proportion of teenagers are already using e-cigarettes; and the likely trajectories of use are unknown.

It is true, of course, that any legal age of purchase is arbitrary, but a line must be drawn somewhere, and where we draw the line matters. When the voting age was lowered to 18, most states unthinkingly lowered the “age of majority,” including the minimum legal age for the purchase of alcohol, from 21 to 18. This turned out to be a major public health error, leading to a substantial increase in alcohol-related driving fatalities among young adults. In 1984, Congress leveraged threatened decreases in highway funds to induce states to restore the legal drinking age to 21. The Institute of Medicine’s recent study on the minimum legal age for tobacco products concluded that raising the age from 18 to 21 would significantly reduce rates of initiation, especially among 15- to 17-year-olds, by reducing their access to social networks of smokers.

Although ENDS and reduced-risk products such as snus should not be treated the same as cigarettes for all regulatory purposes, the minimum legal age should be raised to 21, without equivocation, for all tobacco products. If every state raised the minimum legal age to 21 today, there would be 3 million fewer adult smokers in 2060. Initiation of ENDS use by teens and young adults might save even more lives by reducing the likelihood that they will ever initiate more harmful forms of tobacco use, but it is just as plausible to believe that it would have the opposite effect. In short, doing anything to encourage use of ENDS by teens is not a prudent public policy.

Richard J. Bonnie

Harrison Foundation Professor of Medicine and Law

University of Virginia

Lynn Kozlowski correctly suggests that public health would further improve if states raised their minimum legal age for cigarette sales to a higher age than for sales of far less harmful vapor products and smokeless tobacco products. The scientific and empirical evidence consistently indicates that cigarettes account for more than 99% of all tobacco-related morbidity, disability, mortality, and health care costs; that noncombustible vapor and smokeless tobacco products are 99% (plus or minus 1%) less harmful than cigarettes; and that these low-risk smoke-free alternatives have helped millions of smokers quit smoking or sharply reduce their cigarette consumption, or both.

Cigars and pipe tobacco pose significantly lower risks than cigarettes, but those products are not effective risk reduction alternatives for cigarette smokers.

Public policies that treat all tobacco and vapor products the same not only protect cigarettes and threaten public health, but also mislead the public into inaccurately believing that all of these products pose similar risks.

By setting a higher minimum sales age for cigarettes (e.g., to 19 or 21 years) than for smoke-free alternatives, states not only would help prevent teen smoking (by banning cigarette sales to high school seniors), but also would inform the public that smoke-free alternatives are less harmful than cigarettes.

Similarly, public health and tobacco risk knowledge would be greatly enhanced if states taxed cigarettes at a significantly higher rate than low-risk smoke-free alternatives, and if state and municipal governments rejected lobbying efforts to ban the use of smoke-free alternatives everywhere smoking is banned.

Unfortunately, in their zeal to attain an unattainable “tobacco-free world,” those who lobby to regulate vapor products, smokeless tobacco products, and what are known in government-speak as “Other Tobacco Products” in the same manner as cigarettes want the public to believe, inaccurately, that all tobacco and vapor products are just as harmful as cigarettes. This is partly why surveys consistently find that 90% of people in the United States inaccurately believe that smokeless tobacco is as harmful as cigarette smoking, and why a growing share of them inaccurately believe that vaping is as harmful as smoking.

One of the worst perpetrators of this unethical risk deception has been the U.S. Centers for Disease Control and Prevention, which replaced its long-standing statement that “cigarette smoking is the leading cause of disease and death” with the claim that “tobacco use is the leading cause of disease and death” in virtually all of its tobacco-related research and reports during the past decade.

Smokers have a human right to be truthfully informed that smoke-free tobacco and nicotine alternatives are far less harmful than cigarettes. Therefore, public health officials and professionals have an ethical duty to truthfully and consistently inform smokers that smoke-free alternatives are far less harmful than cigarettes, and to advocate for stricter regulations for cigarettes than for lower risk alternatives.

Bill Godshall

Executive Director

Smokefree Pennsylvania

Leveraging global R&D

“How to Bring Global R&D into Latin America: Lessons from Chile” (Issues, Winter 2016) is a highly illuminating and timely article that may well hit a nerve with policy makers and institutional managers. There are three reasons for making this claim:

First, as authors José Guimón, Laurens Klerkx, and Thierry de Saint Pierre rightly suggest, research and development has essentially become a global affair—not only for companies, eminent research universities, and individual researchers, but also (or maybe especially) for countries that are trying to close the gap with leading science nations. Many countries are pursuing increasingly sophisticated and innovative policy strategies to tie global knowledge networks to local needs and development priorities. Yet these internationally oriented capacity-building efforts have received relatively little systematic attention from policy analysts and commentators, particularly in certain regions of the world. As the authors point out, Chile—a member of the Organisation for Economic Co-operation and Development since 2010—represents an interesting case study, both for its exemplary role within South America and for its ambitious and innovative partnership program.

Second, large-scale capacity-building partnerships of the type described in Chile are on the rise globally. In what colleagues and I have called Complex International Science, Technology, and Innovation Partnerships (or CISTIPs for short), countries are increasingly looking to team up with foreign expert partners to build institutional capacity in specific scientific and technological domains. Although CISTIPs have long existed in certain technological sectors (e.g., nuclear power or space technology, where countries have typically built their first power plant or satellite in collaboration with established nuclear or space nations, respectively), upstream capacity-building centered on universities and other research organizations is a much more recent phenomenon, though not without precedent. Initiatives such as the Chilean International Centers of Excellence (ICE) program differ from traditional research collaborations and cross-border higher education activities in that they seek to address simultaneous concerns in human resource development, research capability, translational capability, institution-building, and regulatory frameworks, among others, and they often involve hybrid multinational, multi-institutional architectures. Here, too, the case of Chile provides valuable insights into how CISTIPs might fare more broadly within a Latin American context.

Third, research and development has to a certain extent become a branding game. For countries to put themselves on the map and benefit from globally mobile talent, knowledge, and capital, they need to demonstrate credibility in, and long-term commitment to, science and technology. Conversely, leading research organizations around the world increasingly depend on brand recognition for access to interesting research opportunities and additional funding sources. If successful, the Chilean case can provide guidance for emerging science nations on how to nimbly position themselves and exploit international collaboration opportunities to their advantage. At the same time, it could provide a model for eminent research universities for how to engage in emerging science nations. The growing popularity of CISTIPs indicates that research universities should consider international capacity-building efforts more seriously as part of their core mission, going beyond the traditional trifecta of education, research, and (local) economic development.

The article thus provides a laudable stepping-stone for further inquiry into the Chilean partnership program and beyond. Among the questions raised are: Where did the Chilean government look for inspiration for the ICE program? How did earlier capacity-building efforts in Chile and elsewhere in Latin America (e.g., the satellite development partnership between the Chilean Air Force and the British company SSTL, the telescope cluster in the Atacama Desert, or the Massachusetts Institute of Technology’s institution-building efforts in the Argentinian province of Mendoza) affect the choices made by the Chilean government—if at all? How do the various ICEs differ in terms of goals, partnership architecture, and actual performance—and how does this compare with similar initiatives in other countries? Indeed, it would be very illuminating to contrast the case of Chile with similar initiatives in Singapore, Portugal, and various Middle Eastern countries in terms of visions, architectures, and partner selection, as well as politics and social uptake.

Sebastian M. Pfotenhauer

Assistant Professor of Innovation Research

TUM School of Management

Technische Universität München

Munich, Germany

Should other countries imitate what Chile has been doing and try to attract the research and development (R&D) departments of foreign universities, as José Guimón, Laurens Klerkx, and Thierry de Saint Pierre seem to imply? Should the government of Honduras, for example, try to convince the University of California, Davis, a global leader in agricultural research, to open a research facility in the country to improve Honduran agricultural production? Should Peru continue with its Formula C program?

The answer crucially depends on one factor: the capacity to absorb the knowledge and innovation that spill over from the foreign research facility. It is true that an R&D center may help strengthen the national innovation system—through, for example, scientists and engineers finding jobs in local firms and universities and transferring with them all of their knowledge and experience, or new startup firms spinning off from the center—but this is not likely to happen in the absence of strong local scientific and technological capabilities. Research has shown that a certain level of research activity is indeed needed simply to absorb and make efficient use of scientific research and technologies developed elsewhere, and not only to create new knowledge. The question is not whether foreign R&D centers can replace local R&D: both are needed, and the former does not easily lead to the latter.

Moreover, it is worth remembering that knowledge creation increasingly requires a combination of different elements that cannot all be created, or are not all available, in a single country. Hosting foreign R&D centers can be an advantage, just as having national universities and firms located abroad, with strong linkages to foreign institutions, can be.

Therefore, a policy such as the one adopted in Chile can serve as a lesson to other developing countries only if their firms, universities, and government institutions also invest in research, development, and innovation. Some form of innovation system, even if only fragile and precarious, must be in place and in the process of developing. This includes networks of research institutions; firms demanding research from them and investing themselves in research and innovation; and the government orchestrating and regulating efforts, fostering coordination, and addressing market failures.

The results of the Chilean program can be fully assessed only in the long run. However, its success will need to be measured not so much on the basis of the nationality of the owners of the R&D centers, but on the deep linkages and interactions that they will have generated with local firms, universities, and the economy. Chile’s lessons for developing countries can only be partial: foreign R&D centers cannot substitute for, but rather complement, a national innovation system.

Carlo Pietrobelli

Lead Economist, Competitiveness and Innovation Division

Inter-American Development Bank

Professor of Economics, University Roma Tre, Italy

Cite this Article

“Forum – Spring 2016.” Issues in Science and Technology 32, no. 3 (Spring 2016).