Forum – Fall 2011

Adapting to climate change

Patrick Gonzalez’s “Science for Natural Resource Management Under Climate Change” (Issues, Summer 2011) was a very good overview of federal natural resource management agencies’ attempts to deal with the challenges of climate change to their missions and their trust resources, as well as the role that science can play in meeting those challenges. But climate change is not just a federal issue, especially in the realm of fish and wildlife management. In fact, state governments have the principal responsibility and authority for managing our nation’s fish and wildlife resources.

The federal government has authorities and responsibilities for fish and wildlife management in the context of interstate commerce, the treaty powers of the United States, and laws such as the Endangered Species Act and the Migratory Bird Treaty Act. These authorities require us to anticipate and manage for the effects of climate change on fish and wildlife resources, as we are required to address any other factor affecting the long-term health and abundance of these resources.

They do not, however, give us the authority or responsibility to regulate the causative factors of climate change. Our responsibility is to ensure that fish and wildlife are able to adapt in a warming world, and that will be challenging enough.

Federal and state wildlife agencies have developed a close and mutually supportive working relationship to deal effectively with these shared responsibilities. If we are to effectively manage our living resources in this era of accelerating climate change, this working relationship will need to become even closer.

At the direction of Congress, the U.S. Fish and Wildlife Service (USFWS), on behalf of the Department of the Interior and in collaboration with the Council on Environmental Quality, is co-leading the development of a National Climate Adaptation Strategy for Fish, Wildlife and Plants. This strategy is being developed in partnership with the National Oceanic and Atmospheric Administration and the state wildlife agencies. Staff from 16 federal, 14 state, and 2 tribal fish and wildlife agencies are currently compiling a draft strategy that will identify the highest-priority strategies and actions for helping fish, wildlife, and plants adjust to the anticipated effects of climate change. A draft should be available for public review by mid-December of this year.

At the same time, we are not waiting to act. The USFWS has established regional climate science partnerships with the U.S. Geological Survey, other state and federal partners, and the conservation community, to expand our ability to turn continental climate science into knowledge that managers can use to make better decisions on the ground. We’ve also helped create a network of locally driven, solution-oriented Landscape Conservation Cooperatives that will take advantage of expertise across the conservation community to set population and habitat goals, plan and execute conservation across landscapes, and enhance our research, monitoring, and evaluation capabilities.

We urge your readers to look for and comment on the draft National Climate Adaptation Strategy for Fish, Wildlife and Plants when it is released and to join us in our efforts to help wildlife and natural systems cope with a changing climate.

DAN ASHE

Director, U.S. Fish and Wildlife Service

Washington, DC


Small modular reactors

As a Nuclear Regulatory Commission (NRC) commissioner (and former nuclear attack submarine skipper), I enjoyed reading Ross Carper and Sonja Schmid’s eloquent description of current small modular reactor (SMR) activities in “The Little Reactor That Could” (Issues, Summer 2011). I offer these comments in my capacity as an individual commissioner with responsibilities for regulating commercial nuclear safety. As a safety regulator, the NRC does not promote the use of nuclear technologies; that role belongs to the Department of Energy. But I do believe that the prospect of SMRs provides a valuable opportunity for the United States to enhance nuclear technology and thereby improve nuclear safety domestically and internationally.

COMPETITION IN TECHNOLOGY IS A GOOD THING—GOOD FOR INDUSTRY, GOOD FOR SAFETY, AND GOOD FOR THE PUBLIC.

The realization of breakthrough technology in any sector requires fortitude, imagination, and a focused effort by the government and the private sector working in unison to meet the needs of society. This is especially true when significant capital investment is required, as is the case in nuclear technology. SMRs conceptually offer passive safety features that may directly address lessons learned from the extended station blackout experienced at the Fukushima Daiichi station in March of this year. Our country has the potential to further explore nuclear technology safety enhancements in several arenas, SMRs being one.

The authors ask a highly relevant question: “Does inertia trump innovation in the U.S. nuclear industry?” Yes, there may be an underlying tension in the debate about whether small modular designs are more likely to succeed in the regulatory process if a design is evolutionary rather than revolutionary. But I think the NRC is well positioned to receive and competently address SMR designs of either flavor. I have observed firsthand the NRC’s sustained efforts to address SMR-related policy and technical issues. The NRC staff has been heavily engaged with the SMR community through workshops and pre-licensing application meetings. The policy issues being addressed include control room staffing, security, and emergency preparedness. Whether a new SMR design is evolutionary or revolutionary, I feel safe in stating that the NRC’s highly professional staff is fully prepared to deal with prospective SMR license applications. Although revolutionary SMR designs may present novel issues, I am confident that the agency’s staff can and will address them. Furthermore, the NRC already has in place mature, well-vetted processes to bring any new policy issues to the commission for resolution.

Competition in technology is a good thing—good for industry, good for safety, and good for the public. We raise the bar and perform better when presented with different approaches to harnessing technology. In that light, I look forward to the forthcoming policy decisions affecting the realization of SMRs.

WILLIAM C. OSTENDORFF

Commissioner

U.S. Nuclear Regulatory Commission

Washington, DC


Ross Carper and Sonja Schmid are right to focus on the nuclear salesmanship of small modular reactors (SMRs), which slides seamlessly from the glib to the slick. A depiction of a $50 million machine to purify water in a small African village would be laughable if it did not involve spreading nuclear materials around the globe.

The Hyperion lead-bismuth–cooled reactors are unlikely to live up to the advertised low cost (just one-third to one-fifth of the pre-Fukushima cost per kilowatt of a large reactor). Economies of scale, characteristic of nuclear reactors and most thermal power generation, would be lost. Installing 25 megawatts in one location would still require transmission and grid connections in most cases. More than one large reactor is built at a single site in order to obtain economies of scale in transmission. Fukushima Daiichi had six large reactors. A lone reactor at a site would still require security arrangements, emergency response equipment and personnel, and a control room.

Despite the bury-it-and-forget-about-it promotion, small reactors can leak. The first lead-bismuth reactor, installed on a Soviet submarine, leaked, and the molten metal mixture froze on contact with air. The result was so messy that the entire submarine had to be scrapped. Other Soviet liquid-metal reactors also suffered leaks.

The Hyperion reactor would probably use medium-enriched uranium fuel, enriched to between 15 and 20%, as compared with the 3 to 5% used in light water reactors. The additional separative work needed to enrich 15% material to 90% bomb-grade uranium is quite small. Medium-enriched uranium is not a good material to spread around.
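To put a rough number on that claim, here is a minimal back-of-the-envelope sketch (mine, not the author’s) using the standard ideal-cascade separative work unit (SWU) formula. The feed and tails assays (0.711% natural uranium, 0.25% tails) and the 20% and 90% product assays are illustrative assumptions chosen to bracket the letter’s figures.

```python
import math

def swu_value(x):
    """Separative-potential value function V(x) = (2x - 1) * ln(x / (1 - x))."""
    return (2 * x - 1) * math.log(x / (1 - x))

def swu_per_kg_product(x_product, x_feed, x_tails):
    """SWU needed to produce 1 kg of product at assay x_product from feed at
    x_feed, discarding tails at x_tails (ideal-cascade mass balance)."""
    feed = (x_product - x_tails) / (x_feed - x_tails)   # kg of feed per kg of product
    tails = feed - 1.0                                   # kg of tails per kg of product
    return (swu_value(x_product)
            + tails * swu_value(x_tails)
            - feed * swu_value(x_feed))

# Illustrative assays (assumptions, not figures from the letter).
natural, tails, meu, heu = 0.00711, 0.0025, 0.20, 0.90

swu_nat_to_meu = swu_per_kg_product(meu, natural, tails)       # ~42 SWU per kg of 20% material
meu_feed_per_kg_heu = (heu - tails) / (meu - tails)            # ~4.5 kg of 20% feed per kg of 90%
swu_meu_to_heu = swu_per_kg_product(heu, meu, tails)           # ~19 SWU for the final step
swu_total = meu_feed_per_kg_heu * swu_nat_to_meu + swu_meu_to_heu  # ~208 SWU per kg of 90%

print(f"Share of total separative work already done at 20%: "
      f"{1 - swu_meu_to_heu / swu_total:.0%}")                 # roughly 90%
```

On these assumptions, roughly nine-tenths of the separative work needed for weapons-grade material has already been done once uranium reaches 20%, which is the sense in which the remaining step is “quite small.”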

Liquid-metal reactor experience in the United States, Europe, and Japan has mainly been with sodium-cooled reactors and has had no discernible learning curve. The most recent demonstration reactor, Monju in Japan, was commissioned in 1994, had a sodium fire in 1995, was reopened for testing in 2010, and shut again in 2011 due to another accident. Even apart from mishaps, sodium-cooled reactors have had higher capital costs than light water reactors.

Nor are small light water reactors going to offer much comfort in the real world. Reactors were made large because they offer economies of scale. The materials and fabrication costs per unit of power go up as the size goes down. Although some costs could be lowered by mass manufacturing, the savings will not be sufficient to overcome the economies inherent in having a large amount of power at a single site. That is why most modular reactor proposals involve multiple reactors at a site. But the first unit constructed would be the most expensive, because the entire site must be set up to accommodate several reactors that may or may not be built.
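The scaling argument can be illustrated with the classic 0.6-exponent capital-cost rule often used for process plants. The sketch below is illustrative only; the exponent and the $4,000-per-kilowatt reference cost for a 1,000-MW plant are my assumptions, not figures from the letter.

```python
def capital_cost(p_mw, ref_p_mw=1000.0, ref_cost_per_kw=4000.0, exponent=0.6):
    """Total capital cost (dollars) of a plant of p_mw electrical output,
    scaled from a reference plant by (P / P_ref) ** exponent."""
    ref_total = ref_cost_per_kw * ref_p_mw * 1000.0   # reference plant total cost
    return ref_total * (p_mw / ref_p_mw) ** exponent

# Cost per installed kilowatt rises sharply as unit size falls.
for size_mw in (1000, 300, 25):
    per_kw = capital_cost(size_mw) / (size_mw * 1000.0)
    print(f"{size_mw:>5} MW unit -> about ${per_kw:,.0f}/kW")
# 1000 MW -> $4,000/kW; 300 MW -> ~$6,500/kW; 25 MW -> ~$17,500/kW
```

Whether mass manufacturing can flatten that curve enough is exactly the open question the letter raises.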

The SMR vendors claim that mass manufacturing would create domestic jobs, but the most likely place for mass manufacture would be China or another developing country. How would recalls be handled?

Nuclear reactors, small or large, are machines of the past: an expensive and risky way to boil water. We can do better in the age of iPhones.

ARJUN MAKHIJANI

President

Institute for Energy and Environmental Research

Takoma Park, Maryland

[email protected]


Aiding innovation

Federal Reserve Chairman Ben S. Bernanke’s “Promoting Research and Development: The Government’s Role” (Issues, Summer 2011) eloquently summarizes much of what is known and not known about the economics underlying the generation of R&D and the role of government in bringing it about. There is a consensus that in the economics of technological change, there does not exist what economists call “a first-best” policy. The free market does not work very well here, for all the reasons that he points out. Government does not work well here either. We just muddle through. Given how poorly our institutions are designed to deal with the complexities of innovation, it seems indeed surprising that we have experienced as much technological progress as we have in the past two centuries.

What should be stressed beyond the issues raised by Bernanke is the widespread fear of innovation. Technology has enemies of various kinds, and the policy questions they raise are no less complex and debatable than the ones mentioned by Bernanke. The enemies of new technology come in different forms. Some are vested interests that realize that a new technology may mean that they lose their jobs or that their capital depreciates. Historically, workers and their organizations have often resisted innovations for that very reason, but some capitalists at times have been equally conservative.

Another source of resistance to innovation is that its costs and risks are overblown because they are unknowable in advance. This can lead to overreaction when something goes wrong. Thalidomide, a powerful and versatile medication (especially effective in treating leprosy), was banned for decades because of its dramatic effects in misshaping the babies of pregnant women who had taken the drug as a sedative, although it would have been easy to place warning labels on it. The response to nuclear power in the United States after the Three Mile Island problem was equally extreme. Despite the fact that there were no casualties and minimal damage, well-meaning antinuclear activists took advantage of it to prevent the growth of nuclear power (the same appears to be happening today after the Fukushima accident). The irradiation of vegetables (producing no negative effects except for killing pathogens) and the use of genetically modified organisms are strongly resisted in Europe out of a somewhat fanciful fear of “Frankenfoods.” Part of a good technology policy is to protect innovators from their enemies, while at the same time protecting the public from unbridled innovations that may endanger its health or the environment. Government policy should enlighten the public to ensure that exaggerated resistance is held in check and is not allowed to close down potentially beneficial avenues of innovation even if they appear radical.

Furthermore, a wise government policy must be built on the realization that research is inherently inefficient and wasteful. Innovation is an evolutionary process, and most new projects, even after winnowing, fail to deliver. But that is the price to be paid for progress: Out of a hundred projects, perhaps two will work—if only we knew in advance which two! But we don’t. Politicians such as the late William Proxmire, who became famous for ridiculing what he thought was wasteful research, failed to understand how the road to successful innovation is studded with duds and often takes unexpected detours. The latest installment of this misguided approach is Senator Coburn’s recent attack on the National Science Foundation (NSF), in which the senator, much like Proxmire, appoints himself to judge which government-funded projects are worthwhile and which are not. Politicians are in no position to make such judgments. Yet Coburn got 35 other senators to vote a few years ago to eliminate NSF social science funding, an ominous sign.

Things are much worse now. In the current budget-cutting atmosphere, R&D has a relatively weak constituency. The main beneficiaries of today’s research are future generations, the very grandchildren that some deficit hawks wish to save from the “burden” of a large government debt. Deficit hawks claim to represent these yet-unborn Americans. Yet will posterity really be better off in a technologically stagnant world in which innovation slowed down 20 years earlier? Not all tax revenue spent on R&D will bear fruit, but experience shows that some of it will have enormous if unforeseeable benefits, possibly decades from now. A wise policy must be based on that premise.

JOEL MOKYR

Robert H. Strotz Professor of Arts and Sciences

Professor of Economics and History

Northwestern University

Evanston, Illinois

[email protected]


It is important to note that the policy ideas in Federal Reserve Chairman Ben S. Bernanke’s speech focus entirely on the traditional innovation paradigm and entirely ignore the new, open-user innovation paradigm that is increasingly dominant in the Internet age. These two paradigms have very different policy implications. In my view, it is very important that policymakers, and the economists whose research supports their work, update themselves and learn about the new and very attractive policy options that open-user innovation offers.

The traditional innovation paradigm has dominated policymaking discussions and practice at least since Schumpeter (1934). It begins, as did Bernanke’s speech, with the assumption that producers are the source of innovation and that these firms make the R&D investments needed to develop new products and services. Imitation is cheaper than innovation, the story then goes. As a result, if not stopped, imitators of an innovation could copy what innovators develop and sell it more cheaply than could the original innovators. The presence of cheap copies on the market would then reduce innovators’ profits, and so reduce incentives to invest in innovation, with resulting losses to society.

The implications of this traditional paradigm for policy? To offset the harmful “spillovers” from innovators to imitators, as Bernanke argues, producers must be granted public incentives in the form of R&D subsidies, and/or protections from imitators in the form of intellectual property law. The only thing required of policymakers, the story concludes, is to alertly tweak these known levers to achieve optimal results.

The new open-user innovation paradigm that the Internet age has brought into prominence is quite different, and its importance is well documented. For example, it has recently been shown that consumers invest much more in developing new products for themselves and freely revealing what they have developed (the new paradigm) than producers invest in developing new products for consumers (the traditional paradigm).

The new paradigm also has very different policy implications. The open-user innovation paradigm begins with the fundamental observation that user innovators have a very different basic innovation incentive than do producers. Users create an innovation in order to use it rather than sell it. When rivalry among user firms or individuals with respect to an innovation is low, as is generally the case, the users generally are willing to openly reveal their innovation to imitators without compensation. Indeed, they often reap private benefits from free revealing.

The net result is a very large flow of open—free for anyone to use—innovations whose existence is not contemplated in the traditional innovation paradigm. In other words, and specifically with respect to Bernanke’s policy suggestions, in the large and increasing arena where the open-user innovation paradigm has economic advantages today, intellectual property protections are not needed to induce private innovation investment.

Because intellectual property rights have well-known negative effects on social welfare, this crucial observation implies that policymakers can and should reexamine today’s secular trend toward increasingly zealous enforcement of these rights. In my view, it is exceedingly unfortunate that governments today spend many billions assisting innovators to monopolize access to their innovations via patent offices, legislation, etc., while at the same time not understanding or supporting the efforts of many innovators, user innovators especially, to volunteer open access to their innovations if and as they wish to do so.

Note that I am not suggesting the abolition of intellectual property rights or forcing openness on innovators. Instead, I am suggesting that policymakers level the playing field by also understanding and supporting the economically and socially valuable option to be open if and as innovators wish to do this. Creative Commons, Open Source software and Open Source Hardware practices and licenses, and patent pledging by major firms such as IBM to support the information commons are examples of private efforts currently underway to this end.

ERIC VON HIPPEL

T. Wilson Professor of Innovation Management

MIT Sloan School of Management

Massachusetts Institute of Technology

Cambridge, Massachusetts

[email protected]


In support of apprenticeships

In “Apprenticeships: Back to the Future” (Issues, Summer 2011), Diane Auer Jones presents a thoughtful and convincing analysis of why the United States should embark on a major expansion of apprenticeships. As she ably points out, we should learn from high-quality, well-developed apprenticeship systems operating successfully in other countries, such as Switzerland.

Jones properly highlights many advantages of apprenticeships for students, including combining theoretical and practical learning, opportunities to cultivate critical thinking skills, help from mentors, transparency in learning about careers, and earning an income while learning to master occupational skills.

In addition, completing an apprenticeship provides students with a sense of pride and occupational identity. They begin to see themselves as part of a community of practice in ways that resemble what physicians and lawyers experience. Equally important, apprentices develop the critical employability skills that employers increasingly demand, such as communication, teamwork, problem-solving, reliability, persistence, and emotional stability. These skills improve as apprentices learn in a graduated fashion through experiences in real workplace settings, under the close supervision and mentoring of an occupational expert.

The evidence showing large long-term earnings gains for apprentices strengthens the case. In a study of workers in Washington state who entered Job Service offices and exited from various education and training programs, Kevin Hollenbeck estimated that the present value of earnings gains reached $269,000 per apprentice, as compared to $96,000 to $123,000 per community college attendee in an occupational field. Moreover, apprentices achieve these exceptional jumps in earnings without having to risk years of lost earnings and high tuition and without the extremely large government subsidies for college.

Firms benefit as well. In the United States, nearly all employer sponsors express satisfaction with their apprenticeships. They report that the programs help meet their demand for skilled workers, reliably show which workers have relevant skills, raise productivity, strengthen worker morale and pride, and improve recruitment and retention of workers. Studies in several other countries reveal that the majority of employers recoup the costs of their apprenticeships by the time the apprentices complete their training.

Although the U.S. registered apprenticeship system is training more than 400,000 workers, that amounts to only about 0.3% of the work force, far behind other countries. In Australia, Germany, and Switzerland, apprentices make up about 4% of the work force; even in France, the figure is 1.7%, nearly six times the U.S. level.

Fortunately, the experience of England and South Carolina suggests that a large apprentice expansion is feasible. Between 2000 and 2010, apprenticeships in England increased dramatically. By 2014, British officials expect that as many young people will enter apprenticeships as enter higher education. In South Carolina, the state government funded a $1 million per year initiative housed at the state’s technical college system and employer tax credits of $1,000 per apprentice per year. This effort has stimulated one new employer-sponsored apprenticeship program per week and more than doubled the number of apprentices in the state.

Making apprenticeship opportunities widely available and well utilized will require national leadership. The recommendations presented by Auer Jones generally offer a good start with two amendments. First, funding for marketing and technical assistance to achieve expanded employer participation is critical. The current tiny budget for the Office of Apprenticeship should be tripled, with the expectation that it will generate substantial employer use of apprenticeship training and ultimately bring the system to scale. Second, pushing high accreditation standards too quickly may scare off many employers. The British experience suggests that funding for the training component of apprenticeships can stimulate employer demand and that upgrading quality can follow in turn.

The time is right to attract public support for using an expanded apprenticeship system to widen the routes to career success. With sufficient leadership and at least acquiescence from the educational community, the country can improve career opportunities, worker satisfaction, and productivity while saving education and training dollars at the same time.

ROBERT I. LERMAN

Institute Fellow in Labor and Social Policy

Urban Institute Professor of Economics

American University

Washington, DC

[email protected]


Better regulation for research universities

The core point in Smith et al.’s “Reforming Regulation of Research Universities” (Issues, Summer 2011) is on the mark: While fully recognizing the need for accountability, we need to reduce the regulatory burden on researchers and their home institutions. This is essential if America is to reap the full benefits of its R&D investments. The fact that faculty are spending 42% of their research time on administrative requirements is not only onerous for them and a disincentive to go into research, but it is also wasteful of their time, effort, and expertise. Smith et al. offer a variety of useful suggestions about how we might reduce that burden. From my perspective, the best place to start may be the National Science and Technology Council (NSTC), which represents all of the major U.S. science funding agencies. The NSTC’s Research Business Models Subcommittee has been working on this issue for a long time, but, as Smith et al. point out, we have not seen much concrete progress from its work. I have argued before that we need a much more intensive effort to get this kind of waste under control. Toward that end, it might also be useful to have the President’s Council of Advisors on Science and Technology take up the issue. At a minimum, in the current budget climate, one can hope that the administration will now see this as an even more urgent agenda item and attend to it with vigor.

This issue has a global dimension too, and that makes action even more urgent. Every issue of modern life is global in nature and has a science and technology component, either as a cause or a cure. If we want to bring the full power of science to bear on the world’s problems, the scientific community must be capable of functioning in a much more global way. Yet the extensive variations and redundancies in governmental funding and reporting policies across countries work against international collaboration and global coherence.

One way to approach this problem is to address issues of global coherence and compatibility during international meetings and ensure that the products include concrete action plans. The annual Science and Technology in Society forum in Kyoto or the Annual Meeting of the American Association for the Advancement of Science, as well as the biennial World Science Forum or the Euroscience Open Forum meetings in Europe, are obvious examples, because science leaders and policymakers tend to congregate at those meetings anyway. Efforts already begun at the regional level in Europe, Asia, and Africa are a useful start for identifying solutions.

As the world economy continues to struggle and as global societal problems persist and even intensify, we must enable both national and global scientific communities to function in the most efficient and effective ways.

ALAN I. LESHNER

Chief Executive Officer

American Association for the Advancement of Science

Washington, DC

[email protected]


Smith et al. provide a thoughtful overview of the growing regulatory burden on the nation’s academic research enterprise and make a compelling case for a concerted effort to remedy the situation. In light of the current financial threat facing universities in general, continuation of this waste of time and scarce resources is extremely detrimental to the nation’s capacity for innovation.

Smith et al. have done an outstanding job of identifying egregious examples of unnecessary overregulation. The Federation of American Societies for Experimental Biology, the nation’s largest federation of biomedical scientists and engineers, also recognizes the many harmful consequences of overregulation in research. As Smith et al. point out, the cost to individual researchers is substantial and reduces their productivity in the classroom and the laboratory. Their proposed framework for evaluation of research regulations is a comprehensive, timely, and thoughtful plan for addressing the regulatory issues. Harmonizing current regulations, establishing rigorous criteria for new rules, and exempting research organizations from the policies designed for large industrial organizations are goals that we salute.

We do not, however, share the enthusiasm for charging regulatory costs to federal awards or for prohibitions on cost sharing. Like Smith et al., we are concerned about the growing financial pressure on research universities. In many cases, faculty members and students have borne the brunt of the devastating consequences of the decline of federal, state, and other revenue streams. But the funding problem cannot and should not be resolved by diverting resources from the scarce federal funds for competitive research grants. Drawing away funds currently used for direct research costs will add to the challenges faculty are facing, making it difficult for them to carry out their proposed research, driving talented investigators from science, and producing a deleterious effect on our capacity for innovation.

But this is not the time to focus on our differences. The need to resolve the regulatory problem is urgent, and the climate for reform is favorable. Both the administration and Congress have called for reduction in regulatory burden. Their statements of principle, however, will not be sufficient to ensure success. In the short period of time since the Smith et al. article was published, new proposals for costly regulatory activities have appeared. The Digital Accountability and Transparency Act, for example, would apply the reporting standards adopted for the American Recovery and Reinvestment Act to all government grants. This hastily contrived plan does not present a compelling justification for the new system and would entail a costly new burden for federal agencies, institutions, and researchers, yet produce little useful information. Individual investigators, research institutions, and the groups that represent them should work closely to optimize the availability of research funds by preventing and eliminating wasteful regulatory excesses, which diminish our capacity for innovation. Smith et al. have proposed a comprehensive approach for the research community, and we look forward to collaborating with them to move that agenda forward.

HOWARD H. GARRISON

Director, Office of Public Affairs

Federation of American Societies for Experimental Biology

Bethesda, MD

[email protected]

JENNIFER A. HOBIN

Director of Science Policy

Federation of American Societies for Experimental Biology

Bethesda, MD

[email protected]


The article by Smith et al. appears at a critical time. An opportunity to significantly lessen the regulatory burden faced by scientists and universities may be at hand, as various governmental agencies appear to be welcoming input from the interested parties. Especially in this austere funding environment, relief would be welcomed by all concerned. The thoughtful analysis provided by Smith et al. will hopefully serve as a catalyst to heighten this issue’s visibility. Most of the points raised in the article are embraced by all constituents in our community, researchers and university administrators alike.

This article also provides an entrée into a discussion that would benefit from more analysis and a more open dialogue between faculty and their universities, both on campus and through their national organizations. What is the appropriate balance between support for the university infrastructure that makes research possible and the funding that is available to cover the direct expenses incurred by investigators as they carry out their research projects? How can productivity from the federal research investment be optimized in a zero-sum environment?

IN THIS RESTRICTIVE BUDGET ENVIRONMENT, IT IS CERTAINLY DESIRABLE FOR THE RESEARCH COMMUNITY, FACULTY, AND UNIVERSITY ADMINISTRATION TO SPEAK WITH A UNITED VOICE.

A set point too near either extreme has significant and deleterious consequences from both university and faculty perspectives. Underpaying universities for costs incurred will force them to divert resources from other sources (tuition, endowments, state support, philanthropy) to research, decreasing the funds available for the teaching and service missions of academe, degrading the support services underpinning investigators, and, perhaps most importantly, reducing the ability and willingness of universities to hire and retain faculty who spend a preponderance of their time in research laboratories. As they say, if you lose money on every transaction, you can’t make up for it in volume.

If the pendulum swings too far the other way, faculty will be competing for even fewer dollars, success rates will drop even further, morale will suffer, productivity will decrease, and universities will be left with increased obligations for faculty salaries that again will sap resources and discourage hiring and investment in the research enterprise. In either case, non-optimal decisions will add to the challenges confronting both faculty and universities: fewer research appointments, a malaise that compromises what should be a rewarding career choice for faculty and a point of pride for universities, and less productivity for a nation in dire need of innovation.

In this restrictive budget environment, it is certainly desirable for the research community, faculty, and university administration to speak with a united voice. There may be differences of opinion as to the “right” balance between funding for university infrastructure and funding for research itself. However, these differences are far outweighed by a universal desire to decrease the regulatory burden, increase efficiency, and allow scientists to spend more time doing science. The reforms outlined by Smith et al. provide a road map that benefits scientists, our universities, and our country.

RICHARD B. MARCHASE

Vice President for Research and Economic Development

University of Alabama at Birmingham

[email protected]

The author is a former president of the Federation of American Societies for Experimental Biology.


We must reassess federal regulations pertaining to university research, as called for by Smith et al. The nature of compliance needs to be carefully tuned to the degree of risk, and the costs of compliance need to be consistent with the dismal fiscal environment of universities. Lowering the regulatory burden on university faculty and administration will free up resources that could be used more productively in support of many deserving research priorities.

The accretion of regulation happens quietly, but relentlessly. With the best of intent, government agencies seek compliance and monitoring in all manner of important areas, such as health, safety, export controls, immigration, hazardous chemicals, and potential financial conflicts. But requirements across a couple dozen federal agencies overlap and sometimes conflict, and each agency requires its own reports, all of which layer on burden and costs. Because government reimbursement of these costs is arbitrarily limited, many new regulations essentially become unfunded mandates.

The costs of regulations do not just affect institutions; there are significant consequences for faculty. The article cites how, without adequate administrative staff, growing compliance burdens lower the morale and productivity of faculty. But there are additional penalties for faculty. Institutional funding for equipment, lab renovation, faculty startup, and research seed funding all compete with the costs of regulatory compliance. If an institution needs to hire additional staff to undertake and report on compliance, this becomes a sunk cost that is no longer available to invest in research.

A major challenge is that we do not have a good sense of how much these regulations cost. New regulations are added to the existing workload of administration and faculty, but rarely separately costed. In the aggregate, these slowly added responsibilities accumulate, either by demanding more faculty time or requiring additional staff. One intriguing question, raised by several university research leaders, is whether there ought to be a separate module for regulatory costs as part of university indirect cost pools for federal reimbursement. Though it would be complicated, such an optional accounting process would at least make compliance costs more transparent to both federal sponsors and university researchers. These ideas, and others, came from university leaders from the bulk of the Association of Public and Land-grant Universities’ 221 members, who met at five regional meetings last year to discuss ways in which public universities might better serve our many constituents.

Reacting to intense financial constraints, a university president warned: “We are haunted by the specter that the best days of our enterprise are behind us.” That doesn’t have to be. But it will take collaboration among faculty, administrators, and government officials to optimize our investments in research and wring out every inefficiency in the system. Smith et al. have provided a key framework to help us create a far more coherent and rational process for judging and implementing necessary regulations on universities.

HOWARD GOBSTEIN

Executive Vice President

Association of Public and Land-grant Universities

Washington, DC

[email protected]


Smith et al. have performed a valuable service by describing the significant impact of federal regulations on the nation’s research universities. Much of the compliance cost that arises from federally sponsored research is not reimbursed by the government. As a consequence, the universities have had to divert billions of dollars from other important educational functions and to reduce the administrative assistance that helps faculty comply with the regulations. This, in turn, has forced faculty to spend, on average, more than 40% of their federally funded research time on administrative and compliance matters, rather than on the research itself. In times of financial stringency, when federal research money is limited and most public universities are facing drastic reductions in state funding, it is vital that the federal government ensure that its policies are not drawing funds away from education or wasting the valuable time of some of the nation’s most talented scientists and engineers.

It was heartening, therefore, that almost simultaneously with the publication of the Smith et al. article, NIH issued its June 28, 2011, Request for Information: Input on Reduction of Cost and Burden Associated with Federal Cost Principles for Educational Institutions (OMB Circular A-21), on behalf of the A-21 Task Force of the National Science and Technology Council (NSTC) Interagency Working Group on Research Business Models. This request seeks input from the universities and the general public on how changes in A-21, which establishes the principles by which the universities are reimbursed for federally funded research, could reduce university costs and faculty burden. The Smith et al. article’s “framework for remedies for some regulatory burdens faced by research universities” implies that considerable reductions in university (and government) expense and faculty burden can be achieved by modifying A-21 to alter or eliminate requirements such as effort reporting.

That framework also implies, however, that modification of A-21 is only part of the opportunity open to the NSTC working group. Amelioration of the costs and burdens associated with the other regulations discussed there (human subjects, animal research, export controls, conflict of interest/research integrity, select agents and toxins, and hazardous materials) will require careful interagency planning to yield pan-agency policies and practices, as well as legislation in some cases.

It is important that the working group seek to address these other regulations even though it may not have enough time to bring all its efforts to fruition, given the possibility of a change in administrations in less than 2 years. There has been great continuity in the NSTC’s efforts to improve the government-research-university relationship and increase the efficiency of the nation’s academic research enterprise through the Clinton, Bush, and Obama administrations. These efforts have led to major improvements in policies related to cost sharing, voluntary uncommitted cost sharing, graduate student status as both students and employees, research misconduct, export controls, and student visas. It is reasonable to assume that this continuity will persist, no matter who wins the next presidential election.

It is important, as well, that the NSTC establish a mechanism for reviewing the cost and faculty time burdens of any new regulations introduced by agencies or legislation with the goal of minimizing the effects or providing compensation for the additional costs. Most universities, and particularly the nation’s public universities, are facing extremely difficult financial situations, with the prospect of further funding cuts as a result of reduced federal aid to the states. It is vital that the NSTC and the universities minimize compliance costs so that university funds can be directed towards maintaining educational quality and access.

ARTHUR BIENENSTOCK

Professor (emeritus), Photon Science

Special Assistant to the President for Federal Research Policy

Director, Wallenberg Research Link

Stanford University

Stanford, CA

[email protected]


Developing perennial grains

In “Investing in Perennial Crops to Sustainably Feed the World” (Issues, Summer 2011), Peter C. Kahn, Thomas Molnar, Gengyun G. Zhang, and C. Reed Funk make a strong case for increased planting of perennial crops of all types in order to secure the future food supply. Most of the perennial crops they mention (fruit and nut trees, oil palm, grasses, and pasture legumes) are currently grown by farmers and only lack agronomic research or policy incentives to increase acreage. However, one class of plants stands out from the rest: perennial grain crops.

Although perennial grains do not, for the most part, currently exist, technologies are available to create perennial versions of rice, maize, wheat, sunflowers, and grain sorghum. These annual crops provide a large portion of the human diet and occupy much of the arable land. Perennial grains of these types could preserve global cropland productivity without requiring substantial dietary shifts.

Increasing the acreage of perennial crops on a global scale will ultimately require the new type of research stations the authors describe. But work toward developing perennial grains is unique. The basic genetics work can begin in developed nations immediately—before any international locations are operational.

Much could be accomplished toward the development of perennial grains by simply reorienting the objectives of research programs in the United States and other nations. The genetics and physiology of the perennial growth form should be a top priority of basic plant science research, and applied programs should focus on wide hybridization between annual crops and their perennial relatives. For this work to quickly advance, competitive grant programs need only to expand support of perennial grain development efforts that are already underway.

At first glance, perennial crops may appear to be in direct competition with annual crops for limited research dollars. Investing in strategies for future food production is difficult when faced with urgent disease problems and droughts today. However, wide hybridization work will benefit annual crops while the new perennials are in the pipeline. This is because the process of developing perennial crops produces intermediate plant types. These will be excellent breeding stock for annual crops, potentially providing increased disease resistance, pest resistance, heat tolerance, drought tolerance, and cold tolerance. This approach of tapping into genes from wild relatives has a proven track record of improving our current grain crops.

Perennial grain proposals have often been regarded as too long-term. But new molecular tools open possibilities for shortening the time frame to development while the need for grain production without soil degradation looms larger than ever. The time to invest in perennial crops is clearly now, while we have the opportunity.

LEE R. DEHAAN

Senior Fellow, Endowed Chair in Agricultural Systems

Minnesota Institute for Sustainable Agriculture

University of Minnesota

Saint Paul, Minnesota

[email protected]


Smarter defense spending

Jacques S. Gansler’s excellent article about the future of defense spending (“Solving the Nation’s Security Affordability Problem,” Issues, Summer 2011) includes two related notions that need a bit of further commentary.

The first, implicit in the sentence “With [the] growth in nondiscretionary expenditures and the need for the nation to borrow . . . to pay its tab. . .”, is the assumption that the government’s resources for defense are inherently limited. Actually, the defense budget now consumes about 5% of gross domestic product (GDP). Characteristically, during the 2001–2008 period, that number was about 4%, but during that time the costs of the two wars we were fighting, in Iraq and Afghanistan, were being kept off budget.

With the wars apparently winding down, and with the federal government’s income running on the order of 17 to 18% of GDP, as compared with the period 1997–2001 (a time of high prosperity), when it ran between 19 and 21% of GDP, we should at least allow for the possibility that government income could be raised to pay for increased defense spending if necessary.

This possibility interacts with the second idea in Gansler’s article, that the government “continues to buy ships, airplanes, tanks and other weapons of the 20th century, rather than shifting to the weapons required for the 21st century.” The latter are said to be systems suited to asymmetric warfare, including particular kinds of surveillance, unmanned attack, antimissile, and other systems suited to networked military operations against the kinds of enemies we are currently engaging. The problem is that if we focus our defenses on the kind of war we are fighting now, our enemies will come at us from the directions we have left unguarded. There are ample historical examples to illustrate the point.

During the early years of what became the Cold War, we prepared for a nuclear exchange with the Soviet Union, and the main threat turned out to be a conventional attack, for which we had to race to prepare (this writer was in the midst of that race for a goodly portion of his career). After World War II, we essentially disarmed while lending material support to help Greece and Turkey fight Soviet-inspired insurgencies, when North Korea attacked across the 38th parallel and pushed us into a forced mobilization for conventional war. Even with that preparation, we were not prepared for Ho Chi Minh’s variety of “people’s war” in Vietnam—a war we lost mainly because the North Vietnamese were willing to take casualties indefinitely while we were not. And, in September 2001, we had the finest armed forces in the world when a few dedicated terrorists found a hole in our civilian defenses and killed more people on American soil than had been lost in war since the Civil War.

Our problem now is that we don’t know where the next threat needing our armed forces will come from. Iran is bidding fair to become a major power in the Middle East, certain to threaten our ally Israel, a newly constituted Iraq, and many other interests in the Arab world. Or a dust-up over Taiwanese independence from China, or a threat China perceives in her surrounding seas, could lead to armed clashes with that country. We know that North Korea continues to plant needles to prick our feet in places such as the offshore areas of South Korea and, it has been reported in the media, in Pakistan. And continuing threats to the sea lanes from variously based pirates require naval forces to stay alert and capable of fast responses. All this while we attempt to disengage from Afghanistan in a war whose origin in 9/11/2001 most of the American public seems to have forgotten. And we must also note that the onset of armed conflict of any kind can come on us suddenly, whereas the preparation to meet a particular kind of conflict, consistent with the development time of systems and training of the armed forces to use them, can take decades.

All of this says we need to be prepared to undertake diverse kinds of military action on many possible fronts, with many different kinds of effective military force, if we want to maintain U.S. military superiority in the world. Indeed, that capability is what U.S. military superiority in the world means. That doesn’t mean, however, that we have to keep spending tight money on every military system currently under development. Some of them may have to give, but each case should be decided on its merits. This means that major systems should be looked at individually for potential savings. We might want to concentrate on stealthy unmanned aircraft for surveillance, reconnaissance, and even delivery of some precision weaponry. Such systems are useful in any kind of warfare. The F-35 Joint Strike Fighter may not have to be acquired in three versions. A service life extension program for the existing AV-8 Harriers in the Marine Corps might be a better investment for the next decade or two than an expensive new short-takeoff-and-vertical-landing tactical fighter. Other examples will surely be found.

From the above arguments, it appears that U.S. armed forces must and can remain prepared for a wide variety of potential contingencies. There is flexibility in the national income to add some resources to maintain that preparation without trading capability to deal with one kind of warfare for capability to meet the needs of another, and therefore without leaving the nation again scrambling to meet a surprise attack from a quarter we had not prepared to guard against.

SY DEITCHMAN

Chevy Chase, Maryland

[email protected]


Although nearly all areas of government spending have come under close scrutiny, in hopes of finding ways to cut costs, the Pentagon has felt relatively less pressure to economize. This is the product of the bipartisan politics of being seen as “strong on defense” in wartime. Understandable, but there is a dangerous flip side: Politicians of all stripes who give the military whatever it asks for fail to perform their civilian oversight function. This results in the triumph of old habits of mind inside the military, which have kept the United States in the same spending rut it has been in for decades.

Overturning the inefficient acquisition system (see Jacques S. Gansler’s example of the near tripling of F-35 costs) will prove difficult. Beyond making sclerotic processes smoother, there is something else Gansler says is necessary: “Changing what the DOD buys also will require overcoming the cultural resistance of the military and the defense industry.” Although he focuses on mending acquisition processes, what may truly be needed is a new approach to acquisition strategy. That is, the focus should be on what systems are acquired. Must we spend over a trillion dollars on the F-35, given that in the past 40 years just one U.S. fighter has been shot down by another fighter? (It happened during Operation Desert Storm.)

A willingness to ask “What should we acquire?” opens up a world of possibilities. Extravagantly expensive attack aircraft aside, think about ships. The Navy is poised to spend well upward of $100 billion on a new class of aircraft carriers at a time when their utility is being seriously debated. Why not keep the existing Nimitz-class carriers a bit longer, while an honest discourse unfolds? When it comes to smaller ships, the question is: Why are we spending tens of billions of dollars on new classes of surface combatants whose aluminum superstructures will burn to the waterline when they’re hit by missiles? That the doctrine for their use calls for them to slug it out at “eyeball range” of the enemy, meaning well within missile range, is most troubling.

The Army and Marines cannot be left out. They love expensive things, too. Take their ardor for MRAPs (mine-resistant, ambush-protected vehicles). Rushed into production to curtail casualties caused by roadside bombs, these behemoths now number upward of 20,000, yet they remain vulnerable to explosively formed projectiles (shaped charges). They did little to counter insurgencies in Iraq or Afghanistan. The tide was turned in Iraq by downshifting to small outposts and reaching out to “turn” many of the very insurgents who had been fighting our troops. As to Afghanistan, the MRAP will have no influence on the outcome there. MRAPs are no longer being procured, but their cost has been in the tens of billions.

As Gansler put it in his thoughtful article: “There are new modes of war.” To look at our defense spending, you’d never know it.

JOHN ARQUILLA

Professor, Defense Analysis

United States Naval Postgraduate School

Monterey, California

[email protected]


Disappearing bees

As a commercial beekeeper and bee researcher, I find a number of the claims made in “Disappearing Bees and Reluctant Regulators” (Issues, Summer 2011) to be troubling. Sainath Suryanarayanan and Daniel Lee Kleinman criticize the Environmental Protection Agency’s (EPA’s) “sound science” approach to regulation. I cannot imagine any other approach; otherwise a registrant company that had done diligent research in good faith could be denied registration of its product on the basis of hearsay and conjecture! Such a denial would not stand up in an impartial court of law.

The authors also state that traditional scientific research has “thus far not established a definitive role for imidacloprid in causing CCD.” This should tell them something, since if CCD (colony collapse disorder) were actually due to that insecticide, it would be simple to test Koch’s postulates and create CCD by administration of the chemical. No lab has ever been able to do so! However, I was principal investigator for a controlled trial that indeed duplicated colony collapse with all the signs of CCD, induced by an inoculation of the 72 test colonies with viruses extracted from another operation that had suffered from CCD the previous year!

In addition, a huge informal trial of neonicotinoid insecticides is performed every year in Canada, where tens of thousands of hives are placed on seed-treated canola. The canola nectar and pollen are the sole source of food for the colonies, and every drop and grain is contaminated with the insecticide. Yet year after year, the colonies thrive (I’ve spoken to numerous Canadian beekeepers).

In addition, the testing of hundreds of pollen samples by the U.S. Department of Agriculture has found no link between imidacloprid levels and collapse. So to my mind, neonicotinoid insecticides do not meet Koch’s postulates as causal factors for CCD. Indeed, a few large commercial beekeepers have told me that the seed treatments are the best thing that ever happened for bees, because their use has ended the normal bee kills due to the previously used insecticides!

I’ve studied most of the lab research that has claimed to find negative effects on bees at “field-realistic doses” of imidacloprid. I am often dismayed at the experimental designs, which often include protein-starved bees held at unnaturally low temperatures. So I take their results with a grain of salt. Furthermore, contrary to the authors’ assertions, the EPA does not require or promote good laboratory practices in the research that it uses for assessment.

In truth, I feel that the EPA, despite being under intense political pressure by the agricultural lobby, is doing an excellent job of phasing out the most ecologically harmful insecticides (such as organochlorines and organophosphates) and supporting the registration of “reduced-risk” products. Although as a beekeeper I would prefer to see a shift back to traditional agroecology, this is not a perfect world, and farmers will demand insecticides when their crops are at risk.

To me, the sublethal effects of neonics on nontarget insects are still an open question, but the preponderance of evidence to date does not support the authors’ contention that the EPA is being derelict in its duty.

RANDY OLIVER

Grass Valley, California

www.ScientificBeekeeping.com


I would like to comment briefly on “Disappearing Bees and Reluctant Regulators.” Although I commend the authors for speaking out, there is a serious error of assumption, I believe, and that is that we have a regulatory system that actually works. We don’t.

What began in 1970 as an honest effort to honor the charge to “protect man and the environment from unreasonable risk” has, at least in the pesticide arena, devolved into a sham of smoke and mirrors, ruses designed to give the illusion of protection when there is little or none, and rhetoric in place of substance. These aren’t just reluctant regulators; what we are experiencing is an active and premeditated effort to subvert both the letter and the spirit of pesticide regulation.

It was one of the most glaring false positives in recent history that swept me into this pesticide maelstrom, and like most beekeepers I would rather be tending my bees than defending my livelihood in a public forum. That false positive was a life-cycle study of the effects on bees of one of the neonicotinoids, clothianidin. Over the objections of EPA scientists, who recommended that the study be completed before registration, clothianidin was granted a “conditional registration” in 2003, the condition being the completion of the study during the first growing season. The study wasn’t forthcoming for several years; then the EPA hid it for another year and a half, ultimately reviewing it and concluding that it was “scientifically sound.”

Along with others, I discovered this “scientifically sound” study and found an experimental design that would have been rejected by a fourth-grader, and I said so. Subsequently, the EPA reviewed the study again and concluded that it was invalid. The EPA’s own documents show that this study was critical to both conditional and full registration, but, caught in its perfidy, the agency chose to simply disregard the failure; clothianidin has now been on the market for nine growing seasons without ever meeting the requirements for registration.

My point here is that there is little protection unless it serves the convenience and profits of the EPA’s corporate clients. There is ample latitude under the Federal Insecticide, Fungicide, and Rodenticide Act for the EPA to take a much more conservative, precautionary stance, yet it has chosen the opposite. It defends poor science and disregards sound science.

The neonicotinoids are pernicious, pervasive, cumulative, highly toxic to bees in minuscule amounts, and mobile in groundwater, and the science is emerging that shows disastrous consequences for the environment far beyond the bees.

Understand, I am not opposed to corporations; I think the corporate model is a good one. I spent the first 10 years of my working life in one of the best. I was in the smoke-filled rooms, and I understand how they work. But this pesticide insanity must stop; we cannot hold out much longer.

I would encourage anyone who wants to understand this issue better to go to bouldercountybeekeepers.org and click on Tom’s Corner. Nearly all of the supporting documentation is there.

TOM THEOBALD

Beekeeper

Niwot, Colorado

[email protected]


Florida was the focus of early reports of unusual honeybee mortality through simple disappearance. I remember a late-night conference call with the U.S. Department of Agriculture (USDA), university colleagues, and other regulators about what we were seeing and what was being reported, and discussing what to call this phenomenon. Since the term colony collapse disorder (CCD) was coined that late night, I have lived with this daily. Even after the years that have passed, we still do not have a clue about what CCD is. It certainly is a failure of honeybee health, but it seems to be a myriad of negative inputs from parasites, pathogens, and pesticides combining to create a perfect storm. Teasing out which combination of parasites, pathogens, pesticides, fungicides, and herbicides is interacting synergistically may be like tilting at windmills.

In 1984, the world of honeybees and their keepers changed from a fairly predictable one, from an agricultural management perspective, to a new, confusing, multinegative buffet of challenges. That year, the exotic acarine (tracheal) mite was first found in the United States. This small mite lives in the trachea of honeybees, causing predictable damage. In 1987, the large external mite Varroa destructor was identified in the United States. Make a fist. Place this fist anywhere on your body. Proportionally, this is fairly close to how large the Varroa mite is relative to a honeybee’s body. This large parasitic mite feeds off the bodily fluids of the adult, pupal, and larval stages of honeybees. Originally, Varroa had established a “normal” parasite/host relationship with an Asian species of honeybee, Apis cerana. On the unadapted Apis mellifera, the result was devastating. For the first time in modern U.S. managed beekeeping, toxic pesticides had to be applied inside honeybee colonies to control the Varroa mite and keep the industry afloat. Please excuse the inaccuracy of the next statement, but trying to kill a bug on a bug is tough. Doing so without damaging the good bug, the honeybee, is even harder.

In the process of feeding on honeybees, Varroa mites vector pathogens, leave open wounds ripe for secondary infections, activate viruses, and diminish the already shallow honeybee immune system. Add to this significant parasitic stress from other newly introduced pathogens such as Nosema ceranae, which attacks the honeybee gut lining; a whole variety of newly identified viruses; and introduced secondary pests such as the small hive beetle, which targets weakening honeybee colonies; and it is not surprising that the yearly USDA-sponsored industry survey has recorded an average 30% loss of honeybee colonies in a defined window of time over the past several years.

Most of the above are directly or indirectly visually identifiable. What is more difficult to measure is the chronic effect of the chemicals in the environment to which honeybees are exposed. Honeybees are environmental samplers. They can forage efficiently in a 2- to 3-mile radius of their colony. In doing so, they come into contact with a large number and diversity of human-introduced chemicals mixed with natural botanical toxins.

The authors correctly mention all of these factors. They also focus on neonicotinyl insecticides as a global beekeeping concern. The reality is that this class of insecticides is very effective at killing the pest insects that damage agricultural crops.

The ultimate argument for selling and using neonics is that they have low mammalian toxicity. They are much safer to use than many of the agrochemical products of the past. I agree that the EPA is several steps behind in how agrochemicals in particular are rated and ranked for acute and chronic toxicity across all life stages of honeybees. The agency has been gutted of resources, and its ability to create, review, and share data does not exist at a level adequate to its mandate. To use a Type II preference over Type I as the EPA’s determinant is “misguided,” as the authors suggest.

Data are accumulating that honeybees are significantly and uniquely affected by low doses of some pesticides, such as the neonicotinoid imidacloprid. New data indicate that imidacloprid at doses as low as 5 parts per million causes little direct larval or adult toxicity, yet only 30 to 40% of pupae emerge as adults. Given current knowledge and technology, the continued use of the LD50 as a general marker is of little worth as a basis for pollinator protection.

The ultimate question is how much we, our regulatory partners, and the general public care about U.S. food production. Managed and feral pollinators are responsible for approximately $20 billion worth of fruit, nut, vegetable, and berry production. Ask most of the agriculturally unconnected public where food comes from, and the answer might be “the grocery store.” The United States now imports approximately 40% of its vegetables, and the USDA projects that the country will be a net food importer within a generation. If turning our food production over to others is of no concern to the general public, none of this makes much difference.

The authors offer convincing perspectives that require more conversation. I myself am convinced that regardless of how bad certain agrochemicals are or are perceived to be for honeybees, the safety net of low mammalian toxicity in the United States will take precedence.

There is a successful model established, and it will be followed. I doubt that any government agency will take active ownership of protecting honeybees as long as the supply, variety, and quantity of pollinator-dependent crops remain cheap. And it is especially easy if these crops are not grown in the United States.

JERRY HAYES

Chief, Apiary Inspection Section

Division of Plant Industry

State of Florida

Gainesville, Florida

[email protected]


Improve chemical exposure standards

I certainly agree with Gwen Ottinger and Rachel Zurer (“Drowning in Data,” Issues, Spring 2011) and the follow-up letter by Sarah A. Vogel (Issues, Summer 2011, p. 16) that it is a fundamental challenge to translate chemical concentration data into information that is meaningful to the public. However, both of these pieces include some incorrect information and miss key aspects of the issues.

First, Ottinger refers to various ambient air standards and the fact that there is “no consensus on what constitute[s] a safe or permissible level.” A casual reader could get the impression that all of the health-effects research done to date has been essentially useless for relating concentrations to health effects, which would be misleading. Although there is some variability in the underlying technical information, much of the variability in standards arises because standards have different purposes and/or are applied to different situations. For example, contrary to what Ottinger and Zurer say, the federal Clean Air Act does not set ambient air standards for volatile organic compounds (VOCs); however, Clean Air Act regulations do set stack concentration limits for VOCs for some industrial sources. Not surprisingly, if you compare stack concentration limits to ambient concentration limits, you should expect differences of orders of magnitude, but these differences are not due to a lack of scientific consensus. They arise in part because the limits apply to different locations (inside an exhaust stack versus in the ambient air) and in part because they are not both based on what is “safe.” For example, some industrial equipment standards are based on what is achievable with available control technologies, which may be more or less stringent than what someone deems to be safe. In addition, some air standards are not standards for safe levels in the ambient air but conservative benchmarks used for issuing air pollution permits; in other words, they are used to assess the worst-case effects of a facility on a person standing at the facility’s fence line for an extended period of time.

Second, it needs to be recognized that while identifying safe levels can and should be based on scientific information, there is also some political judgment involved. Although some health effects have thresholds, others do not have clear thresholds. For example, the default assumption for cancer risk is that the only concentration corresponding to zero risk is zero. In addition, there are questions about how to address scientific uncertainty, how to extrapolate animal data to humans and account for the most susceptible people, and how to extrapolate data taken at very high doses (in order to produce a measurable effect) down to low doses. Vogel states that “what is safe for a 180-pound healthy man is not safe for a newborn, but our safety standards for industrial chemicals, except for pesticides, treat all humans alike.” But this is incorrect, because many health-based air standards are in fact designed to be protective of the most sensitive individuals and do take children explicitly into account (OSHA standards are one obvious exception, because they apply to workplace conditions experienced by adults).

For cancer risk, some areas of the country like to use a 1-in-a-million lifetime cancer risk benchmark, but there is absolutely no technical basis for this particular number. Consider three facts: calculations based on conservatively derived risk factors typically put the lifetime cancer risk associated with urban ambient air quality a couple of orders of magnitude higher than 1-in-a-million (although in many cases these risks have been decreasing over the past several decades); people’s exposures indoors and in their cars are in most cases significantly higher than if they were simply exposed to ambient air; and the American Cancer Society’s calculation of Americans’ actual lifetime risk of contracting cancer is closer to 300,000 to 500,000 in a million. People made aware of these facts tend to feel that the 1-in-a-million standard is very or even overly protective; however, if you simply put a 1-in-a-million standard in front of somebody, say it is health-based, and show concentrations that are above or even only slightly below it, they are likely to be much more alarmed. Vogel’s recommendation to obtain better information about real-life exposure scenarios is a good one, because this information could be used both to establish the extent to which exposures result from ambient air versus more localized sources (not just industrial facilities, which appear to be the authors’ target, but also situations such as poorly ventilated cooking, travel on busy streets, and so on) and to help give people context about their current exposure levels.
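
To make that comparison concrete, here is a rough back-of-the-envelope sketch (purely illustrative; the figures are the approximate ones cited above, not regulatory values):

    # Rough comparison of the cancer-risk figures cited above (approximate, illustrative only).
    benchmark = 1e-6          # 1-in-a-million lifetime cancer risk benchmark
    urban_air_risk = 1e-4     # "a couple of orders of magnitude higher" than the benchmark
    acs_lifetime_risk = 0.4   # roughly 300,000 to 500,000 in a million (the American Cancer Society figure cited)

    print(f"Urban ambient air risk is about {urban_air_risk / benchmark:,.0f} times the benchmark")
    print(f"Overall lifetime cancer risk is about {acs_lifetime_risk / benchmark:,.0f} times the benchmark")

Either way it is framed, the 1-in-a-million benchmark sits orders of magnitude below both the urban-air estimates and the background lifetime risk of cancer, which is the context people rarely see.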

Third, there needs to be an understanding that epidemiology (the solution identified by Ottinger, Zurer, and Vogel) can have significant limitations. Although it can show correlations, it does not show causality, and there is often a multitude of confounding factors. In addition, it can effectively detect only effects large enough to amount to epidemics: a 10% or greater effect on the population, or at best perhaps 1%. Although epidemiology may be useful for evaluating some of the highest exposures, many people would argue that regulatory standards should be set more stringently than what epidemiology is capable of detecting, as many of them currently are.
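
As a purely illustrative sketch of that detection limit (the baseline rate, significance level, and power below are assumptions chosen for illustration, not figures from any study or from the articles), the standard two-proportion sample-size approximation shows how quickly the required study size grows as the effect shrinks:

    # Illustrative sketch: subjects needed per group to detect a relative increase in
    # disease incidence, using the standard two-proportion sample-size approximation.
    # Baseline rate, alpha, and power are assumptions for illustration only.
    from scipy.stats import norm

    def n_per_group(p_baseline, relative_increase, alpha=0.05, power=0.80):
        p1 = p_baseline
        p2 = p_baseline * (1 + relative_increase)
        z_alpha = norm.ppf(1 - alpha / 2)   # two-sided significance threshold
        z_beta = norm.ppf(power)            # desired statistical power
        return (z_alpha + z_beta) ** 2 * (p1 * (1 - p1) + p2 * (1 - p2)) / (p2 - p1) ** 2

    for effect in (0.10, 0.01):  # a 10% versus a 1% increase over a 5% baseline rate
        print(f"{effect:.0%} increase: roughly {n_per_group(0.05, effect):,.0f} subjects per group")

Under these assumptions, a 10% relative increase is detectable with a study of a few tens of thousands of subjects per group, whereas a 1% increase requires on the order of millions, which is why effects that small effectively escape epidemiological detection.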

But perhaps most important, the fourth thing that needs to be recognized is that there are, and always have been, countless chemicals in the air at some concentration or another. We have always been and always will be, in Vogel’s words, “silently exposed to chemicals,” some man-made and some not, all of them potentially dangerous at some level, and the number of variables that could be studied is endless. Therefore, there is a need both for prioritization (at a multimedia, comprehensive level) and for science, by which I mean the organization of scientific information regarding health effects, not just the existence and continued execution of scientific studies on individual chemicals or situations.

TODD TAMURA

Tamura Environmental, Inc.

Petaluma, California

[email protected]
