Forum – Spring 2005

Securing nuclear material

I agree with Matthew Bunn that the scope and pace of the world’s efforts to prevent terrorists from acquiring nuclear weapons or weapons-usable materials do not match the urgency of the threat (“Preventing a Nuclear 9/11,” Issues, Winter 2005). He correctly points out that if terrorists were to acquire nuclear materials, we must assume that they will eventually be able to produce a crude but deadly nuclear device and deliver it to a target.

Bunn correctly focuses on vulnerable nuclear facilities that house weapons-usable plutonium or highly enriched uranium (HEU). He lists Russia, Pakistan, and HEU-fueled research reactors around the world as serious concerns. My own list of greatest current nuclear threats, also based on the likelihood that terrorists could acquire weapons-usable materials, is topped by Pakistan, followed in order of decreasing priority by North Korea, HEU-fueled reactors, Russia, Kazakhstan, and Iran.

Bunn points out that U.S. cooperative threat reduction programs have done much to reduce the nuclear danger, but much more is needed. I agree that the United States must push a security-first agenda. However, threat reduction must be tailored to specific threats. For example, Pakistan and North Korea pose very different threats than Russia or Kazakhstan and hence will require very different solutions.

Bunn prescribes some useful steps in dealing with the remaining problems in the Russian nuclear complex. However, I have not found that Russians view the nuclear terrorist threat as “farfetched.” Instead, my Russian colleagues believe that their nuclear facilities are not vulnerable to theft or diversion of nuclear materials. Russian officials rely primarily on physical security (which was enhanced after 9/11 and again after Beslan), instead of a rigorous modern nuclear safeguards system that includes control and accounting of nuclear materials, along with physical protection. The single most important step to improve Russian nuclear materials security is for the Russians to own this problem: “loose” nuclear materials threaten Russia as much as they do the United States. Russia must implement its own modern materials protection, control, and accounting system. Our ability to help is stymied primarily by Russia’s belief that it doesn’t have a problem.

Finally, Bunn states that nuclear terrorism could be reliably prevented if the world’s stockpiles of materials and weapons could be effectively secured. I believe that securing the world’s huge stockpile of nuclear materials is so difficult that we cannot “reliably” prevent nuclear terrorism. The basic problem in the world today is that there are roughly 1,900,000 kilograms of HEU and almost as much plutonium (although 1,400,000 kilograms of plutonium are in spent nuclear fuel, which provides self-protection for some time). The uncertainty in these estimates and in the exact whereabouts of these materials is much greater than the tens of kilograms required for a crude nuclear device. Hence, the job of preventing these materials from getting into the wrong hands is daunting, to say the least. Nevertheless, as Bunn points out, we must act now and do the best we can, and this will take presidential leadership and international cooperation to be effective.
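Hecker's point about scale can be illustrated with simple arithmetic. The stockpile figure below is the one cited in the letter; the 1 percent accounting uncertainty is purely an illustrative assumption, not a reported value.

```python
# Back-of-envelope sketch: why even a small accounting uncertainty in the
# global HEU stockpile dwarfs the amount needed for a crude device.
heu_stock_kg = 1_900_000        # global HEU stockpile cited in the letter
device_kg = 50                  # "tens of kilograms" for a crude device
uncertainty_fraction = 0.01     # assumed 1% accounting uncertainty (illustrative)

uncertain_kg = heu_stock_kg * uncertainty_fraction
devices_hidden = uncertain_kg / device_kg

print(f"Uncertainty band: {uncertain_kg:,.0f} kg")          # 19,000 kg
print(f"Crude devices that could hide in it: {devices_hidden:.0f}")  # 380
```

Even under this optimistic assumption, the accounting uncertainty alone could conceal hundreds of bombs' worth of material, which is the sense in which the job is "daunting."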

SIEGFRIED S. HECKER

Senior Fellow

Los Alamos National Laboratory

Los Alamos, New Mexico


Matthew Bunn is correct when he notes the urgency of the nuclear threat. However, his article’s focus on the challenges facing our nonproliferation programs emphasizes failure where credit is due.

The National Nuclear Security Administration (NNSA) is working with its international partners to reduce the nuclear proliferation threat by removing dangerous materials, securing vulnerable facilities, and eliminating at-risk materials whenever and wherever possible. Challenges associated with developing new technologies and negotiating with sovereign countries can sometimes complicate our efforts. Despite these challenges, NNSA has made tremendous progress in meeting, even accelerating, its goals.

One program that has completed extensive security upgrades in Russia is NNSA’s Material Protection, Control, and Accounting (MPC&A) program. MPC&A began its efforts in 1994 by first securing the most vulnerable sites in Russia, which tended to be the smaller sites. The larger sites that remain to be secured are fewer in number but contain significant amounts of nuclear material. These remaining sites can be secured with roughly the same amount of time and effort as previously completed sites containing much less material. As a result, NNSA will secure much more material per year as the remaining sites are addressed. By the end of 2005, more than 70 percent of the sites will be complete, and we will continue to work toward the goal of securing the targeted 600 metric tons of nuclear material by 2008. It is important to note that none of this work could be completed without the assistance of our Russian colleagues.

Last year, the Bush administration established the Global Threat Reduction Initiative (GTRI) to consolidate and accelerate the Department of Energy’s existing nuclear materials removal efforts. GTRI works to expeditiously secure and remove high-risk nuclear and radiological materials that pose a threat to the United States and the international community. Under GTRI programs, 105 kilograms of fresh highly enriched uranium has been repatriated to Russia and placed in more secure facilities. More than 6,000 spent fuel assemblies have been returned from research reactors around the world to the United States for final disposition. And between 2002 and 2004, more than 5,000 radioactive sources in the United States were recovered and securely stored.

This administration has been steadfast in its commitment to preventing the spread of weapons of mass destruction and will remain so in the next four years. Each country, however, is responsible for the security of its own nuclear material. We must continue to hold nations responsible for actions that increase the risk of nuclear proliferation. Bunn’s efforts to shine a spotlight on these critical national security challenges should be applauded, but NNSA’s successful endeavors in reducing these same threats should also be recognized.

PAUL M. LONGSWORTH

Deputy Administrator for Defense Nuclear Nonproliferation

U.S. National Nuclear Security Administration

Washington, D.C.


I read with great interest Matthew Bunn’s article. That this issue featured so prominently in the recent presidential race brought public attention to a matter heretofore left to specialists. It is good that the public is concerned, and Bunn has written a cogent piece summarizing the issue fairly and highlighting the challenges we face in improving nuclear security. We fully embrace the goal of ensuring that terrorists cannot gain access to these catastrophically dangerous materials.

The article makes clear that this is a worldwide problem that includes actual nuclear weapons, military stockpiles of fissile materials, and highly enriched uranium in research reactors scattered around the globe. There is no fundamental disagreement on what action needs to be taken, but the pace and priorities for action can be debated.

The nongovernmental organization and academic communities are performing a vital service in raising public consciousness about this danger; without such an appreciation the needed resources will not be easily forthcoming. The U.S. government has not been inactive, however, as Bunn acknowledges in his piece. Since the collapse of the USSR, more than $9 billion has been invested in the broad range of Nunn-Lugar programs, which include the Material Protection, Control, and Accounting assistance program of the Department of Energy, that are most directly involved in securing nuclear stockpiles and materials. Physical upgrades have improved security greatly at the most vulnerable facilities, and we have improved security at over 70 percent of the facilities with fissile material. We have also provided major support for: the safe storage of tens of tons of formerly military plutonium in Russia; the shutdown of plutonium production reactors; the program to dispose of such excess weapons plutonium by burning it in nuclear power plants; and programs to “redirect” scientists who had worked on weapons of mass destruction. By providing these scientists with a port in the storm of economic collapse, we have enabled them to serve as catalysts to spur research in public health, commercial science, and even direct antiterrorist applications.

In launching the Energy Department’s Global Threat Reduction Initiative last fall, the administration specifically recognized the danger presented by vulnerable research reactors, and within the G8 Global Partnership, concerted efforts led by Under Secretary of State John Bolton have lent special international urgency to applying resources around the world to address the issue. Certainly no one inside or outside government would deny that more remains to be done, and the faster the better. We are working hard to that end.

Finally, I would like to address the “nuclear culture” issue that Bunn rightly points to as crucial. It will do us very little good—in fact, it buys us a false sense of security—to spend large sums of money installing systems and teaching procedures that are not accepted or will not be maintained or implemented as designed. This is probably the hardest problem to solve, but there are ways to get at it, and we have seen encouraging signs in the form of generational change.

Overall, I found Bunn’s article a reasoned and informative contribution to the debate on the issue of nuclear security.

EDWARD H. VAZQUEZ

Director

Office of Proliferation Threat Reduction

U.S. Department of State

Washington, D.C.


Conflicts of interest

In “Managing the Triple Helix in the Life Sciences” (Issues, Winter 2005), Eric G. Campbell, Greg Koski, Darren E. Zinner, and David Blumenthal provide a thoughtful and scholarly analysis of the benefits and risks of academic/industry relationships and offer their recommendations for managing them. All of their proposals involve public disclosure of the financial ties between academics and companies, a policy shared by several major professional organizations. No doubt such transparency would be an improvement on the current largely recondite nature of such relationships, but does disclosure truly solve the conflict of interest problem? Many academics approve of disclosure largely because it allows business as usual. I don’t see it as a satisfactory solution.

Take the opinion of an academic who writes a strongly positive appraisal of a drug made by a company for which he or she serves as a speaker or consultant. How are we, nonexperts, to interpret the assessment? It might be identical to one by a nonconflicted expert: rigorously objective. It might be biased in favor of the company, although the author, who cannot be expected to reach into his subconscious, is unaware that he has slanted the analysis. Or, least likely, out of an effort to enhance his status with the company, the author might consciously tilt his opinion. We just don’t know which explanation is closer to the truth. Thus, disclosure leaves the receiver of information in a difficult position, trying to interpret the motives of the conflicted author. In fact, people given such disclosure information often underestimate the severity of the conflicts.

James Surowiecki, who writes the economics page for the New Yorker magazine, says, “It has become a truism on Wall Street that conflicts of interest are unavoidable. In fact, most of them only seem so, because avoiding them makes it harder to get rich. That’s why full disclosure is so popular: it requires no substantive change.”

To avoid harm to patients and to protect the validity of our medical information, we must therefore go beyond disclosure. We should discourage faculty from participating in all industry-paid relationships except research collaborations that promise to benefit patient care. Physicians who eschew participation in company-sponsored speaker’s bureaus, continuing medical education (CME), and marketing efforts should be the first to be called on to evaluate drugs and devices, testify to the Food and Drug Administration (FDA), lead professional organizations, and edit medical journals. I would not exclude conflicted faculty from engaging in many of these activities, but I would require an additional layer of supervision. For example, I would require clinical practice guideline committees and FDA panels to have a predominant representation of nonconflicted experts and committees that review research proposals to have substantial oversight by a diverse and independent group of outsiders. Just as blinded peer review in medical journals tends to protect against bias in publications, mechanisms to have nonconflicted experts sample CME lectures by company-sponsored speakers would hold all speakers to a high standard.

Disclosure papers over an unresolved problem. If the profession refuses to give up its extensive financial connections to industry, patients must be protected with methods that override the hazards of these conflicts.

JEROME P. KASSIRER

Distinguished Professor

Tufts University School of Medicine

Boston, Massachusetts

Editor-in-Chief Emeritus, New England Journal of Medicine


“Managing the Triple Helix in the Life Sciences” makes a reasonable case for practical government guidelines and uniform institutional policies for reporting and tracking relationships that create real or perceived conflicts of interest in research. Nonetheless, it is difficult to be optimistic that those who could make the changes recommended by the authors will do so without more compelling reasons and/or a clear mandate, most likely from government.

Research institutions feel that they are already overburdened with costly requirements that affect the use of human and animal subjects, laboratory safety, cooperative agreements, workplace conditions, and so on. The additional staffing and resources needed to maintain well-trained, effective institutional review boards for human subjects research alone can run into the millions of dollars at a research-intensive university. To suggest adding a similar administrative structure for the review of disclosure forms, however helpful this might be, is sure to meet stiff resistance from budget-conscious research administrators. It is therefore reasonable to predict that the institutions that the authors recommend take the lead in instituting change will be the least likely to do so without compelling reasons for action.

Academic/industry and government/industry relationships (AIRs and GIRs) have increased in number, size, and complexity. Studies have identified correlations between industry funding and findings favorable to industry. Even so, there is no agreement on whether the documented cases of questionable relationships or the apparent biases that result from them are significant enough to justify the adoption of a more formal and expensive review process. Researchers commonly assume that they can work in situations that create conflicts without compromising their personal or scientific integrity. Presumably, most research institutions make the same assumption. It will therefore take more than a limited, albeit growing, number of case studies or suspicious correlations to move research institutions to act.

Finally, given the sometimes conflicting agendas of the different players in academic research, it is difficult to imagine any agreement on “uniform policies related to the disclosure of AIRs” that would both “give institutions significant discretion regarding the review and oversight of AIRs at the local level” and, at the same time, have any teeth. Any significant discretion at the local level already does and will lead to different decisions about appropriate relationships and management strategies, which could invite the sort of institutional shopping that the authors are trying to avoid.

Even if the proposed solution is not likely to be adopted by research institutions, the underlying problem that the authors identify cannot be ignored. At a time when federal funding for research is experiencing at best only modest growth and more frequently stagnation and cutbacks, the commitments some academic institutions are making to dramatically increase the size of their research programs (to double in some cases) should be cause for concern. AIRs may only be straining the system now, but in the future they could lead to serious breaks, such as the erosion of public confidence in research or the implementation of uncompromising policies.

Based on past experience, it is reasonable to assume that if researchers and research institutions do not take seriously proposals for some sort of meaningful self-regulation and restrain the unchecked growth of AIRs, government will, as it has recently done with the National Institutes of Health. This prospect may perhaps at last provide the compelling evidence needed to prompt research institutions to take “Managing the Triple Helix” as seriously as they should.

NICHOLAS H. STENECK

Professor of History

University of Michigan

Ann Arbor, Michigan


Eric G. Campbell et al. raise important conflict of interest concerns regarding university/industry and government/industry relationships (AIRs and GIRs). At the core of their argument is the belief that AIRs and GIRs have an appropriate place in the facilitation of life science innovation but that standardization of disclosure and management processes, especially for AIRs, is needed. Failure to act, they suggest, risks compromising the public trust placed in life science researchers that has served the enterprise so well for decades.

Campbell et al. should be complimented for their willingness to confront this problem and are uniquely appropriate spokespersons for such issues, given their stream of large-scale research on the AIR practices of life science researchers and their institutions. Institutional policies and procedures for the disclosure and management of potential conflicts of interest are varied. This creates ample room for possible violations to go undetected, human subjects to be put at potential risk, and ultimately end-consumers to be ill informed about the benefits and limitations of particular life science innovations. Thus, when the authors suggest that the academic community should set higher standards than those currently expected by the federal government (for example, require disclosure by all faculty and institutional administrators, not just principal investigators, and reduce exclusions to the $10,000 threshold for disclosure) and establish a uniform policy on publication delay time, they home in on actions that could do much to protect against the risks of conflicts of interest.

Yet in two key ways they might have gone further. First, they weakened their own argument by suggesting that the wide variations among institutional types, structures, and decisionmaking processes necessitate “flexibility to decide which relationships require oversight and how to design, implement, and evaluate institutional oversight plans and activities.” This could offer convenient cover for institutions to continue their pattern of selective oversight. Second, there was no mention of what should be done if a conflict is discovered. Should the faculty member receive an admonition “not to do it again” or a formal disciplinary action; be prevented from receiving future funding of some kind; and/or be terminated, depending on the severity of the violation? Academic tradition would suggest that the wisest course would be to consider conflict of interest violations as being on the same level as, say, plagiarism or doctoring data in publication. Achieving this, however, will require universities, and likely the federal labs, to decide whether they wish to employ researchers or entrepreneurs and whether both can realistically be employed. Furthermore, it will require federal and state governments to recognize the degree to which they exacerbate conflict of interest problems for universities through certain policy actions and budget decisions.

JOSHUA B. POWERS

Coordinator, Higher Education Leadership Program

Indiana State University

Terre Haute, Indiana


Although Eric G. Campbell et al. agree with most observers that academe and government need new tools for managing conflicts of interest in biomedical research, their preferred solution—greater disclosure—won’t get the job done.

For example, the authors claim that industry-funded scientists are more productive in publishing than their non-industry-funded peers. During the past year, there have been two high-profile cases [one involving selective serotonin reuptake inhibitors (SSRIs) and childhood suicidality and the other involving COX-2 inhibitors and heart disease] in which flawed industry-funded academic studies led to poor public health outcomes. In February 2004, the Center for Science in the Public Interest reviewed the entire academic literature involving SSRIs and children and found that more than 90 percent of published clinical trials, virtually all of them industry-funded, supported their use. Meanwhile, virtually all the clinical trials submitted to the Food and Drug Administration (FDA) to win extended patent life for the drugs, most of which were never published, showed that the drugs had no positive effects.

The scientists in the first group may have been more productive in publishing than their peers, but as David Blumenthal has pointed out elsewhere, thinly disguised marketing studies published in second-tier academic journals are hardly a good measure of the benefits of academic/industry research ties. Moreover, the funding sources and many of the financial ties of the studies’ authors were fully disclosed in the academic journals when they appeared. The results led not to increased skepticism but to increased sales.

The same can be said about patenting and associated commercialization activities at our academic medical centers and universities, another benefit touted by the authors. Just as businessmen have long known that not everything patentable is worth commercializing, the nation’s health care system every day confronts the fact that not everything that is commercialized contributes to public health. Juxtapose for a moment the uproar over heart ailments caused by COX-2 inhibitors with the efforts by the University of Rochester, which patented the mechanism of action, to cash in on those drugs’ massive cash flow. Clearly some of the commercialization activity at the nation’s universities has crossed the line that used to separate the institution’s larger public health mission from private gain.

As the authors point out, industry ties with academic researchers continue to grow. The result is that our health care system today suffers from a paucity of objective information. Industry-funded researchers and clinicians conduct most significant clinical trials, write many clinical practice guidelines, deliver most continuing medical education, and sit on (and in some cases dominate) government advisory bodies at the FDA and Centers for Medicare and Medicaid Services.

That some of these ties remain hidden is abominable. But although greater transparency is mandatory, it is no longer the only or even best disinfectant. In the wake of congressional investigations into the massive hidden corporate financial ties of some of its scientists, the National Institutes of Health recently imposed new rules prohibiting such ties with private firms. Events have outpaced the disclosure prescription. Stronger medicine is called for.

MERRILL GOOZNER

Director, Integrity in Science Project

Center for Science in the Public Interest

Washington, D.C.


Genetically modified crops

In their excellent “Agricultural Biotechnology: Overregulated and Underappreciated” (Issues, Winter 2005), Henry I. Miller and Gregory Conko lay out a compelling argument in support of ag biotech. I agree with their principal conclusions. However, I must set the record straight on one item in their piece that seems to have become an enduring myth about the early days of the science. They state that “some corporations … lobbied for excessive and discriminatory government regulation … they knew that the time and expense engendered by overregulation would also act as a barrier to market entry by smaller companies.” Monsanto, DuPont, and Ciba-Geigy (now Syngenta) were listed in the article as the short-sighted companies that brought long-lasting restrictive regulations. In reality, only Monsanto argued for regulation; the other companies were not then significant players in the field.

I was CEO of Monsanto in 1983 when our scientists for the first time put a foreign gene into a plant, which started the commercial path to the science. My job for the dozen years before first commercialization was to ensure funding for this far-future science. Wall Street hated something that might pay out in the mid-1990s—if ever. Even within Monsanto, there were quarrels about R&D resources being siphoned away from more traditional research, especially toward research that might never succeed. Besides, even if it did, it would face the avalanche of opposition sure to come from “tinkering with Mother Nature.” Consider: We were only a few years away from the Asilomar conference, which had debated whether this “bioengineering” should be left stillborn. Rachel Carson had warned of science run amok. Superfund had been enacted to clean up hazardous waste from science-based companies. A little later, an ambitious researcher in California had grown genetically modified strawberries on a lab rooftop, causing a furor by violating the rules at that time, which forbade outdoor testing. Pressure was mounting, as one opponent put it, “to test the science until it proved risk-free, since the scientists obviously couldn’t self-police.” “How long should we test?” I asked the opponent. “Oh, for about 20 years” was the response.

I had been invited to participate in a debate in support of ag biotech against Senator Al Gore at the National Academy of Sciences. I don’t think we won against his TV camera. As we proceeded in the research, the new Biotechnology Industry Organization (composed primarily of the small companies doing research) was lobbying for no regulation. Their champion was Vice President Dan Quayle, who headed the Competitiveness Council in the first Bush administration. I visited Quayle on several occasions and finally persuaded him that the public would not accept this new science without regulation and that we needed the confidence that the public had in the regulatory bodies. I argued that each agency should practice its traditional oversight in its field: the Food and Drug Administration, Environmental Protection Agency, and U.S. Department of Agriculture—without establishing a new agency just for biotech, a move that was gaining traction in the opposing communities. I argued that the real test should be the end product and its risk/benefit, not the method of getting there. Quayle is one of the unsung heroes of the ag biotech saga. He carried the day on the “no new regulatory body” argument. Be assured that at no time did I or my associates working on these policies give a moment’s thought to shutting out smaller companies with a thicket of regulation. We wanted only “the right to operate with the public’s acceptance.”

The U.S. public now accepts the products of agricultural biotechnology in large part because they have confidence in the institutions that approved them. Regrettably, in Europe in the late 1990s another course was taken by those making decisions at the time: confrontation, not collaboration. The price is still being paid. The new leadership of Monsanto, with some 90 percent worldwide market share in these biotech crops, is making Herculean efforts to work within the culture, laws, and regulations of the European Union, much as we did in the United States in the 1980s and early 1990s. They will eventually succeed, because public confidence is a necessary ingredient of new technologies—something, to their credit, that this current management recognizes.

RICHARD J. MAHONEY

Chairman/CEO Monsanto Company (retired)

Executive in Residence

The Weidenbaum Center on the Economy, Government, and Public Policy

Washington University in St. Louis

St. Louis, Missouri


Henry I. Miller and Gregory Conko are valiant champions of reason against the forces of unreason. As someone who has seen the growing influence of the anti-science lobby in Britain and Europe, I find it disturbing to discover similar attitudes reflected in regulatory policy in the United States. It seems that for the world to benefit from new transgenic staple crops that could reduce hunger and poverty, we will have to look mainly to China, and in due course India, rather than Europe or America. By the end of 2003, more than 141 varieties of transgenic crops, mainly rice, had been developed in China, 65 of which were already undergoing field trials. However, overregulation in Europe casts a shadow even in China, because rules on labeling and traceability present a formidable hurdle to the export of transgenic crops or even of any crops that contain the slightest so-called “contamination” by genetically modified products.

Why has this technology not been treated according to its merits? The influence of green activists goes further than opposition to transgenic crops. It can be traced to a form of environmentalism that is more like religion than science. It is part of a back-to-nature cult with manifestations that include the fashion for organic farming and alternative medicine. The misnamed “organic” movement (all farming is of course organic) is based on the elementary fallacy that natural chemicals are good and synthetic ones bad, when any number of natural chemicals are poisonous (arsenic, ricin, and aflatoxin for starters) and any number of synthetic ones are beneficial (such as antibacterial drugs like sulphonamides or isoniazid, which kill the tuberculosis bacillus). The movement is essentially based on myth and mysticism.

Similarly, homeopathy is growing in popularity and official recognition, although it is based on the nonsense that “like cures like” and that a substance diluted to one part in 10 to the power of 30 or more (a 1 followed by more than 30 zeros) can still have any effect except as a placebo. Many of those who believe in alternative medicine also argue that remedies that have been used for centuries must be good, as if medical practice is some kind of antique furniture whose value increases with age. It is belief in magic rather than science.

However, antiscience views are most passionately aroused in the debate about genetic modification. Campaigners even tear up crops in field trials that are specifically designed to discover if those crops cause harm to biodiversity. Like the burning of witches, such crops are eliminated before anyone can find out if they actually cause harm. In many parts of Europe, the green movement has become a crusade. That makes it dangerous. Whether the rejection of the evidence-based approach takes the form of religious fundamentalism (Islamic, Jewish, or Christian) or ecofundamentalism, the threat is not just to scientific progress but to a civilized and tolerant society.

LORD DICK TAVERNE

London, England

Lord Dick Taverne is a former member of Parliament and the founder of Sense About Science.


The overregulation of agricultural biotechnology, so well described by Henry I. Miller and Gregory Conko, carries a particularly heavy price for farmers in developing countries. In South Africa, the only nation in Africa to have permitted the planting of any genetically modified (GM) crops so far, small cotton farmers have seen their incomes increase by $50 per hectare per season as a result, and one group of academics has projected that if cotton farmers in the rest of Africa were also permitted to plant GM cotton, their combined incomes might increase by roughly $600 million per year. If India had not delayed the approval of GM cotton by two years, farmers in that country might have gained an additional $40 million. India has not yet approved any GM food or feed crops. One biotech company recently gave up trying to get regulatory approval for a GM mustard variety in India, after spending nearly 10 years and between $3 million and $4 million in regulatory limbo. Robert Evenson at Yale University has recently estimated the total loss of potential farm income due to delayed regulatory approval of GM crops throughout the developing world, up through 2003, at roughly $6 billion.

The case made by Miller and Conko for a less stifling regulatory environment may soon grow even stronger, particularly for the poorest developing countries. Several biotech companies have recently been able to transfer genes conferring significant drought tolerance into a number of agricultural crop plants, including soybeans, rice, and maize, with exciting results in early greenhouse and field trials. If these drought-tolerant genes can also be engineered into crops grown by farmers in semiarid Asia and Africa, the result will be of exceptional value to the poor. The drought of 2001-2002 in southern Africa put 15 million people at risk of starvation. If overregulation keeps new innovations such as drought-tolerant crops out of the hands of poor African farmers in the years ahead, the costs might have to be measured in lives as well as dollars.

Miller and Conko might have added to their argument a list of the international agencies that have recently acknowledged an apparent absence of new risks to human health or the environment, at least from the various GM crop varieties that have been commercialized to date. Even in Europe, the epicenter of skepticism about genetic modification, the Research Directorate General of the European Union (EU) in 2001 released a summary of 81 separate scientific studies conducted over a 15-year period (all financed by the EU rather than private industry) finding no scientific evidence of added harm to humans or to the environment from any approved GM crops or foods. In December 2002, the French Academies of Sciences and Medicine drew a similar conclusion, as did the French Food Safety Agency. In May 2003, the Royal Society in London and the British Medical Association joined this consensus, followed by the Union of German Academies of Science and Humanities. Then in May 2004, the Food and Agriculture Organization (FAO) of the United Nations issued a 106-page report summarizing the evidence—drawn largely from a 2003 report of the International Council for Science (ICSU)—that the environmental effects of the GM crops approved so far have been similar to those of conventional agricultural crops. As for food safety, the FAO concluded in 2004 that, “to date, no verifiable untoward toxic or nutritionally deleterious effects resulting from the consumption of foods derived from genetically modified foods have been discovered anywhere in the world.”

ROBERT PAARLBERG

Professor of Political Science

Wellesley College

Wellesley, Massachusetts


Henry I. Miller and Gregory Conko make a convincing case that a different paradigm for regulating agricultural biotechnology is desperately needed. Recombinant DNA technology allows plant breeders and biologists to identify and transfer single genes that encode specific traits, rather than relying on the trial-and-error methods of conventional breeding. Thus, it is much more precise, better understood, and more predictable than conventional genetic modification.

Agricultural biotechnology should have been a boon for the green revolution, but gene-spliced crops still represent a small fraction of total world supply. Why has recombinant DNA technology not borne fruit? Unscientific fears, fanned by activists and short-sighted government policies, have led to a regulatory framework that singles out genetically modified crops for greater scrutiny and even prohibition. Guided by the “precautionary principle,” whose purpose is “to impose early preventive measures to ward off even those risks for which we have little or no basis on which to predict the future probability of harm,” European governments, in particular, have chosen to err on the side of caution when it comes to agricultural biotechnology.

The trouble with the precautionary principle is that it ignores the risks that would be reduced by a new technology and focuses only on the potential risks it might pose, creating an almost insurmountable bias against trying anything new. In the case of agricultural biotechnology, the implications can be heartbreaking. The authors describe how HarvestPlus, a charitable organization dedicated to providing nutrient-rich crops to hungry people, feels it must eschew gene-spliced crops because of regulatory barriers and uncertainties.

To begin to realize the potential that agricultural biotechnology holds, we must change the incentives that guide policymakers in Washington, the European Union, and elsewhere. Regulatory gatekeepers face well-documented incentives to err in the direction of disapproving or delaying approval of new products. If a gatekeeper approves a product that later turns out to have adverse effects, she faces the risk of being dragged before Congress and pilloried by the press. On the other hand, since the potential benefits of a new product or technology are not widely known, the risks of disapproval (or delays in approval) are largely invisible, so the consequences of delay are less severe.

Policymakers regulating agricultural biotechnology face pressure from well-organized activists to constrain the new technology. Large biotech companies do not speak out aggressively against unscientific policies, either because they don’t dare offend the regulators on whom their livelihood depends or because regulations give them a competitive advantage. There is no constituency for sound science, and the members of the general public, particularly in developing nations, who would gain so much from innovations in agricultural biotechnology, are unaware of their potential.

Miller and Conko encourage scientists, academics, the media, companies, and policymakers to help correct these biases by raising awareness of the potential benefits that molecular biotechnology promises, speaking out against irrational fears and unscientific arguments, and championing sound scientific approaches to overseeing agricultural applications. We should heed Miller and Conko’s prescription for rehabilitating agricultural biotechnology if it is to fulfill its promise.

SUSAN E. DUDLEY

Director, Regulatory Studies Program

Mercatus Center at George Mason University

Fairfax, Virginia


Henry I. Miller and Gregory Conko’s assertion that high regulatory approval costs have limited the number and variety of transgenic crops on the market to four commodity crops and essentially two traits is supported by data presented at a November 2004 workshop sponsored by the U.S. Department of Agriculture’s (USDA’s) ARS/CSREES/APHIS, the National Center for Food and Agricultural Policy (NCFAP), and Langston University. (Workshop proceedings will be available on the USDA-CSREES Web site in June 2005.)

For readers unfamiliar with the long-established system for developing new crop varieties, the public sector assumes responsibility for funding research on small-market (i.e., not profitable) crops. Plant breeders at land-grant universities and government research institutes use the funds to genetically improve crops, and then they donate the germplasm or license it to private firms for commercialization. At the November workshop, public-sector scientists and small private firms described scores of small-acreage transgenic crops that they had developed but could not release to farmers because of the $5 million to $10 million price tag for regulatory approval, not to mention additional millions to meet regulatory requirements for postcommercialization monitoring of transgenic crops. (Based on recommendations from the November workshop, the USDA agencies are now developing methodologies to help public-sector researchers move transgenic crops through the regulatory approval process.)

Miller and Conko also attribute ag biotech’s disappointing performance to “resistance from the public and activists,” perhaps because the authors accept the media’s extrapolation from activist resistance to public resistance. During the 15 years that I have made presentations on ag biotech to diverse audiences that truly represent the general public, I have encountered little resistance (even in Europe!). Instead, I continually find open-minded people, eager for factual information, full of common sense, and perfectly capable of assimilating facts and making informed decisions. In addition, objective measures of public sentiment such as product sales, surveys, and ballot initiatives consistently reveal a public that is not resistant to transgenic crops.

The activist community’s contribution to the paltry number and variety of transgenic crops on the market is indisputable, however. Their remarkable success stems not only from effectively lobbying for a regulatory process that is so costly only large companies developing commodity crops can afford it, but also from causing the premature demise of transgenic crops that had made it through the approval process. Because food companies are fearful of activist demonstrations that are so imaginative and visually compelling that the media cannot resist making them front-page news, some companies have told their suppliers that they will not buy approved transgenic crops, such as herbicide-tolerant sugar beets and pest-resistant potatoes. According to NCFAP, if U.S. farmers had grown these two transgenic crops, they would have increased total yields while decreasing pesticide and herbicide applications by 2.4 million pounds per year.

I find this paradox fascinating: Through their words and deeds, activists have created their own worst nightmare. In the words that a biologist would use, they have established an environment that selects for large corporations with deep pockets and large-scale farmers who grow huge acreages of a few commodity crops, and selects against small companies, small farms that promote sustainability through small acreages of diverse crops, and crops that maintain yields with fewer chemical inputs.

ADRIANNE MASSEY

A. Massey and Associates

Chapel Hill, North Carolina


Science and math education

Rodger W. Bybee and Elizabeth Stage have done an excellent job in highlighting some important results from the Program for International Student Assessment (PISA) and the Trends in International Mathematics and Science Study (TIMSS), both administered in 2003 (“No Country Left Behind,” Issues, Winter 2005). Although I do not disagree with the policy implications that the authors draw based on their interpretation of the assessment results, I offer the following amplifications.

The authors stress the importance of understanding these tests. I completely agree. In this connection, two additional points may be helpful: First is the changing participation of countries in these assessments, particularly in TIMSS. For example, at the 8th-grade level, 14 developed countries that participated in TIMSS-1995, mostly Organization for Economic Cooperation and Development members, did not participate in TIMSS-2003. Seven of these countries outperformed U.S. students in mathematics in 1995, and six were not statistically different from the United States. All 14 of these countries participated in PISA, and 13 outperformed U.S. students in mathematics and in problem-solving in PISA-2003, including five that had outperformed U.S. students on the TIMSS-1995 mathematics assessments: Canada, Ireland, Switzerland, France, and the Czech Republic. Thus, one needs to take care in comparing the international standing of U.S. students in 1995 and 2003.

Second, and more important, both PISA and TIMSS are snapshots in time, given to samples of students once every three or four years. Each student is allowed a limited period of 60 to 90 minutes to respond to some 25 to 30 items. Because students take different forms of the test, however, it is possible to gather information much more broadly on students’ mathematics and science knowledge. In TIMSS-2003, for instance, the whole sample of 4th graders provided responses on 313 items and 8th graders on 383 items.

Important goals of science and mathematics education cannot be assessed through these types of large-scale assessments. They cannot tell us whether students are motivated and can do the kind of sustained work necessary to succeed in science or mathematics; generate alternative solutions to problems and revise their approaches based on an iterative process of trial and revision; or search for and evaluate information needed to solve a problem or research a question. These are important competencies for students who may want to prepare for scientific and technical careers and for general scientific and mathematical literacy. To obtain the requisite information, we need well-done classroom-based assessments prepared and evaluated by teachers to add to the results of large-scale assessments. But our teachers receive little if any education, either in their preparation or professional development, on student testing and assessment. Hence, I would add to Bybee and Stage’s comments on well-qualified teachers a strong recommendation for a required two-year induction period for all new teachers, including a strand on how to evaluate students’ work.

Lastly, I strongly support the authors’ comments on assessing the tests. And we might be well served to look at the testing practices of other countries, many of which stress individual student effort and reward, particularly at the secondary level.

SENTA A. RAIZEN

WestEd National Center for Improving Science Education

Arlington, Virginia


Democratizing science

Although I agree with David H. Guston (“Forget Politicizing Science. Let’s Democratize Science!” Issues, Fall 2004) that it is important to have greater interaction between scientists and lay citizens, I believe that the proposal to involve “lay citizens more fully in review of funding applications” is not sensible.

Unfortunately, most Americans have little knowledge about science. A sizeable fraction rejects the theory of evolution. When members of Congress, most of whom are reasonably representative of lay citizens, have intervened regarding specific scientific research proposals, the results have nearly always been damaging to the cause of science.

When the National Science Foundation (NSF) was first approved by Congress in line with the proposal of Vannevar Bush, it was vetoed by President Truman because it was not democratic enough. This led to a revised version in which the president selected the members of the National Science Board. In spite of this, NSF has worked successfully with little “democratic” intervention. My experience has been that, on the whole, the NSF peer review system has worked very well in advancing fundamental science.

In the case of applied research, the major problem is that most of the financing comes from the profit-seeking private sector, as noted by Guston, or the Department of Defense. In this case, I agree that alternative funding sources are important, although this may not be easy.

LINCOLN WOLFENSTEIN

University Professor of Physics

Carnegie Mellon University

Pittsburgh, Pennsylvania

Cite this Article

“Forum – Spring 2005.” Issues in Science and Technology 21, no. 3 (Spring 2005).
