Forum – Winter 2002
The Kyoto Protocol
I have to congratulate you on publishing Richard E. Benedick’s essay on Kyoto and its aftermath (“Striking a New Deal on Climate Change,” Issues, Fall 2001). Many of us already knew that Benedick was an accomplished scholar and diplomat. What I didn’t know was that he could write with such style and humor. Nobody can do justice to, or make sense of, the Kyoto affair without humor.
Imagine respectable governments willing to actually pay money, or make their domestic industries pay money, to an ailing former enemy, in the guise of a sophisticated emissions-trading scheme, for the dual purposes of bribing the recipient to ratify a treaty and providing the “serious” governments a cheap way to buy out of emissions commitments. All under the pretense that it serves somehow to reduce emissions.
Benedick was rightly protective of the U.S. position. Maybe a little too protective. President Bush may have had some choice, and didn’t make the best choice; but one choice he didn’t have was to submit the protocol for ratification. The U.S. “commitment” was almost certainly infeasible when the Clinton administration signed the protocol in 1997. Three and a half years later, with no action toward reducing emissions, no evidence of any planning on how to reduce them, and no attempt to inform the public or the Congress about what the country might have been committed to, what might barely have been possible in 13 years–1997 to 2010–had become ridiculous. No Senate would consent to the treaty without knowing what the commitment entailed, and no president could answer that question without a year’s preparation. No such preparation appears to have been done in the Clinton administration. President Bush at least avoided hypocrisy.
The argument for staying with Kyoto was, according to Bonn conference president Jan Pronk: “It’s the only game in town.” Benedick suggests that the game can be changed. It has to be, or a new game introduced. The United States suffered ignominy in the spring of 2001; Kyoto champions look no better as the year comes to a close.
The world stage has been transformed since September 11. Conformist critics of America have been silenced by a new need to face a more immediate challenge. The United States, a “renegade” in March, became a leader in October. Perhaps, behind the glare of international terrorism, Kyoto can take advantage of the shadow and find a new, and serious, approach to the biggest environmental problem of the new century.
After a week of hard bargaining, negotiators in the Moroccan city of Marrakech finally agreed on the details of how the Kyoto Protocol will operate. The Marrakech accord, which completes the Bonn agreement from earlier this year, is exactly what the United States pressed to achieve before it repudiated the whole process. It provides unrestricted emissions trading, large-scale experiments with carbon sinks, and unprecedentedly stiff international rules for compliance. Moreover, it de facto recognizes the need of several industrialized countries, notably Japan, for effectively lower targets. Japan, like the United States, experienced larger-than-expected increases in carbon dioxide emissions in the 1990s and faces a sluggish economic outlook.
The U.S. decision to drop out of Kyoto has made Japan, along with Russia, crucial for ratification. If either decides not to take part, the whole process will collapse. Both countries did take advantage of this exceptional bargaining position (as the United States could have done). But do these bazaar-like negotiations imply that the Kyoto Protocol is fundamentally flawed, as Richard E. Benedick assumes? I don’t think so. Most successful international regimes have experienced deviations from agreed targets. Examples are Russia and other Eastern European parties, which failed to comply with the ozone regime, and Norway, which failed to comply with the North Sea commitments.
Admittedly, the effectively agreed-on reductions of 3 percent will be tiny compared with what climatologists say is needed, and a far cry from the “technological revolution” that Benedick preaches. But let me borrow from his famous Ozone Diplomacy (p. 328): “A target, any target, will provide experience and can always be adjusted. It is essential to send unambiguous signals to the market in order to stimulate competition, innovation, and investment in less carbon-intensive technologies.” And there are promising indications that the signal from Bonn was well understood by industry. Only one day after the Bonn deal was struck, the values of renewable energy companies in Spain rose by 5 percent. From my point of view, there is no need for a government lead or for obscure carbon taxes; the technology is there, or it will be developed, if we send the right signals. Everyone involved in the process acknowledges that Kyoto is a necessary start, but everyone also agrees that it will take progressive cuts in the future to get it right.
Richard E. Benedick provides an excellent account of the recent climate change negotiations. As he notes, the inadvertent hero of Kyoto’s revival has been President Bush, whose rejection of Kyoto produced a backlash that breathed new life into the negotiations. Countries drew together to adopt the Bonn Agreement in July 2001 and the recent Marrakesh Accords, which resolved the remaining issues relating to Kyoto’s implementation and put countries in a position to ratify the protocol and bring it into force.
In many respects, the Bonn Agreement was not significantly different from the deal almost reached in The Hague in November 2000. In The Hague, it was clear that the European Union (EU) would give up its insistence that countries achieve a specific percentage of their required emission reductions at home rather than through emissions trading (the so-called “supplementarity” issue) and would allow countries to receive significant credits for carbon absorption by forests and farmlands (carbon sinks). The failure at The Hague was due less to insurmountable differences than to the fact that countries began negotiating too late and simply ran out of time. Thus, although Japanese diplomacy was certainly adroit in Bonn as Benedick notes, the main new concession Japan obtained from the EU did not concern the supplementarity and sinks issues as he suggests, but rather compliance, where Japan succeeded in postponing a decision as to whether the compliance procedure would be legally binding. Moreover, it is not clear whether the sinks deal really solves Japan’s problems as Benedick suggests, since even with the new sinks credits allowed under the Bonn Agreement, Japan will still need to reduce its emissions very substantially or else buy credits from countries with a surplus such as Russia.
As Benedick notes, the United States would have significant negotiating leverage if it chose to reengage in the negotiations. Thus far, however, it has shown no sign of wanting to do so. Even before September 11, the Bush administration–to the extent that it was doing anything at all–appeared to be focused on domestic and possibly regional measures, not a new global agreement. Now, credible action appears even more unlikely, at least in the short term, both because September 11 has pushed issues such as climate change off the radar screen of high-level officials and because it has largely eliminated public and international pressure to act.
In the long term, however, if the scientific evidence regarding global warming continues to build, then pressure to take action will revive. Benedick proposes a technology strategy, which he suggests would be “far less costly and more productive” than Kyoto’s market-based approach. But although an emphasis on technology is certainly warranted, its superiority over Kyoto has not been established. Contrary to Benedick, Kyoto is not a “short-term perspective on a century-scale problem.” It establishes a long-term architecture to address climate change that relies on market-based instruments such as emissions trading, which have proved highly effective and efficient in other contexts. Emission targets for its first commitment period, from 2008 to 2012, are clearly inadequate to address climate change. But they are only the first of a series of targets, progressing toward the Framework Convention’s ultimate objective of stabilizing greenhouse gas concentrations at a safe level.
Whether Kyoto will be effective in combating climate change remains unclear, despite the breakthroughs in Bonn and Marrakesh. But the potential pitfalls relate to its practical workability, not its aspirations.
Soon after President Bush pronounced the Kyoto Protocol dead, Richard E. Benedick told me that the president might have inadvertently secured Kyoto’s survival. I thought otherwise. I thought the other Kyoto signatories might use the opportunity to let Kyoto die and to blame the United States for its demise, thereby securing a rhetorical victory. In the event, Benedick was right and I was wrong. Today, the prognosis for Kyoto entering into force looks pretty good.
This experience makes me hesitate before disagreeing with Ambassador Benedick again, but I find his suggestion that the United States might still join a revised Kyoto implausible. It certainly seems unlikely after Marrakesh (and Benedick’s article was written before that meeting). In any event, my view is that renegotiating Kyoto’s targets would be a waste of time. The essential flaw in the Kyoto approach is that it incorporates specific targets and timetables without backing them up with effective enforcement. This is a narrowly directed criticism of the treaty, but one that accords with Benedick’s assertion that Kyoto may yet prove unworkable.
Enforcement is needed to promote both participation and compliance, but Kyoto provides for neither. Its minimum participation clause is set at such a low level that the agreement can enter into force while covering less than a third of global emissions. This will not suffice to mitigate climate change. Moreover, the compliance mechanism, negotiated years after the emission limits were agreed, essentially requires that noncomplying countries punish themselves for failing to comply–a provision that is unlikely to influence behavior. Most astonishingly, Kyoto specifically prohibits compliance mechanisms with “binding consequences” unless approved by an amendment.
The consequences of this approach seem clear: Kyoto will either fail to enter into force, or it will enter into force but will not be implemented, or it will enter into force and be implemented but only because it requires that countries do next to nothing about limiting their emissions (and in Marrakesh the treaty was watered down even more to make it acceptable to Russia, Japan, and other countries). These weaknesses cannot be remedied by a minor redesign of the treaty. The basic problem stems from the requirement that countries agree to, and meet, emission limitation ceilings: the most central element of the Kyoto Protocol.
Where to go from here? Benedick proposes a technology strategy, and I agree with him wholeheartedly. Let me just add a twist to his proposal.
My suggestion is for the United States to leave Kyoto as it is and propose new protocols under the umbrella of the Framework Convention on Climate Change. These should include a protocol for joint R&D and a series of protocols establishing technology standards, including, for example, standards for automobiles requiring, say, the use of the new hybrid engines or fuel cells. Economists normally reject the setting of technology standards in a domestic setting, but standards have a strategic advantage in an international treaty: As more countries adopt a standard, it becomes more attractive for other countries to adopt the same standard. Standards create carrots (the promise of selling your product in more markets, for example) and sticks (standards create automatic trade restrictions, which are easy to enforce and are permitted by the World Trade Organization). These kinds of incentives are lacking in the Kyoto agreement. Moreover, the proposal is eminently simple and practical: A multilateral treaty for automobile standards already exists.
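The tipping logic here can be made concrete with a minimal two-country coordination game (the payoffs below are illustrative assumptions, not figures drawn from any treaty analysis). Each country chooses whether to adopt a common automobile standard, and adoption pays off only when trading partners adopt it too:

$$
\begin{array}{c|cc}
 & \text{B adopts} & \text{B declines} \\
\hline
\text{A adopts} & (3,\,3) & (0,\,1) \\
\text{A declines} & (1,\,0) & (1,\,1)
\end{array}
$$

With these payoffs, mutual adoption and mutual rejection are both stable outcomes, but once a critical mass of partners has adopted, adopting becomes every remaining country’s best response: the agreement enforces itself. That self-enforcing property is exactly what Kyoto-style emission ceilings lack.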
There are, of course, problems with the standards approach. But an ideal remedy is not achievable for global climate change because of the problems with international governance. We need to be thinking of the best second-best remedy.
Richard E. Benedick’s welcome article presents a coherent critique of the Kyoto Protocol, adding to the criticisms others of us have made of that treaty. His call for a positive U.S. initiative strikes the right note by arguing for a technology-based treaty instead of one based on quantified emissions reductions. His depiction of the Bonn negotiations shows how such emissions targets will lead participating countries into perpetual debates about how to measure and assess emissions reductions, drawing time and money away from actually doing something positive about the problem.
I would add a couple of items to his suggested policy initiative: things that the United States can do both unilaterally and in collaboration with other countries. Sweeping changes in our energy technology system will require more than the increases in R&D spending that Benedick advocates. First, such change requires regulatory reform. Numerous policies, standards, and practices make it difficult for renewable energy and energy efficiency technologies to penetrate the market. From building codes to housing development covenants to interconnection standards for distributed generation to certification for photovoltaic installers, these obstacles dramatically increase the transaction costs of making greater use of efficiency and renewable technologies. Such costs will discourage their use even when their nominal price goes down.
Second, changing the technological system requires targeted education programs. Many people and institutions make decisions that influence how easily consumers and businesses can adopt new energy technologies. Contractors, mortgage lenders, engineers, public utility commissioners, and many others could greatly promote or impede the diffusion of these technologies, yet they often know very little about them. Government policy could fund educational programs for these groups that provide them with information tailored to their particular needs.
The new economy
Dale W. Jorgenson’s excellent paper (“U.S. Economic Growth in the Information Age,” Issues, Fall 2001) finds that the fall in information technology (IT) prices helps explain the surge in U.S. growth in the 1990s. Research by the Organization for Economic Cooperation and Development (OECD) shows that the United States is not alone in this; IT plays an important role in explaining growth differentials in the OECD area in the 1990s. Rapid technological progress and strong competitive pressure in the production of IT have led to a steep decline in IT prices across the OECD, encouraging investment in IT. The available data for OECD countries show that IT investment rose from between 5 and 15 percent of total nonresidential investment in the business sector in 1980 to between 15 and 30 percent in 2000.
Although IT investment accelerated in most OECD countries, the pace of that investment and its impact on growth differed widely. For the countries for which data are available, IT investment accounted for between 0.3 and 0.9 percentage points of the growth in gross domestic product over the 1995-2000 period. The United States, Australia, and Finland received the largest boost; Japan, Germany, France, and Italy the smallest, with Canada and the United Kingdom taking an intermediate position. Software accounted for up to a third of this contribution.
IT has played two other roles in growth, however, through its impact on the overall efficiency of capital and labor, or multifactor productivity (MFP). First, in some countries, such as the United States, MFP growth reflects technological progress in the production of IT. This has enabled the number of transistors packed on a microprocessor to double every 18 months since 1965, and even more rapidly since 1995. Although OECD statistics show that the IT sector is relatively small in most countries, it can make a large contribution to growth if it expands rapidly.
The other IT-related driver of MFP is linked to its use by firms. Firm-level studies show that IT can help to improve the overall efficiency of capital and labor, in particular when combined with organizational change, better skills, and strong competition. Moreover, certain services that have invested heavily in IT, such as wholesale and retail trade, have experienced a pickup in MFP growth in recent years, in the United States, Australia, and Finland, for example. And countries where IT diffused more rapidly in the 1990s have typically seen a faster pickup in MFP growth than countries where the diffusion process was slower.
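For readers outside the growth-accounting literature, a stylized sketch of the decomposition behind the estimates quoted above may help (the OECD’s actual implementation treats income shares and quality adjustment in more detail):

$$
\Delta \ln Y \;=\; s_{\mathrm{IT}}\,\Delta \ln K_{\mathrm{IT}} + s_{K}\,\Delta \ln K_{\mathrm{other}} + s_{L}\,\Delta \ln L + \Delta \ln \mathrm{MFP},
$$

where $Y$ is output, $K$ and $L$ are capital and labor inputs, each $s$ is that input’s share of total income, and MFP is measured as the residual. The 0.3 to 0.9 percentage-point contributions cited earlier are estimates of the first term, while the transistor doubling every 18 months (equivalent to quality-adjusted growth of roughly $2^{1/1.5}-1 \approx 59$ percent a year) shows up both in rapid growth of $K_{\mathrm{IT}}$ and, for IT-producing countries, in the MFP residual.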
The above does not imply that IT is the only factor explaining growth differentials in the OECD area. The OECD work shows that other factors, such as differences in labor use, are also important. Growth is not the result of a single factor or policy; it depends on an environment conducive to growth, innovation, and change.
Dale W. Jorgenson has given us an exceedingly careful analysis of the sources of the revival in the growth of total factor productivity and of gross national product (GNP) in the U.S. economy since the mid-1990s. He attributes the growth acceleration to dramatic growth in investment and technical change in the information technology (IT) industries. Although the IT industries account for less than 5 percent of GNP, they accounted for approximately half of the productivity bubble of the late 1990s.
Jorgenson is skeptical that these high growth rates are sustainable. One reason is a slowing of growth in labor inputs: both numbers of workers and hours worked. A second is slower technological change in the IT-producing industries, associated with an anticipated lengthening of the semiconductor product cycle.
I am even more skeptical than Jorgenson about the capacity of the U.S. economy to sustain the growth rates of the late 1990s. During the periods covered by Jorgenson’s productivity growth data–high (1948–73), slow (1973–90), and resurgent (1995–99)–growth has been sustained by substantially higher rates of productivity growth in the goods-producing sectors (agriculture, mining, and manufacturing) than in the rest of the economy. And within the manufacturing sector, rapid productivity growth has been highly concentrated in a few industries such as industrial machinery and equipment and electronic and electric equipment.
By the late 1990s, the goods-producing sectors accounted for less than 20 percent of U.S. GDP. It is not unreasonable to anticipate that during the second decade of the 21st century, the share of goods-producing industries will decline to somewhere in the range of 10 percent. This means that the burden of maintaining economy-wide productivity growth will fall almost entirely on the service sector.
Jorgenson has presented data elsewhere suggesting that the service sector’s contribution to total factor productivity growth during 1958-96 was negative. It may be objected that service sector output and productivity growth are particularly difficult to measure and are underestimated in the official productivity accounts. Indeed, some service sector industries that have been able to make effective use of IT, financial services in particular, have achieved relatively high rates of productivity growth.
My own sense, however, is that there are few significant industries in the service sector where substantial productivity gains can be readily anticipated. Some of these industries, such as entertainment, will be particularly subject to what Baumol long ago termed the service sector cost disease–characterized by some combination of increasing costs and/or lower-quality output. It will take some very creative growth accounting to avoid a conclusion that the “new economy” growth rates of the late 1990s are not sustainable, either in the short run or into the second decade of the 21st century.
Genetics and medicine
In “From Genomics and Informatics to Medical Practice” (Issues, Fall 2001), Samuel C. Silverstein accurately captures the extraordinary excitement and potential of medical research emerging from the disciplines of genomics and informatics. What is possible is nothing less than the unraveling of the mysteries of many medical illnesses, together with a clarification of the links between basic causes and pathophysiology. This would facilitate progress in our ability to develop real prevention methods, match better treatments to pathologies, and in general enhance the health care of the nation.
Our ability to fully exploit the advantages of this exciting research, however, could be compromised by rigid regulations that are emerging in the arena of information privacy, regulations that emanate from legitimate concerns about the confidentiality of people’s health information. It is certainly important for people to be protected against violations of confidentiality that might in any way compromise their work status or their ability to secure insurance. But it is also important that these protective regulations be formulated in such a way that they do not become major obstacles to the nation’s ability to reap the benefits of research and do not hobble the ability of our medical institutions to provide effective patient care.
In the spirit of strong privacy control, some groups have encouraged the development of regulations under the Health Insurance Portability and Accountability Act that have unintended and problematic side effects. An estimated 1,600 pages of regulations are about to descend on the health care system as a result of overextending the intention to protect privacy. As currently formulated, they represent as substantial an obstacle to the delivery of high-quality, efficient, and cost-effective care as to the conduct of the new research, and they will pose an extraordinary burden for the nation’s hospitals. It seems appropriate to reconsider these regulations and delay their implementation in order to come to a healthier balance between legitimate privacy concerns and the needs of the nation’s health care system and research programs.
Silverstein points out that a partnership between academic health centers and industry, facilitated by the government, would enable the nation to take advantage of the medical research opportunities now made possible by the rapid development of genomics and informatics. The result can be a nation far less compromised by illness and with substantial reductions in pain, time and productivity loss, and all the other negatives that accompany disease and poor health.
Let us hope that informed policymakers will revisit the proposed privacy regulations, modifying them to provide the appropriate protections for individual privacy while allowing the country and its population to benefit from a very hopeful vision for medical research and care.
U.S.-Russian cooperation
Kenneth N. Luongo’s “Improving U.S.-Russian Nuclear Cooperation” (Issues, Fall 2001) makes a convincing case for the need to renew the partnership with Russia to improve nuclear security. The impressive achievements cited by Luongo occurred mostly in the first half of the 1990s and resulted from a partnership established to meet common national security objectives. He correctly points out that an “undercurrent of political mistrust and resentment” curtailed additional progress by the end of the decade.
To make progress now, it is important for U.S. policymakers to realize just how broken this relationship is. Over the past three years, several key cooperative nuclear programs have effectively come to a halt. The U.S. side has had no clearly articulated strategic vision and no overarching strategy to guide the myriad federal agencies or Congress in developing programs that enhance our national security while concurrently helping Russia deal with the vestiges of the huge Soviet nuclear complex. There has been little high-level U.S. attention paid to ensuring constancy of purpose and continuity in implementing key cooperative programs.
Some programs pushed by the U.S. side ran contrary to Russia’s own national security interests or energy strategy. Other programs, such as upgrading the security of Russian weapons-usable fissile materials, were redirected by the U.S. side away from a partnership to a unilateral approach that insisted on intrusive and unnecessary physical access to sensitive Russian facilities in exchange for U.S. financial support. Such actions, along with political tensions caused by NATO expansion; the bombing of Serbia; disagreements over Iran, Iraq, and Chechnya; and the U.S. push for a national missile defense depleted the bank account of trust and good will built up in the early 1990s and inhibited further progress.
On the Russian side, the early cooperative spirit demonstrated by Russian military and scientific personnel was reined in gradually by a re-energized Russian government bureaucracy and re-empowered security services. Russia’s dire financial situation prompted it to aggressively export nuclear technologies worldwide (especially to Iran) over U.S. objections. Russia’s plea for help to downsize and convert its huge nuclear military complex to civilian applications did not receive strong U.S. support. The United States focused too narrowly on the “brain drain” of Russian nuclear scientists instead of tackling the root causes. Such programs should be directed at downsizing the vastly oversized Soviet complex safely and securely to reflect current requirements and at keeping the remaining Russian nuclear institutions and their people focused on the West, rather than selling their knowledge and technologies to less desirable states or groups.
Before September 11, the new administration, like its predecessor, appeared slow to take advantage of the historic opportunity to work with Russia to construct a new nuclear security framework. Now, the new Bush-Putin spirit of cooperation should enable a much broader common strategy to guide what is to be done. Luongo’s advice is both timely and on target. I strongly endorse most of his specific recommendations. They are quite similar to ones I make in Thoughts about an Integrated Strategy for Nuclear Cooperation with Russia. In addition, I agree with Luongo that we must also focus on how to get things done. The critical element is restoring the partnership; without it, additional U.S. funds will be ineffective.
Nuclear cooperation with Russia is an expensive and long-term proposition with uncertain payoffs for U.S. security interests. Current U.S. programs are fraught with technological and conceptual gaps that could easily be exploited by determined adversaries, whether hostile states, criminal organizations, or terrorists.
Take, for example, the Department of Energy (DOE)-funded effort to improve materials protection, control, and accounting (MPC&A) at former Soviet nuclear facilities. This is touted as “the nation’s first line of defense” against the threat of proliferation from unsecured Russian stockpiles. Yet as of 2001, 10 years after the Soviet collapse and after the expenditure of approximately $750 million, less than 40 percent of the 600-odd tons of at-risk weapons material is protected in some fashion by MPC&A. Security upgrades will not be extended to the remainder until 2010 and possibly beyond, according to DOE projections. But opportunistic nuclear criminals would not obligingly wait until all facilities are MPC&A-ready before orchestrating a major diversion, so the strategic rationale for the program diminishes as the time frame for completing it lengthens. Increased funding and tighter management might accelerate the timetable, but by this time some proliferation damage may already have occurred.
Furthermore, insider corruption and economic hardship in Russia erode the deterrent value of even the advanced safeguards being installed. MPC&A systems depend on the diligence, competence, and integrity of the people tending them. They are not designed to defend against high-level threats, such as a decision by senior plant managers to sell off stocks of fissile materials to nuclear-prone Middle Eastern customers. Willing suppliers of strategic nuclear goods might well abound in Russia’s formerly secret cities, where average pay hovers at $50 per month and where some 60 percent of nuclear specialists feel compelled to supplement their regular salaries by moonlighting.
Washington is also building other lines of defense against nuclear smuggling by training and equipping former Soviet customs officials to intercept radioactive contraband at airports, ports, and border crossings. Yet Russia’s frontiers with Georgia, Azerbaijan, and Kazakhstan alone–the most likely conduits to Middle Eastern states and groups of concern–run more than 7,800 kilometers, partly through terrain where banditry and narcotics smuggling traditionally have flourished. A few radioactive monitors installed here and there across Russia’s vast southern tier would do little to deter savvy smugglers adept at deceiving or avoiding representatives of the state.
Further complicating the security picture is Russia’s international behavior in the nuclear realm, especially the wide-ranging technical and commercial relationship with Iran. Iran, which now makes no secret of its intentions to acquire weapons of mass destruction (WMD), can easily leverage networks of official contacts to gain access to Russia’s nuclear suppliers. How much fissile material has escaped from Russia under the umbrella of ostensibly legitimate business deals is anyone’s guess.
Clever adversaries and their inside collaborators can simply find too many ways to defeat or circumvent the technical fixes, export controls, and other containment measures being introduced under the cooperative programs. Certainly the programs themselves should not be defunded, but U.S. security policy must go beyond containment to focus attention on the demand side of the proliferation equation: on the main adversaries themselves. In the near term, this means deciphering adversaries’ military procurement chains (how they are organized and financed and what front companies and other intermediaries are used, for example) and disrupting nuclear deals in the making, when possible. It means monitoring the status of their nuclear programs and assessing the threats emanating from them. Such tasks must necessarily be intelligence-based, requiring a wider deployment of human collection resources in proliferation-sensitive zones in Soviet successor states and in the Middle East than is now the case.
Since nonproliferation cannot be pursued as though in a political vacuum, Washington must strive to fashion a demand-reduction strategy, exploring new options for curbing the international appetite for nuclear weapons. Demand engenders supply, as with the illicit drug trade. If adversaries are already stockpiling fissile material (which is not beyond the realm of possibility by now), the challenge is to influence them not to build or deploy such weapons. Various economic, diplomatic, and military options might come into play here, but implementing them will require a more nuanced and differentiated vision of aspiring nuclear actors and of the security concerns driving their WMD programs.
Workforce productivity
The current downturn in the economy, which has been exacerbated by the events of September 11, is raising doubts and causing uncertainty about the future of the United States in an increasingly competitive and hostile world. In “The Skills Imperative: Talent and U.S. Competitiveness” (Issues, Fall 2001), Deborah van Opstal does an excellent job of addressing many of the major issues confronting the United States, including changing demographics; the disproportionately small number of female and minority scientists and engineers; and the failure of our nation to provide every American with the skills and education needed to foster U.S. competitiveness in the global economy.
During the 1990s, the psychological sciences community developed a national behavioral science research agenda, the Human Capital Initiative, which views human potential as a basic resource that can be maximized through an understanding of the brain and behavior. The initiative identified several problem areas facing the nation, including some mentioned by van Opstal, such as aging and literacy, and some not, such as substance abuse, health (including mental health), and violence. Each of these factors has profound effects on workforce productivity and is amenable to research and intervention.
The 1990s were also the Decade of the Brain, reflecting the beginning of a revolution in the brain and behavioral sciences. It is my belief that neurobehavioral technologies can be harnessed to power a second productivity explosion, similar to the one fueled by information technology, and indeed the two may meet at the human-machine interface. We can and must “apply and extend our knowledge of how people think, learn, and remember to improve education” (testimony of Alan Kraut on the fiscal year 2002 budget of the National Science Foundation). We also can and must apply and extend our knowledge about the prevention and treatment of substance abuse and mental illness to improve job performance, about group dynamics and interpersonal conflict to prevent violence, and about preventing the cognitive decline that occurs with aging to increase the productivity of older Americans, who will become an increasingly large and critical segment of our nation’s workforce and economy.
Finally, I suggest that we not lose sight of a potential national resource that is often overlooked: gifted children, those with special intellectual, artistic, or leadership talent. Recognizing and nurturing gifted students is in the national interest just as much as recognizing and nurturing at-risk populations.
Advanced Technology Program
Glenn R. Fong’s proposed Advanced Technology Program (ATP) reforms are not new and would dramatically move the program away from its original intent (“Repositioning the Advanced Technology Program,” Issues, Fall 2001). In 1988, the statute creating the ATP stated: “There is established . . . an Advanced Technology Program . . . for the purpose of assisting U.S. businesses in creating and applying the generic technology and research results necessary to commercialize significant new scientific discoveries and technologies rapidly.” The intention was to address problems with U.S. industrial competitiveness, and the program was directed at industry rather than at “institutions that are further back in the innovative pipeline,” such as universities.
One of the strengths of the U.S. system of innovation is the richness and diversity of institutions that support technological advancement. Our university science and national labs are preeminent in the world, but alone they were not sufficient to sustain competitiveness and economic growth. The ATP, as it is currently working, plays a valuable role by providing resources and incentives for innovative companies to develop early-stage, high-risk, enabling technologies that are defined as priorities by industry.
Internal and external economic assessments have been a major program component at the ATP from its inception and have led to experience-based modifications to the program. As a result, political attacks on the program have generally been philosophical rather than substantive. The prior lack of political support does not imply that there is something wrong with the program.
Maryellen Kelley and I studied the ATP’s 1998 applicants to see how award-winning firms differed from firms that did not receive awards. We then examined both groups of firms one year later to see whether the ATP made a difference. We concluded that the ATP awarded high-risk, potentially high-payoff research projects in technical areas that were new to the firms. In addition, ATP awards led to new R&D partnerships, to more extensive linkages to other businesses, and to wider dissemination of research results, whereas nonwinners overwhelmingly did not proceed with their projects. The ATP funded the types of risky projects that firms are unlikely to pursue without government incentives and that have characteristics that economists expect to yield broad-based economic benefits. The pejorative term “corporate welfare” is very far off target.
We also found that an ATP award created a “halo effect” that attracted additional funding to ATP winners. With its rigorous independent review process, the ATP certifies that a company is a worthy investment. The ATP had become a political football; it now shows every prospect of gaining the full bipartisan support that it deserves.
I fully support Charles W. Wessner’s conclusion in “The Advanced Technology Program: It Works” (Issues, Fall 2001) that the ATP has proven its success and justifies ongoing stable support from Congress and the president. However, there is far more to the program than simply helping to fill the “valley of death” with funding for applied research, as proposed by Glenn R. Fong.
I have tracked ATP awards to Industrial Research Institute (IRI) member companies since the program was begun in 1990. The record shows that 69 IRI members received awards for just over 200 projects worth nearly $1 billion. This means that they and their partners have contributed at least another billion dollars of their own funds toward the work. In general, the larger and more technology-intensive firms have applied for and received the most awards. For example, General Electric received 12 awards; IBM 10; General Motors, Honeywell, and 3M 8 each; and Du Pont 7. Each of these companies invests at least $1 billion a year in R&D, some of them three to seven times that amount. They would not make the effort to apply for an ATP award unless the work were particularly significant at the margin; that is, work that they might not have funded on their own but would have if shared funding for higher-risk studies were available (as it is in most cases), a point made by Wessner.
Fong is correct in saying that applied research, growing at a rate of 4.7 percent in the late 1990s, was the lagging category in the total R&D effort. However, industry invested over $35 billion in applied research in 2000, more than two orders of magnitude above what was spent on the ATP in the same year. Clearly, the ATP does help to fill the valley of death, but justifying its continuation largely on that basis seems to be a stretch.
Fisheries management and fishing jobs
In “A New Approach to Managing Fisheries” (Issues, Fall 2001), Robert Repetto has made an extremely useful contribution to the field of fishery management, with his direct comparison between the U.S. and Canadian sea scallop fisheries. I am concerned, however, that Repetto may have left the impression that the benefits that will be obtained by the fishing industry through rights-based fishery management are likely to come at some substantial cost to fishing communities.
As Repetto points out, “there has never been an evaluation of actual experience in all ITQ systems worldwide using up-to-date data and an adequate, comparable assessment methodology.” My own study of rights-based fishery management leads me to question the prevailing belief that fishing communities will suffer under a system that encourages efficiency.
The “speculative and heated debate” to which Repetto refers has reached the point where many fishery stakeholders consider “efficiency” a dirty word. But the converse of efficiency is waste. And no one forthrightly defends waste. It is easy to demonstrate that efficient resource use can improve the standard of living of the people who rely on those resources, whether they are a family, a community, a nation, or the world.
Repetto puts wasteful fishery management in the context of business profits, suggesting that “if the U.S. scallop fishery were a business, its management would surely be fired, because its revenues could readily be increased by at least 50 percent while its costs were being reduced by an equal percentage.” What makes this analogy important to the average citizen is the fact that our fishery resources are public resources: poor management of fishery resources reduces the standard of living of every citizen by reducing the economic benefits that we receive from our fisheries. Redundant inputs (excess costs) used to overfish could be used elsewhere to improve medical care, education, housing, etc.
The key issue in this context, and the crux of the debate, is the distribution of the benefits of efficient resource use in both the short and long terms. Essential to this question are one’s beliefs about the role of the government as compared to the free market system in allocating scarce resources. If a government policy of rights-based fishing leads to profits in the fishing industry, should the government tax those profits away and use them for the benefit of all citizens, or should we rely on the free market to reinvest those profits for the betterment of society? Through which mechanism are local communities more likely to benefit?
Both theory and practical experience demonstrate that rights-based fishing can generate substantial economic benefits. What we need now are empirical case studies that follow the flow of benefits from efficient fisheries through their communities and the broader economy. With that knowledge we can design rights-based fishery management programs that achieve their expected benefits while accommodating legitimate concerns.
Redesigning food safety
I commend Michael R. Taylor and Sandra A. Hoffmann for their thought-provoking “Redesigning Food Safety” (Issues, Summer 2001). I fully concur that the government needs a more coordinated and structured approach to determine the most productive uses of its budget and resources for addressing food safety problems. This approach should be well founded in both the natural and social sciences and provide a framework that enables the priority ranking of issues regarding human health.
Risk analysis is an excellent descriptor for this strategy. What is most needed is a well-conceived model for conducting a risk analysis of food safety issues. The development of such a model will require the input of a broad cast of strategists representing a variety of disciplines, including public health, sociology, infectious diseases, microbiology, economics, and public policy. Considering the growing frequency with which previously unrecognized food safety issues are confronting today’s regulators, the model must be designed to allow updating as new issues surface. Properly done, risk analysis should be a work in progress.
However, all the best efforts to formulate a well-designed food safety risk analysis model for government decisionmaking will be in vain if many of the archaic food safety laws presently in place are not rescinded and new policies focused on the food safety issues of today put in their place. As Taylor and Hoffmann point out, current statutory mandates for specific modes of regulation skew the allocation of resources in ways that may not be optimal for public health and the government’s ability to contribute to risk reduction. It is ironic that some U.S. laws, such as those mandating an outdated inspection system, impede government agencies’ ability to address today’s most pressing food safety issues.
It is time we recognize the weaknesses of government food safety programs and bring government decisionmaking in line with the food safety priorities of today. It is a matter of public health.
Regulating genetically engineered foods
In “Patenting Agriculture” (Issues, Summer 2001), John H. Barton and Peter Berger describe how a few big agricultural biotechnology companies are increasingly consolidating control over the application of advanced molecular technologies in crop breeding, to the detriment of public-sector research programs with responsibility for genetic improvement of food staples in developing countries. They appropriately charge the narrow, money-motivated intellectual property rights (IPR) licensing policies of advanced research universities with causing part of the problem. And they offer promising strategies whereby the public sector could do a better job of managing its IPR to generate both public goods and income from its research.
However, poor management of IPR is only one of the ways in which the public sector is handing over control of this technology to the big multinational corporations. Increasingly onerous and expensive biosafety regulations are also a major cause. In the United States, the cost of obtaining regulatory approval for a new crop variety with a transgenic event can easily reach $30 million. Even the big companies are abandoning research programs for which the size of the market does not warrant this level of investment. Small seed and biotechnology companies are essentially priced out of the market unless they partner with the multinationals, and the public sector may be left out as well. If developing countries put in place biosafety regulations that are equally onerous, they too are likely to find themselves highly dependent on multinational corporations as their primary source of advanced new crop varieties. As with IPR, the public sector needs to find better and less expensive ways of addressing legitimate regulatory concerns, if it is to continue to play an important role in producing new crop varieties for the hundreds of millions of small-scale farmers who will not be served by the big companies.
I read with interest Patrice Laget and Mark Cantley’s “European Responses to Biotechnology: Research, Regulation, and Dialogue” (Issues, Summer 2001). In particular, I note the comment that critical and apprehensive spectators can generate “what if” questions faster than they can be answered.
But this is absolutely right. It is essential that processes remain open to question and debate. Public attitudes toward genetically modified organisms (GMOs) have shifted incrementally since the first releases in the United Kingdom, raising questions not only about safety, moral and ethical concerns, the right to consumer choice, and the apparent speed of the advance toward commercialization, but also about whether this technology offers real benefits.
In the United Kingdom, we feel it is time for public debate. We already have a new directive governing the release of GMOs, strengthening and clarifying the existing rules and increasing openness and transparency. It is time to build on this further and consider the many questions outstanding. These include not only a reassessment of the risks to embrace new scientific thinking and the implications of the latest research, but also the provision of consumer choice through the introduction of comprehensive labeling and traceability requirements and workable thresholds for the adventitious presence of GM material. We will need rules covering the cultivation of GM crops, incorporating the separation distances necessary to permit the coexistence of different types of agriculture, as well as the establishment of strong liability regimes to protect those adversely affected. Strict rules on seeds must also be considered.
It is only right that regulatory mechanisms be open to development and improvement, in order to remain not only highly effective but also trusted. A science-based regulatory regime can only function within the wider context, in which the issues of morals and ethics must also be taken into account. In this respect, we are leading the way in addressing the public’s questions and have set up the Agriculture and Environment Biotechnology Commission especially to consider these issues. In the United States, there is also increasing public awareness of GMOs, and it is important to be unafraid in answering the questions that may be raised or in reassessing existing systems. As an indication of this, the U.S. Food and Drug Administration has already circulated draft guidance for voluntary labeling of GM food.
In the United Kingdom, the uncertainty regarding GMOs runs deep, and we are only at the beginning of the process of addressing all the issues involved. Doing so properly will take time if we are to build a firm foundation on which GMOs can be used safely while preserving freedom of choice. Biotechnology comprises many enabling technologies, only one branch of which uses genetic modification at its core. This in turn will be only one component of a program of sustainable development for agriculture, which the United Kingdom and indeed the world must now address; and the role of GM technology in that sustainable development remains to be assessed.
Biological invasions
The spread of invasive species is, together with climate change, one of the most serious global environmental changes underway. In “Needed: A National Center for Biological Invasions” (Issues, Summer 2001), Don C. Schmitz and Daniel Simberloff argue that current responses are “highly ineffective.” I could not agree more.
The interagency National Invasive Species Council laid out the federal government’s first invasive species management plan in January 2001. Less than 12 months later, the council estimates that agencies are already six to eight months late in implementing it. In the absence of timely federal leadership, states are fending for themselves. As a result, the policy response that Schmitz and Simberloff describe as “fragmented and piecemeal” is becoming more so.
Clearly, now is the time for bold ideas. Schmitz and Simberloff present one. They make a powerful case for a national center to coordinate efforts. This kind of coordination would be a major step forward, but it will not by itself solve our problems. Bad or overly lax policy, perfectly coordinated, is no solution. Additional ideas deserve a hearing.
Making the National Invasive Species Act (NISA) live up to its name is another bold idea, and one of the best. We must address invasive species more comprehensively. This means filling gaps in law, regulation, programs, and funding. For example, we need more resources to manage the relatively neglected nonagricultural invaders. We need to ensure that all intentionally imported species are effectively screened for invasiveness before import and that those known or highly likely to be harmful are kept out. In the long run, we will need new and more helpful legislation. Now, though, the 2002 reauthorization of NISA gives us a chance to improve efforts considerably.
Although useful, NISA addresses only a subset of invasive species problems: largely those related to organisms that arrive inadvertently in ships’ ballast water. Also, NISA affects only a portion of international ship traffic. Its toughest requirements apply to ships with just a few destinations.
To its credit, NISA set in motion a series of policy experiments and technological innovations. We should strengthen this approach and apply it to more of the routes by which aquatic species travel. For example, mandatory ballast water management should replace the voluntary program, which has a woefully inadequate rate of compliance. States should receive additional help for implementing their own management plans and making them more complete. We should better address the potentially devastating impacts of intentional aquatic introductions, especially those by the aquarium, aquaculture, and nursery industries.
In many ways, invasive species policy is in its infancy. As Schmitz and Simberloff show, the time is ripe to borrow the best and brightest ideas from other areas of environmental protection. Myriad possibilities remain untapped. Now we must be bold enough to try them.