Save our seas
As Carl Safina and Sarah Chasis point out in their article, “Saving the Oceans” (Issues, Fall 2004), public awareness about the condition of our oceans is growing, in part because of the release of reports by two blue ribbon oceans commissions. Providing a thorough comparison of the two commission reports, the authors drive home the fact that both commissions, despite their differences, reached the same conclusion: Our oceans are in dire straits. As cochairs of the bipartisan U.S. House of Representatives Oceans Caucus, we believe that the federal government is obligated to protect and sustainably cultivate our oceans.
Both the U.S. Commission on Ocean Policy and the Pew Oceans Commission support the need for broad ocean policy changes. Without a more comprehensive approach, our nation is sorely limited in its ability to address issues like climate change, ecosystem-based management, shipping, invasive species, fisheries, water quality, human health, coastal development, and public education. Federal agencies need to coordinate better with one another as well as with state and regional agencies and groups working on oceans-related issues.
In July 2004, the House Oceans Caucus introduced the Oceans Conservation, Education and National Strategy for the 21st Century Act (Oceans-21). Ours is a big bill with a big goal: compelling the government to rethink how it approaches oceans. The last time the government seriously considered its ocean management practices was more than 30 years ago, following the release of the Stratton Commission report. Since then, scientific understanding has grown by leaps and bounds, challenging policymakers to keep pace. The current ocean regulatory framework is piecemeal, ignoring interrelationships among diverse species and processes. Oceans-21 reworks this regulatory framework and puts ecosystem management at the forefront.
We, like the authors, believe that instilling an oceans stewardship ethic across the country is fundamental. Oceans-21 creates an education framework to promote public awareness and appreciation of the oceans in meeting our nation’s economic, social, and environmental needs. Only by understanding the critical role of oceans in our lives will people begin to understand the magnitude of our current crisis. The future of our oceans is a jobs issue, a security issue, and an environmental issue. How we deal with this crisis will determine what kind of world we pass on to our children and grandchildren.
The 109th Congress is now beginning. This session holds great promise for oceans legislation, especially with the president’s public response to the U.S. Commission’s report. The House Oceans Caucus will continue to tirelessly drive oceans issues into the limelight by expanding discussions and reintroducing Oceans-21, as well as other oceans-related legislation. Congress has heard the call for action, and we are answering.
The article by Carl Safina and Sarah Chasis presents a good summary of findings and recommendations from the Pew Oceans Commission and the U.S. Commission on Ocean Policy. Unfortunately, it was written before an election that may have condemned the good work of both commissions to the dustbin of history. It appears unrealistic to expect positive, meaningful (i.e., effective) action from Washington on ocean stewardship issues during the next several years. Sound science and proof of the need for action exist; probably lacking are the leadership vision and will to act. However, some action can be expected relative to increased emphasis on global ocean monitoring and observing systems in a continuing effort to increase understanding of ocean dynamics, ecosystems, and atmospheric interactions, including climate change. In any event, adequate funding for ocean initiatives will be problematic.
Perhaps the best hope for keeping both reports alive is to make the most of what little Washington is prepared to do, guard against regressive national ocean legislation, and focus energy and efforts on progressive initiatives at local, state, and international levels. California is a good example.
On October 18, 2004, Governor Schwarzenegger unveiled California’s Action Strategy, which seeks to restore and preserve the biological productivity of ocean waters and marine water quality, ensure an aquatic environment that the public can safely enjoy, and support ocean-dependent economic activities. It advances a strong policy for ocean protection and calls for effective integration of government efforts to achieve that goal. Accordingly, it improves the way in which California governs ocean resources and sets forth a strategy for ocean-related research, education, and technological advances. The action strategy includes the establishment of an Ocean Protection Council, funding to support council actions, implementation of a state ocean current monitoring system, and carrying out the state’s Marine Life Protection Act, which includes the establishment of marine protected areas. The strategy will be coordinated with the state’s coastal management, fisheries, and coastal water quality protection programs; the National Marine Sanctuary; the National Estuarine Research Reserve; and the Environmental Protection Agency’s National Estuary programs, among others. This pragmatic and active ocean agenda is consistent with Pew and U.S. Ocean Commission recommendations and can be emulated by other coastal states.
International efforts to advance ocean conservation programs should be supported. Much can also be done at the local level. Mirroring local, state, and international initiatives to address global climate change (initiatives taken despite inaction by the United States), this approach acts locally while embracing global concerns. It promotes public education about ocean issues and fosters coalition constituency-building that is vital to future campaigns to enact national ocean stewardship programs and policies. It may even serve to inspire, if not compel, national action.
The Pew and U.S. Ocean Commission reports were clarion calls to action that may well fall on deaf leadership ears in Washington. The crises and threats facing oceans will only grow in magnitude and intensity. Contrary to the implication in the title “Saving the Oceans,” oceans and coasts, like coveted geography everywhere, are never finally saved— they are always being saved. This is another reason why the work of ocean and coastal conservation supporters and advocates is never done, and why we can never give up the struggle.
In “Building a Transatlantic Biotech Partnership” (Issues, Fall 2004), Nigel Purvis suggests that it is time for the United States and Europe to look toward their mutual interests in biotechnology, thus avoiding further harm from the current impasse. He proposes that the United States and Europe jointly address the needs of developing nations as one step toward a more productive relationship.
I fully support this recommendation. As Purvis notes, the U.S. Agency for International Development (USAID) has already renewed its focus on agriculture programs, and I want to assure him that biotechnology is fully a part of this focus. Our renewed emphasis includes a more than fourfold increase in support for biotechnology to contribute to improving agricultural productivity. USAID currently supports bilateral biotechnology programs with more than a dozen countries and several African regional research and intergovernmental organizations.
In addition to these bilateral efforts, we are already working with our European colleagues and other donors to support multilateral approaches. The Consultative Group on International Agricultural Research recently launched two new programs, on the biofortification of staple crops and on genetic resources, which include the use of modern biotechnology. The United States has worked through the G8 process to include biotechnology as one tool in the arsenal for addressing economic growth and hunger prevention.
Where I differ from Purvis’s analysis is over his characterization of developing countries’ interests. First, developing countries are not bystanders in this debate. They were active participants long before the recent controversy over U.S. food aid. The outcome of the Cartagena Protocol in 2000 was due in large part to the strong participation of developing countries, whose negotiating positions were independent of those of the United States or European Union. Second, a greater range of developing countries such as India, South Africa, the Philippines, and Burkina Faso are also becoming producers, and thus potential exporters, of these crops. The United States and Europe cannot chart the way forward for biotechnology alone; developing countries are engaged in the technical and policy discussions already.
As is evident in these positions, developing countries are not likely to accept assistance “directed primarily to . . . keep their markets open to biotech imports and respect global norms on intellectual property rights.” USAID’s highest priority is to ensure that developing countries themselves have access to the tools of modern biotechnology to develop bioengineered crops that meet their agricultural needs. Many crops of importance to developing countries—cassava, bananas, sorghum, and sweet potatoes—are not marketed by the multinational seed companies and thus require public support. This will help us realize our first goal for biotechnology, which is economic prosperity and reduced hunger through agricultural development.
Tangible experience with biotechnology among more developing countries is a prerequisite to achieving Purvis’s goals of global scientific regulatory standards and open markets. We will not succeed until developing countries have more at stake than acceptance of U.S. and European products and have the scientific expertise to implement technical regulations effectively. This can be achieved, as evidenced by the Green Revolution, which turned chronically food-insecure countries into agricultural exporters who now flex their muscles in the World Trade Organization. Ensuring that the current impasse between the United States and Europe does not cause broader harm will require that we recognize that developing countries may have as great a role in ensuring the future of biotechnology as the United States and Europe.
As Nigel Purvis points out, attitudes about biopharmaceuticals among the general public in Europe and the United States have more in common than not.
Yet there the similarities end. The regulatory approaches pursued by governments on both continents differ significantly, adversely affecting patient care and, in part, accounting for the departure of many of Europe’s best scientists to the United States.
Both European Commission and European Union member state regulations deny European consumers access to valuable information about new biotech drugs. By limiting consumer awareness, the European Commission and member states limit the ability of patients and doctors to choose from the best medically available therapies.
When setting drug reimbursement policies, some countries place restrictive limits on the ability of physicians to prescribe innovative biopharmaceuticals and then pay inflated prices for generic products. In the end, patient care suffers.
A recent report by the German Association of Research-Based Pharmaceutical Companies highlights the extent of the problem. The report found that in any given year, nearly 20 million cases can be identified—including cases of hypertension, dementia, depression, coronary heart disease, migraines, multiple sclerosis, osteoporosis, and rheumatoid arthritis—in which patients either did not receive any drug therapy or were treated insufficiently.
Patients in both the United States and Europe are optimistic about the benefits and improved health care available today through biotechnology. But how government policies limit or encourage access to those benefits affects patients everywhere.
Lewis M. Branscomb’s penetrating and comprehensive article “Science, Politics, and U.S. Democracy” (Issues, Fall 2004) ends with the sentence “Policymaking by ideology requires reality to be set aside; it can be maintained only by moving toward ever more authoritarian forms of governance.” This should be read as a warning that what has gone awry at the intersection between science and politics is dangerous not only because it can lead to policies that are wasteful, damaging, or futile, but because this development contributes to forces that can, over time, endanger American democracy itself.
As Branscomb emphasizes, in the United States the paths by which science feeds into government form a fragile organism that cannot withstand sustained abuse by the powers that be. The law is too blunt an instrument to provide appropriate protection. The Whistleblower Protection Act illustrates the problem, for it only applies if an existing statute or regulation has been violated, not if government scientists allege that their superiors have engaged in or ordered breaches of the ethical code on which science is founded. Furthermore, it is difficult to construct legislation that would provide such protection without unduly hampering the effectiveness of the government’s scientific institutions.
Democratic government depends for its survival not only on a body of recorded law but equally on an unwritten code of ethical conduct that the powerful components of the body politic respect. If that code is seen as a quaint annoyance that can be forgotten whenever it stands in the way, the whole body is threatened, not just its scientific organs.
The primacy of ideology over science to which Branscomb refers is just one facet of the growing primacy of ideology in American politics. This trend appears to have deep roots in American culture and is not about to disappear. The friction that this trend is producing is so visible at the interface between politics and science because this is where conflicts between ideology and reality are starkly evident and most difficult to ignore. For that reason, scientists have a special responsibility to make clear what is at stake to their fellow citizens. The scientific community has the potential to meet this responsibility because it enjoys the respect of the public, and established scientists are relatively invulnerable to political retribution. Whether this potential will be transformed into sufficient resolve and energy to face the challenge, only time will tell.
Lewis M. Branscomb’s article tells instructive stories about presidents from both parties who have violated the unwritten rules of science advice: rules about balance, objectivity, and freedom of expression. Most of the stories involve presidents who felt wounded by scientists, and scientists who were punished for violating the unwritten rules of political loyalty.
This discussion could usefully separate science advice into two streams, traditionally called policy for science and science for policy. The unwritten rules in policy for science are macroshaping with microautonomy. Elected officials shape the allocation of research funds at a broad level and make direct decisions on big-ticket facilities, but are supposed to leave the details of what gets funded to researchers, particularly at the level of project selection. In his focus on presidential interventions, Branscomb does not point out that a growing number of members of Congress have been violating these unwritten rules during the past few decades, with the active cooperation of many major research universities, through earmarking funding for specific projects and facilities. Even though these activities take money away from strategically important projects that have passed rigorous standards of quality control, the activities not only continue but grow.
For the public, the stakes may be even higher in science for policy: the use of scientific expertise in regulatory and other policy decisions. Most of Branscomb’s stories describe times when researchers and presidents disagreed on the policy implications of scientific evidence. The research community has consistently and rightly maintained that the public is served best when researchers can speak out on such matters with impunity. Public debate on important issues such as climate change and toxic substances needs to be fully informed if democracy—decision-making by the people, for the people—is to survive in an age of increasing technical content in public policy decisions.
The most disturbing of Branscomb’s stories tell about a mixing of these two streams of policy, of times when speaking out on policy issues has brought retribution in funding. Branscomb even seems to sanction this mixing by stressing the symbiosis of science and politics, including the need for science to make friends in the political world in order to maintain the flow of money into laboratories. This is a dangerous path to follow. The Office of Management and Budget’s first draft of its new rules on the independence of regulatory peer reviewers incorporated a particularly corrosive version of this mixing by declaring that any researcher who had funding from a public agency was not independent enough to provide scientific advice on its regulatory actions. As many observers rightly pointed out, this rule would have allowed technical experts from the private firms being regulated to serve as peer reviewers, while eliminating publicly funded researchers. This aspect of the proposed rule has fortunately been removed.
The public needs to protect its supply of balanced, objective scientific advice and knowledge from threats in both policy for science and science for policy. Although Branscomb’s article is aimed at the research community, broader publics should also be organizing for action in both areas.
Lewis M. Branscomb proposes four rules to “help ensure sound and uncorrupted science-based public decisions.” I judge the rule key to be that “The president should formally document the policies that are to govern the relationship between science advice and policy.”
In George W. Bush’s second term, this would be the opportunity to quell overzealous staff in the White House, departments, and agencies, who, in the absence of explicit documented presidential policy, rely on their own predilections and readings of administration policy and admissibility.
Bush’s director of the Office of Science and Technology Policy has maintained that it is certainly not the policy of President Bush to disregard or distort science advice or to appoint any but the most competent people to advisory committees. But where is the presidential directive to which the administration, the Congress, and the public can hold government officials accountable?
Explicit presidential policy should incorporate the 1958 code of ethics for government employees (provided to me many times as a consultant or special government employee):
Code of Ethics for Government Service
Any person in Government service should:
- Put loyalty to the highest moral principles and to country above loyalty to persons, party, or Government department.
- Uphold the Constitution, laws, and regulations of the United States and of all governments therein and never be a party to their evasion. . . .
- Never discriminate unfairly by the dispensing of special favors or privileges to anyone, whether for remuneration or not; and never accept, for himself or herself or for family members, favors or benefits under circumstances which might be construed by reasonable persons as influencing the performance of governmental duties.
- Make no private promises of any kind binding upon the duties of office, since a Government employee has no private word which can be binding on public duty. . . .
- Expose corruption wherever discovered.
- Uphold these principles, ever conscious that public office is a public trust.
(The Code of Ethics for Government Service can be found at 5 C.F.R., Part 2635. This version of the code was retrieved on 10/22/04 from www.dscc.dla.mil/downloads/legal/ethicsinfo/government_service.doc.)
The national interest lies in getting the best people into government and advisory positions. Although there is some benefit in having officials at various levels who have good channels of communication with the White House as a result of friendship or political affiliation, it seems to me that the appropriate way to assemble a slate of candidates for each position is through nonpartisan (rather than bipartisan) staffing committees, not the White House personnel office. The appointments and the ensuing conduct should be governed by the code above.
“Sink or Swim Time for U.S. Fishery Policy” (Issues, Fall 2004) is a helpful contribution to the continuing debate over U.S. fishery policy. However, James N. Sanchirico and Susan S. Hanna might give readers the impression that policymakers needed the reports of the two recent ocean policy commissions in order to understand the root cause of the problems facing our fisheries. That misperception might lead to expectations that appropriate policy will naturally follow the illumination of the problem.
Readers should recognize that the fishery problems outlined by the Pew Oceans Commission and the U.S. Commission on Ocean Policy were even more thoroughly explained in the 1969 report of the U.S. Commission on Marine Science, Engineering, and Resources (the Stratton Commission). That report led to many significant changes in government structure and policy related to the oceans. In terms of fundamental fishery policy, however, one must conclude that policymakers have essentially ignored the findings of the Stratton Commission concerning the root cause of fishery management problems.
The Stratton Commission clearly explained the biological and economic destructiveness that results from competition among fishermen for catch shares that are up for grabs. The commission recognized the joint biological and economic benefits that could be obtained for and from our fishery resources by having an objective of producing the largest net economic return consistent with the biological capabilities of the exploited stocks. If the recommendations of the Stratton Commission had been followed by fishery managers over the past 35 years, our fisheries would not be at the critical juncture they face today.
Ecosystem management and aligning incentives toward sustainability are not new ideas whose discovery was needed to allow progress on fishery management. As early as 1969, the Stratton Commission had explained the incentives facing fishermen under the open-access common-property regime that characterized most U.S. fisheries. Most of our current fishery management problems reflect the failure to adopt policies that align the incentives of fishermen with the broader interests of society. Sanchirico and Hanna offer specific policy actions that can be taken now to align incentives. But we should not assume that that knowledge will be acted on. The same politically oriented cautions that were offered in the Stratton Commission report are in play today. The public at large exhibits a “rational ignorance” concerning fishery policy. And fishery bioeconomics is too deep a subject for mass media treatment. Necessary changes in policy will require a continuing education effort aimed at the fishing community and their representatives. These fishery representatives include public officials who are nominal representatives of the public, with responsibility for the management of public-trust fishery resources.
As a commercial fisherman, I spent about half of my 40-year career fighting against the ideas that Sanchirico and Hanna put forth. When I finally convinced myself that the fishing industry’s opposition to those ideas was self-destructive, I became an advocate for policies that align the incentives facing fishermen with the interests of society. I welcome the support provided by the two ocean commissions, but I know that their pronouncements will not end the struggle.
James N. Sanchirico and Susan S. Hanna have identified the important issues facing U.S. and world fisheries managers, and I agree with the major points they make. However, few have recognized that the problem with U.S. fisheries is primarily economic, not biological. There is no decline in the total yields from U.S. fisheries, whether measured economically or in biomass; the decline is in the profitability of fishing. U.S. fisheries are currently producing, on a sustainable basis, 85 percent of their potential biological yield. The crisis is not from overfishing but from how we manage the social and economic aspects of our fishery.
Although I agree that we could do better in terms of biological production, increasing U.S. biological yields by 15 percent is not going to solve any problems. We are going to cure our fisheries problems by solving the economics, not by fine-tuning biological production. Sanchirico and Hanna are right on target when they list ending the race for fish and aligning incentives as the highest priorities, and both of these items were included in the recommendations of the two ocean commissions. However, the Pew Commission was almost totally mute on how to achieve this and emphasized a strongly top-down approach to solving biological problems, without discussing or evaluating incentives in any detail. The U.S. Commission was much more thorough in looking at alternatives for aligning incentives.
There remains a strong thread through the reports of both commissions: the idea that the solutions for U.S. fisheries will come from better science, stricter adherence to catch limits, marine protected areas, and ecosystem management. I refer to these solutions as Band-aids, stopping superficial bleeding while ignoring the real problems. The U.S. Commission recommended the adoption of “dedicated access privileges,” including individual quotas, community quotas, formation of cooperatives, and territorial fishing rights. Movement to these forms of access and the associated economic rationalization that comes with them should be the highest priority. In U.S. fisheries, the biological yield is good and the economic value of the harvest is good, but the profitability of fishing is terrible.
Finally, the United States has adopted a model of fisheries regulation that includes centralized control through regional fisheries management councils or state agencies and almost total public funding of research and management. The more successful models from the rest of the world suggest that more active user involvement in science and decisionmaking and having those who profit from the exploitation of fish resources pay all the costs of management will be much more likely to result in good outcomes.
The more time we spend on restructuring the agencies and trying to decide what ecosystem management is, the longer we will delay curing the problems afflicting U.S. fisheries.
James N. Sanchirico and Susan S. Hanna are on target in saying that we are at a critical time in U.S. fishery policy, but I would expand that view to include ocean policy internationally. The problems of degradation of the marine environment, overexploitation of resources, and insufficiency of current governance for ocean and coastal areas are global. Our oceans are under serious threat, and major changes in policy are urgently needed.
A central feature of the U.S. Commission on Ocean Policy and the Pew Oceans Commission reports is the call for the implementation of ecosystem-based management: management of human impacts on marine ecosystems that is designed to conserve ecosystem goods and services. Ecosystem-based management needs to explicitly consider interactions among ecosystem components and properties, the cumulative impacts of human and natural change, and the need for clarity and coherence in management policy. Fisheries management must be part of this overall move toward ecosystem-based management, not remain as an isolated sector of policy.
The U.S. Commission recommends some needed changes in fisheries policy, but perhaps the most important change is instituting greater accountability for conservation in the management system. U.S. fisheries management, despite its problems and failures, has some successful features: 1) there is a strong scientific advisory structure, 2) there is clear stakeholder involvement in management decisions, and 3) there is a governance structure that has the potential to deal with emerging issues. In order for this system to live up to its potential, accountability must be improved by ensuring that there is a positive obligation to implement strong management even if the stakeholder process of plan development fails. Under the current system, regional councils prepare management plans for approval or disapproval by the National Marine Fisheries Service (NMFS). If a plan is not developed in a timely manner or doesn’t meet conservation needs and is rejected, then usually no new management is implemented until a council develops a new plan, even if current management is clearly failing to conserve vital resources. In other words, the NMFS is presented with the choice of whether a submitted plan is better than nothing. Is that really the perspective we want for management? Alternatively, the U.S. Commission recommends a strong default rule: If a plan doesn’t meet conservation standards, no fishing should occur until one that does is available. In other words, shift the burden of conservation onto the management system, rather than the resource. Similarly, there must be an absolute and immediate obligation to adjust a plan if it doesn’t perform as intended.
Just managing fisheries is not enough to protect the marine environment. A broad suite of conservation measures is needed. The U.S. Commission calls for ecosystem-based management to be developed regionally and locally in a bottom-up approach to management. But in all cases there must be a positive, timely obligation for conservation. Participatory processes take time, and we need to remember that often the fish can’t wait.
“Protecting Public Anonymity,” by M. Granger Morgan and Elaine Newton (Issues, Fall 2004), deals with one problem by exacerbating another. If someone breaks into my home, I don’t expect the authorities to punish me for carelessness but to punish the perpetrator. Yet most of the methods for protecting anonymity put the burden on those who collect or manage databases. Why not a clearer definition of what an abuse is and of punishments for the abusers?
We already allow the merging of databases with information about individuals, because a great deal of research requires a lot of information about each individual, not for revelation but for statistical purposes. It is true that even statistical findings can lead to stereotyped conclusions about subgroups in society, but that can be reduced by proper presentations of results.
Important survey research uses personal interviews to collect much information directly from individuals, but highly productive improvements in the data can be made economically by adding information from other sources, ranging from data sets with individuals identified to those containing information about the area where a person lives or the nature of his or her occupation. And great reductions can be made in respondents’ burdens if some information can be made available from other sources. Methodologically, we can learn about response error and improve the data by comparing data from more than one source. Explanations of situation or behavior must allow for complex interaction effects.
We already have protections for when personal data are merged, and prohibitions against the ransacking of data to reveal individuals. At the University of Michigan’s Institute for Social Research, we have been collecting rich individual data for years, including the use of reinterview panels, without any case of loss of anonymity.
The challenge to our society is to calibrate the balance between personal privacy and society’s security in accord with the constant evolution of technology. This public policy debate must include the full participation of academics, business leaders, civil libertarians, law enforcement and national security officials, and technologists, along with our elected political leaders, who reflect the attitudes of the citizens.
The challenge is global because technology erases national borders but cannot eliminate the cultural and historical attitudes on the individual issues of personal privacy and national security as well as their convergence. Europe’s attitudes, for example, on the convergence of these issues are shaped by the historic experiences of Nazi occupation and by recent domestic terrorism in England, Ireland, Italy, Germany, France, and Spain. Other areas such as Hong Kong, Australia, and Japan have distinct national ideas about privacy.
Companies such as EDS are engaged in dialogues and partnerships with the U.S. government as well as governments in Europe, Asia, and Latin America and with multilateral governmental organizations to determine a process that reflects the consensus of all the participants in the robust debate about the “balance” between personal privacy and security. This global conversation is vertical and horizontal, because some information—personal financial and health records, for example—is particularly sensitive and is therefore more regulated. EDS has been involved in this discussion for well over 10 years and plans to continue its engagement in these public/private dialogues for years to come.
The article by M. Granger Morgan and Elaine Newton was troublesome, because it suggested that anonymity was somehow a “right” in the United States. I disagree. In an era of search engines and digitization of records, people aren’t anonymous. That’s a reality. Controls can be put in place to provide privacy protections and punish actual abuses and serious crimes such as identity theft, but the idea that complete personal anonymity is possible, much less a “right” in the United States, is naïve and simplistic. Frankly, after September 11, every passenger and crew member on an airplane feels more secure knowing that every other passenger was “screened” by the same regime and that no one is truly anonymous to the authorities.
At the same time, the article was constructive, because there was the strong suggestion that a privacy/security regime could be instituted voluntarily in partnership with business, which frankly is more sensitive to the realities of the market, technology, and our customers’ concerns than is government regulation.
Sometimes, there is amnesia about a central fact: The customer sets the rules, because the customer is the final arbiter. Remember: If privacy is the issue, as in the financial and healthcare sectors, then the processes adapt to that concern. If security is the issue, as in airline travel, then the processes adapt to that concern as was demonstrated in the recent negotiations between the United States and the European Union on airline passenger lists. If there is customer concern about data from radio frequency identification devices, then the rules and business practices will evolve to address those concerns. Sometimes, the government will prod the process forward. In this space of privacy and security convergence with technology deployment, the odds are that government regulation is a lagging indicator.
At the same time, the article raises the legitimate concern about governmental abuse of its powers. History has certainly provided plenty of examples for the concern to be warranted. However, the lesson to be drawn from history is that regulation should be a reaction to demonstrated abuses rather than an attempt to anticipate and proscribe abuse. The marketplace can generate its own more powerful and immediate remedy, especially with an issue where consumer confidence is key to market success.
The article raises a number of points but fails to recognize the current and robust engagement of all participants—academic, business, and government—in the pursuit of a balance. As a participant in many of these dialogues and forums, EDS remains committed to the global dialogue to provide privacy and security simultaneously to our customers and our customers’ customers in full partnership with elected and appointed leaders of governments.
Michael Csaszar and Bhavya Lal (“Improving Health in Developing Countries,” Issues, Fall 2004) have done a service by drawing attention to the need for more research on global health problems. The key issue is how to institutionalize appropriate health R&D financing.
The United States, Japan, and the European Union governments fund nearly half of the world’s health research. Although much of that research eventually benefits poor countries, many global health problems are underfunded. Unfortunately, it is hard to convince legislators in rich countries, who answer to their domestic constituencies, to allocate funds for research on the diseases of the poor abroad. The Grand Challenges in Global Health initiative of the Gates Foundation and the National Institutes of Health offers a model for tapping governmental health research capacity for the diseases of the poor.
The pharmaceutical industry last year provided more than 40 percent of world health R&D expenditures—some $33 billion. The industry brings to market only a small percentage of the products it studies, earning enough from a tiny percentage of very successful products to pay for its entire R&D, manufacturing, and marketing enterprise. Research-intensive pharmaceutical firms are not more profitable than other companies (otherwise the stock market would bid up their share prices). Yet their successful model for financing R&D is under attack as overly costly to the consumer. Moreover, the low-cost preventive measures that are most appropriate to the needs of developing countries are unattractive to the pharmaceutical industry, because people will pay less to prevent disease than to cure it. Tax inducements and regulatory reform should be considered to stimulate industrial R&D.
Ultimately, pharmaceutical companies need strong markets for their products in developing nations. The Interagency Pharmaceutical Coordination Group (IPC) offers one approach to creating these markets. Similarly, the Global Alliance for Vaccines and Immunizations and the Global Fund to Fight AIDS, Tuberculosis and Malaria are providing money to buy vaccines and pharmaceuticals for developing nations.
Philanthropic foundations, including the Howard Hughes Medical Institute, the Wellcome Trust, and the Gates Foundation, fund less than 10 percent of world health research. Yet their leadership has been and is critically important.
After the creation of the Tropical Disease Research Program in 1975, new institutions were created to further encourage research on global health problems, notably the Global Forum for Health Research, the Council on Health Research and Development, the International AIDS Vaccine Initiative, and the Initiative on Public-Private Partnerships for Health. Still, the key to providing more technological innovations appropriate to developing nations and to building their health science capacity probably lies in creating more public and political support for existing institutions while improving their policies and programs.
Michael Csaszar and Bhavya Lal raise important concerns relating to health in developing countries. By focusing on a systems approach, they identify one of the most critical factors that accounts for the success or failure of project activities in developing countries.
The most common source of failure in health innovation systems arises from the lack of focus on specific missions. Even where research missions exist, they tend to be formulated in the developed countries and extended to developing countries. This common practice often erodes the potential for local ownership and undermines trust in the health systems being promoted.
A second cause of failure is the poor choice of collaborating institutions in developing countries. Many of the international research programs do not make effective use of knowledge nodes such as universities in developing countries. Knowledge-based health innovation systems that are not effectively linked to university research are unlikely to add much value to long-term capacity-building in developing countries.
Probably the most challenging area for health innovation systems is the creation of technological alliances needed to facilitate the development of drugs of relevance to the tropics. A number of proposals have been advanced for increasing research investment in this area. They range from networks of existing institutions to new technology-development alliances, many of which focus on vaccine development. Although these arrangements seek to use a systems approach in their activities, the extent to which they involve developing-country universities, research institutions, and private enterprises is not clear. The design of such incomplete health innovation systems can only guarantee failure.
A systems approach to building research capacity and finding ways to apply the research findings to benefit the health of a population is an attractive proposition. I would like to highlight two fundamental issues that must be addressed if the proposed concept is to be successful. My response is based on my experience at SATELLIFE (www.healthnet.org), a nonprofit organization serving the urgent health information needs of the world’s poorest countries through the innovative use of information technology for the past 15 years.
First, what are the mechanisms by which networks will be created for the sharing of research results with health practitioners in developing countries? What are the formal, reliable systems for knowledge dissemination leading to an evidence-based practice of health care in a country? How does the knowledge move from the capital cities, where it is generated, to rural areas, where health care providers are scarce and 90% of the population lives? In these rural areas, nurses and midwives are the frontline health workers who mainly see patients. These are challenging questions with no easy answers, but clearly information and communications technology can play a significant role.
Second, information poverty plagues researchers and health practitioners in emerging and developing countries. Many medical libraries cannot afford to subscribe to journals that are vital and indispensable informational resources for conducting research. How does one gain access to the most current, reliable, scientifically accurate knowledge that informs research and data for decisionmaking? Poor telecommunications infrastructure, expensive Internet access, poor bandwidth to surf the Web, and the lack of computers and training in their use often work against the researcher in resource-poor countries. Timely, affordable, and easy access to relevant knowledge has a profound impact on policy formulation and the delivery of health care in a country. On October 22, 2004, a subscriber from Sri Lanka sent a message to our email-based discussion group on essential drugs, trying to locate a full-text article: “We don’t have Vioxx here in Sri Lanka but there are about 12 registered brands of rofecoxib in the market. I would be thankful if anyone having access to that article can mail it to me as an attachment. (We don’t have access to many medical journals!)” The digital divide is not only about computers and connections to the Internet but also about the social consequences of the lack of connectivity.
The systems approach to developing research capacity and disseminating findings most likely addresses these crucial barriers in an implicit manner. But they need to be made more explicit so as to garner the necessary resources at the social/governmental, organizational, physical, and human levels to make a real difference.
David H. Guston (“Forget Politicizing Science. Let’s Democratize Science!” Issues, Fall 2004) rightly argues that public discussion should move beyond bickering over the politicization of science and consider how science can be made more compatible with democracy. But that may be difficult without some discussion of what politicization is. One useful concept says that politics is the intersection of power and conflict. So if conflicts of opinion on a science advisory committee are resolved through fair discussion, they are not political. Voting on advisory committees, however, amounts to the political resolution of conflicts through the equal distribution of power. Similarly, even though good advice may enhance the power of public officials, it would be odd to call appointing the best scientists to an advisory committee political. But such appointments may become political, if they become matters of conflict or if power is used to keep latent conflicts from emerging. Science is thus rarely entirely political, but usually in part; and it always has the potential to become more political.
This view of politics suggests that the Bush administration and its critics are each only half right when accusing the other of politicizing science: The administration has apparently used its power to dominate selected advisory processes, and its critics have publicly contested that use of power. From this perspective, the politicization of science might be compared to the politicization of other social institutions once deemed essentially private. The workplace and the family, for example, have been politicized to a certain extent as part of efforts to fight discrimination and domestic violence, respectively. In each case, politicization was a necessary part of alleviating injustices, and coping with politics proved better than trying to suppress it.
The best way of coping with politics is democracy, and Guston’s suggestions promise a more just distribution of the costs and benefits of science. Pursuing these suggestions effectively will require careful consideration of what democratization means. Guston refers to ideals of accessibility, transparency, accountability, representation, deliberation, participation, and the public interest. These ideals are not always compatible. Creating spaces for public deliberation on science policy, for example, may require limits on transparency and participation, since media scrutiny or too many participants may hinder productive deliberation. And although interest groups are usually not representative of all citizens, they can often enhance participation more effectively than deliberative forums. Democratizing science thus requires a wide variety of institutions, each focused on a limited set of ideals.
More generally, some modes of democratizing science distribute power far more equally than others. If “democratic” means open to public view, accountable to professional associations, and representative of public interests, science has been democratic for much of its history. But if scientists are to be held accountable to elected officials or lay citizens, and if representing the public interest depends on public input, then democratizing science becomes both more controversial and more difficult. Democratizing science thus requires a willingness to politicize not only science but also democracy.
David H. Guston is correct to assert that science is political, and his proposals for increasing accessibility, transparency, and accountability in science point us in a positive direction. However, the success of Guston’s proposals will depend on two fundamental reforms. First, comprehensive scientific literacy initiatives must emphasize not just the “facts” of science but should also teach citizens to think critically about science. Second, scientists need to be offered incentives to collaborate with lay citizens in the scientific enterprise.
We need to understand—and teach—that science is not just political in the sense that elected officials engage in the process of setting science policies and funding priorities. The ways in which scientists understand the phenomena they study also reflect an array of social and political factors. Thus, for example, the use of the techniques of the physical sciences in biology beginning in the early 1930s did not come about because nature called on scientists to think about biological phenomena in physical terms, but because the Rockefeller Foundation had the resources to push biologists in this direction. Likewise, nature doesn’t tell scientists to prefer false negatives to false positives in their research. This is a well-established social norm with political implications. Today, a scientist who claims that a phenomenon is real when it is not (a false positive) may hurt her or his professional reputation. By contrast, lay citizens who are concerned about carcinogen exposure in their local environment would probably prefer to be incorrectly informed that they were exposed (a false positive) than that they were not (a false negative). In short, science is thoroughly political, reflecting the interplay of actors with varying degrees of power and diverse interests.
To give citizens the sense that science is political in its everyday practice demands that we rethink what it means to be scientifically literate. We must not only teach our children how experiments are done, what a cell is, and the elements that make up water, but also that the phenomena scientists study, the way they study them, and what scientists accept as competent experimental designs all reflect social and political processes. This kind of scientific literacy is the necessary bedrock of a truly democratic science.
Democratizing science also demands that we alter the incentive structure for scientists. Guston points to the virtues of organizations that offer lay citizens the chance to shape research agendas. What motivation do academic scientists have to work with citizens to craft research agendas in such arenas? Will doing so improve the prospect that a junior faculty member will get tenure? Will the results of the citizen-prompted research be publishable in scholarly journals? To successfully democratize science demands that universities broaden their criteria for tenure so that scientists get credit from their colleagues for working with citizens.
I fully endorse Guston’s proposals, but to thoroughly democratize science, we will need to broaden what it means to be scientifically literate and work to alter the structure of incentives scientists have for doing their work.
Evidence of the need to improve science education in elementary school, especially in the lower grades, is not far to seek. The recently released results of the Trends in International Mathematics and Science Study (TIMSS) 2003 show that achievement by U.S. fourth-grade students is not what this nation expects. Between 1995 and 2003, fourth-graders in the United States did not improve their average science scores on TIMSS. In “Precollege Science Teachers Need Better Training” (Issues, Fall 2004), John Payne poses the question: Could part of U.S. students’ problem with science achievement have its roots in the way and extent to which elementary science teachers are being trained to teach science while in their college programs?
The short answer must be yes. Although many factors influence student achievement, the preparation of science teachers is certainly one critical factor. One analysis, based on the Bayer Facts of Science Education, suggests that elementary teachers do not teach science daily, do not feel “very qualified” to teach science, and do not rate their school program very highly. What could an undergraduate program do to help alleviate these problems?
Beginning in the 2007–2008 school year, the No Child Left Behind legislation requires school districts to assess all students in science at least once in the elementary grades, thus elevating science to the same tier as literacy and mathematics. The result: More science will be taught in elementary schools. So we have a response to the first issue, but it is not a result of teacher education.
What about the second issue? One of the limiting factors in elementary teachers feeling qualified to teach science is their understanding of science. I suggest that colleges design courses specifically for elementary teachers. Often, the response to such a suggestion is that they should take the standard courses such as introductory biology, chemistry, physics, and geology. Well, at best they will take only two of these courses, and those courses are usually not in the physical sciences, where our teachers and students have the greatest deficits. Colleges and universities can design courses that develop a deep conceptual understanding of fundamental science concepts and provide laboratory experience based on core activities from elementary programs. There is research supporting this recommendation that comes mostly from mathematics education, but in my view it applies to science teacher education as well.
The third issue, exemplary science programs for elementary schools, could be addressed by an emphasis on National Science Foundation (NSF) programs in future teacher education programs. The reality is that undergraduate teacher education has some, but not substantial, impact on the actual program used by a particular school district. State standards and the economics and politics of commercial publishers all play a much more significant role in the adoption and implementation of exemplary programs.
In the NSF Directorate for Education and Human Resources, programs related to the issue of teachers’ professional development and exemplary programs have been severely reduced because of recent budget reallocations. Without such external support, the likelihood of major reforms such as those envisioned by Payne and proposed here is very low.
I completely agree with John Payne’s comments about the success of efforts by the National Science Foundation (NSF) and others to improve the quality of in-service teacher education activities in science, technology, engineering, and mathematics (STEM) fields. However, he seems unaware of the equally aggressive efforts by NSF to improve the quality of pre-service teacher education in STEM fields.
Between 1991 and 2002, I served as a program officer and later as division director in NSF’s Division of Undergraduate Education. That division was assigned responsibility for pre-service education programs in 1990 in recognition that teacher preparation is a joint responsibility of STEM faculty and departments as well as schools and colleges of education. The division incorporated attention to teacher preparation in all of its programs for curriculum, laboratory, instructional, and workforce development. The flagship effort was the Collaboratives for Excellence in Teacher Preparation (CETP) program, which made awards from 1993 to 2000. The CETP program was predicated on the realization that effective teacher preparation programs require institutional support and the concerted effort of many stakeholders, including faculty and administration from two-year, four-year, and research institutions; school districts; the business community; and state departments of education. Funded projects were expected to address the entire continuum of teacher preparation, including recruitment, instruction in content, pedagogy, classroom management, early field experiences, credentialing, and induction and support of novice teachers. Attention was also given to the preparation of teachers from nontraditional sources.
Two evaluations were done of the CETP program. The first was an evaluation of the first five funded projects released in March 2001 by SRI International. The report concluded that CETP was “highly successful” in exposing pre-service teachers to improved STEM curricula, more relevant and innovative pedagogy, and stronger teacher preparation programs. The program was also judged “very successful” in involving STEM faculty. It also noted that “the potential for institutionalization looks positive.” The other evaluation was performed by the Center for Applied Research and Educational Improvement at the University of Minnesota and was a summative evaluation of the entire project. This report, released in March 2003, concluded that “the establishment and institutionalization of the reformed courses stand out as do improved interactions within and among STEM and education schools and K-12 schools.” Furthermore, when comparing graduates of CETP projects with graduates of other projects, the report noted, “CETP[-trained] teachers were clearly rated more highly than non-CETP[-trained] teachers on nine of 12 key indicators.” These indicators included working on problems related to real-world or practical issues, making connections between STEM and non-STEM fields, designing and making presentations, and using instructional technology. I wish STEM faculty were as well prepared for their instructional responsibilities; but that’s a topic for an article in itself.
It’s unfortunate that the CETP program was ended before we could obtain rich longitudinal data that might inform us about the actual classroom performance of the CETP-trained teachers. Of greater concern has been the volatility that has followed the expiration of CETP. The CETP program made new awards over an eight-year period (or two undergraduate student lifetimes). CETP was followed, briefly, by the STEM Teacher Preparation program, which was later folded into the Teacher Professional Continuum along with the previously separate program for in-service teacher enhancement lauded by Payne. This compression was necessary in order to pay for the Math and Science Partnership (MSP) program at NSF, an ambitious effort that focuses on partnerships between institutions of higher education and K-12 school districts. After three rounds of awards, there is now an effort to remove MSP from NSF and add funds to a similarly named program at the Department of Education that now functions more by block grant than by competitive peer review. So on balance, Payne’s call for new efforts is entirely appropriate as long as we amend his call to request that, when indications are that they are successful, programs also be sustained.
John Payne correctly identifies the most serious problem in science education: the poor learning of science in the elementary school years. He also recognizes that the poor teaching of science by elementary school teachers is at the core of poor learning by students. I applaud him for calling for better educating those who will become elementary school teachers. Finally, I extend my appreciation and congratulations to him and his company for their long-term commitments to helping improve the situation.
Having said these things, I would like to make some observations and take exception to a few of his claims. Having followed the reforms in Pittsburgh, I suggest that the early and dramatic improvements in student performance and attitudes toward science there should be attributed to the use of elementary science specialists. These teachers have uncommonly strong backgrounds in science from their undergraduate years, and they make up a small percentage of all elementary school teachers. By contrast, most elementary teachers and teacher candidates are fearful of science, many to the point of anxiety and dislike, and took only a few science courses in college (often large lecture classes in the general education curriculum).
Many of us have long noted that science (and mathematics) anxiety in elementary school teachers is one more consequence of poor teaching in the elementary (and often in the secondary) years of a teacher’s education. Bad attitudes and practices are passed from generation to generation. I assert that meaningful progress in reforming early science education would be best served by converting to the use of elementary science specialists, parallel to how specialists are used for instruction in art and music.
The practice of inquiry-based science deserves further comment. I don’t doubt that Payne accurately quoted published figures, such as that 95 percent of deans (of education, I presume) and 93 percent of teachers say that students learn science best through experiments and discussions in which they defend their conclusions, and that 78 percent of new teachers say they use inquiry-based science teaching most often (compared to 63 percent 10 years ago). However, based on my personal observations over many years, the observations of many colleagues who visit classrooms regularly, and the continuing poor performance of elementary students in science nationwide (selected communities like Pittsburgh excepted), these figures simply cannot be believed. I have administered many surveys to teachers myself, and one has to expect that most teachers report what they wish they were doing rather than what they actually do. Learning by inquiry is difficult for most science majors in college. Expecting most elementary school teachers to become comfortable and skilled at teaching this way is completely unrealistic unless the budget for teacher professional development activities in science is increased a hundredfold.
Investing in and requiring the use of elementary science specialists is a cheaper and more reliable solution to the K-8 learning problems.
In “Meeting the New Challenge to U.S. Economic Competitiveness” (Issues, Fall 2004), William B. Bonvillian offers a concise statement of many of the challenges now facing the U.S. economy and especially its technology-intensive sectors. He reminds us of the concerted efforts during the 1980s of business, government, organized labor, and academia to find new ways of innovating and producing that led in large measure to the boom times of the 1990s. He recommends returning to this formula to search again for new ways to stay “on top.”
This is certainly a wise prescription and one that leaders in every sector should embrace. Today, Americans are sharply divided not only on their politics but also on their understanding of the causes and consequences of current economic ills. The debate about offshore outsourcing and whether it is good or bad for U.S. jobs is only one illustration of how far we are from a shared understanding of the problem, let alone a solution. A fresh dialogue is essential to help us move forward as a nation.
2004 is not 1984, however, and it is not obvious that the old formula for dialogue would succeed today. Many more and different kinds of legitimate stakeholders need to be in the conversation. Part-time, contract, and self-employed workers, as well as the new generation of knowledge and service workers, have as great a stake as do the members of the old manufacturing trade unions. “New economy” companies view the challenges and opportunities of the global economy in quite a different light from those from an earlier era. Resource scarcity, environmental challenges, and global climate change are just as important as the balance of trade and productivity growth in defining the next American future. Any process of national dialogue must incorporate all of these perspectives, and more, if it is to succeed.
I see two highly promising pathways for a fruitful new American dialogue, in addition to Bonvillian’s wise suggestion of a new “Young Commission.” The first is for Congress to reassert its traditional role as the forum within which the United States openly examines its most pressing problems. During the past decade, Congress has lost much of its real value, turning from rich and open inquiry directed at solving problems to sterile partisan exercises intended to preserve the status quo or score points against the political opposition. Our country can no longer afford to squander our precious representative institution in this way. Congress must go back to real work.
The second is for the organizers of a new American dialogue to find ways to take advantage of the immensely rich Internet-based communications culture, which barely existed when the first Young Commission was doing its work in the 1980s. All the tools of the new forms of information exchange—Web pages, email, listservs, chat rooms, blogs, data mining, and all the other new modes—offer unprecedented opportunities, not only to tap into the chaotic flow of information and misinformation that characterizes the 21st-century world but also to pulse that flow in ways that yield new insights that can help build the new competitive nation that Bonvillian and I and others like us are seeking.
William B. Bonvillian states well the key issues related to U.S. economic competitiveness: “If the current economy faces structural difficulties, what could a renewed economy look like? Where will the United States find comparative advantage in a global economy?” After a brief review and history of competitiveness, he focuses on innovation as a major factor and discusses the appropriate role for government in support of innovation in the context of five key issues: R&D funding, talent, organization of science and technology (S&T), innovation infrastructure, and manufacturing and services.
Indeed, well-crafted government policies and programs in these areas could significantly improve the ability of U.S.-based companies to innovate and excel in the global economy. I found it particularly noteworthy that Bonvillian’s proposals represent a positive agenda. His proposals for funded government programs do not have the appearance of corporate welfare, and his S&T proposals acknowledge the limits of federal R&D budgets and the need to prioritize investments. Bonvillian also avoids protectionist recommendations and emphasizes the need for U.S. companies, individuals, and institutions, including the government, to innovate in order to compete. This positive agenda is one that could muster bipartisan support within Congress and the Executive Branch.
Manufacturing is an area primed for a public/private partnership. Bonvillian mentions several public policy actions that could help our manufacturing sector, including trade, tax, investment, education, and Department of Defense program proposals. However, he identifies innovation in manufacturing as the most important element. Bonvillian calls for a revolution in manufacturing that exploits our leadership and past investments in technology. He calls for “new intelligent manufacturing approaches that integrate design, services, and manufacturing throughout the business enterprise.” Such an approach is worthy of a public/private partnership.
As we embark on new public/private partnerships, we must realize that globalization has significantly altered the playing field. Consider the case of SEMATECH, which Bonvillian correctly identifies as a government/industry partnership success of the 1980s. SEMATECH was originally established as a public/private partnership to ensure a strong U.S. semiconductor supplier base (especially for lithography) in light of a strong challenge from Japan. The creation of SEMATECH, along with effective trade and tax policies, S&T investments, and excellent management in U.S. companies, helped the U.S. semiconductor industry recover and thrive. However, during the late 1990s, in response to the globalization of the semiconductor industry, SEMATECH evolved from a U.S.-only consortium working to strengthen U.S. suppliers into a global consortium with a global supply chain focus. Today, SEMATECH has members from the United States, Europe, and Asia, and works with global semiconductor equipment and material suppliers. Among SEMATECH’s most significant partnerships is one with TEL, the largest Japanese semiconductor equipment supplier and a major competitor of U.S. suppliers. Applied Materials, a U.S. company that is now the world’s largest semiconductor equipment supplier, achieved its growth by making large investments in R&D, aggressively pursuing global customers, and purchasing companies (hence technology) throughout the world. And though Applied Materials is the world’s largest semiconductor equipment supplier, there are no longer any U.S. suppliers of leading-edge lithography. In today’s global economy, U.S. semiconductor manufacturers view a diverse global supply chain as a strength, not a threat. U.S. policymakers must develop new policies and programs that acknowledge the realities of the global economy and recognize that to maximize benefit to the United States, government investments in innovation may need to include the participation of global companies and yield benefits beyond our borders.
Bonvillian has established an excellent framework for a reasoned debate on meeting new challenges to U.S. economic competitiveness. And as he asserts, it is time to go from analysis to action.
William B. Bonvillian spells out a series of challenges to long-term U.S. competitiveness. The response to those challenges will go a long way toward determining America’s 21st-century prosperity and capacity for international leadership.
In the past 15 years, China, India, and the former Soviet Union have brought 2.5 billion people into the global economy. China is already producing technologically sophisticated products, and India is a growing force in providing information technology and other services. Korea has emerged as a power in advanced electronics, and Brazil is the third largest manufacturer of civilian aircraft.
The digital revolution continues to change the playing field for many occupations that were formerly shielded from international competition. Europe, Japan, and much of the world are seeking to emulate the successful U.S. model of innovation and are actively recruiting students and scientists who used to think of America as the preferred destination.
What then must the United States do to retain its leadership in the global economy? First, we need to move past the debate on government versus the market and focus on developing the right mix of public policies and private initiative to ensure an innovative future.
Second, we must establish the right macroeconomic context. That means reducing the fiscal deficit without endangering needed investments in R&D. It also means striking a global bargain with the world’s major economies to gradually reduce the size of our current account deficit, which has helped erode the country’s manufacturing base.
Third, we need to adjust our national research portfolio to ensure adequate funding for the physical sciences and to help bridge the gap between the private sector and basic research.
Fourth, we must adopt an aggressive strategy to prepare Americans for the careers of the future and continue to welcome international students and scientists.
Finally, we need to forge a durable political consensus that supports a strategy for 21st-century innovation. National security played that role in the 1960s and 1970s, and international competition was an added force in the 1980s. We need to articulate a national mission that will galvanize popular support and, like the space program, excite young Americans about careers in science and technology. The president’s proposed mission to Mars might be the answer. I would suggest two others: new forms of energy that will reduce and eventually end dependence on the Middle East while better preserving the environment, and renewed U.S. leadership in making a global attack on tropical and other threatening diseases.
Hats off to Bonvillian for clearly spelling out some critical American choices. Working on Capitol Hill, Bonvillian is in a position to help turn good ideas into timely legislation. We all need to wish him well.
Like Tom Paine demanding attention for “Common Sense,” William B. Bonvillian makes a persuasive and eloquent argument that the U.S. economy faces grave and unprecedented threats—a situation that cries out for an immediate creative response.
He argues cogently that we’ve never been able to measure our ability to remain at the forefront of innovation with any precision. It’s hard to attract attention to problems you can’t see. It’s fair to ask whether, at the end of the 19th century, Britain could have seen signs that it was about to blow a two-century lead in innovation. Alarm bells did not ring, even as huge amounts of capital flowed to upstart projects in the United States, nor as Americans started dozens of universities that were admitting smart American rustics and granting degrees in “agricultural and mechanical arts” and other topics not considered suitable for young gentlemen. Politics in Britain focused on the burdens of empire, not on whether local steel mills were decades out of date.
The recent presidential campaign was particularly disappointing in that the debate on the United States’ declining status in innovation was scarcely joined. This was painful. Federal research investment is essential because these investments provide a stream of radically new ideas and the sustained investments needed to engage in bold projects such as sequencing the genome. It is outrageous that this investment continues to decline as a fraction of the nation’s economy, and it is vulnerable to even more dramatic new cuts when post-election budget writers face the reality of ballooning defense costs and declining revenues. As the long knives come out, it will be a battle to see who screams the loudest, and it will be hard for the arguments of the research community to be heard in the din.
As Bonvillian points out, the success of the federal research investment depends not just on its size but on the skill with which it’s managed. We can only succeed if federal managers find a way to move adroitly to set new priorities and ensure that investments are made where they are most likely to yield results. They must also ensure that the process rewards high-risk proposals whose success can yield high potential impacts (the old DARPA style). Many of these concepts will not come with familiar labels but will operate at the interface between disciplines such as biology, mathematics, physics, and engineering. Bonvillian’s insight that technical innovation must now be coupled with “an effective business model for using the technology” means that many innovations will involve both products and services. And his observation that “a skilled workforce is no longer a durable asset” demands that we find new, more productive ways of delivering education and training.
Loss of technical leadership is an enormous threat to our economic future. It cripples our ability to meet social goals such as environmental protection or universal education at an affordable cost. It undermines a central pillar of national and homeland security. What I fear most is that instead of being remembered as Paine, Bonvillian will be remembered as Cassandra—completely correct and completely ignored.
Women in science
I was dismayed to see your magazine publish an article that advocates discrimination. This is Anne E. Preston’s “Plugging the Leaks in the Scientific Workforce” (Issues, Summer 2004), where she says that universities should make “stronger efforts to employ spouses of desired job candidates.” Because universities have finite resources, such efforts inevitably reduce job prospects for candidates who lack the “qualification” of a desirable spouse. Favoring spouses thus amounts to the latest version of the old-boy system, where hiring is based on connections rather than on merit. When a couple cannot get jobs in the same city, it is unfortunate. But when a single person is denied a job because the spouse of a desirable candidate is favored, it is not only unfortunate but also unjust. It is particularly ironic when favoring women who are married to powerful men is somehow felt to serve the cause of feminism.
Future of the Navy
Robert O. Work’s “Small Combat Ships and the Future of the Navy” (Issues, Fall 2004) makes a much-needed contribution to the debate over the transformation of the U.S. armed forces to meet the threats of the future.
As Work notes, the case in favor of acquiring at least some Littoral Combat Ships (LCSs) is strong. The U.S. Navy has conducted, and will continue to conduct, a range of missions that would benefit from the capabilities of a ship such as the LCS. Moreover, the development of these ships can foster innovation within the naval services. The Australian, Norwegian, and Swedish navies, among others, have fielded highly innovative small craft in recent years. The U.S. Navy can benefit from many of these developments through the LCS program. Finally, regardless of whether one believes that the era of the aircraft carrier is at an end, there is a strong argument for diversifying the Navy’s portfolio of capabilities.
Although the case for investment in LCSs is strong, Work correctly notes that there is opposition to even a limited buy in parts of both the Navy and Congress. The fact that the Navy envisions LCSs undertaking missions that it considers marginal, such as mine warfare, demonstrates that to some, small combatants are themselves peripheral.
This is not the first time that the Navy has considered a prominent role for small combatants. In the early 1970s, Chief of Naval Operations Elmo Zumwalt envisioned a fleet that would include a number of new models of small combatants, including missile-armed hydrofoils. His plans came to naught, however, because of a combination of organizational opposition within the Navy and uncertainty over how such ships would fit U.S. strategy. Supporters of LCS would do well to heed this experience. The LCS program will succeed only if supporters can demonstrate that it will have value as an instrument of U.S. national power.
Robert O. Work’s assessment of the U.S. Navy’s ongoing transformation and the Littoral Combat Ship (LCS) program captures the essential technical and doctrinal challenges facing the Navy as it transitions to a 21st-century fleet postured to meet U.S. national security requirements in a dangerous and uncertain world. Work’s article is a summary of a masterful study he completed early in 2004 for the Center for Strategic and Budgetary Assessments in Washington, D.C.
Today’s Navy, Marine Corps, and Coast Guard are proceeding on a course of true transformation. The term runs the risk of becoming shopworn in the Bush administration’s national security lexicon, but it is undeniable that the U.S. sea services are being transformed in a way comparable to the transition to modern naval power that began roughly 100 years ago. Work’s article highlights the key attributes of this transformation, notably the development of highly netted and more capable naval platforms.
His contemplation of the Navy of tomorrow resembles the experience of naval reformers in ages past. As Bradley Allen Fiske wrote at the Naval War College in 1916, “What is a navy for? Of what parts should it be composed? What principles should be followed in designing, preparing, and operating it in order to get the maximum return for the money expended?”
Chief of Naval Operations (CNO) Admiral Vern Clark grapples with the same issues that Fiske pondered 88 years ago. Clark seeks to build a balanced fleet encompassing potent platforms and systems at both the high- and low-end mix of the Navy’s force structure—a force able to meet all of its requirements in both coastal waters and the open ocean.
Tomorrow’s Navy will be able to project combat power ashore with even higher levels of speed, agility, persistence, and precision than it does today. But Clark also faces the stark challenge of affordability in recapitalizing the Navy: funding for recapitalization is unlikely to increase and, because of a variety of factors, could decrease if wiser heads do not prevail.
At a time when the number of warships in the Navy is falling to the lowest level since 1916, the need for a significantly less expensive, modular, and mission-focused LCS is obvious. Today’s ships are far more capable than hulls of just a decade ago, but in a world marked by multiple crises and contingencies, numbers of ships have an importance all their own. “There is no substitute for being there,” is how one former CNO expressed this consideration. LCS will help the Navy to achieve the right number of ships in the fleet by providing a capable and more affordable small combat ship suitable for a wide range of missions.
Clark has spoken eloquently of the shared responsibilities faced by navies and coast guards around the world in keeping the oceans free from terror to allow nations to prosper. “To win this 21st-century battle on the 21st-century battlefield, we must be able to dominate the littorals,” Clark said last year. “I need LCS tomorrow.”
Work offers some useful cautions regarding LCS design considerations (notably the tradeoff between high speed and payload), and his recommendation that the Navy evaluate its four first-flight LCS platforms carefully before committing to a large production run makes sense.
It should be noted, however, that the Navy has conducted extensive testing and experimentation in recent years using LCS surrogate platforms, including combat operations during the invasion of Iraq. It has a good grasp of its mission requirements in the littorals. As for the Navy’s requirement for a high-speed LCS, no less an authority than retired Vice Admiral Arthur Cebrowski, director of the Office of Force Transformation in the Department of Defense, supports the Navy’s position. As he observed earlier this year, speed is life in combat.