From the Hill – Spring 2002

Federal R&D in FY 2002 will have biggest percentage gain in 20 years

Federal R&D spending will rise to $103.7 billion in fiscal year (FY) 2002, a $12.3 billion or 13.5 percent increase over FY 2001. It is the largest percentage increase in nearly 20 years (see table).

In addition, in response to the September 11, 2001, terrorist attacks and the subsequent anthrax attacks, Congress and President Bush approved $1.5 billion for terrorism-related R&D, nearly triple the FY 2001 level of $579 million. The president had originally proposed $555 million. About half the money comes from regular appropriations and half from emergency funding approved after the attacks.

All the major R&D agencies will benefit from the significant spending boost, in contrast to the proposed cuts for most agencies in the administration’s initial budget request. The biggest increases go to the two largest R&D funding agencies: the Department of Defense (DOD) and the National Institutes of Health (NIH). DOD R&D will increase $7.4 billion or 17.3 percent to $50.1 billion, largely because of a 66.4 percent increase, to $7 billion, for ballistic missile defense R&D. Basic research will increase by 5 percent to $1.4 billion and applied research by 14.6 percent to $4.2 billion.

NIH R&D will increase 15.8 percent to $22.8 billion to fulfill the fourth year of Congress’s five-year campaign to double the agency’s budget. Every institute will receive an increase greater than 12 percent, and five will receive increases greater than 20 percent. NIH counterterrorism R&D will jump from $50 million to $293 million, including $155 million in emergency appropriations for construction of a biosafety laboratory and for bioterrorism R&D.

The total federal investment in basic and applied research will increase 11 percent or $4.8 billion to $48.2 billion. NIH remains the largest single sponsor of basic and applied research; in FY 2002, NIH will fund 46 percent of all federal support of research in these areas.

Nondefense R&D will rise by $4.6 billion or 10.3 percent to $49.8 billion, the sixth year in a row that it has increased in inflation-adjusted terms. Because a large part of recent increases stems from steady growth in the NIH budget, NIH R&D has become nearly as large as all other nondefense agencies’ R&D funding combined. Funding for nondefense R&D excluding NIH has stagnated in recent years. After steady growth in the 1980s, funding peaked in FY 1994 and then declined sharply. The FY 2002 increases for non-NIH agencies, although large, just barely bring these agencies back to the funding levels of the early 1990s.

The following is a breakdown of appropriations for other R&D funding agencies.

In the Department of Health and Human Services, the Centers for Disease Control and Prevention (CDC) will receive a 33.3 percent increase to $689 million for its R&D programs. Its counterterrorism R&D funding will climb to $130 million, up from $37 million in FY 2001. The CDC also received more than $1 billion in emergency funding.

The National Aeronautics and Space Administration’s (NASA’s) total budget of $14.9 billion in FY 2002 represents a 4.5 percent increase over FY 2001. Total NASA R&D, which excludes the Space Shuttle and its mission support costs, will increase 3.8 percent to $10.3 billion. The troubled International Space Station, now projected to run more than $4 billion over budget during the next five years, will receive $1.7 billion, an 18.4 percent cut.

The Department of Energy (DOE) will receive $8.1 billion, which is $378 million or 4.9 percent more than in FY 2001. R&D in DOE’s three mission areas of energy, science, and defense will all rise, with small increases for energy R&D (up 1.6 percent) and science R&D (up 2.1 percent) and a larger increase for defense R&D (up 8.4 percent), which partially reflects emergency appropriations for counterterrorism R&D. DOE received a large increase for its programs to combat potential nuclear terrorism.

National Science Foundation (NSF) R&D funding will rise by 7.6 percent to $3.5 billion. Most research directorates will receive increases greater than 8 percent, compared to level or declining funding in the president’s request. The largest increases, however, will go to NSF’s non-R&D programs in education and human resources for a new math and science education partnerships program. The final budget also boosts funding for information technology and nanotechnology research.

The U.S. Department of Agriculture (USDA) will receive a large budget boost from emergency funds to combat terrorism. USDA R&D will total $2.1 billion in FY 2002, a boost of $180 million or 9.2 percent. USDA’s intramural Agricultural Research Service (ARS) will receive $40 million in emergency funds for research on food safety and potential terrorist threats to the food supply and $73 million in R&D facilities funds to improve security at two laboratories that handle pathogens.

The Department of Commerce’s R&D programs will receive $1.4 billion, which is $153 million or 12.7 percent more than in FY 2001. Commerce’s two major R&D agencies, the National Institute of Standards and Technology (NIST) and the National Oceanic and Atmospheric Administration (NOAA), will both receive large increases. NOAA R&D will rise by 15.3 percent to $836 million. NIST’s Advanced Technology Program will get a 26.6 percent boost to $150 million, despite the desire of the administration and the House to all but eliminate the program. Total NIST R&D will increase 17.1 percent to $493 million.

The Department of the Interior’s R&D budget totals $673 million in FY 2002, an increase of 6.5 percent. Although the president’s FY 2002 request caused alarm in the science and engineering community because of its proposed cut of nearly 11 percent for R&D in the U.S. Geological Survey (USGS), the final budget restores the cuts and gives USGS an increase of 3.1 percent over FY 2001 to $567 million.

The Environmental Protection Agency FY 2002 R&D budget will increase to $702 million, up $93 million or 15.3 percent from last year. The boost is due to $70 million in emergency counterterrorism R&D funds, including money for drinking water vulnerability assessments and anthrax decontamination work. The nonemergency funds for most R&D programs will remain at the FY 2001 level, though nearly 50 congressionally designated research projects were added to the Science and Technology account and nearly 20 earmarked R&D projects were added to other accounts.

Department of Transportation R&D will climb to $853 million in FY 2002, which is $106 million or 14.2 percent more than FY 2001. The Federal Aviation Administration (FAA) will receive $50 million in emergency counterterrorism funds to develop better aviation security technologies. The FAA will receive a total of $373 million for R&D, a gain of 23.9 percent because of the emergency funds and also because of guarantees of increased funding for FAA programs that became law last year.

R&D in the FY 2003 Budget by Agency
(budget authority in millions of dollars)

Agency/Program   FY 2001 Actual   FY 2002 Estimate   FY 2003 Budget   Change FY 02-03 (Amount, Percent)
Total R&D (Conduct and Facilities)
Defense (military) 42,235 49,171 54,544 5,373 10.9%
  S&T (6.1-6.3 + medical) 8,933 9,877 9,677 -200 -2.0%
  All Other DOD R&D 33,302 39,294 44,867 5,573 14.2%
Health and Human Services 21,037 23,938 27,683 3,745 15.6%
  Nat’l Institutes of Health 19,737 22,539 26,472 3,933 17.4%
NASA 9,675 9,560 10,069 509 5.3%
Energy 7,772 9,253 8,510 -743 -8.0%
  NNSA and other defense 3,414 4,638 4,010 -628 -13.5%
  Energy and Science programs 4,358 4,615 4,500 -115 -2.5%
Nat’l Science Foundation 3,363 3,571 3,700 129 3.6%
Agriculture 2,182 2,336 2,118 -218 -9.3%
Commerce 1,054 1,129 1,114 -15 -1.3%
  NOAA 586 644 630 -14 -2.2%
  NIST 412 460 472 12 2.6%
Interior 622 660 628 -32 -4.8%
Transportation 792 867 725 -142 -16.4%
Environ. Protection Agency 598 612 650 38 6.2%
Veterans Affairs 748 796 846 50 6.3%
Education 264 268 311 43 16.0%
All Other 922 1,021 858 -163 -16.0%
  Total R&D 91,264 103,182 111,756 8,574 8.3%
Defense R&D 45,649 53,809 58,554 4,745 8.8%
Nondefense R&D 45,615 49,373 53,202 3,829 7.8%
  Nondefense R&D excluding NIH 25,878 26,834 26,730 -104 -0.4%
Basic Research 21,330 23,542 25,545 2,003 8.5%
Applied Research 21,960 24,082 26,290 2,208 9.2%
Development 43,230 50,960 55,520 4,560 8.9%
R&D Facilities and Equipment 4,744 4,598 4,401 -197 -4.3%

Source: AAAS, based on OMB data for R&D for FY 2003, agency budget justifications, and information from agency budget offices.
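
Note: The change columns are derived directly from the two budget columns: Amount = FY 2003 Budget minus FY 2002 Estimate, and Percent = Amount divided by the FY 2002 Estimate. For NIH, for example, 26,472 - 22,539 = 3,933, and 3,933 / 22,539 = 17.4 percent.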

Bush FY 2003 R&D budget increases would go mostly to DOD, NIH

On February 4, the Bush administration released its fiscal year (FY) 2003 budget request containing a record $111.8 billion for R&D. But in a repeat of last year’s request, nearly the entire increase would go to the Department of Defense (DOD) and the National Institutes of Health (NIH).

There are no clear patterns in the mix of increases and decreases for the remaining R&D funding agencies. Unlike last year, the FY 2003 budget would see increases and decreases scattered even within R&D portfolios, as agencies try to prioritize in an environment of scarce resources. Some cuts stem from the administration’s campaign to eliminate congressional earmarks, which reached $1.5 billion in FY 2002. Cuts in some agencies are due to efforts to return to normal funding levels from FY 2002 totals inflated by post-September 11 counterterrorism appropriations. However, spending on counterterrorism activities would remain robust, particularly in the areas of public health infrastructure, emergency response networks, and basic health-related research.

In sharp contrast to the financial optimism of last year’s budget, when economists forecasted endless surpluses, the FY 2003 budget proposes deficit spending. With President Bush taking the lead in preparing the public for budget deficits for the next few years, the most likely outcome is that Congress will spend whatever it feels it needs in order to adequately fund defense, domestic programs, homeland security, and other priorities.

For federal R&D programs, the only thing certain is that NIH will eventually receive its requested $27.3 billion and perhaps even more. In an election year, the pressures on Congress to add more money will be even greater than last year. Combined with the continuing crisis atmosphere surrounding matters related to war and security and the near-disappearance of budget balancing as a constraint, the president’s budget will almost certainly be a floor rather than a ceiling for the R&D appropriations action to come.

NIH would receive $27.3 billion for its total budget, an increase of $3.7 billion (15.7 percent) that would fulfill the congressional commitment to double the budget in five years. Of that, about $1.8 billion would go for antibioterrorism efforts, including basic research, drug procurement ($250 million for an anthrax vaccine stockpile), and improvements in physical security.

NIH R&D would rise 17.4 percent to $26.5 billion. The big winner would be the National Institute of Allergy and Infectious Diseases (NIAID), which would receive a boost of 57.3 percent to $4 billion as NIH’s lead institute for basic bioterrorism R&D. NIAID is also the lead NIH institute for AIDS research, which would increase 10 percent to $2.8 billion. Cancer research is another high priority, with a request of $5.5 billion, of which $4.7 billion would go to the National Cancer Institute. Buildings and Facilities would nearly double to $633 million over an FY 2002 total already inflated by emergency counterterrorism spending. The money would be used to further improve laboratory security, build new bioterrorism research facilities, and finish construction of NIH’s new Neuroscience Research Center. Most of the other institutes would receive increases between 8 and 9 percent.

DOD R&D would rise to $54.6 billion, an increase of $5.4 billion or 10.9 percent. However, most of this increase would go to the development of weapons systems rather than to research. The DOD science and technology account, which includes basic and applied research plus generic technology development, would fall 2 percent to $9.7 billion. After a near doubling of its budget in FY 2002, the Ballistic Missile Defense Organization would see its R&D budget decline slightly to $6.7 billion, which would still be more than 50 percent above the FY 2001 funding level. The Defense Advanced Research Projects Agency would be a big winner, with a proposed 19.2 percent increase to $2.7 billion.

The National Science Foundation (NSF) budget would rise by 5 percent to $5 billion. Excluding non-R&D education activities, NSF R&D would be $3.7 billion, up $129 million or 3.6 percent. However, $76 million of the increase would be accounted for by the transfer of the National Sea Grant program from the Department of Commerce, hydrologic sciences from the Department of the Interior, and environmental education from the Environmental Protection Agency (EPA). Although mathematical sciences would receive a 20 percent increase to $182 million, other programs in Mathematical and Physical Sciences, such as chemistry, physics, and astronomy, would decline. Another big winner would be Information Technology Research (up 9.9 percent), though at the expense of other computer sciences research. The budget for the administration’s high-priority Math and Science Partnerships would increase from $160 million to $200 million, but most other education and human resources programs would be cut.

The National Aeronautics and Space Administration (NASA) would see its total budget increase by 1.4 percent to $15.1 billion in FY 2003, but NASA’s R&D (two-thirds of the agency’s budget) would climb 5.3 percent to $10.1 billion. In an attempt to rein in the over-budget and much-delayed International Space Station, only $1.5 billion is being requested for further construction, down from $1.7 billion. The Science, Aeronautics and Technology R&D accounts would climb 10.3 percent to $8.9 billion. Space Science funding would increase 13 percent to $3.4 billion, though the administration would cancel missions to Pluto and Europa. Funding for the Biological and Physical Research program, which was greatly expanded last year to take on all Space Station research, would rise 2.8 percent to $851 million. Aero-Space Technology would climb 11.7 percent to $2.9 billion, including $759 million (up 63 percent) for the Space Launch Initiative, which is developing new technologies to replace the shuttle. The NASA request would eliminate most R&D earmarks added on to the FY 2002 budget, resulting in a nearly 50 percent cut to Academic Programs, a perennial home to congressional earmarks.

The Department of Energy (DOE) would see its R&D fall 8 percent to $8.5 billion from an FY 2002 total inflated with one-time emergency counterterrorism R&D funds. Funding for the Office of Science would remain flat at $3.3 billion, but most programs would receive increases, offset by cuts in R&D earmarks and a planned reduction in construction funds for the Spallation Neutron Source. Although overall funding for Solar and Renewables R&D would remain level, the program emphasis would shift toward hydrogen, hydropower, and wind research. Fossil Energy R&D would receive steep cuts of up to 50 percent on natural gas and petroleum technologies. In Energy Conservation, DOE would replace the Partnership for a New Generation of Vehicles with FreedomCAR, a collaborative effort with industry to develop hydrogen-powered fuel cell vehicles. DOE’s defense R&D programs would fall 13.5 percent to $4 billion because the FY 2002 total is inflated with one-time counterterrorism emergency funds. However, defense programs in advanced scientific computing R&D and stockpile stewardship R&D would receive increases.

R&D in the U.S. Department of Agriculture (USDA) would decline $218 million or 9.3 percent to $2.1 billion, mostly because of proposed cuts to R&D earmarks and the loss of one-time FY 2002 emergency antiterrorism funds. Funding for competitive research grants in the National Research Initiative (NRI) would double from $120 million to $240 million, offsetting steep cuts in earmarked Special Research Grants from $103 million to $7 million. The large NRI increase would partially make up for the administration’s decision to block a $120-million mandatory competitive research grants program from spending any money in FY 2003. In the intramural Agricultural Research Service (ARS) programs, Buildings and Facilities funding would fall from $119 million to $17 million because FY 2002 emergency antiterrorism security upgrades have been made and because congressionally earmarked construction projects would not be renewed. ARS research would decrease by $30 million to $1 billion, but selected priority research programs would receive increases, offset by the cancellation of R&D earmarks.

Department of Commerce R&D programs would decline 1.3 percent to $1.1 billion. Once again the administration has requested steep reductions in the Advanced Technology Program at the National Institute of Standards and Technology. National Oceanic and Atmospheric Administration (NOAA) R&D would decline by 2.2 percent or $14 million due to the proposed transfer of the $62 million National Sea Grant program to NSF. Excluding that transfer, NOAA R&D programs would increase overall.

R&D in the Department of the Interior would decline 4.8 percent to $628 million, but steeper cuts would fall on Interior’s lead science agency, the U.S. Geological Survey (USGS). USGS R&D would decrease 7 percent or $41 million to $542 million. Hardest hit would be the National Water Quality Assessment Program and the Toxic Substances Hydrology Program, including a $10 million transfer to NSF to initiate a competitive grants process to address water quality issues.

The EPA R&D budget would rise 6.2 percent to $650 million in FY 2003. Much of this increase is due to $77.5 million proposed for research in dealing with biological and chemical incidents.

New program for math and science teachers receives little funding

After nearly a year of negotiations, Congress enacted a sweeping reform law for federal K-12 education programs in December 2001 that included the creation of a new program for math and science teachers. However, the appropriations bill that provides funding for federal education programs has left it with little money.

The education law, signed by President Bush in January, creates a broad “Teacher Quality” program, which will provide grants to states for a wide array of purposes relating to teacher quality, including professional development. It also creates a program aimed specifically at improving math and science education. The program will establish partnerships between state and local education agencies and higher education institutions for bolstering the professional development of math and science teachers. It also includes several other types of activities to improve math and science teaching.

The new program replaces the Eisenhower Professional Development program, which provided opportunities for K-12 teachers to expand their knowledge and expertise. In fiscal 2001, the Eisenhower program received $485 million, $250 million of which was set aside for programs aimed at math and science teachers.

The new science and math program was strongly supported by the scientific, education, and business communities, which argue that the scientific literacy of the nation’s workforce is essential to national security and economic prosperity. Proponents point to the labor shortage that has existed in the high-tech sector in recent years and the prevalence of foreign students in U.S. graduate programs as evidence that U.S. math and science education programs need to be improved.

However, the fiscal 2002 appropriations bill that includes education spending allocated $2.85 billion for the broad teacher quality initiative but just $12.5 million for the math and science partnerships, far short of the $450 million authorized by the education reform law.

The conference report on the appropriations bill acknowledges that good math and science education “is of critical importance to our nation’s future competitiveness,” and agrees that “math and science professional development opportunities should be expanded,” but relies on the states to fund such programs within the teacher quality program. “The conferees strongly urge the Secretary [of Education] and States to utilize funding provided by the Teacher Quality Grant program, as well as other programs funded by the federal government, to strengthen math and science education programs across the nation,” the report states.

A similar program has also been created within the National Science Foundation (NSF), as proposed in the president’s original reform proposal, and was provided with $160 million for the current year. However, the NSF grants will be distributed through a nationwide competition and are not likely to achieve the balance or scope of the $450 million program envisioned by the authors of the reform law.

Also included in NSF’s fiscal year 2002 budget are two pilot education programs funded at $5 million apiece. One is based on legislation sponsored by Sen. Joseph I. Lieberman (D-Conn.), which would provide grants to colleges and universities that pledge to increase the number of math, science, and engineering majors that graduate. The other program, based on a proposal by Rep. Sherwood L. Boehlert (R-N.Y.), will provide scholarships to undergraduate students majoring in math, science, or engineering who pledge to teach for two years after their graduation.

Congress considers additional antiterrorism legislation

The House and Senate have passed or are considering additional counterterrorism legislation in the aftermath of last year’s attacks.

In December 2001, the House and Senate both passed bills (H.R. 3448 and S. 1765) that would improve bioterrorism preparedness at state and federal levels, encourage the development of new vaccines and other treatments, and tighten federal oversight of food production and use of dangerous biological agents. Because the bills are similar, resolution of the differences between the two was expected as early as March.

Both bills would grant the states about $1 billion for bioterrorism preparedness; both would spend approximately $1.2 billion on building up the nation’s stockpile of vaccines ($509 million for smallpox vaccine alone) and other emergency medical supplies; and both would increase the federal government’s ability to monitor and control dangerous biological agents and to mount a rapid coordinated response to a bioterrorist attack.

One of the few substantive differences between the bills concerns food and water safety. The Senate version provides more than $520 million to improve food safety and protect U.S. agriculture from bioterrorism. The House version, however, provides only $100 million, focusing instead on funding for water safety ($170 million).

There are also some discrepancies in the amount of money allocated to specific programs. The Senate bill authorizes only $120 million for laboratory security and emergency preparedness at the Centers for Disease Control and Prevention, whereas the House bill provides $450 million.

On February 7, the House passed the Cyber Security Research and Development Act (H.R.3394) by a vote of 400 to 12. The bill would authorize $877 million in funds within the National Science Foundation (NSF) and the National Institute of Standards and Technology (NIST). The funding would go toward an array of programs to improve basic research in computer security, encourage partnerships between industry and academia, and help generate a new cybersecurity workforce.

House Science Committee Chairman Sherwood Boehlert (R-N.Y.) introduced the bill in the aftermath of the terrorist attacks. “The attacks of September 11th have turned our attention to the nation’s weaknesses, and again we find that our capacity to conduct research and to educate will have to be enhanced if we are to counter our foes over the long run,” Boehlert said. The bill’s cosponsor and the committee’s ranking member, Rep. Ralph Hall (D-Tex.), stated, “The key to ensure information security for the long term is to establish a vigorous and creative basic research effort.”

The bill authorizes $568 million between fiscal years (FYs) 2003 and 2007 to NSF, of which $233 million would go for basic research grants; $144 million for the establishment of multidisciplinary computer and network security research centers; $95 million for capacity-building grants to establish or improve undergraduate and graduate education programs; and $90 million for doctoral programs.

NIST would receive almost $310 million over the same five-year period, of which $275 million would go toward research programs that involve a partnership between industry, academia, and government laboratories. In addition, funding may go toward postdoctoral research fellowships. The bill provides $32 million for intramural research conducted at NIST laboratories. The bill also proposes spending $2.15 million for NIST’s Computer System Security and Privacy Advisory Board to conduct analyses of emerging security and research needs and $700,000 for a two-year study of the nation’s infrastructure by the National Research Council.

Congress continues to debate other measures that could improve the nation’s preparedness against terrorist attacks. On February 5, at a hearing of the Senate Subcommittee on Science, Technology and Space, Chair Ron Wyden (D-Ore.) discussed a bill that would create what he called a “National Emergency Technology Guard,” a cadre of volunteers that could be called upon in case of a terrorist attack or other emergency. Wyden also advocated creating a central clearinghouse for information about government funding for bioterrorism R&D, as well as local registries of resources, such as hospital beds, medical supplies, and antiterrorism experts, that would speed response to a bioterror attack.

According to witnesses at the hearing, both the private and academic sectors have had difficulty working with the federal government to protect the United States from bioterrorism. The main challenge faced by small companies trying to develop antiterrorism technologies is the lack of funding for products that may not have immediate market value, said John Edwards, CEO of Photonic Sensor, and Una Ryan, CEO of AVANT Immunotherapeutics and a representative of the Biotechnology Industry Organization. They testified in favor of the kind of central clearinghouse recommended by Wyden, which they argued would speed the development of antibioterrorism technologies.

Along similar lines, Bruno Sobral, director of the Virginia Bioinformatics Institute, suggested that a government-sponsored central database of bioterrorism-related information would facilitate coordination among academic researchers, who otherwise might fail to identify crucial gaps in knowledge about dangerous pathogens.

Proposal for comprehensive cloning ban debated

The Senate was expected to vote in early spring on a proposal, already approved in the House, for a comprehensive ban on human cloning. A bruising fight was expected. Since Congress reconvened in January, two Senate committees have held hearings on the issue, and outlines of the debate have taken both a familiar and a unique shape.

On one side are proponents of a bill (S.1899) sponsored by Sens. Sam Brownback (R-Kan.) and Mary Landrieu (D-La.) that is identical to a bill approved by the House in the summer of 2001 (H.R.2505). The bill would ban all forms of human cloning, whether for producing a human baby (reproductive cloning) or for scientific research (research cloning). On the other side are proponents of a narrower cloning ban that would prohibit reproductive cloning but permit research cloning. Two such narrow bans have been introduced, one by Sens. Tom Harkin (D-Iowa) and Arlen Specter (R-Penn.) and the other by Sens. Dianne Feinstein (D-Calif.) and Edward M. Kennedy (D-Mass.).

Supporting the Brownback-Landrieu bill is an unusual coalition of religious conservatives and environmentalists. Religious conservatives argue that human embryos should be afforded a moral status similar to human beings and should not be destroyed even in the course of scientific research. Environmentalists argue that permitting research cloning would open the door to reproductive cloning and that such research should not proceed until strict regulatory safeguards are implemented.

Opposing the Brownback-Landrieu bill is a coalition of science organizations, patient groups, and the biotechnology industry, which argue that research cloning could potentially lead to cures for many diseases, that reproductive cloning can be stopped without banning research, and that criminalizing scientific research sets a bad precedent.

At the first of the two Senate hearings, the Senate Appropriations Committee’s Labor-Health and Human Services (HHS) Subcommittee heard from Irving L. Weissman, who chaired a National Research Council panel on reproductive cloning. He cited a low success rate in animal cloning and abnormalities in cloned animals that survive as reasons for a ban on human reproductive cloning. However, he testified that there is evidence that stem cells derived from cloned embryos are functional.

“Scientists place high value on the freedom of inquiry–a freedom that underlies all forms of scientific and medical research,” Weissman said. “Recommending restriction of research is a serious matter, and the reasons for such a restriction must be compelling. In the case of human reproductive cloning, we are convinced that the potential dangers to the implanted fetus, to the newborn, and to the woman carrying the fetus constitute just such compelling reasons. In contrast, there are no scientific or medical reasons to ban nuclear transplantation to produce stem cells, and such a ban would certainly close avenues of promising scientific and medical research.”

Brent Blackwelder, president of Friends of the Earth, laid out the environmental community’s case against human cloning. He argued that cloning and the possible advent of inheritable genetic modifications (changes to a person’s genetic makeup that can be passed on to future generations) “violate two cornerstone principles of the modern conservation movement: 1) respect for nature and 2) the precautionary principle.” He described these potential developments as “biological pollution,” a new kind of pollution “more ominous possibly than chemical or nuclear pollution.”

Blackwelder advocated a moratorium on research cloning in order to prevent reproductive cloning from taking place. “Even though many in the biotechnology business assert that their goal is only curing disease and saving lives,” he said, “the fact remains that once these cloning and germline technologies are perfected, there are plenty who have publicly avowed to utilize them.”

Although Blackwelder described the Feinstein-Kennedy bill as “Swiss cheese,” Specter, the ranking member of the Labor-HHS subcommittee, vowed to erect a strong barrier between research and reproductive cloning. “We’re going to put up a wall like Jefferson’s wall between church and state,” he said.

The second hearing, held by the Senate Judiciary Committee, featured testimony from Rep. Dave Weldon (R-Fla.), who shepherded the cloning ban through the House. Weldon addressed the moral status of a human embryo, describing the “great peril of allowing the creation of human embryos, cloned or not, specifically for research purposes.” He added, “Regardless of the issue of personhood, nascent human life has some value.”

Among those testifying in favor of the Feinstein-Kennedy bill was Henry T. Greely, a Stanford law professor representing the California Advisory Committee on Human Cloning, which released a report in January 2002 entitled, Cloning Californians? The report, which was mandated by a 1997 state law imposing a temporary ban on reproductive cloning, unanimously recommended a continued ban on reproductive cloning but not on research cloning.

“Government should not allow human cloning to be used to make people,” Greely said. “It should allow with due care human cloning research to proceed to find ways to relieve diseases and conditions that cause suffering to existing people.”

Future is cloudy for Space Station as new NASA chief takes helm

In a move that throws doubt on the future of the International Space Station (ISS), President Bush has appointed Sean O’Keefe, formerly deputy director of the Office of Management and Budget (OMB), to be the new administrator of the National Aeronautics and Space Administration (NASA). He replaces longtime administrator Daniel Goldin. The Senate confirmed the nomination on December 20.

The appointment was announced just a week after O’Keefe appeared at a November 7 House Science Committee hearing to defend a report criticizing the Space Station’s financial management. He came under fire from some committee members for saying that NASA should focus its current efforts on maintaining a three-person crew on the station rather than expanding its capacity to the seven-member crew originally envisioned for the ISS.

At his Senate confirmation hearing, O’Keefe received unanimous support from members of the Commerce Committee’s Subcommittee on Science, Technology, and Space, but the concerns expressed by the House Science Committee members were echoed loudly by Sens. Bill Nelson (D-Fla.) and Kay Bailey Hutchison (R-Tex.). Both hail from states that are home to NASA centers critical to the Space Station program.

Debate over ISS has heated up since NASA announced in the spring of 2001 that the project, which was already several years behind schedule and billions of dollars over budget, was facing another $4 billion cost overrun. In conjunction with OMB, NASA created the ISS Management and Cost Evaluation Task Force to assess the program’s financial footing. The task force, chaired by former Martin Marietta president A. Thomas Young, released a November 1, 2001, report that was the topic of the Science Committee hearing. Young testified alongside O’Keefe, who was representing OMB, and strongly endorsed the report.

The report found that “the assembly, integration, and operation of the [station’s] complex systems have been conducted with extraordinary success, proving the competency of the design and the technical team,” but that the program has suffered from “inadequate methodology, tools, and controls.” Further, the report concluded that the current program plan for fiscal years 2002-2006 was “not credible.”

The task force recommended major changes in program management and identified several areas for possible cost savings, including a reduction in shuttle flights to support the station from six to four per year. The panel also identified several steps to improve the program’s scientific research, including better representation of the scientific community within the ISS program office.

At the House hearing, O’Keefe and Young refused to endorse the seven-person crew originally planned for the station. Instead, they said NASA should produce a credible plan for achieving the “core complete” stage, which includes the three-person crew currently in place, before embarking on plans to expand. However, NASA has said that roughly 2.5 crew members are needed just to maintain the station, so with only three crew members, the time available for conducting research would be scarce. The task force confirmed that assessment.

Rep. Ralph M. Hall (D-Tex.), the ranking member of the Science Committee, said that the approach recommended by the task force “seems to me to be a prescription for keeping the program in just the sort of limbo that the task force properly decries… We should be explicit that we are committed to completing the space station with its long-planned seven-person crew capability.” A three-person ISS, he said, is not worth the money.

Some ISS partners, including Canada, Europe, Japan, and Russia, have also opposed a three-person crew, arguing that a failure to field at least a six-person crew would violate U.S. obligations under the agreements that created the ISS.

Science Committee Chair Sherwood L. Boehlert (R-N.Y.) defended the task force for arguing that, “we’re not going to buy you a Cadillac until we see that you can handle a Chevy.” In fact, nearly every member praised the panel’s efforts to help NASA control costs, if not its view of what ISS’s goals should ultimately be, but Rep. Dave Weldon (R-Fla.) criticized the proposed reduction in shuttle flights, saying it would lead to layoffs. “It looks like the administration is not a supporter of the manned space flight program,” he declared.

Language on evolution attached to education law

The conference report accompanying the education reform bill passed by Congress in December 2001 includes controversial though not legally binding language regarding the teaching of evolution.

Although Congress usually steers clear of any involvement in state and local curriculum development, the Senate in June 2001 passed a sense of the Senate amendment proposed by Sen. Rick Santorum (R-Penn.), dealing with how evolution is taught in schools. The resolution stated that, “where biological evolution is taught, the curriculum should help students to understand why this subject generates so much continuing controversy.”

Although the resolution appears uncontroversial on its face, the statement was hailed by anti-evolution groups as a major victory and criticized by scientific organizations. Proponents view it as an endorsement of the teaching of alternatives to evolution in science classes. Opponents say the resolution fails to make the crucial distinction between political and scientific controversy. Although evolution has generated a great deal of political and philosophical debate, the opponents argue, it is generally regarded by scientists as a valid and well-supported scientific theory.

In response to the resolution’s passage, a letter signed by 96 scientific and educational organizations was sent in August 2001 to Sen. Edward M. Kennedy (D-Mass.) and Rep. John Boehner (R-Ohio), the chairmen of the education conference committee, requesting removal of the language from the final bill. In an apparent compromise, the committee declined to include it as a sense of Congress resolution but added the following slightly altered language to the final conference report:

“The conferees recognize that a quality science education should prepare students to distinguish the data and testable theories of science from religious or philosophical claims that are made in the name of science. Where topics are taught that may generate controversy (such as biological evolution), the curriculum should help students to understand the full range of scientific views that exist, why such topics may generate controversy, and how scientific discoveries can profoundly affect society.”

This language has been praised by anti-evolution groups and criticized by scientists for the same reasons as the original amendment. Neither a sense of Congress resolution nor report language, however, has the force of law, so the debate has primarily symbolic importance.


“From the Hill” is prepared by the Center for Science, Technology, and Congress at the American Association for the Advancement of Science (www.aaas.org/spp) in Washington, D.C., and is based on articles from the center’s bulletin Science & Technology in Congress.

Life-Saving Products from Coral Reefs

During the past decade, marine biotechnology has been applied to the areas of public health and human disease, seafood safety, development of new materials and processes, and marine ecosystem restoration and remediation. Dozens of promising products from marine organisms are being advanced, including a cancer therapy made from algae and a painkiller derived from the venom of cone snails. The antiviral drugs Ara-A and AZT and the anticancer agent Ara-C, developed from extracts of sponges found on a Caribbean reef, were among the earliest modern medicines obtained from coral reefs. Other products, such as Dolastatin 10, isolated from a sea hare found in the Indian Ocean, are under clinical trials for use in the treatment of breast and liver cancers, tumors, and leukemia. Indeed, coral reefs represent an important and as yet largely untapped source of natural products with enormous potential as pharmaceuticals, nutritional supplements, enzymes, pesticides, cosmetics, and other novel commercial products. The potential importance of coral reefs as a source of life-saving and life-enhancing products, however, is still not well understood by the public or policymakers. But it is a powerful reason for bolstering efforts to protect reefs from degradation and overexploitation and for managing them in sustainable ways.

Between 40 and 50 percent of all drugs currently in use, including many of the anti-tumor and anti-infective agents introduced during the 1980s and 1990s, have their origins in natural products. Most of these were derived from terrestrial plants, animals, and microorganisms, but marine biotechnology is rapidly expanding. After all, 80 percent of all life forms on Earth are present only in the oceans. Unique medicinal properties of coral reef organisms were recognized by Eastern cultures as early as the 14th century, and some species continue to be in high demand for traditional medicines. In China, Japan, and Taiwan, tonics and medicines derived from seahorse extracts are used to treat a wide range of ailments, including sexual disorders, respiratory and circulatory problems, kidney and liver diseases, throat infections, skin ailments, and pain. In recent decades, scientists using new methods and techniques have intensified the search for valuable chemical compounds and genetic material found in wild marine organisms for the development of new commercial products. Until recently, however, the technology needed to reach remote and deepwater reefs and to commercially develop marine biotechnology products from organisms occurring in these environments was largely inadequate.

Finding a new drug in the sea, especially among coral reef species, may be 300 to 400 times more likely than isolating one from a terrestrial ecosystem. Although terrestrial organisms exhibit great species diversity, marine organisms have greater phylogenetic diversity, including several phyla and thousands of species found nowhere else. Coral reefs are home to sessile plants and fungi similar to those found on land, but they also contain a diverse assemblage of invertebrates such as corals, tunicates, molluscs, bryozoans, sponges, and echinoderms that are absent from terrestrial ecosystems. These animals spend most of their time firmly attached to the reef and cannot escape environmental perturbations, predators, or other stressors. Many engage in a form of chemical warfare, using bioactive compounds to deter predation, fight disease, and prevent overgrowth by fouling and competing organisms. Some animals also use toxins to catch their prey. These compounds may be synthesized by the organism itself or by the endosymbiotic microorganisms that inhabit its tissues, or they may be sequestered from the food the organism eats. Because of their unique structures or properties, these compounds may yield life-saving medicines or other important industrial and agricultural products.

Despite these potential benefits, the United States and other countries are only beginning to invest in marine biotechnology. For the past decade, Japan has been the leader, spending $900 million to $1 billion each year, about 80 percent of which comes from industry. In 1992, the U.S. government invested $44 million in marine biotechnology research, which is less than 1 percent of its total biotechnology R&D budget; an additional $25 million was invested by industry. In 1996, the latest year for which figures are available, U.S. government investment in marine biotechnology research was estimated at only $55 million. Even with limited funding, U.S. marine biotechnology efforts since 1983 have resulted in more than 170 U.S. patents, with close to 100 new compounds patented between 1996 and 1999. U.S. support for marine biotechnology research is likely to increase in the coming years. According to the National Oceanic and Atmospheric Administration, marine biotechnology has become a multibillion-dollar industry worldwide, with projected annual growth of 15 to 20 percent during the next five years.

Expanded efforts by the United States and other developed countries to evaluate the medical potential of coral reef species are urgently needed, particularly to develop a new generation of specialized tools and processes for the collection, identification, evaluation, and development of new bioproducts. The high cost and technical difficulties of identifying and obtaining marine samples, the need for novel screening technologies and techniques to maximize recovery of bioactive compounds, and difficulties in identifying a sustainable source of an organism for clinical development and commercial production are among the primary factors limiting marine bioprospecting activities.

The identification and extraction of natural products require major search and collection efforts. In the past, invertebrates were taken largely at random from reefs, often in huge quantities, but bioprospectors rarely provided an indication of the number of organisms they were seeking, making it difficult to assess the impact associated with collection. Chemists homogenized hundreds of kilograms of an individual species in hopes of identifying a useful compound. This technique often yielded a suite of compounds, but each occurred in trace amounts that were insufficient for performing the wide range of targeted assays necessary to identify a compound of interest. For example, in one report a U.S. bioprospecting group collected 1,600 kg of a sea hare to isolate 10 mg of a compound used to fight melanoma. Another group collected 2,400 kg of an Indo-Pacific sponge to produce 1 mg of an anticancer compound. Yet as much as 1 kg of a bioactive metabolite may ultimately be required for drug development.

Targeting a promising compound is only the first step; a renewable source for the compound must also be established before a new drug can be developed. Many suitable species occur at a low biomass or have a limited distribution, and in some cases a compound may occur only in species exposed to unusual environmental conditions or stressors. Because these compounds often come from rare or slow-growing organisms or are produced in minute quantities, collecting a target species in sufficient amounts for continued production of a new medicine may be unrealistic.

Sustainable management

It is estimated that less than 10 percent of coral reef biodiversity is known, and only a small fraction of the described species have been explored as a source of biomedical compounds. Even for known organisms, there is insufficient knowledge to promote their sustainable management. Unfortunately, a heavy reliance on coral reef resources worldwide has resulted in the overexploitation and degradation of many reefs, particularly those near major human populations. Managing these critical resources has become more difficult because of economic and environmental pressures and continued human population growth.

Seahorses are a prime example of a resource that is rapidly collapsing. Demand for seahorses for use in traditional medicine increased 10-fold during the 1980s, and the trade continues to grow by 8 to 10 percent per year. With an estimated annual seahorse consumption of 50 tons in Asia alone, representing about 20 million animals supplied by 30 different countries, collection pressures on seahorses are causing rapid depletion of target populations. According to a study by Project Seahorse, seahorse populations declined worldwide by almost 50 percent between 1990 and 1995. In the absence of effective management of coral reefs and the resources they contain, many species that are promising as new sources of biochemical materials for pharmaceuticals and other products may be lost before scientists have the opportunity to evaluate them.

Expanded efforts to evaluate the medical potential of coral reef species are urgently needed.

Thus, as a first step in promoting continued biomedical research for marine natural products, countries must develop management plans for sustainable harvest of potentially valuable invertebrates. This must occur before large-scale extraction takes place. Because most of the species desired for biotechnology have little value as a food fishery, management strategies for sustainable harvest have been lacking, and much of the needed information on the population dynamics and life history of these organisms is unknown. However, through joint efforts involving scientists, resource managers in the source country, and industry, it is possible to develop management plans that promote sustainable harvest, conservation, and equitable sharing of benefits for communities dependent on these resources.

For instance, researchers in the Bahamas identified a class of natural products, pseudopterosins, from a gorgonian coral (Pseudopterogorgia elisabethae) that have anti-inflammatory and analgesic properties. With help from the U.S.-funded National Sea Grant College Program, the population biology of the species was examined in detail, with relevant information applied toward development of a management plan for sustainable harvest. This has allowed researchers to obtain sufficient supplies over a 15-year period without devastating local populations. By ensuring an adequate supply, this effort ultimately led to the purification of a product now used as a topical agent in an Estee Lauder skin care product, Resilience. In 1995, pseudopterosin was among the University of California’s top 10 royalty-producing inventions; today it has a market value of $3 million to $4 million a year.

Commercialization

New avenues for the commercial development of compounds derived from coral reef species may enhance the use of these resources and contribute to the global economy. If properly regulated, bioprospecting activities within coral reef environments may fuel viable market-driven incentives to promote increased stewardship for coral reefs and tools to conserve and sustainably use coral reef resources. These activities may also promote beneficial socioeconomic changes in poor developing countries.

Unfortunately, the difficulty of finding new drugs among the millions of potential species, the large financial investment involved, and the long lead times before drugs can be brought to market have meant that the resources themselves have relatively low value. The anticancer metabolite developed from a common bryozoan, Bugula spp., is currently worth up to $1 billion per year. But the value of one sample in its raw form is only a few dollars. This makes it difficult to add significant value to coral reefs for conservation strictly on economic terms.

When bioprospecting has resulted in significant funds for conservation, special circumstances have been involved. The most success has been achieved when bioprospecting is carried out through international partnerships that include universities, for-profit companies, government agencies, conservation organizations, and other groups. Partnerships allow organizations to take advantage of differential expertise and technology, thereby providing cost-effective mechanisms for collection, investigation, screening, and development of new products. Partnerships also facilitate access to coral reef species, promote arrangements for benefit sharing, and assist in improving understanding of the taxonomy and biogeography of species of interest.

Many of the marine natural products partnerships negotiated in recent years between private firms and research institutes in developing countries have involved outsourcing by large R&D firms. In this approach, large companies engaged in natural products R&D work with suppliers, brokers, and middlemen in developing countries to obtain specimens of interest and with specialized companies that conduct bioassays or chemical purification of natural products. Through the development of contracts with several large pharmaceutical companies, Costa Rica was able to ensure that substantial funds were directed toward conservation. This was successful primarily because Costa Rica developed tremendous capacity to provide up-front work in taxonomy and initial screening of samples, which may not be the case in other developing countries.

An alternative approach often undertaken in the United States and Europe involves in-licensing, in which large R&D companies acquire the rights to bioactive compounds that have been previously identified by other firms or by nonprofit research institutes. For example, the National Cancer Institute (NCI) provides government research grants that support marine collecting expeditions and the preliminary extraction, isolation, and identification of a compound and its molecular structure and novel attributes. Once a potentially valuable compound is identified, NCI may patent it and license it to a pharmaceutical company to develop, test, and market. In this approach, the company is required to establish an agreement with the source country for royalties and other economic compensation. In addition, scientists in the host country are invited to assist in the development of a new product, and the U.S. government guarantees protection of biodiversity rights and includes provisions for in-country mariculture of organisms that contain the compound, in the event that it cannot be synthesized.

The Convention on Biological Diversity (CBD) is leading an international effort to develop guidelines for access to coastal marine resources under the jurisdiction of individual countries for marine biotechnology applications. The CBD calls for conservation of biological diversity, the sustainable use of marine resources, and the fair and equitable sharing of benefits that arise from these resources, including new technologies, with the source country. Ratification of this agreement, from the standpoint of expanded development in marine biotechnology, requires that coastal nations agree on a unified regime governing access to marine organisms. Countries with coral reefs must also establish an acceptable economic value for particular marine organisms relative to the R&D investment of the biotech firm involved in the collection of the organism and the development of a new bioproduct. Although this type of international agreement would significantly affect the operations of the U.S. marine biotechnology industry, the United States cannot play an effective role in the process because it is not a party to the convention.

Options for sustainable use

The development and marketing of novel marine bioproducts can be achieved without depleting the resource or disrupting the ecosystem, but it requires an approach that combines controlled, sustainable collection with novel screening technologies, along with alternative sources for compounds of value. Instead of the large-scale collections that were formerly commonplace, more systematic investigations are now being undertaken, in which certain groups are targeted and the isolated materials are tested in a wide variety of screening assays. These collection missions involve the selective harvest of a very limited number of species over a broad area, with a focus on soft-bodied invertebrates that rely on chemical defenses for survival and marine microorganisms that coexist with these organisms. Assays used in major pharmaceutical drug discovery programs are also beginning to consider the function of the bioactive compounds in nature and their mechanisms of action, which can provide models for the development of new commercial products.

The ability to partition collections into categories of greater and lesser potential has raised the value of these species. For instance, sponges are ideal candidates for bioprospecting, because a single sponge can be populated by dozens of different symbiotic bacteria that produce an extraordinary range of chemicals. In Japan, researchers have examined more than 100 species of coral reef sponges for biomedical use, and more than 20 percent of them have been found to contain unique bioactive compounds. With greater knowledge of appropriate types of organisms for screening, companies may be willing to pay a premium for exclusive access to promising research prospects, thus creating an incentive to conserve ecological resources in order to charge access fees.

Investment incentives are needed to encourage partnerships to engage in marine natural products research.

With the advent of genomic and genetic engineering technologies, bioprospectors now have environmentally friendly and economically viable alternative screening tools. For any given species, a suitable sample consists of as little as 1 to 1.5 kilograms wet weight. In one screening approach, scientists collect small samples of an organism, extract the DNA from that species and its symbiotic microbes, and clone it into a domesticated laboratory bacterium. Thus, the genetically engineered bacterium contains the blueprint necessary to synthesize the chemical of interest, and it can ultimately create large quantities of the chemical without additional reliance on the harvest of wild populations.

Although synthetic derivatives provide an alternative to wild harvest, synthesis sometimes proves impossible or uneconomical, as in the case of an anticancer compound extracted from a sea squirt (tunicate). Mass production of a target species through captive breeding or mariculture may provide a consistent alternative supply. Many coral reef organisms in demand for the aquarium trade and the live reef food fish trade, as well as several invertebrates that contain valuable bioactive compounds, such as sponges, are promising species for intensive farming, and there are already a number of success stories. For example, sponge mariculture capitalizes on the ability of sponges to regenerate from small clippings removed from adult colonies. To minimize harvest impacts, only a small portion of a sponge needs to be removed for aquaculture; the cut sponge heals quickly and over time will regrow over the injury.

Mariculture offers another benefit as well. Through the use of selective husbandry or other mariculture protocols, it may be possible to select for a particular genetic strain of a species that produces a higher concentration of a metabolite of interest, thereby reducing the number of individuals needed for biotechnology applications. Mariculture can also provide a source of organisms to restock wild populations, which provides additional incentive for participation by a developing country with coral reef resources.

Four key steps

Coastal populations worldwide will continue to rely on coral reefs for traditional uses, subsistence, and commerce far into the future. In many cases, increased, unsustainable rates of collection coupled with pollution, habitat destruction, and climate change are threatening the vitality of these precious ecosystems. Coral reefs are vast storehouses of genetic resources with tremendous biomedical potential that can provide life-saving sources of new medicines and other important compounds, if these precious resources are properly cared for. To meet this challenge, research communities, government agencies, and the private sector must interact more effectively.

Through four key steps, the benefits of these activities can extend far beyond their medicinal potential to provide sustainable sources of income for developing countries and promote increased stewardship of the resources. First, there is a need for investment incentives to encourage partnerships among governments, local communities, academia, and industry to increase marine natural product research in coral reef environments. Second, those who stand to gain from the discovery of a new product must direct technical and financial assistance toward research and monitoring of the target species and the development and implementation of sustainable management approaches in exporting (developing) countries. Third, it is critical that biotech firms promote equitable sharing of benefits with the entire communities or source countries from which the raw materials come. Finally, expanded efforts are needed to reduce the demand for wild harvest and to improve the yield of bioactive compounds, including mariculture, selective husbandry, and genomic and genetic engineering.

Without environmentally sound collection practices, only a few will benefit financially from new discoveries, and only over the short term. In the long term, communities may ultimately lose the resources on which they depend. Many species will perish, including those new to science, along with their unrealized biomedical potential. The ultimate objective of marine biotechnology should not be to harvest large volumes and numbers of species for short-term economic gains, but rather to obtain the biochemical information these species possess without causing negative consequences to the survival of the species and the ecosystems that support them. We must strive for a balance among the needs of human health, economics, and the health of our coral reefs, all of which are inextricably intertwined. This approach will ensure that marine resources that may prove valuable in the fight against disease will be available for generations to come.

A Sweeter Deal at Yucca Mountain

As this is written in the late winter of 2002, the stage is set for a struggle in Congress over whether to override the impending Nevada veto of President Bush’s selection of the Yucca Mountain nuclear waste disposal site. The geologic repository that would be built there for spent fuel from nuclear reactors and for highly radioactive defense waste would be the first such facility anywhere in the world. The criticism and doubts raised about the president’s decision are cause enough–even for one long convinced that the place for the repository is Nevada–to wonder whether the Yucca Mountain project can be licensed and built.

Where I come out is, yes, the U.S. Senate and House of Representatives should overturn the Nevada veto. The accelerated procedures afforded by the Nuclear Waste Policy Act of 1982 preclude the filibustering and other parliamentary tactics that otherwise might block this present chance for the greatest progress yet on a nuclear waste problem that has eluded solution for over three decades. But still confronting the project if the Nevada veto is overturned will be the multitudinous lawsuits that the state is bringing against it. Even if they fall short on the merits, these suits could raise to new levels Nevada’s bitterness toward the project, further intensify distrust of the site and how it was chosen, and delay for several years a licensing application to the U.S. Nuclear Regulatory Commission. What is required of Congress in these circumstances is not just an override of the state veto but also major new amendments to the Nuclear Waste Policy Act strengthening the Yucca Mountain project financially, technically, and politically.

Congress must, above all, seek a dramatic reconciliation between Washington and the state of Nevada. The goal should be a greater spirit of trust, an end to the lawsuits, substantial direct and collateral economic benefits for Nevada, a stronger influence for the state in the Yucca Mountain project, and a stronger University of Nevada, the state’s proudest institution. A possibility to consider would be for congressional leaders to invite the Nevada delegation on Capitol Hill to join with them in a collaborative legislative effort to establish in Nevada a new national laboratory on nuclear waste management.

The Nevadans could look to their own inventiveness in any such initiative, aware of course that the final product will come about from much pulling and hauling from diverse quarters and diverse interests. Here we put forward a few possibilities that might go into the mix. Although the new laboratory would be created as a permanent institution with a broad mandate, central to that mandate in the beginning would be to take over direction of the Yucca Mountain project from the U.S. Department of Energy. Equipped with its own hot cells and other facilities for handling radioactive materials, the laboratory could assume a hands-on role in much of the high-end research and development work that is now done by project contractors. Its director, appointed by the president for a fixed term of, say, seven years, and removable only by the president, could be a far stronger administrator than the nuclear waste program has ever had before and one who is allowed wide latitude. Indeed, should the director come to conclude that not even with the best science and engineering can Yucca Mountain be made a workable site, the director could go to the president and the Congress and recommend its rejection in favor of finding another candidate site, whether in Nevada or elsewhere.

An advisory committee chaired by the Governor of Nevada would follow the laboratory’s work closely and be aided in this by a select, well-staffed group similar to the existing congressionally mandated, presidentially appointed Nuclear Waste Technical Review Board. Funding of the Yucca Mountain project and other activities under the Energy Department’s present Office of Civilian Radioactive Waste Management would continue to come from the Nuclear Waste Fund and the user fee on nuclear-generated electricity, but the new laboratory’s activities not covered by this dedicated funding would be dependent on other congressional appropriations.

Realistically, growth of the new lab would come, to one degree or another, at the expense of other national laboratories, particularly the existing nuclear weapons laboratories (Lawrence Livermore in California and Los Alamos and Sandia in New Mexico) where access for outside scientists and graduate students is severely constrained by their highly classified defense work. Creating the new lab would for some members of Congress be politically painful. But that would simply be part of the price for a successful Yucca Mountain project and, over the longer term, for new and more effective nuclear waste management initiatives across a much broader front.

Where would the new laboratory be located? At Yucca Mountain? In the vicinity of the University of Nevada’s home campus in Reno or near its new campus in Las Vegas? These would be delicate and important questions for Nevadans, but the new lab would surely bring new strength to the University in a variety of ways.

Of course, a great threshold question remains: Is there any chance of Nevada’s political leaders actually doing an about-face and accepting a reconciliation that allows the Yucca Mountain project to go forward? It’s no sure thing, but consider the following: By the fall of 2002, Congress may already have overridden the Nevada veto, possibly by a comfortable margin. Also, the Nevada leaders will know that if their lawsuits succeed only in delaying the project, the state’s leverage for gaining major concessions from Congress will either vanish or be sharply reduced. Furthermore, the University of Nevada and many businesses may see a national laboratory in the state starting or encouraging major new economic activities for Nevada, not just in nuclear waste isolation but also in other high-tech work for government and private industry.

More money, more research

Financially, Congress could give both the project and the new national laboratory a major boost by designating the waste program a mandatory budget account so that it is no longer denied half or more of the money collected each year from utility ratepayers in user fees on nuclear energy. In fiscal 2001, the fee revenue totaled $880 million. Moreover, an unexpended balance of nearly $12 billion has been allowed to pile up in the Nuclear Waste Fund in order to reduce the federal budget deficit. Congress must now forgo this budgetary sleight of hand and ensure that the needs of the Yucca Mountain project are properly met.

Technically, Congress should have the project assume an exploratory thrust going far beyond anything now contemplated by DOE. It could in a general way urge the new Nevada laboratory to consider an innovative phased approach for testing current plans and exploring attractive technical alternatives. The licensing application might call for two or more experimental waste emplacement modules to confirm the engineering feasibility of project plans.

Project reviewers, who include many proponents of a phased approach to repository development, could help identify new possibilities worthy of a trial. For instance, scientists at the Oak Ridge National Laboratory in Tennessee favor a concept of enveloping spent fuel with depleted uranium within the waste containers. They see this concept as doubly attractive, affording both greater assurance of waste containment and safe disposal of much of the nation’s environmentally burdensome inventory of depleted uranium. Some 600,000 tons of depleted uranium sits outside in aging steel cylinders at the two inactive uranium enrichment plants at Oak Ridge, Tennessee, and Portsmouth, Ohio, and the still active plant at Paducah, Kentucky. A decay product of depleted uranium is the dangerously radioactive radium-226.

Depleted uranium dioxide in a granular form could be used to fill voids in the waste containers and also be embedded in steel plating to create a tough, dense layer nearly 10 inches thick just inside the containers’ thin corrosion-resistant outer shell. It would be meant to serve as a sacrificial material, grabbing off any oxygen entering the containers and delaying for many thousands of years degradation of the spent fuel.

Congress should have the project assume an exploratory thrust going far beyond anything now contemplated by DOE.

A number of close followers of the Yucca Mountain project, in Nevada and elsewhere, doubt that its weaknesses will ever be overcome. But in my view the problems are curable and the purported alternatives are either illusory or unacceptable. The default solution if geologic isolation of spent fuel and high-level waste fails is continued surface storage. In principle, this could involve beginning central storage in Nevada or elsewhere, but unfortunately what is far more likely is for storage to remain for many years at the roughly 131 sites in 39 states where the spent fuel and high-level waste are stored now. Indeed, the political effect of a congressional rejection of the project could be to freeze virtually all further movement of this material. With no Yucca Mountain project, there would be no foreseeable prospect of permanent disposal anywhere.

Fourteen years ago, Congress abandoned the effort to screen multiple candidate repository sites by enacting the NWPA Amendments of 1987. The narrowing of the search to Yucca Mountain was “political,” to be sure, but it was also sensible and practical viewed on the merits. The cost of “characterizing” sites, put at not more than $100 million per site in 1982, was soaring, although no one could then foresee that by 2002 characterization of the Yucca Mountain site alone would exceed $4 billion. Moreover, Yucca Mountain offered clear advantages compared to the other two sites still in the running. A repository at Hanford, Washington, in highly fractured lava rock was to have been built deep within a prolific aquifer, posing a high risk of catastrophic flooding. A repository in the bedded salt of Deaf Smith County, Texas, would have penetrated the Ogallala Aquifer, a resource of great political sensitivity in that very rich agricultural county.

A search for a second repository site in the eastern half of the United States was abruptly terminated by the Reagan administration in 1986 essentially because the political price had become too great. Four U.S. Senate seats were at stake in the seven states most targeted by this search and the Republican candidates were becoming increasingly imperiled. Today, few believe Congress will ever reopen the search for repository sites.

A stronger project

Managers of the Yucca Mountain project may have unwittingly set a trap for themselves by choosing to make the case for licensing by relying far less on the mountain’s natural hydrogeologic characteristics to contain radioactivity than on the engineered barriers that they propose. These barriers are principally an outer shell of nickel alloy for the massive spent-fuel and high-level waste containers, plus a titanium “drip shield” to go above the containers. The cost of the two together is put at $9.8 billion (year 2000 dollars).

Quantifying the effectiveness of a well-defined engineered barrier might at first appear easier than determining the effectiveness of a natural system that is mostly hidden inside the mountain and only partly understood. But in truth the uncertainties associated with the one may be every bit as great as those associated with the other. The corrosion resistance over thousands of years of the chosen alloy or any other manmade material is simply not known, and experts retained by Nevada can point to corrosion processes that might well compromise the proposed barriers.

Granted, the uncertainties as to waste containment associated with the natural system are significant. Into the early 1990s project managers felt sure that since the repository horizon is 800 feet above the water table, waste containers would stay dry for many thousands of years and thus be protected from corrosion. But there has since been evidence (albeit ambiguous and now under intense review) of a small amount of water infiltrating the mountain from the surface and reaching the repository level within several decades. Given the less arid climate expected in the future, somewhat more water could be present to infiltrate, although any flows of water reaching waste emplacement tunnels might simply go through fractures to deeper horizons without affecting waste containers. But an additional concern has to do with water contained within pores in the rock causing a high general humidity.

The U.S. Geological Survey has formally supported selection of Yucca Mountain for repository development, although with conditions. The Nuclear Waste Technical Review Board sees no reason for disqualifying the site but characterizes the technical work behind the project performance assessment as “weak to moderate.”

An unresolved design issue is whether to allow an emplacement density for heat-generating spent fuel that would raise the temperature of the rock near waste containers above the boiling point of water, a question that bears directly on the extent of the repository’s labyrinth of emplacement tunnels. In view of this and other unresolved issues, whether the project can meet its target of filing a licensing application by 2004 is hotly disputed. But a delay of a few years or possibly even longer might be desirable in any case, affording time for project plans to include test modules for innovative engineered barriers that could strengthen the case for licensing–and allowing time for new institutional arrangements to fall into place if a new national laboratory were to assume direction of the project.

To sum up, at this critical juncture in our long tormented quest for a spent-fuel and high-level waste repository, three things appear needed. First, an override by Congress of Nevada’s veto of the Yucca Mountain site. Next, amendments to the Nuclear Waste Policy Act to encourage a profound political reconciliation between Nevada and Washington and to make the repository project stronger financially and technically. Finally, an aggressively exploratory design effort to ensure a repository worthy of our confidence in the safe containment of radioactivity over the long period of hazard.

A Makeover for Engineering Education

Hollywood directors are said to be only as good as their last picture. Maintaining their reputations means keeping up the good work–continuing to do encores that are not only high-quality but that fully reflect the tastes and expectations of the time.

A similar measure applies to engineers. Though we are fresh from a whole century’s worth of major contributions to health, wealth, and the quality of life, there is trouble in paradise: The next century will require that we do even more at an even faster rate, and we are not sufficiently prepared to meet those demands, much less turn in another set of virtuoso performances.

The changing nature of international trade and the subsequent restructuring of industry, the shift from defense to civilian applications, the use of new materials and biological processes, and the explosion of information technology–both as part of the process of engineering and as part of its product–have dramatically and irreversibly changed the practice of engineering. If anything, the pace of this change is accelerating. But engineering education–the profession’s basic source of training and skill–is not able to keep up with the growing demands.

The enterprise has two fundamental, and related, problems. The first regards personnel: Fewer students find themselves attracted to engineering schools. The second regards the engineering schools, which are increasingly out of touch with the practice of engineering. Not only are they unattractive to many students in the first place, but even among those who do enroll there is considerable disenchantment and a high dropout rate (of over 40 percent). Moreover, many of the students who make it to graduation enter the workforce ill-equipped for the complex interactions, across many disciplines, of real-world engineered systems. Although there are isolated “points of light” in engineering schools, it is only a slight exaggeration to say that students are being prepared to practice engineering for their parents’ era, not for the 21st century.

What’s needed is a major shift in engineering education’s “center of gravity,” which has barely moved since the last shift, some 50 years ago, to the so-called “engineering science” model. This approach–which emphasizes the scientific and mathematical foundations of engineering, as opposed to empirical design methods based on experience and practice–served the nation well during the Cold War, when the national imperative was to build a research infrastructure to support military and space superiority over the Soviet Union. But times have clearly changed, and we must now reexamine that engineering-science institution, identify what needs to be altered, and pursue appropriate reforms.

An agenda for change

Engineering is not science or even just “applied science.” Whereas science is analytic in that it strives to understand nature, or what is, engineering is synthetic in that it strives to create. Our own favorite description of what engineers do is “design under constraint.” Engineering is creativity constrained by nature, by cost, by concerns of safety, environmental impact, ergonomics, reliability, manufacturability, maintainability–the whole long list of such “ilities.” To be sure, the reality of nature is one of the constraint sets we work under, but it is far from the only one; it is seldom the hardest one and almost never the limiting one.

Today’s student-engineers need to acquire not only the skills of their predecessors but many more, in broader areas. As the world becomes more complex, engineers must appreciate more than ever the human dimensions of technology, have a grasp of the panoply of global issues, be sensitive to cultural diversity, and know how to communicate effectively. In short, they must be far more versatile than the traditional stereotype of the asocial geek.

These imperatives strongly influence how a modern engineer should be educated, which means that he or she requires a different kind of education than is currently available in most engineering schools. In particular, we see six basic areas in great need of reform:

Faculty rewards. Engineering professors are judged largely by science-faculty criteria–and the practice of engineering is not one of them. Present engineering faculty tend to be very capable researchers, but too many are unfamiliar with the worldly issues of “design under constraint” simply because they’ve never actually practiced engineering. Can you imagine a medical school whose faculty members were prohibited from practicing medicine? Similarly, engineering professors tend to discount scholarship on the teaching and learning of their disciplines. Can we long tolerate such stagnation at the very source of future engineers? (These perceptions of engineering faculty are not merely our own. When the National Academy of Engineering convened 28 leaders from industry, government, and academia in January 2002 to discuss research on teaching and learning in engineering, the retreat participants agreed that although an increased focus on scholarly activities in engineering teaching and learning is much needed, the current faculty-reward system does not value these activities.)

Curriculum. Faculty’s weakness in engineering practice causes a sizeable gap between what is taught in school and what is expected from young engineers by their employers and customers. The nitty-gritty of particular industries cannot, and should not, be included in the curriculum–particularly for undergraduates. But although everyone pretty much agrees that students will continue to need instruction in “the fundamentals,” the definition of this term has been rapidly changing. Whereas physics and continuous mathematics largely filled the bill for most of the 20th century, there are now additional fundamentals. For example, discrete mathematics (essential to digital information technology), the chemical and biological sciences, and knowledge of the global cultural and business contexts for design are now important parts of an engineer’s repertoire.

The first professional degree. We can’t just add these “new fundamentals” to a curriculum that’s already too full, especially if we still claim that the baccalaureate is a professional degree. And therein lies the rub: Whereas most professions–business, law, medicine–do not consider the bachelor’s degree to be a professional degree, engineering does. Maintaining such a policy in this day and age is a disservice to students, as it necessarily deprives them of many of the fundamentals they need in order to function; and it is a misrepresentation to employers.

Formalized lifelong learning. It has been said that the “half-life” of engineering knowledge–the time in which half of what an engineer knows becomes obsolete–is in the range of two to eight years. This means that lifelong learning is essential to staying current throughout an engineering career, which may span some 40 years. Yet the notion, at least as a formalized institution, has not been part of the engineering culture. This has to change, as merely taking training in the latest technology is not good enough. The fundamentals you learned in college are still fundamental, but they aren’t the only ones in this rapidly changing profession.
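
A back-of-the-envelope illustration (treating the half-life figure literally, which is of course a simplification) makes the point vivid: if professional knowledge decays with a half-life of T years, the fraction still current after t years is

\[
f(t) = \left(\tfrac{1}{2}\right)^{t/T}, \qquad \text{so for } T = 5 \text{ and } t = 40, \quad f(40) = \left(\tfrac{1}{2}\right)^{8} \approx 0.4\%.
\]

In other words, an engineer who never refreshed his or her knowledge over a 40-year career would end it with well under 1 percent of that knowledge still current. That, in a nutshell, is the case for formalized lifelong learning.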

Diversity. An essential aspect of service to society is inclusiveness–the need to “leave no child behind.” But although diversity in our engineering schools has improved in recent years, we’ve leveled off. Fewer than 20 percent of entering freshmen are women, and underrepresented minorities account for just over 16 percent. Among the nation’s engineering faculty, the numbers are worse: Fewer than 10 percent are women, and fewer than 5 percent are underrepresented minorities. Another way to look at the situation is this: Although minority men and all women represent 65 percent of the general population, they make up only 26 percent of the B.S. graduates in engineering. Such figures are unacceptable, and not just as an equity issue. It’s a workforce issue and, even more important, it’s a quality issue. Our creative field is deprived of a broad spectrum of life experiences that bear directly on good engineering design. Put more bluntly, we’re not getting the bang for the buck that we should.

Technological literacy in the general population. Thomas Jefferson founded the University of Virginia in the conviction that we could not have a democracy without an educated citizenry. Given that technology is now one of the strongest forces shaping our nation, we think he would consider our present democracy imperiled. Though our representatives in Congress are regularly called upon to vote on technology-based issues that will profoundly affect the nation, they and the people who elect them are, for the most part, technologically illiterate. Engineering schools have not traditionally provided courses for non-engineering majors, but in our view it’s time they did. These courses will not be of the kind we are accustomed to teaching, as they’ll relate technology and the process of creating it–that is, engineering–to larger societal issues. But noblesse must oblige: Technological literacy is now essential to citizens’ pursuit of a better and richer life.

Steps in the right direction

Clearly, a great deal needs to be changed, and the scale of the challenge can be daunting. But enlightened, come-from-behind reinvention is nothing new to our society.

Consider recent turnarounds in the business sector, aided by methods that may similarly benefit education. Twenty years ago, U.S. industry was seriously lagging its counterparts in other countries, but U.S. companies found answers in modern quality-improvement techniques. A technique called “Six Sigma,” for example–used with great success by Motorola, General Electric, and Allied Signal, among others–basically forces you to identify the product, the customer, the current processes for making and delivering it, and the sources of waste. Then you redesign the system and evaluate it once again. This procedure continues indefinitely, resulting in a practice of constant reevaluation and reform.

By applying such standards of industrial quality control to engineering education, we could well create more excitement, add more value, and get more done for students in less time. Many of the seemingly insuperable problems of the largely arrested academic enterprise could yield imaginative answers.

One area of much-needed answers is the “supply side” issue: How can engineering schools attract more bright young people out of high school? Part of the solution, we believe, is a massive engineering-mentor program. Think of it as every engineer in the country identifying, say, four students with an interest in engineering and essentially adopting them for the duration of their school years–not just to give occasional encouragement but to stick with them and really guide them.

Many people in the profession stayed with engineering because at critical points in their careers they experienced the helping hand and timely advice of a mentor. Similarly, we could be there for these kids when the going gets tough and they are tempted to abandon engineering for an easier alternative. Eventually, like us, they will get hooked on engineering when they experience the thrill of invention–of bringing their skills to bear on a problem and achieving a useful and elegant solution, on time, on budget, and within all the other practical constraints. But until then, there needs to be the continuous support and interest of a mentor.

Numerous other innovations, both for increasing the supply of engineering students and improving the quality of their education, are possible. Now they will be more probable with the recent adoption, by the Accreditation Board for Engineering and Technology (ABET), of new and flexible criteria for putting authoritative stamps of approval on engineering schools’ curricula. Unlike previous criteria, which were rigidly defined, the Engineering Criteria 2000 encourage each school to be outcome-oriented, to define its own niche and structure its curriculum accordingly. This is a huge step in the right direction, liberating faculty to propose virtually any modification they deem appropriate, which may then be evaluated by ABET against the school’s goals. Essentially, the new criteria say: You can do that; just do it well!

Accreditation, though necessary, is not sufficient. When an innovation is in place and showing itself to be effective, it also needs to be publicly recognized so that it may be replicated or serve as an inspiration for similar efforts elsewhere. One mechanism for this process is the recently established Bernard M. Gordon Prize for Innovation in Engineering and Technology Education. Awarded by the National Academy of Engineering (NAE), it is a prominent way to highlight novel teaching methods that motivate and inform the next generation of engineering educators.

The Gordon Prize, which carries a cash award of $500,000 divided equally between the recipient and his or her institution, was presented for the first time this past February to Eli Fromm, professor of electrical and computer engineering and director of the Center for Educational Research at Drexel University’s College of Engineering. He was cited for implementing “revolutionary ideas that are showing dramatic results in areas such as student retention and minority involvement in engineering studies.” In particular, Fromm established the Enhanced Education Experience for Engineers (E4) program, in which faculty members from diverse disciplines teach side-by-side with engineering colleagues in a hands-on, laboratory atmosphere. The aim is to build students’ communication skills, expand their knowledge of business, and give them a deeper understanding of the design process itself.

This E4 program has now expanded to seven other academic institutions–under the new name of Gateway Engineering Education Coalition–and participating schools report an 86 percent increase in the retention of freshmen. They also note that the number of engineering degrees they now award to women has shot up by 46 percent, to Hispanics by 65 percent, and to African-Americans by 118 percent.

Organizations send a message

A basic condition for the reform of engineering education is changing the attitudes of engineering faculty, and one good way to win hearts and minds is for their professional organizations–especially those positioned to reward individual achievement–to take up the cause conspicuously.

The NAE, whose membership consists of the nation’s premier engineers recognized by their peers for seminal contributions, is one such organization, perhaps the country’s most prestigious. And it is strongly committed to moving engineering education’s center of gravity to a position relevant to the needs of 21st-century society. We refer to the Academy’s programs in this area as our “four-legged stool”:

First, we’ve reaffirmed that high-quality contributions to engineering education are a valid reason for election to the NAE. This criterion makes it clear that people’s creativity and excellence in engineering education can be rewarded in the same ways as outstanding technological contributions.

Second, we’ve established a standing committee of the Academy’s Office of the President–called, naturally enough, the Committee on Engineering Education–that identifies significant issues, organizes studies, develops long-term strategies, recommends specific policies to appropriate government agencies and academic administrations, coordinates with other leading groups in engineering and related fields, and encourages public education and outreach.

Third, we have created the Gordon Prize, essentially the “Nobel Prize” for engineering educators.

And fourth, the NAE is in the process of forming its very own center for focused research projects on teaching and learning in engineering. Usually we at the National Academies study things and then recommend that somebody else do something. Here we wish to also be implementers, developing innovative methods and disseminating the best results–our own as well as those of others.

Each of these initiatives serves a double purpose: developing or recognizing particular innovations and making the NAE’s imprimatur quite visible. The hope is that our activities send a message, particularly to engineering faculty throughout the country, that the Academy attaches great value to creative work in engineering education and wishes to acknowledge and spread the best ideas.

Other influential bodies must similarly get involved in this revitalization process, so that their efforts are mutually reinforcing. For example, we believe that most of what NAE is now trying to do in teaching and learning would not have been possible without ABET’s Engineering Criteria 2000.

Basically, to revitalize engineering education we must first and foremost change educators’ attitudes. Only then can engineering schools produce the open-minded and versatile modern engineers capable of making improvements to our quality of life–and to that of people around the world.

The average person today enjoys a great many advantages, most of them the result of engineering. But because we live in a time of rapid change, engineers in current practice face issues that little constrained their predecessors; and engineers we educate today will be practicing in future environments likely to be very different from our own. Thus if engineering education does not change significantly, and soon, things will only get worse over time.

The problem has now been studied to death, and the essential solution is clear. So let’s get on with it! It’s urgent that we do so.

Updating Automotive Research

On January 9, 2002, Department of Energy (DOE) Secretary Spencer Abraham announced a new public-private cooperative research program with the three major domestic automakers. According to a press release, the program would “promote the development of hydrogen as a primary fuel for cars and trucks, as part of our effort to reduce American dependence on foreign oil … [and] … fund research into advanced, efficient fuel cell technology, which uses hydrogen to power automobiles.” Called FreedomCAR (with CAR standing for cooperative automotive research), the program replaces the Partnership for a New Generation of Vehicles (PNGV), which was launched by the Clinton administration with great fanfare in 1993.

The reaction to FreedomCAR, as reflected in press headlines, was largely skeptical. “Fuelish Decision,” said the Boston Globe. “Fuel Cell Fantasy,” stated the San Francisco Chronicle. A Wall Street Journal editorial asserted that fuel cells were expensive baubles that wouldn’t be plausible without vast subsidies. Automotive News, the main automotive trade magazine, expressed caution, stating that, “FreedomCAR needs firm milestones… Otherwise it will be little more than a transparent political sham.”

DOE has since released a tentative set of proposed performance goals for vehicle subsystems and components, which were immediately endorsed by the three automakers. Nonetheless, skepticism about the program continues, which is not surprising given the Bush administration’s ambivalence toward energy conservation and tighter fuel economy standards. Yet viewed strictly as an updating of PNGV, FreedomCAR is a fruitful redirection of federal R&D policy and a positive, albeit only a first, step toward the hydrogen economy. However, for FreedomCAR to become an effective partnership and succeed in accelerating the commercialization of socially beneficial advanced technology, additional steps will need to be taken.

What was PNGV?

The goal of PNGV was to develop vehicles with triple the fuel economy of current vehicles [to about 80 miles per gallon (mpg) for a family sedan], while still meeting safety and emission requirements and not increasing cost. It was in part an attempt to ease the historical tensions arising from the adversarial regulatory relationship between the automotive industry and federal government. It would “replace lawyers with engineers” and focus on technology rather than regulation to improve fuel economy. It also reflected the government’s recognition that the nation’s low fuel prices resulted in an absence of market forces needed to “pull” fuel-efficient technology into the marketplace. As the technical head of the government’s side of the partnership said in a 1998 Rand report: “It is fair to say that the primary motivation of the industry was to avoid federally mandated fuel efficiency and emissions standards.”

PNGV was managed by an elaborate federation of committees from the three car companies and seven federal agencies. The government’s initial role was to identify key technology projects already being supported by one of the participating agencies. Industry teams determined which projects would be useful and whether additional or new research was needed. Throughout the process, technical decisions were made by industry engineers in collaboration with government scientists.

PNGV was high-profile. It engaged leaders at the highest levels and was championed by Vice President Gore. It was also subjected to extraordinary scrutiny, with a standing National Research Council (NRC) committee conducting detailed annual reviews.

The lofty rhetoric about and intense interest in PNGV did not, however, result in increased federal funding of advanced vehicle R&D. PNGV’s budget has always been controversial, with critics dubbing it “corporate welfare.” The ambitious program was realized by moving existing federal programs and funds under the PNGV umbrella. Funding for the PNGV partnership remained relatively steady at about $130 million to $150 million per year (or $220 million to $280 million if a variety of related federal programs not directly tied to PNGV goals are included).

From the start, the corporate welfare criticism was largely unfounded, and it became even less valid over time. Initially, about one-third of PNGV funding went to the automakers. That was largely carried over from already existing programs, and most of it was passed through to suppliers and other contractors. In any case, the automakers’ share steadily dropped to less than 1 percent of PNGV funding by 2001. Although definitive data are not available, in the latter years of the program, more than half of the funding went to the national energy labs, and most of the rest went to a variety of government contractors, automotive suppliers, and nonautomotive technology companies, with universities receiving well under 5 percent. The automakers also provided substantial matching funds, though a major portion of this spending was in proprietary product programs.

The relevant issue with regard to automakers should not have been corporate welfare but how the research was prioritized and funds were spent. The three automakers played a central role for several reasons: As the final vehicle assemblers and ultimate users of the technology, they had the best insight and judgment about research priorities, the greatest expertise and staff resources to assess development priorities against consumer preferences, and the ability and resources to lobby Congress on behalf of the PNGV program.

Another issue with PNGV was the use of a specific product as the goal. In general, it is wise to direct a program’s activities toward a specific tangible goal, and a prototype often fulfills that role. But in the case of PNGV, the goal for 2004 of building an 80-mpg production prototype that would cost no more to build than a conventional car was flawed. One problem is that government and industry managers were so focused on meeting the affordability goal that they felt obligated to pick technology–small advanced diesel engines combined with electric power trains–that was similar to existing technology and not the most promising in terms of societal benefits. Diesel engines have inherently high air pollutant emissions, and it is unknown whether they can meet U.S. environmental standards. In addition, neither advanced diesel nor hybrid electric engines are longer-term technologies. Honda and Toyota are already commercializing early versions of these technologies: Toyota began selling hybrid electric cars in Japan in 1997, and both Toyota and Honda began selling them in the United States in 2000. More fundamentally, as the final NRC committee review of the program so succinctly stated, “It is inappropriate to include the process of building production prototypes in a precompetitive, cooperative industry-government program. The timing and construction of such a vehicle is too intimately tied to the proprietary aspects of each company’s core business to have this work scheduled and conducted as part of a joint, public activity.”

Even the interim goal of hand-built concept prototypes by 2001 was questionable. Indeed, the goal of public-private partnerships with automakers should not be prototype vehicles. Automakers have garages full of innovative prototypes. What is needed is accelerated commercialization of socially beneficial technology.

Still, in some ways, PNGV was a success. Milestones were achieved on schedule; communication between industry and government reportedly improved; new technologies were developed, and some were used to improve the efficiency of conventional vehicle subsystems and components; the program disciplined federal advanced technology R&D efforts; scientific and technological know-how was transferred from the national labs; and apprehensive foreign competitors responded to the program with aggressive efforts of their own, which in turn sparked an acceleration of the U.S. efforts.

From a societal perspective, this boomerang effect may have been most important, because the foreign automakers feared that this partnership between the richest country and three of the largest automakers in the world would create the technology that would dominate in the future. New alliances (the European Car of Tomorrow Task Force and the Japan Clean Air Program) were formed. Toyota and Honda accelerated the commercialization of hybrid electric cars. Daimler Benz launched an aggressive fuel cell program. Ford reacted in turn by buying into the Daimler-Ballard fuel cell alliance and announcing plans to market hybrid electric vehicles in 2003. General Motors followed by dramatically expanding its internal fuel cell program, creating technology partnerships with Toyota, and buying into a number of small hydrogen and fuel cell companies. Struggling Chrysler, with its minimal advanced R&D capability, merged with Daimler Benz.

Why fuel cells and hydrogen?

Fuel cells provide the potential for far greater energy and environmental benefits than diesel-electric hybrids. Hydrogen fuel cell vehicles emit no air pollutants or greenhouse gases and would likely be more than twice as energy-efficient as internal combustion engine vehicles. When hydrogen is made from natural gas, as most of it will be for the foreseeable future, air pollution and greenhouse gases are generated at the conversion site (a fuel station or large, remote, centralized fuel-processing plant), but in amounts far less than those produced by comparable internal combustion engine vehicles.

Fuel cell vehicles are close to commercialization, but no major company has initiated mass production. In 1997, Daimler Benz announced that it would produce more than 100,000 fuel cell vehicles per year by 2004, and other automakers chimed in with similar forecasts. That initial enthusiasm quickly waned. Now, in 2002, several companies plan to place up to 100 fuel cell buses in commercial service around the world by the end of 2003 (none in the United States); Toyota has announced plans to sell fuel cell cars in Japan for $75,000, also in 2003, as has Honda; and a variety of automakers plan to place hundreds of fuel cell cars in test fleets in the United States, mostly in California, in that same time frame. The new conventional wisdom is that by 2010, fuel cell vehicles will progress to where hybrid electric cars are today, selling 1,000 to 2,000 per month in the United States, and that sales in the hundreds of thousands would begin two to three years later.

Energy companies must be brought into the partnership, because of their key role in the transition to fuel cell vehicles.

Two energy scenarios released in the fall of 2001 by Shell International suggest the wide range of possible futures. In one scenario, Shell posited that 50 percent of new vehicles would be powered by fuel cells in 2025 in the industrialized countries. In the second scenario, hybrid electric and internal combustion vehicles would dominate, with fuel cells limited to market niches.

Three key factors are slowing commercialization: low fuel prices, uncertainty over fuel choice, and the time and resources needed to reduce costs. Costs are expected to drop close to those of internal combustion engines eventually, but considerable R&D and engineering are still needed. Current fuel cell system designs are far from optimal. Consider that internal combustion engines, even after a century of intense development, are still receiving a large amount of research support to improve their efficiency, performance, and emissions (far more, even now, than is being invested in fuel cell development). Fuel cells are at the very bottom of the learning curve.

The fuel issue may be more problematic. Hydrogen is technically and environmentally the best choice, but it will take time and money to build a fuel supply system. Investments in hydrogen and hydrogen fuel cell vehicles by energy suppliers and automakers are slowed by the chicken-and-egg dilemma. Alternatively, methanol, gasoline, or gasoline-like fuels can be used, simplifying the fuel supply challenge, but the cost, complexity, energy, and environmental performance of vehicles would be degraded. As late as mid-2001, the conventional wisdom in industry was that gasoline or gasoline-like fuels would be used initially, followed later by hydrogen. Now, in the wake of the FreedomCAR announcement, a direct transition to hydrogen is gaining appeal.

Is FreedomCAR good policy?

Although FreedomCAR is an overdue corrective action, it is hardly a major departure. For one thing, fuel cell R&D was already gaining a greater share of PNGV funding (from about 15 percent of the DOE PNGV funds in the mid-1990s to about 30 percent in 2001), as automakers increasingly kept their knowledge about hybrid vehicle technology proprietary. Moreover, it appears that no major overhaul will take place as PNGV is turned into FreedomCAR. The program structure and the management team will remain essentially the same. Funding for fuel cell research will be increased slightly and funding for internal combustion engine research decreased slightly. The plan to produce production prototypes in 2004 has been abandoned.

More R&D funding must go to universities to train the engineers and scientists who will design future generations of vehicles.

Perhaps of greater concern is automaker reluctance to expand industry engagement to energy companies. This will likely limit the overall effectiveness of the program, because uncertainty about hydrogen supply and distribution is arguably the single biggest factor slowing the transition to fuel cell vehicles. Other automakers, including the Japanese, should also be engaged, because they, too, are ultimate users of the technology. But perhaps the best use of limited government R&D funds is to target 1) small innovative technology companies and larger technology companies that are not already major automotive suppliers; and 2) universities, because of their expertise in basic research, but equally because they will train the industry engineers and scientists who will design and build these vehicles in the future.

Finally, FreedomCAR does nothing, at least in the short run, to deal with the issues of fuel consumption and emissions. Fuel cell vehicles are not likely to gain significant sales before 2010, and perhaps even later. Given the reality of slow vehicle turnover, this means that fuel cells would not begin to make a dent in fuel consumption until at least 2015. Thus, if oil consumption and carbon dioxide emissions are to be restrained, more immediate policy action will be needed. If little or nothing is done in these areas, the Bush administration will continue to face the justifiable criticism that FreedomCAR is a means of short-circuiting the strengthening of the corporate average fuel economy standards.

Government’s role

Fuel cells and hydrogen show huge promise. They may indeed prove to be the Holy Grail, eventually taking vehicles out of the environmental equation, as industry insiders like to say. In a narrow programmatic sense, FreedomCAR is unequivocally positive as an updating and refashioning of the existing R&D partnerships and programs. Still, for a variety of reasons, including low fuel prices, industry still does not have a strong enough incentive to invest in the development and commercialization of this advanced, socially beneficial technology. Government will continue to have an important role to play.

The recommendations set forth below are premised on the understanding that government R&D is most effective when it targets technologies that are far from commercialization and have potentially large societal benefits; when funding is directed at more basic research; when the relevant industries are fragmented and have low R&D budgets; and when there is some mechanism or process for facilitating the conversion of basic research into commercial products. A strategy to promote sustainable cars and fuels must contain the following elements:


Advanced vehicle research, development, and education

  • Basic research directed at universities and national labs, especially focused on materials research and key subsystem technologies that will also have application to a wide range of other electric-drive vehicle technologies.
  • Leveraged funding of innovative technology companies.
  • Funding to universities to begin training the necessary cohort of engineers and scientists. This might merit creation of a second FreedomEDUCATION partnership (building on DOE’s small Graduate Automotive Technology Education centers program).

Hydrogen distribution

  • Assistance in creating a hydrogen fuel distribution system (with respect to safety rules, initial fuel stations, standardization protocol, pipeline rules, and so forth), requiring some R&D funding but in more of a facilitating role.
  • Funding to assist the development and demonstration of key technologies, such as solid hydrogen storage, and demonstration of distributed hydrogen concepts, such as electrolysis and vehicle-to-grid connections.

This activity might merit a third FreedomFUEL partnership.


Incentives and regulation

  • Incentives and rules that direct automakers and energy suppliers toward cleaner, more efficient vehicles and fuels.
  • Incentives to consumers to buy socially beneficial vehicles and fuels.

These three sets of strategies must all be pursued to ensure a successful and timely transition to socially beneficial vehicle and fuel technology. The last set of initiatives is particularly critical, not just to ensure a timely transition to fuel cells and hydrogen but also to accelerate the commercialization and adoption of already existing socially beneficial technologies, including hybrid electric vehicle technologies.

Solving the Broadband Paradox

If The Graduate were being filmed today, the one-word piece of advice given to young Benjamin Braddock would be “broadband.” Most simply defined as a high-speed communications connection to the home or office, broadband offers Americans the promise of faster Internet access, rapid data downloads, instantaneous video on demand, and a more secure connection to a variety of other cutting-edge technologies and services.

If it were to become ubiquitously available throughout the United States, broadband communications services might finally make possible some long-dreamed-of commercial applications, including telecommuting, video conferencing, telemedicine, and distance learning. Beyond transforming the workplace, broadband could open new opportunities in the home for activities such as electronic banking, online gaming, digital television, music swapping, and faster Web surfing in general.

For these reasons, a growing number of pundits and policymakers are saying that Americans need broadband and they need it now. Moreover, assorted telecom, entertainment, and computer sector leaders are also proclaiming that the future of their industries depends on the rapid spread of broadband access throughout the economy and society. For example, Technology Network (Tech Net), one of the leading tech sector lobbying groups, is asking policymakers to commit to a JFK-esque “man on the moon” promise of guaranteeing 100 megabits per second (Mbps) connections for 100 million U.S. homes and small businesses by the end of this decade. This represents a bold–some would say unrealistic–vision for the future, considering that most Americans today are using a 56K narrowband modem connection and balking at paying the additional fee for a 1.5-Mbps broadband hookup.
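
To gauge how bold that target is, compare nominal line rates (a rough calculation; real-world throughput is lower than the advertised rate):

\[
\frac{100\ \text{Mbps}}{56\ \text{kbps}} \approx 1{,}800 \qquad \text{versus} \qquad \frac{1.5\ \text{Mbps}}{56\ \text{kbps}} \approx 27.
\]

In other words, the Tech Net goal calls for connections roughly 1,800 times faster than today’s typical dial-up modem, and nearly 70 times faster than the 1.5-Mbps broadband hookups that many consumers are already balking at paying for.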

What exactly is holding back the expansion of broadband services in America? Is a 100-Mbps vision within 10 years just a quixotic dream? What effect has regulation had on this sector in the past, and what role should public policy play in the future?

A digital white elephant?

As interesting as these questions are, the most important and sometimes forgotten question we should be asking first is: Do consumers really want this stuff? In the minds of many industry analysts, consumer demand for broadband services is simply taken for granted. Many policymakers see an inevitable march toward broadband and want to put themselves at the head of the parade. They have adopted the Field of Dreams philosophy: “If you deploy it, they will subscribe.”

But is this really the case? Are Americans clamoring for broadband? Are the benefits really there, and if so, do citizens understand them?

The answers to these questions remain surprisingly elusive for numerous reasons. This market is still in its infancy, and statistical measures are still being developed to accurately gauge potential consumer demand. Thus far, the most-quoted surveys have been conducted by private consulting and financial analysis firms. The cited results are all over the map, and critical evaluation is difficult because the full detailed analyses are available only to those who pay hefty subscription fees. However, when one looks at government statistics about actual broadband use, it seems clear that the public has not yet caught broadband fever. According to the Federal Communications Commission (FCC), only 7 percent of U.S. homes subscribe to a high-speed access service, even though broadband access is available to roughly 75 to 80 percent of U.S. households. A clear paradox seems to exist in the current debate over this issue: Everyone is saying the public demands more broadband, yet the numbers do not suggest they really do. What gives?
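The scale of that gap is easy to see with a rough back-of-the-envelope calculation that uses only the FCC figures just cited; the 75 to 80 percent availability range is treated here as a 77.5 percent midpoint purely for illustration:

```python
# Illustrative arithmetic only, using the FCC figures cited in the text.
subscribing_share = 0.07      # roughly 7 percent of U.S. homes subscribe
availability_share = 0.775    # availability of 75 to 80 percent, taken at its midpoint

# Share of homes that could subscribe today and actually do.
take_rate_where_available = subscribing_share / availability_share
print(f"Take-rate among homes with broadband available: {take_rate_where_available:.0%}")
```

In other words, only about one in eleven households that could sign up for broadband today actually does.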

The FCC’s recently issued Third Report on the Availability of High Speed and Advanced Telecommunications Capability concluded that broadband was being made available to Americans in a “reasonable and timely fashion.” The report noted that over 70 percent of homes have cable modem service available to them, 45 percent have telco-provided digital subscriber line (DSL) service available, 55 percent of Americans have terrestrial fixed wireless broadband options, and almost every American household can purchase satellite-delivered broadband today.

Importantly, however, the FCC concluded that although broadband was within reach of most U.S. homes, most households were not yet subscribing. The FCC report notes that “cost appears to be closely associated with the number of consumers willing to subscribe to advanced services.” It cites one private-sector survey that revealed that 30 percent of online customers were willing to pay $25 per month for broadband, but only 12 percent were willing to pay $40. Broadband service currently costs $40 to $50 per month on top of installation costs. This is a lot of money for the average household, especially when compared with other monthly utility bills.

And therein lies the real reason why broadband subscribership remains so sluggish: Most Americans still view broadband as the luxury good it really is instead of the life necessity that some policymakers paint it to be. Not every American needs, or even necessarily wants, a home computer or a connection to the Internet. This is especially the case for elderly households and households without children. In fact, children are a critical source of demand for the Internet and for broadband.

The National Telecommunications and Information Administration (NTIA) recently issued a report, A Nation Online: How Americans Are Expanding Their Use of the Internet, which found that a stunning 90 percent of children between the ages of 5 and 17 now use computers and that 75 percent of 14-to-17-year-olds and 65 percent of 10-to-13-year-olds use the Internet. Moreover, households with kids under 18 are more likely to access the Internet (62 percent) than are households with no children (53 percent).

The moral of the story is that to the extent that there is any sort of “digital divide” in this country, it is between the old and the young. We may just need to wait for the younger generation to grow up and acquire wallets and purses before broadband demand really intensifies.

But beyond the generation gap, other demand-side factors are holding down broadband adoption rates. For example, broadband access in the workplace is often viewed as a substitute for household access. If I can get online at work for a few minutes during the lunch hour each day and order goods from bandwidth-intensive sites such as Amazon.com, JCrew.com, or eBay, why do I really need an expensive broadband hookup at home at all? A narrowband dialup connection at home will give me easy access to e-mail and even allow me to get around most Web sites without much of a headache. I’ll just have to be patient when I hit the sites with lots of bells and whistles.

Another important demand-side factor that must be taken into account is the lack of so-called “killer apps,” or broadband applications that would encourage or even require consumers to purchase high-speed hookups for their homes. Although it makes many people (especially policymakers) uncomfortable to talk about it, the two most successful killer apps so far have been Napster and pornography. Like it or not, the illegal swapping of copyrighted music and the downloading of nudie pics have probably done more to encourage broadband subscription than any other online applications thus far. While politicians work hard to rid the world of online file sharing and porn, they may actually be eliminating the only two services with enough appeal to convince consumers to take the broadband plunge.

But this certainly doesn’t count as the most serious obstacle policymakers have created to the growth of broadband markets. Regulation has played, and continues to play, a very important role in how service providers deploy broadband.

Regulatory roulette

Beyond the question of how much demand for broadband services really exists in the present marketplace, important supply-side questions remain the subject of intense debate as well. Many policymakers and members of the consuming public are asking why current providers are not doing more to roll out broadband service to the masses.

Regulation is certainly a big part of the supply-side problem. The primary problem that policymakers face in terms of stimulating increased broadband deployment is that the major service providers have decidedly different regulatory histories. Consider the radically different regulatory paradigms governing today’s major broadband providers.

  • Telephone companies have traditionally been designated as common carriers by federal, state, and local regulators. As common carriers, they have been expected to carry any and all traffic over their networks on a nondiscriminatory basis at uniform, publicly announced rates. At the federal level, the regulation of telephone companies generally falls under Title II of the Communications Act, and this regulation is carried out by the Common Carrier Bureau at the FCC. Today, telephone companies provide broadband service to Americans through DSL technologies that operate over the same copper cables that carry ordinary phone traffic. Telephone companies account for almost 30 percent of the current marketplace.
  • Cable companies have traditionally been more heavily regulated at the municipal level, because each cable company was quarantined to a local franchise area. Although they gained the exclusive right to serve these territories, many rate controls and programming requirements were traditionally imposed as well. But cable has not been treated as a common carrier. Rather, the industry has been free to make private (sometimes exclusive) deals with content providers on terms not announced to the public beforehand. At the federal level, cable regulations fall under Title VI of the Communications Act and are usually managed by the Cable Services Bureau at the FCC. Cable companies provide broadband service to Americans through cable modem technologies and are the leading providers of broadband, accounting for just under 70 percent of current users.
  • Satellite and wireless providers have been less heavily regulated than telephone and cable carriers, but many rules still govern the way this industry does business. The federal regulations these carriers face are found in various provisions of the Communications Act and subsequent statutes, but most oversight responsibilities fall to the Cable Services Bureau, which is ironic given the wire-free nature of satellite transmissions. The FCC’s Wireless Bureau also has a hand in the action. Like cable providers, satellite companies are considered private carriers rather than common carriers. Unlike cable and telephone companies, wireless carriers have not encountered as much direct regulation by state or local officials, given the more obvious interstate nature of the medium. (The exception to this is municipal zoning ordinances governing tower antenna placement, which continue to burden the industry.) Today, wireless providers offer broadband service to the public through a special satellite dish or receiving antenna and set-top box technologies. With the highest monthly subscription fees and the most expensive installation and equipment charges, satellite companies have captured less than 2 percent of the market.

These three industry sectors–telephony, cable, and satellite–are the primary providers of broadband connections to the home and business today. Although they use different transmission methods and technologies, they all essentially want to provide consumers with the same service: high-speed communications and data connectivity. And yet these providers are currently governed under completely different regulatory methodologies. FCC regulations are stuck in a regulatory time warp that lags behind current market realities by several decades, and regrettably the much-heralded Telecommunications Act of 1996 did nothing to alter the fundamental nature of these increasingly irrelevant and artificial legal distinctions.

The current regulatory arrangement means that firms attempting to offer comparable services are being regulated under dissimilar legal standards. It betrays the cardinal tenet of U.S. jurisprudence that everyone deserves equal treatment under the law, and the danger is that it could produce distorted market outcomes. Can these contradictory regulatory traditions be reconciled in such a way that no one player or industry segment has an unfair advantage over another? In theory, the answer is obviously yes, but in practice it will be quite difficult to implement.

Most favored nation

The public policy solution is to end this regulatory asymmetry not by “regulating up” to put everyone on equally difficult footing but rather by “deregulating down.” That is, to the extent legislators and regulators continue to set up ground rules for the industry at all, they should consider borrowing a page from trade law by adopting the equivalent of a “most favored nation” (MFN) clause for telecommunications. In a nutshell, this policy would state: “Any communications carrier seeking to offer a new service or entering a new line of business should be regulated no more stringently than its least-regulated competitor.”

Such an MFN for telecommunications would ensure that regulatory parity exists within the telecommunications market as the lines between existing technologies and industry sectors continue to blur. Placing everyone on the same deregulated level playing field should be at the heart of telecommunications policy to ensure nondiscriminatory regulatory treatment of competing providers and technologies at all levels of government.

So much for theory. In practice, the difficulty is that deregulation of this industry is not popular with policymakers these days. In fact, the recent debate over broadband deregulation in Congress has been an incredibly heated affair, with all the industry players and special interests squaring off over the Internet Freedom and Broadband Deployment Act of 2001 (H.R. 1542). Sponsored by House Energy and Commerce Chairman Billy Tauzin (R-La.) and ranking member John Dingell (D-Mich.), the Tauzin-Dingell bill would allow the Baby Bell companies, which offer local phone service, to provide customers with broadband services in the same way that cable and satellite companies are currently allowed to, free of the infrastructure-sharing provisions of the Telecom Act of 1996.

The Baby Bells are reluctant to make a large investment in broadband infrastructure if they will be forced to let their competitors use that infrastructure. In addition, under the current regulatory regime the Baby Bells are not certain whether or not they can offer broadband services to customers outside their local service areas. (They are clearly forbidden to offer phone services outside these areas.) Passage of the Tauzin-Dingell bill would resolve both of these questions and clear the way for the Baby Bells to make a major commitment to broadband service.

Cable companies, the large long-distance telephone companies, and small telecom resellers vociferously oppose the Tauzin-Dingell measure, arguing that it would represent the end of the road for them. These companies would prefer not to have to compete head-to-head with the Baby Bells or to have to invest in their own infrastructure. An intense lobbying, public relations, and advertising campaign was initiated to halt the measure, and the Bell forces responded in kind with stepped-up lobbying and ads of their own. On February 27, after months of acrimonious debate, the House of Representatives passed the Tauzin-Dingell measure with some last-minute modifications. But it will likely prove to be a Pyrrhic victory for the Bells, because of the bill’s limited support in the Senate. Sen. Ernest Hollings (D-S.C.), a longtime enemy of the Baby Bells and deregulation in general, has vowed to kill the bill when it enters the Senate Commerce Committee, which he rules with an iron hand.

The bottom line is that deregulation has a very limited constituency in today’s Congress. Even proposals aimed at leveling the playing field for all providers, which is essentially what the Tauzin-Dingell bill does, have very limited chances of achieving final passage in today’s legislative environment. This is especially the case given that carriers seem unable to resist the urge to lobby for old and new rules that hinder their competitors at every turn. Remember Cold War-era “MAD” policy? The escalating lobbying and public relations battles have become the telecom industry’s equivalent of Mutually Assured Destruction: If you screw us, we’ll screw you.

What Congress might do

Although it appears increasingly unlikely that Congress will take the steps needed to clean up the confusing and contradictory legal quagmire the industry finds itself stuck in, a new class of broadband bills is simultaneously being considered that would authorize a variety of promotional efforts to spur broadband deployment. For example, Senate Majority Leader Tom Daschle (D-S.D.) has argued that government “should create tax credits, grants, and loans to make broadband service as universal tomorrow as telephone access is today.” And even though recent government reports such as the NTIA and FCC studies cited above illustrate that computer and broadband usage rates have been increasing, Sen. Patrick Leahy (D-Vt.) reacted to this news by noting, “I suspect we have to add money in the Congress” to boost the availability of these technologies.

Daschle and Leahy are not alone in calling for government to take a more active role in promoting broadband use. In fact, one bill, the Broadband Internet Access Act (S. 88, H.R. 267), has attracted almost 200 sponsors in the House and over 60 in the Senate. The bill would create a tax incentive regime to encourage communications companies to deploy broadband services more rapidly and broadly throughout the United States. The measure would offer a 10 to 20 percent tax credit to companies that roll out broadband services to rural communities and “underserved” areas.

Whereas the Broadband Internet Access Act would represent an indirect government subsidy, more direct subsidization efforts are also on the table. Last fall, the bipartisan duo of Rep. Leonard Boswell (D-Iowa) and Rep. Tom Osborne (R-Neb.) introduced the Rural America Technology Enhancement (RATE) Act (H.R. 2847), which would authorize $3 billion in loans and credits for rural broadband deployment programs and establish an Office of Rural Technology within the Department of Agriculture to coordinate technology grants and programs. And these bills are just the tip of the iceberg; there are dozens more like them in Congress.

Welcome to the beginning of what might best be dubbed the “Digital New Deal.” In recent years, legislators and regulators have been promoting a veritable alphabet soup of government programs aimed at jump-starting the provision of broadband, especially in rural areas. Although only a handful of such programs have been implemented thus far, many of these proposals could eventually see the light of day, because so many policymakers seem eager to do something to put themselves at the front of a technological development that they see as inevitable. Deregulating the market so that this development can follow its own course apparently will not enable them to take credit for what happens.

The problem, however, is that Washington could end up spending a lot of taxpayer money with little gain to show for it, because it is unlikely that tax credits or subsidies would catalyze as much deployment as policymakers imagine. In the absence of fundamental regulatory reform, many providers are unlikely to increase deployment efforts significantly. Although a 10 to 20 percent tax credit may help offset some of the capital costs associated with network expansion, many carriers will still be reluctant to deploy new services unless a simple and level legal playing field exists.
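To see why a credit of that size may not change carriers’ calculations much, here is a purely hypothetical sketch; the per-home deployment cost below is an assumed figure for illustration, not one drawn from the bills or studies discussed here:

```python
# Hypothetical illustration of how far a deployment tax credit goes.
# The per-home capital cost is an assumed figure, not one from the text.
cost_per_home = 1000.0  # assumed cost to extend service to one home, in dollars

for credit_rate in (0.10, 0.20):
    offset = cost_per_home * credit_rate    # portion offset by the tax credit
    remaining = cost_per_home - offset      # cost the carrier still bears
    print(f"{credit_rate:.0%} credit offsets ${offset:,.0f}; carrier still bears ${remaining:,.0f}")
```

Even at the top of the proposed range, the carrier is still left carrying 80 percent of the cost, along with all of the regulatory uncertainty described above.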

If legislators sweetened the deal by offering industry a 30 to 50 percent credit to offset deployment costs, it might make a difference. But if subsidy proposals reached that level, it would raise the question: Why not just let government build the broadband infrastructure in rural areas itself? Ironically, that is exactly what a number of small rural municipal governments are proposing to do today. Frustrated with the slow pace of rollout by private companies, some local authorities are proposing to turn broadband into yet another lackluster public utility. Private companies are fighting these proposals, of course, but consumers should also be skeptical of efforts by city hall to model its broadband company after the local garbage or sewage service. Is that really a good model for such a dynamic industry? Fortunately, these broadband municipalization efforts have not made much progress. Most legislators still want to begin by jump-starting private-sector deployment through promotional efforts.

In the end, perhaps the most damning argument against a tax credit and subsidy regime for broadband is the threat of politicizing this industry by allowing legislators and regulators to become more involved in how broadband services are provided. By inviting government in to act as a market facilitator, the industry runs the risk of being subjected to greater bureaucratic micromanagement. Experience teaches us that what government subsidizes, it often ends up regulating as well. It is not hard to imagine that such tinkering with the daily affairs of industry might become more commonplace if Washington starts subsidizing broadband deployment. That explains why T. J. Rodgers, president and CEO of Cypress Semiconductor, has cautioned the high-tech industry about “normalizing relations” with Washington, D.C. As Rodgers says, “The political scene in Washington is antithetical to the core values that drive our success in the international marketplace and risks converting entrepreneurs into statist businessmen.”

Solving the broadband paradox will require steps by policymakers, industry providers, and consumers alike if the dream of ubiquitous high-speed access is to become a reality. Policymakers need to undertake some much-needed regulatory housecleaning by removing outmoded rules and service designations from the books. New spending initiatives or subsidization efforts are unlikely to stimulate much broadband deployment. What companies, innovators, and investors really need is legal clarity: an uncluttered, level playing field for all players that does not attempt to micromanage this complicated sector or its many current and emerging technologies.

Industry players will need to undertake additional educational efforts to make consumers aware of what broadband can do for them. Ultimately, however, as important as such educational efforts are, there is no substitute for intense facilities-based investment and competition to help drive down cost, which still seems to be the biggest sticking point for most consumers. New killer apps will also, one hopes, come along soon to help drive consumer demand in the same way that Napster and the brief file-sharing craze did before litigation shut down the practice.

Finally, consumers will need to be patient and understand that there is no such thing as a free broadband lunch. It will take time for these technologies to spread to everyone, and even as they become more ubiquitously available, they will be fairly expensive to obtain at first. Cost will come down with the passage of time (if demand is really there), but you’ll still need to shell out a fair chunk of change to satisfy your need for speed online.

Putting Teeth in the Biological Weapons Convention

In the fall of 2001, letters sent through the U.S. mail containing powdered anthrax bacterial spores killed five people, infected 18 others, disrupted the operations of all three branches of the U.S. government, forced tens of thousands to take prophylactic antibiotics, and frightened millions of Americans. This incident demonstrated the deadly potential of bioterrorism and raised serious concerns about the nation’s ability to defend itself against more extensive attacks.

The anthrax crisis also made more urgent the need to prevent the acquisition and use of biological and toxin weapons–disease-causing microorganisms and natural poisons–by states as well as terrorist organizations. At present, the legal prohibitions on biological warfare (BW) are flawed and incomplete. The 1925 Geneva Protocol bans the use in war of biological weapons but not their possession, whereas the 1972 Biological and Toxin Weapons Convention (BWC) prohibits the development, possession, stockpiling, and transfer of biological and toxin agents and delivery systems intended for hostile purposes or armed conflict, but it has no formal measures to ensure that the treaty’s 144 member countries are complying with the ban.

Because the materials and equipment used to develop and produce biological weapons are dual use (suitable both for military ends and legitimate commercial or therapeutic applications), the BWC bans microbial and toxin agents “of types and quantities that have no justification for prophylactic, protective, or other peaceful purposes.” Given this inherent ambiguity, assessing compliance with the BWC is extremely difficult and often involves a judgment of intent. Moreover, the treaty lacks effective verification measures: Article VI offers only the weak option of petitioning the United Nations (UN) Security Council to investigate cases of suspected noncompliance, which has proven to be a political nonstarter.

The BWC’s lack of teeth has reduced the treaty to little more than a gentleman’s agreement. About 12 countries, including parties to the BWC such as Iraq, Iran, Libya, China, Russia, and North Korea, are considered to have active BW programs. This level of noncompliance suggests that the legal restraints enshrined in the treaty are not strong enough to prevent some governments from acquiring and stockpiling biological weapons. Thus, it is essential to take concrete steps to reinforce the biological disarmament regime.

Despite the fall 2001 terrorist attacks, however, recent efforts to adopt monitoring and enforcement provisions for the BWC have gone nowhere. Indeed, negotiations at a meeting of the BWC member states in November and December 2001 broke down, in large part because of actions taken by the United States. Instead of the mandatory and multilateral approach favored by most Western countries, the Bush administration has advocated a package of nine voluntary measures, most of which would be implemented through national legislation. Although the administration’s approach has some value for combating bioterrorism, it is doubtful that it will be sufficient to address the problem of state-level noncompliance with the biological weapons ban.

History of failure

Efforts to strengthen the BWC have a long history. At the Second and Third Review Conferences of the treaty in 1986 and 1991, member states sought to bolster the BWC by adopting a set of confidence-building measures that were politically rather than legally binding. These measures included exchanges of information on vaccine production plants (which can be easily diverted to the production of BW agents), past activities related to BW, national biodefense programs, and unusual outbreaks of disease. The level of participation in the confidence-building measures, however, has been poor. From 1987 to 1995, only 70 of the then 139 member states of the BWC submitted data declarations, and only 11 took part in all rounds of the information exchange.

In 1992 and 1993, a panel of government verification experts known as VEREX assessed the feasibility of monitoring the BWC from a scientific and technical standpoint. The VEREX group concluded that a combination of declarations and on-site inspections could enhance confidence in treaty compliance and deter violations. Consequently, BWC member states established the Ad Hoc Group in September 1994 to “strengthen the effectiveness of and improve the implementation” of the BWC, including the development of a system of on-site inspections to monitor compliance with the treaty. In July 1997, the Ad Hoc Group began to negotiate a compliance protocol to supplement the BWC, but differences in national positions were significant.

In April 2001, the chairman of the Ad Hoc Group, Tibor Tóth of Hungary, proposed a compromise text that sought to bridge the gaps. It contained these key elements:

  • Mandatory declarations of biodefense and biotechnology facilities and activities that could be diverted most easily to the development or production of biological weapons
  • Consultation procedures to clarify questions that might arise from declarations, including the possibility of on-site visits
  • Transparency visits to randomly selected declared facilities to check the accuracy of declarations
  • Short-notice challenge investigations of facilities suspected of violating the BWC, declared or undeclared, as well as field investigations of alleged biological weapons use

Although most delegations were prepared to accept the chairman’s text as a basis for further negotiations, the new Bush administration conducted an interagency review and found 37 serious problems with the document. U.S. officials argued that the draft protocol would be ineffective in catching violators, create a false sense of security, impose undue burdens on the U.S. pharmaceutical and biotechnology industries, and compromise government biodefense secrets. Other delegations countered that the protocol, though flawed, offered a reasonable balance between conducting on-site inspections intrusive enough to increase confidence in compliance and safeguarding legitimate national security and business information. Nevertheless, the United States declared that the draft protocol could not be salvaged and withdrew from the Ad Hoc Group negotiations on July 25, 2001. Although other countries considered proceeding with the talks without the United States, they quickly rejected this option. Instead, the mandate of the Ad Hoc Group was preserved so that the negotiations could potentially resume at a later date, after a change in the political climate.

The next opportunity for progress came in November 2001, during the Fifth Review Conference of the BWC in Geneva. On the first day of the meeting, John Bolton, the head of the U.S. delegation and Under Secretary of State for Arms Control and International Security, accused six states of violating the BWC: Iran, Iraq, Libya, and North Korea (all parties to the BWC); Syria (which has signed but not ratified); and Sudan (which has neither signed nor ratified). Bolton said that additional unnamed member states were also violating the convention and insisted that the review conference address the problem of noncompliance.

As an alternative to the BWC Protocol, which Bolton bluntly stated was “dead, and is not going to be resurrected,” the United States offered an “alternatives package” of nine voluntary measures that could be implemented through national legislation or by adapting existing multilateral mechanisms. They include:

  • Criminalizing the acquisition and possession of biological weapons
  • Restricting access to dangerous microbial pathogens and toxins
  • Supporting the World Health Organization’s (WHO’s) global system for disease surveillance and control
  • Establishing an ethical code of conduct for scientists working with dangerous pathogens
  • Contributing to an international team that would provide assistance in fighting outbreaks of infectious disease
  • Strengthening an existing UN mechanism for conducting field investigations of alleged biological weapons use so that BWC member states would be required to accept investigations on their territory.

Several delegations welcomed the U.S. package but suggested that it did not go far enough and that some type of legally binding agreement among BWC member states would be necessary. On the last day of the conference, however, the United States insisted that the mandate of the Ad Hoc Group be terminated, thereby eliminating the sole forum for negotiating multilateral measures to strengthen the treaty. Because preserving the Ad Hoc Group’s mandate had long been a bottom line for many delegations, the U.S. proposal prevented the consensus needed to adopt a politically binding Final Declaration. In a desperate bid to prevent the BWC Review Conference from failing completely, chairman Tóth suspended the meeting for a year.

The Review Conference will reconvene in Geneva on November 11, 2002. Whether progress can be achieved before the conference resumes remains to be seen. One problem is that the United States continues to resist any formal multilateral agreements, creating a split between Washington and other Western countries. Moreover, without the Ad Hoc Group, no multilateral forum exists to negotiate the ideas in the U.S. alternatives package. During the period preceding the resumption of the conference, it will be important for the participating states to hammer out their differences; creative thinking will be needed to find a way out of the current impasse.

The U.S. alternatives package

The Bush administration’s current approach to strengthening the BWC, which emphasizes voluntary national measures, may have some benefit in reducing the threat of bioterrorism, but it will not be sufficient to address the problem of state-level noncompliance. There are ways, however, in which the U.S. proposals could be improved.

The first set of measures proposed by the United States relates to Article IV of the BWC, which deals with national implementation. This article requires each member state, in accordance with its constitutional processes, to take any necessary steps to prohibit and prevent the activities banned by the BWC on its territory or anywhere under its jurisdiction. Because Article IV is vaguely worded, it has been interpreted in various ways, and few of the 144 BWC member states have enacted domestic implementing legislation imposing criminal penalties on individuals who engage in illicit biological weapons activities. Not until 1989 did the United States develop its own implementing legislation, the Biological Weapons Antiterrorism Act, which imposes criminal penalties up to life imprisonment, plus fines, for anyone who acquires a biological weapon or assists a foreign state or terrorist organization in doing so.

Under the new U.S. proposal, the legislatures of BWC member states that have not already done so would adopt domestic legislation criminalizing the acquisition, possession, and use of biological weapons. As a key element of such laws, states would improve their ability to extradite biological weapons fugitives to countries prepared to assume criminal jurisdiction, either by amending existing bilateral extradition treaties to include biological weapons offenses, or by arranging to extradite for BW offenses even when a bilateral treaty is not in place with the country seeking extradition. In addition, BWC member states would commit to adopt and implement strict national regulations for access to particularly dangerous pathogens, along with guidelines for the physical security and protection of culture collections and laboratory stocks.

Within the United States, the federal Centers for Disease Control and Prevention (CDC) regulates the interstate transport of 36 particularly hazardous human pathogens and toxins, permitting the transfer of these agents only between registered facilities that are equipped to handle them safely and have a legitimate reason for working with them. Similar regulations on transfers of dangerous plant and animal pathogens are administered by the U.S. Department of Agriculture’s Animal and Plant Health Inspection Service (APHIS). In response to the anthrax letter attacks, Congress is strengthening the statutory framework relating to biological weapons by extending the current controls on transfers of dangerous pathogens to prohibit the possession of such agents by unauthorized individuals or for other than peaceful purposes. Presumably, the U.S. government hopes that other nations will adopt similar legislation.

The proposed U.S. measures to strengthen Article IV also include urging BWC member states to sensitize scientists to the risks of genetic engineering and to explore national oversight of high-risk experiments (see sidebar). In addition, states would be encouraged to develop and adopt a professional code of conduct for scientists working with pathogenic microorganisms, possibly building on existing ethical codes such as the Hippocratic oath.

A second set of measures in the U.S. package aims to strengthen the BWC’s Article VII, on assisting the victims of a biological attack, and Article X, on technical and scientific cooperation in the peaceful uses of biotechnology. The U.S.-proposed measures for assistance and cooperation would require member states to adopt and implement strict biosafety procedures for handling dangerous pathogens, based on those developed by WHO or equivalent national guidelines, and to enhance WHO’s capabilities for the global monitoring of infectious diseases. This latter measure could help to deter the covert use of biological weapons by detecting and containing the resulting outbreak at an early stage, thereby reducing its impact. An enhanced global disease surveillance system would also increase the probability that an epidemic arising from the deliberate release of a biological agent would be promptly investigated, recognized as unnatural in origin, and attributed to a state or terrorist organization. Further, the United States has proposed the creation of international rapid response teams that would provide emergency and investigative assistance, if required, in the event of a serious outbreak of infectious disease. BWC member states would be expected to indicate in advance what types of assistance they would be prepared to provide.

The third set of measures proposed by the United States is designed to strengthen the BWC’s Article V, on consultation and cooperation, by addressing concerns over treaty compliance. One proposed measure would augment the consultation procedures in Article V by creating a “voluntary cooperative mechanism” for clarifying and resolving compliance concerns by mutual consent, through exchanges of information, visits, and other procedures. The other measure would adapt a little-known procedure by which the UN secretary general can initiate field investigations of alleged chemical or BW incidents. If the secretary general determines that an allegation of use could constitute a violation of international law, he or she has the authority to assemble an international team of experts to conduct an objective scientific inquiry.

When the UN field investigations mechanism was first developed in 1980, the General Assembly or Security Council had to pass a resolution requesting the secretary general to launch an investigation. This procedure was used for investigations of alleged chemical warfare by the Soviet Union and its allies in Southeast Asia and Afghanistan in 1980-83, and by Iraq and Iran in 1984-88, during the Iran-Iraq War. Experience demonstrated, however, that it was essential to conduct a field investigation while the forensic evidence was still fresh, and that the procedure of requiring a UN body to make a formal request was too cumbersome and lengthy to permit a rapid response.

In view of this problem, on November 30, 1987, the General Assembly adopted Resolution 42/37 empowering the secretary general to launch, on his or her own authority, an immediate field investigation of any credible complaint of alleged chemical or biological weapons use. The Security Council adopted a similar resolution on August 26, 1988, making the secretary general the sole arbiter of which allegations to investigate and the level of effort devoted to each investigation. In 1992, the secretary general launched investigations of the alleged use of chemical weapons by RENAMO insurgents in Mozambique and by Armenian forces in Azerbaijan. In both cases, UN expert teams concluded that the allegations were false.

No UN field investigations have been requested since 1992. Now that the BWC Protocol negotiations have been placed on indefinite hold, however, the ability of the secretary general to initiate investigations of alleged biological weapons use could fill a major gap in the disarmament regime. Although all the UN investigations conducted to date have involved the alleged use of chemical or toxin warfare agents, the United States has proposed expanding the existing mechanism to cover suspicious outbreaks of infectious disease that might result from the covert development, production, testing, or use of biological weapons. The U.S. proposal would also require BWC member countries to accept UN investigations on their territory without right of refusal, which is not currently the case.

Will they work?

The various U.S. measures presented at the Fifth Review Conference are modest steps for reducing the threats of BW and bioterrorism, but they would do little to reinforce the biological disarmament regime. Although the Bush administration has good reason to be troubled by the evidence of widespread noncompliance by members of the BWC, the remedies it has proposed are not commensurate with the gravity of the problem.

A key weakness of relying almost exclusively on domestic legislation to address biosecurity concerns is that national laws cannot impose uniform international standards. Legislation criminalizing the possession and use of biological weapons by individuals may vary considerably from country to country, resulting in an uneven patchwork that could create loopholes and areas of lax enforcement exploitable by terrorists. Moreover, some states will fail to pass such laws or will not enforce them. As an alternative to national legislation, the Harvard Sussex Program on CBW Armament and Arms Limitation, a nongovernmental group, has developed a draft treaty criminalizing the possession and use of biological weapons. This text could serve as a starting point for multilateral negotiations to strengthen the BWC.

Similarly, tighter U.S. regulations on access to dangerous pathogens, although desirable, will not significantly reduce the global threat of bioterrorism unless such controls are implemented internationally. Hundreds of laboratories and companies throughout the world work with dangerous pathogens, yet restrictions on access vary from country to country. To harmonize these national regulations, the United States should pressure the UN General Assembly to negotiate a “Biosecurity Convention” requiring all participating states to impose uniform limits on access to dangerous pathogens, so that only bona fide scientists are authorized to work with these materials. In addition, the treaty should establish common international standards of biosafety and physical security for microbial culture collections that contain dangerous pathogens, whether they are under commercial, academic, or government auspices.

The U.S. proposal for an expanded global disease-monitoring system run by WHO could play an important, albeit indirect, role in strengthening the BWC. Nevertheless, it is essential that WHO’s public health activities not be linked explicitly to monitoring state compliance with the BWC. Because WHO epidemiologists conduct investigations of unusual disease outbreaks only at the invitation of the affected country, the organization must preserve its political neutrality; any suspicions about WHO’s motives could seriously compromise its ability to operate. Accordingly, although the U.S. proposal for greater funding of global disease surveillance is welcome, the money for this purpose should be provided directly in the WHO budget and kept separate from efforts to strengthen the BWC.

As for UN field investigations of alleged biological weapons use, the historical record suggests that such efforts can yield useful findings if they are carried out shortly after an alleged attack and if the expert group is granted full access to the affected sites and personnel. Under optimal conditions, small groups of three to five experts can carry out field investigations rapidly and cheaply. Nevertheless, it is unrealistic to expect UN member countries to waive all right of refusal and accept international investigations on their territory in the absence of a formal treaty that provides legally binding rights and obligations.

Without such a treaty, countries accused of using biological weapons could simply deny investigators access to the alleged attack sites and the affected populations. Because such countries would be under no legal obligation to cooperate with the UN, the political consequences of a refusal would be minimal. Indeed, UN investigations of chemical and toxin weapons use in Laos, Cambodia, and Afghanistan during the early 1980s failed to yield conclusive results because the accused countries refused the UN experts access to the alleged attack sites. If, however, the obligation to accept UN investigators were legally binding, a denial of access would have far more serious consequences, possibly leading to the imposition of economic sanctions on the refusing country.

Finally, although the existence of a measure to investigate allegations of use could help to deter countries from employing biological weapons, is it really desirable to wait until such weapons have been used before the secretary general can initiate an investigation? Would it not be preferable to prevent states or terrorists from developing, producing, and testing them in the first place? This more ambitious objective would mean granting the secretary general the authority to investigate not only the alleged use of biological weapons but also facilities suspected of their illicit development and production, an option that the U.S. proposal does not include. To this end, BWC member states should negotiate a legally binding agreement that obligates them to cooperate with UN field investigations on their territories of the alleged development, production, and use of biological weapons, as well as suspicious outbreaks of disease.

U.S. flexibility needed

The use of anthrax-tainted letters sent through the mail to kill and terrorize U.S. citizens has seriously challenged the international norm against BW and terrorism and made it imperative to strengthen the existing disarmament and nonproliferation regime. Although the Bush administration’s package of proposals for strengthening the BWC is a useful step in the right direction, the United States must show greater flexibility by permitting meaningful efforts to expand on these ideas and to negotiate them in a multilateral forum. Should the administration persist in its ideological opposition to multilateral arrangements of any kind, efforts to strengthen the BWC will probably remain in limbo indefinitely.

If the resumption of the Fifth Review Conference in November 2002 fails to yield constructive results, the credibility of the international biological disarmament regime will continue to erode. The consequences of such an outcome could be grim indeed. As the know-how and dual-use technologies needed to develop, produce, and deliver biological weapons continue to diffuse worldwide, the ability to inflict mass injury and death will cease to be a monopoly of the great powers and will become accessible to small groups of terrorists, and even to mentally deranged individuals. To prevent this nightmare from becoming a reality, the United States should join with other nations in taking urgent and meaningful steps to reinforce the BWC.


The Need for Oversight of Hazardous Research

In recent decades, dramatic advances in molecular biology and genetic engineering have yielded numerous benefits for human health and nutrition. But these breakthroughs also have a dark side: the potential to create more lethal instruments of biological warfare (BW) and terrorism. Harnessing the powerful knowledge emerging from the biosciences in a way that benefits humankind, while preventing its misuse, will require the scientific community to regulate itself.

An inadvertent discovery that became known in early 2001 highlights the risks. Australian scientists developing a contraceptive vaccine to control field mouse populations sought to enhance its effectiveness by inserting the gene for the immune regulatory protein interleukin-4 into mousepox virus, which served as the vaccine carrier. Insertion of the foreign gene unexpectedly transformed the normally benign virus into a strain that was highly lethal, even in mice that previously had been vaccinated against mousepox. The experiment demonstrated that the novel gene combinations created by genetic engineering can yield, on rare occasions, more virulent pathogens. Although the Australian team debated for months the wisdom of publishing their findings, they finally did so as a means of warning the scientific community.

As scientists obtain a flood of new insights into the molecular mechanisms of infection and the host immune response, this information could be applied for nefarious purposes. Indeed, until at least 1992, Soviet/Russian military biologists developed advanced BW agents by engineering pathogenic bacteria to be resistant to multiple antibiotics and vaccines, creating viral “chimeras” by combining militarily relevant traits from different viruses, and developing incapacitating or behavior-modifying agents based on natural brain chemicals.

In view of these troubling developments, the scientific community will have to address the problem of hazardous research–ideally through self-governance. Many scientists oppose any limits on scientific inquiry, but because public outrage over an accident involving a genetically engineered pathogen could compel Congress to impose draconian restrictions, it is in the interest of scientists to make their research safer. One precedent for self-regulation already exists: In February 1975, some 140 biologists, lawyers, and physicians met at the Asilomar Conference Center near Monterey, California, to discuss the risks of recombinant DNA technologies and to develop a set of research guidelines, overseen by a Recombinant DNA Advisory Committee (RAC).

To prevent the deliberate misuse of molecular biology for malicious purposes, the scientific community, working through professional societies and national academies of science, should negotiate a set of rules and procedures for the oversight of hazardous research in the fields of microbiology, infectious disease, veterinary medicine, and plant pathology. Regulated activities would include the cloning and transfer of toxin genes and virulence factors, the development of antibiotic- and vaccine-resistant microbial strains and genetically engineered toxins, and the engineering of “stealth” viruses to evade or manipulate human immune defenses.

The oversight mechanism should be global in scope and cover academic, industrial, and government research. Various models are under consideration. Under one approach, legitimate but high-risk research projects would be reviewed by a scientific oversight board, similar to the RAC but operating at the international level. Checks and balances would be needed to ensure that the international oversight board has the power and authority it requires to enforce the regulations, while preventing it from becoming corrupt and arbitrary, unduly constraining scientific freedom, or abusing its privileged access to sensitive and proprietary information. Furthermore, governments may be reluctant to grant an international body binding review authority over national biodefense programs. Simply requiring countries to notify the oversight board about activities and describing them in general terms may be all that can reasonably be accomplished.

Scientific journals will also need to develop guidelines for declining to publish research findings of direct relevance to offensive BW or terrorism, such as the Australian mousepox results. Because the ethos of the scientific community opposes censorship of any kind, a strong professional consensus will be needed to embargo data whose dissemination could be harmful to society. Given the complexity and sensitivity of these issues, the process of developing an international mechanism to regulate hazardous dual-use research will be long and difficult, requiring the active participation of a variety of stakeholders, including scientists, lawyers, and politicians from several countries.

Spring 2002 Update

Bush reverses course, supports funding boost for nonproliferation efforts

In “Improving U.S.-Russian Nuclear Cooperation” (Issues, Fall 2001), I highlighted the misguided decision of President Bush to propose significant funding reductions for U.S.-Russian nuclear nonproliferation cooperation programs and raised concerns about the lack of attention to this work at high levels of the U.S. and Russian governments. For most of last year, the administration refused to reconsider its position, stating that it was conducting its own evaluation. Even in the wake of the September 11 terrorist attacks, the administration requested no additional funds to lock up weapons of mass destruction (WMD) in the former Soviet Union (FSU). But faced with the growing specter of WMD-armed terrorists and strong congressional support for controlling this danger, the Bush administration has now made a 180-degree turn and embraced this agenda in its latest budget.

Known generally as the Nunn-Lugar program, these nonproliferation activities are focused on improving the security of the remaining nuclear, chemical, and biological arsenals of Russia and other post-Soviet states. In April 2001, the Bush budget proposed cutting $100 million from Department of Energy (DOE) programs devoted to U.S.-Russian nuclear material security, disposition, and safety: activities at the heart of the effort to control potentially “loose nukes” in the FSU. However, during the regular appropriations process last year, Congress bucked the administration and restored much of the proposed cut. Then, after the terrorist attacks, Congress included in a $40 billion emergency appropriation an additional $120 million for nuclear material control, $15 million for alternative employment for weapons scientists, and $10 million for improving the safety and security of Soviet-era nuclear power reactors and facilities. The initiative for this new money came solely from Congress. When the administration notified Congress of its priorities for this funding, it did not designate any of the funds for WMD security activities in Russia or the FSU, despite the anthrax attacks against Congress and the emerging danger of potentially nuclear-armed terrorists.

Obviously recognizing the need to correct its course after the demonstration of congressional resolve, the White House announced in late December that it would seek major increases for Nunn-Lugar activities. In the fiscal year 2003 budget, the administration has requested overall increases for these activities when compared to its prior year budget request, and the increases in some areas are quite substantial. However, in other areas, program budgets would be funded below the final congressionally appropriated level for last year.

For DOE, approximately $770 million is requested for WMD nonproliferation efforts in the FSU, about 17 percent more than the total fiscal year 2002 congressional appropriation for these programs, including the supplemental funding. However, this number is somewhat deceiving, because it includes $49 million in a funding transfer from the Department of Defense (DOD) for a project to eliminate weapons-grade plutonium production in Russia and a significant increase for plutonium disposition facility construction in the United States. Still, significant increases were requested in efforts to dispose of excess plutonium in Russia, improve FSU export controls, and facilitate the dismantlement of nuclear warheads. Key programs, such as those designed to improve security for nuclear material and naval nuclear warheads and create peaceful employment opportunities for weapon scientists, would be funded below last year’s final appropriation. But this year, these programs are riding a wave of increased funding from the supplemental appropriation, and government officials in charge must prove that they can effectively use substantially increased future funding by significantly accelerating their progress.

At DOD, the president has requested a budget increase of about 4 percent to $417 million. Most of the major increases are in nuclear weapon transportation security, chemical weapon destruction, and biological weapon proliferation prevention. These are necessary increases, though the objectives for some of the funding are not very clear at this point.

The administration’s budgetary about-face on nonproliferation cooperation with Russia is welcome. But the critical issue in the coming year is how well officials in the key programs will actually use the financial windfall from last year to ramp up progress and decrease the timelines for completion of their work. More than half of the bomb-grade materials outside of weapons in Russia still remain inadequately secured, and vulnerable storage facilities for some key categories of warheads have been upgraded at a snail’s pace. Both of these efforts have timelines for completion that extend for a decade or more. In this new world, that time frame is unacceptable. But political resources, as well as financial ones, are required to make this process move more quickly and effectively. Although the budget logjam seems to have been broken, the political blockages remain to be tackled.

Kenneth N. Luongo

Archives – Winter 2002

Photo: Carnegie Institution of Washington

Andrew Carnegie and George Ellery Hale

The Carnegie Institution of Washington was born a century ago in January 1902 with a $10 million gift from Andrew Carnegie to foster the development of new knowledge. It has supported research in a broad array of scientific fields, and its several observatories are among its most prominent investments.

The astronomer George Ellery Hale was one of the original advisors to the Carnegie Institution, and he was instrumental in convincing it to agree in 1904 to support construction of a telescope on Mount Wilson in the San Gabriel Mountains outside Pasadena. The photo above shows Carnegie and Hale beside the 60-inch telescope at Mount Wilson in 1910. Hale led the campaign for ever larger telescopes at the site, including the 100-inch Hooker telescope, which began operation in November 1917. This was the telescope that Edwin Hubble used to map the cosmos.

In 1915, Hale offered the services of the National Academy of Sciences to President Wilson to help with the war effort, and this led to the founding of the National Research Council. Hale was also instrumental in the construction of the NAS building, which was funded as part of a $5 million grant from the Carnegie Institution to the NRC.

Achievement Versus Aptitude in College Admissions

Students should be selected on the basis of their demonstrated success in learning, not some ill-defined notion of aptitude.

Every year, more than a million high school students stake their futures on the nation’s most widely used admissions test, the SAT I. Long viewed as the gold standard for ensuring student quality, the SAT I has also been considered a great equalizer in U.S. higher education. Unlike achievement tests such as the SAT II, which assess mastery of specific subjects, the SAT I is an aptitude test that focuses on measuring verbal and mathematical abilities independent of specific courses or high school curricula. It is therefore a valuable tool, the argument goes, for correcting the effects of grade inflation and the wildly varying quality of U.S. high schools. And it presumably offers a way of identifying talented students who otherwise might not meet traditional admissions criteria, especially high-potential students in low-performing high schools.

In February 2001, at the annual meeting of the American Council on Education (ACE), I delivered an address questioning the conventional wisdom about the SAT I and announced that I had asked the Academic Senate of the University of California (UC) to consider eliminating it as a requirement for admission to UC. I was unprepared for the intense public reaction to my remarks. The day before I was scheduled to deliver them, I went to the lobby of my hotel to get a copy of the Washington Post. I was astounded to find myself and excerpts from the speech on the front page; an early version had been leaked to the press. To my further astonishment, an even more detailed story appeared on the front page of the New York Times.

And that was only the beginning. In the months since my address, I have heard from hundreds of college and university presidents, CEOs, alumni, superintendents, principals, teachers, parents, students, and many others from all walks of life. Television programs, newspaper editorials, and magazine articles have presented arguments pro and con. I was most struck by the Time magazine article that had a picture of President Bush and me side by side. The headline read, “What do these two men have in common?” Those who have speculated that the answer is that we had the same SAT scores are wrong. I did not take the SAT. I was an undergraduate at the University of Chicago, and at that time the university was adamantly opposed to the concept of aptitude tests and used achievement tests in its admissions process. Time was simply observing that we share an interest in testing.

It came as no surprise that my proposal to take a hard look at the role and purpose of the SAT I and standardized tests in general attracted the attention of educators, admissions officers, and testing experts. I have been impressed and pleased by the many researchers, professors, and psychometricians who have shared with me their findings and experience regarding the SAT. But I was also surprised at the number of letters I received from people who had no professional connection with higher education. I heard from a young woman–an honors graduate of UC Berkeley with an advanced degree from Princeton–who had been questioned about her 10-year-old SAT scores in a job interview; an attorney who, despite decades of success, still remembers the sting of a less-than-brilliant SAT score; an engineer who excelled on the SAT but found it bore no relation to the demands of college and his profession; a science student who scored poorly on the SAT and was not admitted to his college of choice but was elected to the National Academy of Sciences in later years. Clearly, the SAT strikes a deep chord in the national psyche.

The second surprise in the months after my speech was the degree of confusion about what I proposed and why I proposed it. For example, some people assumed I wanted to eliminate the SAT I as an end run around Proposition 209, the 1996 California law banning affirmative action. That was not my purpose; my opposition to the SAT I predates Proposition 209 by many years. And as I said in my ACE speech, I do not anticipate that ending the SAT I requirement by itself would appreciably change the ethnic or racial composition of the student body admitted to UC.

Others assumed that because I am against the SAT I, I am against standardized tests in general. I am not; quite the opposite is true. Grading practices vary across teachers and high schools, and standardized tests provide a measure of a student’s achievements that is independent of grades. But we need to be exceedingly careful about the standardized tests we choose.

So much for what I did not propose. Let me turn briefly to what I did propose. I requested the Academic Senate of UC to consider two further changes in addition to making the SAT I optional. The first is to use an expanded set of SAT II tests or other curriculum-based tests that measure achievement in specific subject areas until more appropriate tests are developed. The second is to move all UC campuses away from admissions processes employing quantitative formulas and toward a comprehensive evaluation of applicants.

In a democratic society, I argued, admitting students to a college or university should be based on three principles. First, students should be judged on the basis of their actual achievements, not on ill-defined notions of aptitude. Second, standardized tests should have a demonstrable relationship to the specific subjects taught in high school, so that students can use the tests to assess their mastery of those subjects. Third, U.S. universities should employ admissions processes that look at individual applicants in their full complexity and take special pains to ensure that standardized tests are used properly in admissions decisions. I’d like to discuss each in turn.

Aptitude versus achievement

Aptitude tests such as the SAT I have a historical tie to the concept of innate mental abilities and the belief that such abilities can be defined and meaningfully measured. Neither notion has been supported by modern research. Few scientists who have considered these matters seriously would argue that aptitude tests such as the SAT I provide a true measure of intellectual abilities.

Nonetheless, the SAT I is widely regarded as a test of basic mental ability that can give us a picture of students’ academic promise. Those who support it do so in the belief that it helps guarantee that the students admitted to college will be highly qualified. The SAT I’s claim to be the “gold standard of quality” derives from its purported ability to predict how students will perform in their first year of college.

Nearly 40 years ago, UC faculty serving on the Academic Senate’s Board of Admissions and Relations with Schools (BOARS) gathered on the Santa Barbara campus to consider the merits of the SAT and achievement tests. At that point, UC had only run experiments with both kinds of tests. In the actual process of admissions, UC used standardized tests in admissions decisions for only a small percentage of students who did not qualify on the basis of their grades in selected courses. BOARS wanted answers to a couple of critical questions: What is the predictive power–what researchers call the “predictive validity”–of the SAT for academic success at UC? How might it improve the process of admissions?

To answer these questions, BOARS launched a study that compared the SAT and achievement tests as predictors of student performance. The results were mixed. In the view of the board, the achievement tests proved a more useful predictor of student success than did the SAT, both in combination with grades and as a single indicator. But the benefits of both tests appeared marginal at the time. As a result, both the SAT and achievement tests remained largely an alternative method for attaining UC eligibility. In 1968, UC began requiring the SAT I and three SAT II achievement tests, although applicants’ scores were not considered in the admissions process. Rather, the SAT I and SAT II tests remained largely a way of admitting promising students whose grades fell below the UC standard and an analytical tool to study the success patterns of students admitted strictly by their grades in UC-required courses.

This policy lasted until the late 1970s. As historian John Douglass has noted in a number of studies on the history of UC admissions, not until 1979 did the university adopt the SAT as a substantial and formal part of the regular admissions process. That year, BOARS established UC’s current Eligibility Index: a sliding scale combining grade point average (GPA) in required courses with SAT scores to determine UC eligibility. Even then, GPA remained the dominant factor in this determination. UC established the Eligibility Index largely as a way of reducing its eligibility pool in light of a series of studies that showed UC accepting students well beyond its mandated top 12.5 percent of statewide graduates. The decision to include SAT scores in the Eligibility Index was based not on an analysis of the SAT’s predictive power but on its ability to serve as a screen that would reduce the pool of eligible students.

We should use standardized tests that have a demonstrable relationship to the specific subjects taught in high schools.

Fortunately, today we do have an analysis of the SAT’s value in admissions decisions. Because our students have been taking the SAT I and the SAT II for more than three decades, UC is perhaps the only university in the country that has a database large enough to compare the predictive power of the SAT I with that of the achievement-based SAT II tests. UC researchers Saul Geiser and Roger Studley have analyzed the records of almost 78,000 freshmen who entered UC over the past four years. They concluded that the SAT II is, in fact, a better predictor of college grades than the SAT I. The UC data show that high school grades plus the SAT II explain about 21 percent of the variance in first-year college grades. When the SAT I is added to high school grades and the SAT II, the variance explained increases from 21 percent to 21.1 percent, a trivial increment.
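To see concretely what such a small increment means, consider a pair of nested regression models, one with and one without SAT I scores. The sketch below is purely illustrative and is not the Geiser-Studley analysis or UC’s data: the simulated scores, sample size, and coefficients are hypothetical; only the logic of comparing the R-squared of the two models mirrors the study design.

```python
# Illustrative sketch (not UC's data or analysis): compare the share of variance
# in first-year college GPA explained by high school GPA plus an SAT II-style
# score, with and without an SAT I-style score that is highly correlated with
# the other predictors.  All numbers here are simulated and hypothetical.
import numpy as np

rng = np.random.default_rng(0)
n = 5_000                                    # hypothetical sample size
hs_gpa = rng.normal(3.4, 0.4, n)             # high school GPA
sat2 = rng.normal(600, 90, n)                # composite SAT II score
sat1 = 0.8 * sat2 + rng.normal(0, 60, n)     # SAT I score, correlated with SAT II
college_gpa = 0.5 * hs_gpa + 0.002 * sat2 + rng.normal(0, 0.5, n)

def r_squared(predictors, outcome):
    """R-squared from an ordinary least squares fit with an intercept."""
    X = np.column_stack([np.ones(len(outcome))] + list(predictors))
    beta, *_ = np.linalg.lstsq(X, outcome, rcond=None)
    resid = outcome - X @ beta
    return 1.0 - resid.var() / outcome.var()

base = r_squared([hs_gpa, sat2], college_gpa)          # grades + SAT II only
full = r_squared([hs_gpa, sat2, sat1], college_gpa)    # add SAT I
print(f"R^2 without SAT I: {base:.3f}; with SAT I: {full:.3f}; "
      f"increment: {full - base:.4f}")
```

Because an aptitude-style score is highly correlated with grades and achievement scores, adding it to a model that already contains the other two raises the explained variance only slightly, which is the pattern the UC analysis found.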

Our data indicate that the predictive validity of the SAT II is much less affected by differences in socioeconomic background than is the SAT I. After controlling for family income and parents’ education, the predictive power of the SAT II is undiminished, whereas the relationship between SAT I scores and UC freshman grades virtually disappears. These findings suggest that the SAT II is not only a better predictor but also a fairer test for use in college admissions, because its predictive validity is much less sensitive than is the SAT I to differences in students’ socioeconomic background. Contrary to the notion that aptitude tests are superior to achievement tests in identifying high-potential students in low-performing schools, our data show the opposite: The SAT II achievement tests predict success at UC better than the SAT I for students from all schools in California, including the most disadvantaged.

UC data yield another significant result. Of the various tests that make up the SAT I aptitude test and the SAT II achievement tests, the best single predictor of student performance turned out to be the SAT II writing test. This test is the only one of the group that requires students to write something in addition to answering multiple-choice items. Given the importance of writing ability at the college level, it should not be surprising that a test of actual writing skills correlates strongly with freshman grades.

When I gave my speech to ACE, this comprehensive analysis of the UC data comparing the two tests was not available. My arguments against the SAT I were based not on predictive validity but on pedagogical and philosophical convictions about achievement, merit, and opportunity in a democratic society. In my judgment, those considerations remain the most telling arguments against the SAT I. But these findings about the predictive validity of the SAT I versus the SAT II are stunning.

Curriculum-based tests

If we do not use aptitude tests such as the SAT I, how can we get an accurate picture of students’ abilities that is independent of high school grades? In my view, the choice is clear: We should use standardized tests that have a demonstrable relationship to the specific subjects taught in high schools. This would benefit students, because much time is currently wasted inside and outside the classroom prepping students for the SAT I; the time could be better spent learning history or geometry. And it would benefit schools, because achievement-based tests tied to the curriculum are much more attuned to current efforts to improve the desperate situation of the nation’s K-12 schools.

One of the clear lessons of U.S. history is that colleges and universities, through their admissions requirements, strongly influence what is taught in the K-12 schools. To qualify for admission to UC, high-school students must attain specified grades in a set of college-preparatory classes that includes mathematics, English, foreign languages, laboratory sciences, social sciences, and the arts. These requirements let schools and students alike know that we expect UC applicants to have taken academically challenging courses that involve substantial reading and writing, problem-solving and laboratory work, and analytical thinking, as well as the acquisition of factual information. These required courses shape the high-school curriculum in direct and powerful ways, and so do the standardized admissions tests that are also part of qualifying for UC.

Because of its influence on K-12 education, UC has a responsibility to articulate a clear rationale for its test requirements. In my ACE address in February, I suggested what that rationale might contain: 1) The academic competencies to be tested should be clearly defined; in other words, testing should be directly related to the required college preparatory curriculum. 2) Students from any comprehensive high school in California should be able to score well if they master the curriculum. 3) Students should be able, on reviewing their test scores, to understand where they did well or fell short and what they must do to earn higher scores in the future. 4) Test scores should help admissions officers evaluate the applicant’s readiness for college-level work. The Board of Admissions and Relations with Schools is in the process of developing principles to govern the selection and use of standardized tests. These principles will be an extremely important contribution to the national debate about testing.

Universities in every state influence what high schools teach and what students learn. We can use this influence to reinforce current national efforts to improve the performance of U.S. public schools. These reform efforts are based on three principal tenets: Curriculum standards should be clearly defined, students should be held to those standards, and standardized tests should be used to assess whether the standards have been met.

The SAT I sends a confusing message to students, teachers, and schools. It says that students will be tested on material that is unrelated to what they study in their classes. It says that the grades they achieve can be devalued by a test that is not part of their school curriculum. Most important, SAT I scores tell a student only that he or she scored higher or lower than his or her classmates. They provide neither students nor schools with a basis for self-assessment or improvement.

Appropriate role of standardized tests

Finally, I have argued that U.S. universities should employ admissions processes that look at individual applicants broadly and take special pains to ensure that standardized tests are used properly in admissions decisions. Let me explain this statement in terms of UC.

UC’s admissions policies and practices have been in the spotlight of public attention in recent years as California’s diverse population has expanded and demand for higher education has skyrocketed. Many of UC’s 10 campuses receive far more applicants than they can accept. Thus, the approach we use to admit students must be demonstrably inclusive and fair.

To do this, we must assess students in their full complexity. This means considering not only grades and test scores but also what students have made of their opportunities to learn, the obstacles they have overcome, and the special talents they possess. To move the university in this direction, I have made four admissions proposals in recent years:

  • Eligibility in the Local Context (ELC), or the Four Percent Plan, grants UC eligibility to students in the top 4 percent of their high school graduating class who also have completed UC’s required college preparatory courses. Almost 97 percent of California public high schools participated in ELC in its first year, and many of these had in the past sent few or no students to UC.
  • Under the Dual Admissions Program approved by the regents in July 2001, students who fall below the top 4 percent but within the top 12.5 percent of their high school graduating class would be admitted simultaneously to a community college and to UC, with the proviso that they must fulfill their freshman and sophomore requirements at a community college (with a solid GPA) before transferring to a UC campus. State budget difficulties have delayed implementation of the Dual Admissions Program, but we hope to launch it next year.
  • For some years, UC policy has defined two tiers for admission. In the first tier, 50 to 75 percent of students are admitted by a formula that places principal weight on grades and test scores; in the second tier, students are assessed on a range of supplemental criteria (for example, difficulty of the courses taken, evidence of leadership, or persistence in the face of obstacles) in addition to quantitative measures. Selective private and public universities have long used this type of comprehensive review of a student’s full record in making admissions decisions. Given the intense competition for places at UC, I have urged that we follow their lead. The regents recently approved the comprehensive review proposal, and it will be effective for students admitted in fall 2002.
  • Finally, for the reasons I have discussed above, I have proposed that UC make the SAT I optional and move toward curriculum-based achievement tests. The Academic Senate is currently considering this issue, and its review will likely be finished in spring 2002, after which the proposal will go to the Board of Regents.

The purpose of these changes is to see that UC casts its net widely to identify merit in all its forms. The trend toward broader assessment of student talent and potential has focused attention on the validity of standardized tests and how they are used in the admissions process. All UC campuses have taken steps in recent years to ensure that test scores are used properly in such reviews; that is, that they help us select students who are highly qualified for UC’s challenging academic environment. It is not enough, however, to make sure that test scores are simply one of several criteria considered; we must also make sure that the tests we require reflect UC’s mission and purpose, which is to educate the state’s most talented students and make educational opportunity available to young people from every background.

Achievement tests are fairer to students because they measure accomplishment rather than ill-defined notions of aptitude; they can be used to improve performance; they are less vulnerable to charges of cultural or socioeconomic bias; and they are more appropriate for schools, because they set clear curricular guidelines and clarify what is important for students to learn. Most important, they tell students that a college education is within the reach of anyone with the talent and determination to succeed.

We must assess students in their full complexity, not just their grades and test scores.

For all of these reasons, the movement away from aptitude tests toward achievement tests is an appropriate step for U.S. students, schools, and universities. Our goal in setting admissions requirements should be to reward excellence in all its forms and to minimize, to the greatest extent possible, the barriers students face in realizing their potential. We intend to honor both the ideal of merit and the ideal of broad educational opportunity. These twin ideals are deeply woven into the fabric of higher education in this country. It is no exaggeration to say that they are the defining characteristics of the U.S. system of higher education.

The irony of the SAT I is that it began as an effort to move higher education closer to egalitarian values. Yet its roots are in a very different tradition: the IQ testing that took place during the First World War, when two million men were tested and assigned an IQ based on the results. The framers of these tests assumed that intelligence was a unitary inherited attribute, that it was not subject to change over a lifetime, and that it could be measured and individuals could be ranked and assigned their place in society accordingly. Although the SAT I is more sophisticated from a psychometric standpoint, it evolved from the same questionable assumptions about human talent and potential.

The tests we use to judge our students influence many lives, sometimes profoundly. We need a national discussion on standardized testing, informed by principle and disciplined by empirical evidence. We will never devise the perfect test: a test that accurately assesses students irrespective of parental education and income, the quality of local schools, and the kind of community students live in. But we can do better. We can do much better.

Recommended reading

Richard C. Atkinson, “Standardized Tests and Access to American Universities,” 2001 Robert Atwell Distinguished Lecture, 83rd Annual Meeting of the American Council on Education, Washington, D.C., February 18, 2001.

John A. Douglass, “Anatomy of Conflict: The Making and Unmaking of Affirmative Action at the University of California,” in David Skrentny, Ed., Color Lines: Affirmative Action, Immigration and Civil Rights Options for America (Chicago, Ill.: University of Chicago Press, 2001).

John A. Douglass, Setting the Conditions of Admissions: The Role of University of California Faculty in Policymaking, study commissioned by the University of California Academic Senate, February 1997 (see cshe/jdouglass/publications.html).

Saul Geiser and Roger Studley, UC and the SAT: Predictive Validity and Differential Impact of the SAT I and SAT II at the University of California, University of California Office of the President, October 29, 2001.


Richard C. Atkinson is president of the University of California system.

Rethinking U.S. Child Care Policy

Demand for high-quality care will increase only when consumers have better information about child care and stronger economic incentives to purchase excellent care.

Child care in the United States is, by many standards, in poor shape. Commonly heard complaints include that today’s system of child care endangers the well-being of children, causes financial hardship and stress for families, makes it next-to-impossible for low-income families to work their way off welfare, causes substantial productivity losses to employers, and prevents many mothers from maintaining productive careers in the labor force.

Because child care is a service that is bought and sold in markets, economics can provide a useful framework for thinking about its problems. As with any commodity, supply, demand, cost, price, and quality are key elements of market analysis. But with child care, quality plays a special role, because this characteristic may affect the development of the children who receive the care. Extensive research during the past 25 years documents a positive association between measures of child care quality and the social, emotional, and cognitive development of children. Although most of this research stops short of proving that the association is causal (there is ample reason to expect that children who would have developed well anyway will probably be placed in higher-quality care environments), a number of well-structured studies do support a causal relationship. These studies also show that the benefits of high-quality care are probably larger for children who are at risk of developmental delays as a result of living in poverty.

Analyzing the source of the quality problem in child care suggests a remedy. However, putting this remedy into action will require a fundamental reorientation of public policy, leading in turn to major shifts in the behavior of care providers and in the actions of parents.

Although there are no national data regarding quality, an intensive study of 400 day care centers in four states, led by Suzanne Helburn of the University of Colorado, documented what many child care experts have been saying for a long time: “Child care at most centers in the United States is poor to mediocre, with almost half of the infant and toddler rooms having poor quality. Only one in seven centers provides a level of quality that promotes healthy development.” Analysis of data from this and other studies reveals that child care is of low quality, on average, because of the unwillingness of many consumers to pay a high enough price to cover the cost of high-quality care, even though the cost to providers of improving the quality of care would be moderate. Higher-income children, on average, receive child care of roughly the same quality as lower-income children, indicating that the quality of child care is not a high-priority item for most consumers.

Even when high-quality child care is available, it is expensive in comparison to the income of low-income families. Families in poverty who pay for child care spend more than one-quarter of their income on that cost. Thus, high-quality care is very likely beyond the reach of most low-income families unless subsidized by the government. On the other hand, the large majority of U.S. families are not poor. The average family that pays for child care spends only 7 percent of its income on child care. These families could afford higher-quality care as easily as they could afford a nicer car. Research shows that day care centers can and do improve quality when the price of care rises. But centers have relatively little financial incentive to offer high-quality care, because consumers, on average, are not willing to pay much more for better care. To put it simply, the problem is not on the supply side, but on the demand side.

Workers who provide child care receive low wages, which leads to a high turnover rate among care providers. This instability contributes to low quality of care in a number of ways; for example, fewer workers hold their jobs long enough to build a good base of experience or to take part in extended training and education. And secure attachment of children to adults, an important element in child development, is more difficult when turnover is high. Low wages are due, in large measure, to the apparent willingness of many women to work as care providers for low wages. In day care centers, full-time teachers (almost all of whom are women) earn, on average, less than half the amount earned by other women of the same education and age. And family day care providers earn substantially less than do full-time teachers in day care centers. One-third of all teachers leave their jobs each year, a rate that is about three times higher than the average for all women. Naturally, these women would prefer higher earnings. But the fact that they are willing to supply their labor for such low monetary rewards, despite the fact that higher-wage jobs are available in other sectors of the economy, suggests that there are nonmonetary rewards to being a child care provider. Such rewards often include being able to care for their own children while working.

Consumers lack knowledge about important aspects of child care. Many parents cannot tell the difference between low-quality and high-quality care if they see it, and most of them do not really “see” the care provided anyway, because they typically drop off their children and head to work. Comparing the quality of child care arrangements as rated by parents and by trained observers shows that parents systematically overrate the quality of care, by a rather large amount. Survey evidence also suggests that the average parent does not visit many providers before selecting one. These are important clues about the source of the apparent unwillingness of many parents to pay a price high enough to cover the cost of high-quality child care.

Market failure

In many respects, the child care market functions much better than is commonly believed. Shortages of care facilities are uncommon, and when they do exist, they are often limited to small segments of the market. Power is not concentrated in the hands of a few providers who are able to extract excess profits from consumers by restricting supply. Child care workers have low wages because they are willing to work for low wages, not because they are exploited by center owners or forced to subsidize consumers.

Yet, there is a fundamental failure in the child care market. This failure is not due to some defect in the internal workings of the market. Rather, it is caused by an externality: Market participants (parents) either do not bear all of the costs of their child care decisions or they make these decisions without understanding their consequences, or both. In general, the remedy for a market failure caused by an externality or an information problem does not lie in regulation or public takeover of the market. Instead, the remedy lies in providing appropriate information to market participants so they can make well-informed decisions and in finding a way to internalize the externality by giving the participants an incentive to seek high-quality care.

Thus, demand for excellent child care will not increase unless consumers have sound information about the quality of care as well as stronger incentives to purchase better care. The most important determinant of child development and well-being is no doubt the quality of parenting, not the quality of child care. But child care quality can be an important factor, particularly for low-income children, and it is probably more susceptible to change through public policy than is the quality of parenting. However, today’s child care policies are badly flawed.

For one thing, the majority of child care subsidy funds are available only to employed parents, but it is not obvious what sort of problem in the child care market is related to employment. Subsidies that require employment increase the demand for care but do not increase the quality of care demanded. Tax-based subsidies that are available to middle- and upper-income families–the Dependent Care Tax Credit and the Exclusion of Employer-Provided Dependent Care Expenses–place no restrictions on the quality of care obtained, so it cannot be argued that they have the goal of improving quality. The subsidies encourage employment of both parents in two-parent families and of the single parent in one-parent families, but it is not immediately clear why society should wish to provide such encouragement.

Instead of subsidizing employment of parents, government should, if anything, subsidize the costs of raising children.

The case of employment-related child care subsidies for low-income families may seem easier to rationalize. The goal of such subsidies is to help families achieve and maintain economic self-sufficiency as an alternative to dependence on welfare. For many low-wage parents today, employment is not very rewarding financially, especially when cash transfers from the government are reduced dollar for dollar as earnings increase. The cost of child care and other work-related expenses can make the net financial reward from employment so low as to make welfare a more attractive alternative.

However, low employee wages typically are a result of low skills, and child care subsidies do not directly address this connection. These subsidies do make employment more attractive, and if skills improve through on-the-job training and experience gained by being employed, then the subsidies would indirectly address the problem of low skills and help families escape poverty and welfare dependence in the long run. But there is no evidence that the typical low-wage job provides the training and experience that lead to improved skills, and this means employees are not likely to receive significantly higher wages as they continue to work. In this case, child care subsidies must be continued indefinitely in order to make employment attractive, and the goal of economic independence is not achieved.

Another problem concerns the Child Care and Development Fund, one of the primary means by which the government currently provides child care subsidies to low-income families. A product of the 1996 welfare reform, the fund places few restrictions on the quality of child care that can be purchased with its subsidies and places no emphasis on improving the development of low-income children.

Clearly, low-income families face an employment problem: Their low labor market skills mean that even full-time employment frequently will not be enough to lift them out of poverty. But this is not a child care problem, and it cannot be solved by child care subsidies. Such subsidies do not result in improved labor market skills, and as a result they simply substitute in-kind transfers for cash transfers under welfare. Education and training are the most appropriate policies for improving the labor market skills of low-income workers.

Principles for public policy

The following principles reflect research findings about the child care market as well as judgments about the goals that child care policy should try to achieve:

Child care policy should be neutral with respect to employment. There are no compelling economic or moral reasons for society to encourage employment of both parents in a two-parent middle-class family. There may be a more compelling case for encouraging single parents to achieve economic independence through employment. But a child care subsidy is at best an indirect approach and at worst an ineffective approach to accomplishing this goal. Indeed, employment-related child care subsidies are likely to have the unfortunate side effect of increasing the amount of low-quality child care experienced by children from low-income families. Instead of subsidizing the employment of parents, government should, if anything, subsidize the costs of raising children, without favoring market costs for child care over the foregone earnings cost of a parent who stays home to care for a child.

Child care policy should provide information to parents about the benefits of high-quality child care, about how to discern the quality of care, and about how to find high-quality care. As in all markets, an informed consumer drives improved performance of service providers.

Child care policy should provide incentives for parents to choose high-quality care. Even if parents are generally aware of the developmental benefits of high-quality care, they may not value those benefits much compared to other things they can buy. For example, parents may feel that their own influence on the development of their children can make up for the effects of low-quality care, or that the developmental outcomes measured by standard assessments are less important than, say, religious values, respect for authority, and other intangible attributes. If consumers are given sufficient incentives to choose high-quality care, then providers will have an incentive to offer such care.

Child care policy should encourage the development of programs to help providers learn how to improve the quality of care. An essential feature of a competitive market is that firms can prosper only by offering the services consumers are willing to pay for. Thus, direct subsidies to providers should not be necessary. Providers will have an incentive to increase quality in response to consumer demand, but they may lack the knowledge to upgrade quality.

Child care policy should be progressive, with benefits being larger for children in poor families. Because children in poor families are at greater risk of developmental delays and the problems that result from such delays, the benefits of high-quality child care are likely to be larger for them. Equity considerations also favor a progressive child care policy.

Child care policy should be based on incentives, not regulations. Regulating an industry such as child care, with its hundreds of thousands of providers, is likely to be either very costly or ineffective. Evidence suggests that current regulations imposed by the states are not very effective at improving the quality of care being provided. Of course, states should not be discouraged from regulating basic safety and health aspects of child care. But with federal policy, financial incentives can be more flexible than regulations, and well-designed incentives can be self-enforcing rather than requiring a monitoring bureaucracy.

Child care policy should presume that well-informed parents will make good choices about the care of their children. Government can provide the best available information to inform parental decisionmaking, along with incentives for parents to make good choices. But government should not limit the freedom of parents to arrange care for their children as they see fit (again, subject to regulations regarding neglect and abuse). Not all parents will want to take advantage of subsidized care in preschools and family day care homes, no matter how high the quality of care provided. Some parents will prefer care by a relative or close friend; some will prefer care in a church-based setting that emphasizes religion; and some will prefer care by a babysitter in the child’s home. Although these choices may not be optimal in fostering child development, government should not coerce parents to raise children in a particular way. Parents should remain the decisionmakers.

A proposal for reform

Many previous reform proposals contain creative and useful ideas. The following proposal, which rests on the guiding principles outlined above, borrows liberally from previous proposals and adds some new ideas:

  • Provide a means-tested child allowance. Each family would receive from the federal government a cash allowance for up to two children, from birth through age 17. The allowance could take the form of a refundable tax credit, requiring that a family file a tax return to claim the allowance. Refundability means that even a family with no tax liability is eligible for the credit, so it is of value to low-income families (unlike most child allowances in the current tax code). The value of the allowance should decline as the level of family income rises, with the allowance phased out entirely for high-income families. There would be no restrictions on the use of the allowance. The money could be used to pay for child care, food, housing, medical care, or other items that directly benefit children. But it could just as easily be used for other purposes–for example, to subsidize nonemployment by one of the parents, enabling the parent to stay home to care for his or her children. The rationale for subsidizing child-rearing costs is twofold: On moral grounds, children should be taken care of; and on efficiency grounds, children are future workers and citizens, and society has an interest in their healthy development.
  • Subsidize the cost of accreditation to care providers. The National Association for the Education of Young Children, as well as several other organizations, will for a fee assess the operations of day care centers and preschools that want to have their services accredited. These organizations should be subsidized so that they can carry out their assessments at no cost to care providers. In addition, accrediting groups should develop a standardized system that offers several levels of accreditation, so that providers unable to qualify for the highest level of accreditation could nevertheless be certified as providing certain levels of care. In this system, providers would be either (1) accredited as offering care of excellent quality; (2) accredited as offering care of good quality; or (3) unaccredited, meaning that they satisfy state regulatory standards but do not reach the higher level of performance required for a higher rating. Participation by providers would be voluntary.
  • Inform all new parents of the benefits of high-quality care, as well as how to recognize and find excellent care. The simplest way to accomplish this would be to give a booklet and video with such information to mothers when they are in the hospital to give birth. The materials should describe, within the limits of scientific evidence, the consequences for child development of high- and low-quality care, and they should illustrate in vivid terms what a high-quality child care arrangement is like, in contrast with a low-quality arrangement. The materials also should describe the accreditation system for care providers (emphasizing that accreditation is certified by independent agencies), and they should provide information on how to contact resource and referral agencies and other sources of information about the local child care market.
  • Provide a means-tested care voucher, for up to two children per family, with a value that depends on the quality of the care provider at which it is redeemed. Vouchers would be worth more if used at an accredited provider. For example, a low-income family might receive a voucher that covers 30 percent of the average cost of unaccredited child care, 60 percent of the average cost of good-quality care, and 100 percent of the average cost of care accredited as of excellent quality. This differential gives families an incentive to seek high-quality care, and thus it also gives providers an incentive to offer high-quality care in order to attract consumers with greater purchasing power. The value of the vouchers would decline as family income rises, with the value dropping to zero for high-income families. The voucher would be of no value if a family does not purchase child care or pays a relative for child care. This condition is unavoidable if the system is to best provide incentives for use of high-quality care. Because the vouchers would not require employment, they would encourage use of high-quality care by both employed and nonemployed parents to enhance child development. Families that did not use their voucher would still receive the cash benefits provided under the child allowance part of the system.
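To make the sliding-scale voucher concrete, here is a minimal sketch of how a voucher value might be computed from family income and a provider’s accreditation tier. The 30, 60, and 100 percent coverage figures come from the example above; the per-tier average costs, the linear phase-out, and the income cutoff are hypothetical placeholders, not parameters of any actual program.

```python
# Minimal sketch of the proposed sliding-scale child care voucher.
# Assumptions: the tier-specific average costs and the linear phase-out to
# $64,000 are hypothetical placeholders; only the 30/60/100 percent coverage
# shares come from the example in the text.
TIER_AVG_COST = {                # assumed average annual cost of care, by tier ($)
    "unaccredited": 4500,
    "good": 5200,
    "excellent": 6000,
}
TIER_COVERAGE = {                # share of that cost covered for a low-income family
    "unaccredited": 0.30,
    "good": 0.60,
    "excellent": 1.00,
}
PHASE_OUT_INCOME = 64_000        # hypothetical income at which the voucher reaches zero

def voucher_value(family_income: float, tier: str) -> float:
    """Annual voucher value for one child: a quality-dependent share of the
    tier's average cost, reduced linearly as family income rises."""
    full_value = TIER_AVG_COST[tier] * TIER_COVERAGE[tier]
    phase_out = max(0.0, 1.0 - family_income / PHASE_OUT_INCOME)
    return full_value * phase_out

# A low-income family has a strong incentive to choose accredited care:
print(round(voucher_value(15_000, "excellent")))     # 4594
print(round(voucher_value(15_000, "unaccredited")))  # 1034
print(round(voucher_value(80_000, "excellent")))     # 0 (above the phase-out)
```

The key design feature is that the same family receives several times more support when it chooses accredited care, which is what gives providers a financial reason to seek accreditation.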

Balancing benefits and costs

This new system would replace all current federal child care subsidies, as well as all tax deductions and credits for children, including the income tax exemption for children and the child tax credit. Some programs that provide child care for low-income children but are explicitly development-oriented and have no employment requirement, such as Head Start and Title IA of the Elementary and Secondary Education Act, could be integrated into the system. The system also would replace the Temporary Assistance for Needy Families (TANF) program, which provides cash assistance to low-income families with children but which includes employment requirements and time limits.

Because the proposed system is neutral with respect to employment, it would not replace programs that are explicitly intended to reward employment, such as the Earned Income Tax Credit and job training and education programs. If society considers it desirable for low-income single mothers to be employed, then the voucher part of the system provides considerable resources that these mothers could use for child care. If further encouragement of employment is desired, the government would have to provide resources from another source.

The cost of the new system will depend on the value of the child allowance and the child care voucher. To illustrate, I have calculated the cost based on several assumptions. I assumed that the cash allowance would be set at $5,000 per low-income child, with the allowance being reduced as annual family income rises and phased out entirely at a family income of $64,000. The child care voucher would be worth $6,000 for a low-income preschool-age child in the highest-quality child care, with the value of the voucher decreasing with family income, child age, and child care quality. (I also made some assumptions about how many families will redeem the vouchers.) In this scenario, the total cost of the system is $208 billion per year. However, after accounting for savings due to the elimination of several current federal programs, the net annual cost is about $95 billion. This is obviously a very large sum, but it is a realistic estimate of the cost of a rather sweeping solution to the child care problem.
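The structure of such an estimate can be sketched in a few lines. The benefit parameters below (the $5,000 allowance, the $6,000 top voucher, and the $64,000 phase-out) are the ones stated above, but the income brackets, child counts, voucher take-up rate, and program offsets are hypothetical placeholders, so the output shows only the shape of the calculation; it does not reproduce the $208 billion and $95 billion figures.

```python
# Back-of-envelope structure of the cost estimate described above.
# Benefit levels come from the text; all population, take-up, and offset
# numbers are hypothetical placeholders and will not reproduce the totals
# reported in the article.
MAX_ALLOWANCE = 5_000        # annual cash allowance per low-income child ($)
MAX_VOUCHER = 6_000          # top voucher: low-income preschooler, excellent care ($)
PHASE_OUT_INCOME = 64_000    # family income at which benefits reach zero ($)
VOUCHER_TAKE_UP = 0.5        # hypothetical share of eligible families redeeming vouchers
PROGRAM_OFFSETS = 60e9       # hypothetical savings from the programs being replaced ($)

# Hypothetical income brackets: (bracket midpoint income, millions of eligible children)
BRACKETS = [
    (15_000, 10.0),
    (35_000, 15.0),
    (55_000, 12.0),
    (85_000, 20.0),          # above the phase-out: no benefit
]

def phase(income: float) -> float:
    """Linear phase-out factor: 1 at zero income, 0 at the cutoff and above."""
    return max(0.0, 1.0 - income / PHASE_OUT_INCOME)

gross = 0.0
for income, children_millions in BRACKETS:
    per_child = phase(income) * (MAX_ALLOWANCE + MAX_VOUCHER * VOUCHER_TAKE_UP)
    gross += per_child * children_millions * 1e6

print(f"Gross cost: ${gross / 1e9:.0f} billion per year; "
      f"net of offsets: ${(gross - PROGRAM_OFFSETS) / 1e9:.0f} billion")
```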

Are the benefits from such a radical revamping of child care policy as large as the costs? We cannot yet answer this important question, because we simply lack the information needed to quantify the long-run benefits of improving the quality of child care on such a large scale. Some encouragement is offered by the Perry Preschool Study, a small-scale randomized study of very intensive preschool and other social services for low-income children. The study found that the long-run benefits to the government, in the form of lower spending on crime, special education, and welfare, exceeded the amount of government support provided for the program. This conservative approach to evaluation does not take into account the benefits to the children themselves in the form of enhanced education and earnings. The fact that the program actually saved the government money suggests that high-quality preschools may have significant benefits to society as well as to the participants. However, it is not clear whether it was the high-quality preschool itself that made such a big difference or the other social services provided, such as weekly home visits. Nor is it clear whether the benefits would be as large for children who are not as deprived as the children who were enrolled in the study.

Certainly, much more well-designed research on the benefits of high-quality child care is called for. But such research will take many years to produce results. In the meantime, policymakers need advice. Based on economic analysis, the best course for today is to proceed with implementing the new system, because the risk of suboptimal development for millions of children under the current system is not worth running.

Recommended reading

W. Steven Barnett, “New Wine in Old Bottles: Increasing Coherence in Early Childhood Care and Education Policy,” Early Childhood Research Quarterly 8, no. 4 (1993): 519–558.

Barbara Bergmann, Saving Our Children from Poverty: What the United States Can Learn From France (New York: Russell Sage Foundation, 1996).

Tricia Gladden and Christopher Taber, “Wage Progression Among Less Skilled Workers,” in Finding Jobs: Work and Welfare Reform, eds. David Card and Rebecca Blank (New York: Russell Sage Foundation, 2000).

Suzanne W. Helburn, Cost, Quality and Child Outcomes in Child Care Centers, Technical Report (Denver, Colo.: Department of Economics, University of Colorado at Denver, 1995).

Lynn A. Karoly, Peter W. Greenwood, Susan S. Everingham, Jill Houbé, M. Rebecca Kilburn, C. Peter Rydell, Matthew Sanders, and James Chiesa, “Investing In Our Children: What We Know and Don’t Know About the Costs and Benefits of Early Childhood Interventions” (Santa Monica, Calif.: RAND Report MR-898-TCWF, 1998).

Michael E. Lamb, “Nonparental Child Care: Context, Quality, Correlates, and Consequences,” in Child Psychology in Practice, eds. I. Sigel and K. Renninger, Handbook of Child Psychology, fifth ed., W. Damon, series ed. (New York: Wiley, 1998).

John M. Love, Peter Z. Schochet, and Alicia L. Meckstroth, Are They in Any Real Danger? What Research Does–And Doesn’t–Tell Us About Child Care Quality and Children’s Well-Being (Princeton, N.J.: Mathematica Policy Research, May 1996).

James Walker, “Funding Child Rearing: Child Allowance and Parental Leave,” The Future of Children 6, no. 2 (Summer/Fall 1996): 122–136.

Edward Zigler and Matia Finn-Stevenson, Schools of the Twenty First Century: Linking Child Care and Education (Boulder, Colo.: Westview Press, 1999).


David Blau ([email protected]) is professor of economics and fellow of the Carolina Population Center at the University of North Carolina at Chapel Hill. He is the author of The Child Care Problem: An Economic Analysis (Russell Sage Foundation, 2001).

Homeland Security Technology

A new federal agency is needed to rapidly develop and deploy technologies that will limit our vulnerability to terrorism.

On September 11th, our complex national aviation infrastructure became a brilliant weapons delivery system, both stealthy and asymmetrical. The attack was so successful that we should expect the group responsible, and other like-minded groups, to strike again at our homeland. The nation has rallied to improve security at airports, public buildings, and other likely targets. But these efforts have made painfully clear how vulnerable the country is to attackers willing to kill not only innocent civilians but themselves as well. Much must be done in all areas of homeland security before Americans feel safe again.

Technology will have to play a critical role. Indeed, technology will be every bit as important in ensuring homeland security as it has been historically in creating military dominance for the armed services. Of course, technology has already been enlisted in areas such as airport security, and technology exists that can be applied to homeland protection. But much work remains to be done to find additional ways in which existing technology could enhance security and to carry out the research needed to develop new technology to meet security needs.

The United States has no organization or system in place to fund and coordinate this technology development effort, and we cannot expect the effort to organize itself. We need to evaluate carefully what our homeland security needs are, think creatively about how technology can help meet those needs, and put in place a federal entity with the wherewithal to marshal and direct the necessary resources. A survey of the most obvious areas of national vulnerability demonstrates the profound need for accelerated technology development and deployment.

Aviation security. The World Trade Center and Pentagon attacks illustrated the insecurity of the nation’s commercial aviation infrastructure. The insecurity turns out to be even worse than the attacks illustrated. For example, in 1998 and 1999, test teams from the Department of Transportation’s Inspector General gained unauthorized and uninspected access to secure areas in eight major airports in 68 percent of attempts and boarded aircraft 117 times. Even after the terrorist attacks, the Inspector General testified that fewer than 10 percent of checked bags were being screened for explosives before being loaded onto the aircraft.

A comprehensive new screening system for detecting weapons and explosives on passengers and in their carry-on and checked baggage needs to be developed and deployed swiftly. Advanced surveillance systems are needed to prevent unauthorized access to secure areas, and advanced screening and tracking technology is needed to monitor everything loaded onto a commercial aircraft, including cargo and catering. In the past two months, opt-in “trusted passenger” systems, using biometrics for positive passenger identification, have been proposed for frequent travelers willing to provide personal information in return for faster check-in. Other technology-based options under discussion are real-time streaming video surveillance of cabin and cockpit transmitted to ground locations, installation of auto-landing systems on compromised aircraft, and biometrics for positive identification of all passengers. Many of the proposed technologies are either already used on a limited basis or could be commercially available in a relatively short time.

The Federal Aviation Administration maintains an R&D program for new technologies, but this must be augmented and accelerated. Critical new technology needs include much faster baggage-screening devices with much better imaging; computer systems that “read” and identify screened images; machines that search for a wider range of contraband; cheaper baggage-screening machines for smaller or mid-sized airports; and passenger-screening machines that can detect nonmetal weapons, explosives, and components of weapons of mass destruction.

Port security. The state of U.S. border control is as disturbing as the pre-9/11 aviation security scenario. In 2000, almost half a billion people, 11.5 million trucks, and 2.2 million rail cars crossed U.S. land borders, and security is being increased at these crossings. But the nation’s ports need special attention. Although numerous U.S. ports receive a steady stream of ships from throughout the world and are essential to the nation’s participation in global markets, there are no federal standards for the security of our shipping system, and no single federal agency is responsible for it. More than 200,000 vessels and 11.6 million shipping containers passed through U.S. ports during 2000. Fewer than one percent of containers are physically inspected. The shipping traffic simply overwhelms the current inspection system, which suffers from a lack of resources, poor coordination and communication between agencies, and inadequate inspection technology.

Port and shipping security belong at the top of the threat list for terrorism. Containers are an ideal delivery system for weapons of mass destruction, particularly chemical weapons or dirty nuclear bombs. With the use of a global positioning system (GPS) transponder, someone could remotely detonate an explosive at any location, making it almost impossible to identify the responsible party.

In recent Senate testimony, Cmdr. Stephen Flynn of the Coast Guard and the Council on Foreign Relations outlined elements of a potential systemic solution for the shipping part of this problem, which relies heavily on technology. He proposed “pushing out the borders” by locating Customs and Coast Guard officials and technology offshore, particularly in the world’s megacontainer ports. His proposal includes systems to prevent unauthorized access to loading docks, camera surveillance of loading areas, advanced and high-speed cargo and vehicle scanners, theft-resistant mechanical seals, continuous real-time monitoring and tracking of containers and vehicles in transit by GPS transponders and electronic tags, and electronic sensors to prevent unauthorized opening of containers in transit. Containers that meet advanced security requirements would travel on a fast track to ensure high-speed transport. A number of the technologies proposed are information- rather than hardware-intensive, making them less expensive. In addition to improving security, a more effective tracking system could help combat the industry’s serious theft problems, which would make the system more attractive to shippers.

Many of these technologies are either currently commercially available or are in the final stages of being evaluated and could be deployed within a matter of months if funding were available. The system as a whole needs to be carefully integrated, tested, and evaluated. Over the long term, there is a need for accelerated R&D on integrated technology for affordable rapid screening of large containers for a wide range of contraband and on technology for rapid dissemination of tracking and intelligence information.

Bioterrorism. Although numerous researchers and studies warned of the possibility of bioterrorism, the threat had not penetrated the public consciousness until the October anthrax attacks. We have already seen several fatalities, widespread public concern, and major disruptions to the postal system, yet experts continue to warn that future bioattacks could be far worse. In June 2001, the Johns Hopkins Center for Civilian Biodefense Studies simulated a complex smallpox biowarfare scenario. In this “Dark Winter” simulation, 16,000 people in the United States would have contracted smallpox within two weeks, and smallpox vaccine supplies would have been depleted. The simulation projected that within two months, one million people worldwide would have died. Although we have a vaccine for smallpox, we still do not have any effective treatment. And anthrax and smallpox are only the tip of the iceberg. A hundred other toxins and agents can be weaponized, and we have either very limited or no effective treatment for nearly all of them. Potential genetic modifications of these biological threats multiply the danger.

Remedies for most of the anticipated bioterror weapons do not exist, which presents special development problems. The biotech and pharmaceutical sectors are central to the innovation system in the medical field but have zero market incentive to develop effective vaccines or treatments for bioterror threats, because these products would have a market only in the face of a national disaster. Incentives such as patent, tax, and funding benefits will be necessary to enlist this industry in the battle against bioterrorism. Advance government agreements to purchase and stockpile remedies when developed may help.

In addition to participating in this major R&D effort, the U.S. scientific and medical enterprise needs to reorganize to advise on the management of an attack. To prevent or respond to both known and as yet unanticipated bioterrorism threats, it will need to set care priorities, allocate resources, and establish response protocols.

Cybersecurity. A secure information infrastructure, broadly defined, underlies many of the systems described in previous sections, as well as many of our financial systems, communications networks, and transportation and other critical infrastructure. In a typical denial-of-service attack against its Internet server in March 2000, a New York firm estimated that it lost $3.5 million in business because its customers were unable to conduct trading. But according to some experts, the mode of attack may be changing from active (distributed denial-of-service) attacks to “passive control,” in which an attacker takes control of certain critical infrastructure elements (including computers or networks) and manipulates them at will. The information technology sector has expanded rapidly without paying adequate attention to information security. Viruses and worms, including Melissa, I Love You, Code Red, SirCam, and most recently, W32/Goner, have spread with breathtaking speed across the globe. For example, Code Red infected more than 350,000 computers in less than 14 hours.

Although it is difficult to even briefly describe the breadth of current information infrastructure vulnerabilities, the pervasiveness of information technology in all of our critical infrastructure and economic systems demands that information security be a national priority in coming years. Information infrastructure protection and information assurance require increased interagency coordination for prevention, response, and recovery from active or passive attacks; increased commitment to the education and training of information security specialists; an active movement by industry to minimize vulnerabilities through anticipatory rather than simply reactive strategies; and a commitment of governmental and private resources to implement existing strategies for securing their information infrastructure. Although the challenges for deployment of existing technology are daunting, the R&D needs are even greater. We need an aggressive information security R&D effort in new technology approaches such as self-healing networks, sophisticated access controls, and sensor and warning systems.

A model for action

The federal government has played a critical role in defense R&D, and the homeland security technology needs outlined above require extensive federal involvement. The task crosses many federal agency jurisdictions and either falls outside established agency missions or is secondary to them, creating a requirement for cross-agency technology coordination, deployment, and development. Although the White House Office of Science and Technology Policy has been effective in coordinating some interagency science and technology (S&T) programs, the scale of the homeland security technology task would dwarf its limited capabilities and staffing. Besides, what is needed here is not just coordination but active technology development and deployment. It is hard to imagine how this can be accomplished without the creation of a new institution, and not just any institution will be up to the job. We need to ensure that the entity created is well suited to the task.

War spurred the creation of most U.S. government science agencies, and most of these integrate applied R&D in support of agency missions with elements of fundamental science to allow access to breakthrough opportunities. These integrated agencies, which focus on long-term technology development and employ sizable bureaucracies, do not appear to be lean or flexible enough for the current emergency. In 1958, President Eisenhower faced a similar problem and introduced a different model. Shocked by the Russian launch of Sputnik and determined to avoid another technological surprise, he created the Advanced Research Projects Agency [ARPA, later renamed the Defense Advanced Research Projects Agency (DARPA)]. Designed to spur rapid development of revolutionary technologies, the DARPA culture features talented, highly independent program managers with great budget discretion, flexible contracting and hiring rules, and a skeleton staff. DARPA identifies a technological need, and then the staff contracts with whatever federal agency, university, or company has the capability to meet that need. Once described as “75 geniuses connected by a travel agent,” DARPA has been one of the most successful technology development agencies in history, with the Internet as only one of its innovations. Because of its flexibility and speed, DARPA serves as a model for a federal program to address the technology needs of homeland security.

One DARPA effort is particularly instructive. In 1992, when the confluence of an economic recession and the end of the Cold War led to a reduction in funding for defense R&D, DARPA was called on to run a Technology Reinvestment Project (TRP) aimed at stimulating development of commercial technology that would also be useful to the military. DARPA received more than $1 billion over three years, enough money to entice other technology development agencies to its table. DARPA spearheaded a true interagency effort, using its funding to leverage other agency investments. The interagency group traveled around the country making face-to-face presentations about its program to private- and public-sector technologists. It succeeded in quickly setting in motion a number of industry-led targeted technology development projects in areas such as rechargeable lithium ion batteries and manufacturing techniques for display screens. The program was unpopular with many military officials who prefer defense-only technology, and it was cancelled when economic recovery eliminated the rationale that it was needed as an economic stimulus. Nevertheless, experience with TRP can be helpful in developing a quick-footed R&D response to homeland security needs.

A DARPA for homeland security

To survive in the Darwinian federal bureaucracy, the homeland security technology entity must be housed in a strong overall homeland security agency with a strong director. The homeland agency director’s power will be determined by whether the agency has a significant budget of its own to hire a staff and implement policy; whether it has the ability to review and revise budgets of other mission agencies active in the field; whether it has controls over how those budgets are implemented; and whether it has the authority to set overall missions and compel cooperation among competing agencies. Just as DARPA, at least in theory, has the secretary of defense as a bureaucratic champion, the technology entity will need a homeland agency director with significant bureaucratic leverage. This authority must be formalized in legislation. Power based only on presidential intervention, which is all that the homeland defense office has for now, is of limited value because no president has the time to participate in all the battles for influence. Because many other mission agencies are already engaged in work that is critical to homeland security, the new technology entity must itself have the ability to ensure R&D cooperation among the existing agencies. Marshalling and coordinating R&D in already-existing programs is the quickest and most effective way to get the job done. The technology entity can succeed without establishing its own S&T bureaucracy, but there are a number of powers and characteristics that will be essential to its work.

Its own pot sweetener. Just as DARPA’s TRP shared its funding to quickly bring other technology agencies into a truly cooperative relationship, the homeland technology entity will need its own R&D budget that can be used as a pot sweetener. This fund can be used to augment and accelerate targeted existing R&D efforts in federal agencies or private companies as well as to fund new projects. Technology entity project managers should have the flexibility to distribute funds to create cooperative partnership programs among universities, public-sector agencies and labs, and private companies, leveraging private funds with public wherever appropriate.

Lean and talented. The homeland technology entity should follow DARPA’s model of hiring outstanding talent in small numbers as project managers in key threat response areas and arming them with funding to make new development happen. The overwhelming public support for the fight against terrorism suggests that first-rate scientists and engineers would be willing to work for this entity in this time of crisis. The aim is not to build a new lab and a new staff-heavy science bureaucracy; it is to find technology leaders who can deploy existing lab and S&T resources to meet the new need. Glacial civil service hiring procedures and the inability to pull in private-sector employees on specific projects have hampered federal science agencies in their search for talent. DARPA has the authority to operate outside these restrictions, and the new technology entity will need similar powers if it is to ramp up quickly. Also important to the design of a new technology entity is the model of independent project managers empowered with significant decisionmaking and budget authority that DARPA has used for most of its history. Where speed and innovation are priorities, a flat, empowered model trumps a tiered, bureaucratic model.

Nationwide outreach. DARPA’s TRP demonstrated what a well-organized technology road show could accomplish for technology development. Cross-agency teams would travel and present threat problems and corresponding technology needs, aiming to encourage technologists and private-sector entrepreneurs to bring in new ideas and apply for funding awards or matches. For the technology entity, this “technology pull” exercise would help survey what approaches are available or could be rapidly developed, and help promote a face-to-face identification of available talent. We need aggressive outreach to solicit the best ideas from the best minds, not an armchair approach that waits for proposals to drift in.

Procurement flexibility. DARPA has the power to operate outside the traditional slow-moving federal procurement system and to undertake rapid and flexible R&D contracting. The homeland technology entity will need these same powers. It should also be able to extend its procurement flexibility to cooperative efforts with other entities, giving other agencies a further incentive to cooperate because participation would bring access to these new procurement powers.

Integrated development, deployment, evaluation, and testing. Whereas DARPA tries to integrate applied science with fundamental science to pursue breakthrough technologies, the homeland technology entity will have a more complex mission. In the long term, it will need to develop breakthrough technologies for homeland security, but in the short term it will need to survey and promote the deployment of existing technologies, often across agency lines. In addition, it will need to test and evaluate existing security capabilities as well as potential new systems. These tasks need to be integrated so that each stage can learn from the others. This complex array of technology tasks reinforces the argument that the new entity must have enough power to command respect and marshal cooperation from other parts of the government.

Governance. The new technology entity will need to pull other agencies into its governance structure to help enlist their cooperation through involvement and participation. It likely will work best if participating agencies feel shared ownership. This probably requires a bifurcated structure: (1) a council of senior S&T leaders from other mission agencies involved in homeland security for overall policy direction, and (2) working groups of project managers from affected agencies organized around developing detailed programs in each key threat area. A director with a strong technology development background could wield overall executive and administrative powers for the new entity, chairing the council and working with the entity’s project managers, who would chair the working groups. The governance structure must create cooperative buy-in among agencies, which must see this as a shortcut to solving security problems they face in their jurisdictions. S&T cannot be ordered into existence; it has to be nurtured.

Roadmap. An immediate task of this cooperative structure would be to develop a plan for finding and proposing deployment of existing technology opportunities and developing new ones. Threats and vulnerabilities will have to be assessed and focus areas set. This could incorporate a classic technology road-mapping exercise. Provision will need to be made for updates and revisions as new threats and opportunities materialize. There can be no order in the current homeland technology chaos unless there is a coherent planning exercise. This planning will be a key mechanism for winning the allegiance of other agencies and fixing their technology roles and commitments.

The deployment dilemma. DARPA has long faced problems in persuading the armed services to deploy technologies it has developed, because DARPA can’t control or significantly influence service acquisition budgets. A homeland technology entity faces similar problems in enticing a host of other agencies to adopt technologies it develops. Given the nation’s vulnerabilities, we cannot afford to have a homeland technology entity face institutionalized technology transition barriers. Its parent homeland security agency must be able to influence the deployment budgets of involved mission agencies. In addition, the homeland agency may need a special technology transition fund to encourage deployment by these agencies. Regular reporting by the technology entity to the president and Congress on technology progress and opportunities could create additional awareness of research progress and corresponding pressure for prompt deployment.

The scope of the homeland security threat is so broad and deep that a new technology enterprise seems mandatory in facing it. Interestingly, the kind of model discussed here may have broader relevance. The U.S. science enterprise is still living under an organizational structure fixed in place a half century ago. Since then, scientific advance has increasingly required cross-disciplinary approaches, which in turn dictate cross-agency and public-private efforts. Yet we are not organized for these new kinds of approaches. A homeland technology entity, purposely created to cross agency, disciplinary, and sectoral lines and to promote cooperation across these lines, could provide a model for a new kind of organization for the new S&T advances we must have in all areas.

Finally, we have discussed the security of the U.S. homeland throughout this piece because that is the conceptual framework our government is using. But we need to recognize that this is only part of the picture. A security system will not work if it stops at our borders. Like it or not, the United States is so inherently open and our vision so global in reach that our thinking about homeland security has to push out our concept of borders and contemplate global systems of security. Homeland must be very broadly defined.

This further complexity underscores the need for technology advances in obtaining higher security. The National Guardsmen now deployed at U.S. airports daily remind us that manpower alone will not ensure security. Technology deployment is crucial, and technology intensity will be as crucial to a new system of security for this new political landscape as it was for military superiority in the Cold War and the Gulf War. A new homeland technology entity based on a new organizational model will be central to that technology development and deployment. We need both new tools and new ways to build them.

Recommended reading

Department of Transportation Inspector General Web site, links to testimony, statements, and audits, including “Deployment and Use of Security Technology,” before the House Committee on Transportation and Infrastructure, October 11, 2001, and “Status of Airline Security After September 11,” before the Senate Committee on Governmental Affairs, November 14, 2001.

Stephen E. Flynn, “The Unguarded Homeland: A Study in Malign Neglect,” in How Did This Happen? Terrorism and the New War, J. F. Hoge and G. Rose, eds. (Council on Foreign Relations, Inc., New York, 2001), 183–197.

House Committee on Transportation and Infrastructure Hearing, “Checked Baggage Screening Systems: Planning and Implementation for the December 31, 2002 Deadline,” December 7, 2001.

T. O’Toole and T. Inglesby, Shining Light on Dark Winter (Johns Hopkins Center for Civilian Biodefense Studies, 2001).

Potomac Institute for Policy Studies, A Review of the Technology Reinvestment Project, PIPS-99-1, 1999; and A Historical Summary of the Technology Reinvestment Project’s Technology Development, PIPS-96-1, 1996 (Arlington, Va.).

S.1764, Biological and Chemical Weapons Research Act, introduced by Sen. Joseph I. Lieberman; statement on bill introduction, Congressional Record, p. S12376-S12384, December 4, 2001.

Senate Committee on Governmental Affairs Hearings: “Has Airline Security Improved?” November 14, 2001; “Federal Efforts to Coordinate and Prepare the United States for Bioterrorism: Are They Adequate?” October 17, 2001; “Legislative Options to Strengthen Homeland Defense,” October 12, 2001; “Critical Infrastructure Protection: Who’s in Charge?” October 4, 2001; “Weak Links: How Should the Federal Government Manage Airline Passenger and Baggage Screening?” September 25, 2001; “Responding to Homeland Threats: Is Our Government Organized for the Challenge?” September 21, 2001; “How Secure is Our Critical Infrastructure?” September 12, 2001.


William B. Bonvillian is legislative director and chief counsel to Sen. Joseph I. Lieberman of Connecticut. Kendra V. Sharp is an assistant professor of mechanical engineering at the Pennsylvania State University.

Book Review: Information warfare

Strategic Warfare in Cyberspace, by Gregory J. Rattray. Cambridge, Mass.: MIT Press, 2001, 517 pp.

Bruce Berkowitz

Several books about information warfare (IW) have appeared in recent years. Government officials and industry leaders are more concerned than ever about the vulnerability of the U.S. information infrastructure. Military experts fear that terrorist groups or hostile armies might attack U.S. computers and communications systems. The Department of Defense (DOD) reemphasized this concern as recently as October 2001, with the release of its Quadrennial Defense Review (QDR), its top-level planning guidelines for military spending. The QDR assigned a high priority to defending against possible IW threats and to exploiting the potential of attacking our adversaries’ own information systems.

Strategic Warfare in Cyberspace is different from most books on the subject. It is probably the first original book-length study about the topic written by someone who actually works in IW operations. Gregory Rattray, a lieutenant colonel in the U.S. Air Force, currently commands the 23rd Information Operations Squadron at Fort Meade, Maryland, where he is responsible for developing Air Force IW tactics. Previously, Rattray served as deputy division chief for Defensive Information Warfare at Air Force headquarters. These assignments have given Rattray a hands-on perspective from which to view the technology, politics, and wartime experience that have led to the current state of U.S. IW plans and policy.

Rattray reviews recent developments in information technology. He observes that, although the recent information age has created new products and businesses at an astonishing rate, it has also created new vulnerabilities. Advanced computers and communications systems have led to major advances in war-fighting capabilities, but they have also made military forces more vulnerable to attacks on these systems.

He also provides an overview of the main components of the civilian and government information infrastructure, pointing out how developments such as deregulation of the telecommunications industry have made the infrastructure harder to protect. On the one hand, the pressure to cut costs has led to systems that meet only normal operating demands and do not provide sufficient redundancy. (The loss of a single Verizon switching facility in lower Manhattan during the strikes on the World Trade Center caused major telephone tie-ups in the region; some glitches persisted for months.) On the other hand, the end of AT&T’s long-distance monopoly forced the government to deal with multiple operators in developing security measures.

The recent development of Internet-based industries has also created new vulnerabilities and problems for defense. Many of the new companies have little experience in cooperating with government, and the technology itself raises new issues. For example, defending against “denial of service” attacks requires all server operators to protect their systems; organizing such cooperation can be challenging. Perhaps the creation of the new Office of Homeland Security will improve this situation by raising the visibility of the problem and creating a single point in the government where the issue can be addressed.

From his post on the Air Force staff, Rattray has been in a position to see how DOD has tried to address these vulnerabilities. Indeed, most of the book is devoted to defensive IW, in part because the details of most offensive IW planning remain classified. Some infrastructure vulnerabilities he discusses are familiar to the public, such as the susceptibility of communications to jamming. Some are well known to specialists, such as the vulnerability of financial databases. Others are much less familiar, such as supervisory control and data acquisition (SCADA) systems that control transportation systems, pipelines, and other infrastructure components. These systems are all vulnerable to three kinds of IW attack: mechanical (bombing, for example), electromagnetic (jamming or frying circuits with transmissions), and digital (inserting bogus data into an information system to deceive the users or cause it to crash).

Quick kills

Much of the book is devoted to exploring the parallel between the development of strategic bombing and the development of IW. Both were made possible by the introduction of a new technology (long-range aircraft for strategic bombing; digital electronics for IW). Both were promoted as offering the possibility of a quick kill and a revolution in warfare. Both promised to allow armies to leapfrog the front lines and attack an enemy’s rear directly. It took nearly 50 years for the full potential of strategic bombing to be realized. Rattray implies that it might take as long for IW to become an effective weapon.

In making this argument, Rattray might also have noted that pundits, proponents, and theoreticians were way ahead of reality in assessing both the effectiveness and the threat of strategic bombing in the early 1900s; today’s enthusiastic IW proponents may be similarly overoptimistic. IW will be a key component of future military campaigns, if only because information technology is becoming pervasive. However, there are inherent limits to IW.

For the foreseeable future, IW is most likely to be used to facilitate conventional warfare. For example, the ultimate achievement of strategic bombing, Rattray writes, was the ability to target a bomb with such precision and reliability that specific buildings or military facilities could be destroyed with virtually no collateral damage. The United States achieved this goal during the Balkan air campaigns of the mid-1990s. It will take some time before an information system can be controlled with such precision and reliability that the military can use it to “break things and kill people.”

Much of Rattray’s analysis deals with how the armed services adopt a new form of warfare. Change must proceed in a step-by-step process. First a technology emerges. Then someone proposes a concept for how it might be used to fight wars. Eventually, at least one branch of the armed services develops a doctrine explaining how it would organize itself to use the technology. Commanders translate this doctrine into specific weapons requirements. The weapons are developed and deployed. With experience, refinements are made in tactics and strategy.

This process may seem arcane to the average reader, but it is all too familiar to anyone who works on IW issues at DOD. It is also essential for understanding the difficulties of military reform and why bureaucracies are so resistant to change. The ability of an organization to move successfully from one stage to another depends on many factors. Rattray discusses several of them, including whether the organizational environment rewards innovation, whether an organization has innovative managers and the required technical expertise, and whether regulation inhibits investment. These factors are, in effect, leverage points that officials can influence to encourage reform.

IW policy evolves

The best and most original parts of the book, from a historical perspective, are the last 100 pages, which cover the development of IW policy during the 1990s. Rattray shows how thinking about IW went from existing work on traditional forms of electronic countermeasures to more sophisticated ideas about influencing and manipulating an adversary’s information systems. He recounts how the government inched forward to develop doctrine, policies, and programs.

Despite much detail, there are several developments that Rattray should have included but did not. He does not cite some of the earliest work in IW thinking, sponsored by DOD’s Office of Net Assessment in the late 1970s and throughout the 1980s. As part of its mission to compare U.S. and Soviet forces, the office discovered that the Soviets were concerned about U.S. “radio electronic combat” efforts. In fact, these efforts were negligible, but the fact that the Soviets seemed to be concerned spurred some of the first studies leading to IW efforts. The book also seems to have an Air Force-centric view of IW (the extensive work by the Navy during the 1980s is hardly mentioned), although this probably stems from Rattray’s background rather than bias.

Also, the book’s cutoff date appears to have been late 1999, which should have given Rattray time to analyze IW efforts in Operation Allied Force, the NATO military campaign to force Serbian troops out of Kosovo. DOD’s own analysis concluded that IW efforts in Allied Force were a failure. According to official reports, attempts to shape Serbian perceptions were “amateurish.” The United States also entered the war without having resolved many of the policy and legal issues that are raised in targeting enemy computer systems. These missteps led to major changes in how DOD has organized itself for IW operations. For example, it was partly because IW was so ineffective during the Kosovo operation that DOD decided to consolidate responsibility for offensive and defensive IW planning in the U.S. Space Command.

One recurring theme in the book is that effective defense against IW attack will require closer cooperation among organizations that traditionally have not worked together effectively. Rattray explains the connections required among the military services, law enforcement agencies, industry, and regulatory bodies. He notes that these organizations made considerable progress when preparation for the Y2K rollover compelled them to work together. At the same time, he writes, the various virus attacks that tied up portions of the Internet in early 2000 demonstrate that more effort is required. Although these attacks were not earthshaking, they exposed vulnerabilities that could be exploited by a more sophisticated adversary with more people and funding.

In all, this is a useful book that explains both the principles and politics of a form of warfare that will continue to be important as long as people use information technology. Rattray concludes that the longer we wait to adopt policies to prepare for cyber attack, the more difficult it will be to do so–words worth keeping in mind as terrorists prove more innovative and determined than ever.


Bruce Berkowitz ([email protected]) is a research fellow at the Hoover Institution in Stanford, California.

Book Review: Return of the gadfly

Science, Money, and Politics: Political Triumph and Ethical Erosion, by Daniel S. Greenberg. Chicago: University of Chicago Press, 2001, 530 pp.

David M. Hart

Webster’s defines gadfly as “an intentionally annoying person who stimulates or provokes others especially by persistent irritating criticism.” In the case of Daniel Greenberg, who has been the gadfly of the U.S. scientific establishment for four decades, stimulation and provocation have often been leavened by wit and always motivated by sharp intelligence. Greenberg has made a career of puncturing the self-important puffery that sometimes passes for public discourse in this community, discerning the self-interest and turf conflicts that typically lie beneath high-flown rhetoric. He cultivated this unique sensibility as the first news editor of Science in the 1960s and then as the proprietor of Science and Government Report, which he wrote, edited, and published between 1971 and 1997. Along with the news, Greenberg brought us memorable characters such as Dr. Grant Swinger of the Center for the Absorption of Federal Funds, whose motto “something always comes along” remains as apposite today as ever.

Greenberg’s first book, The Politics of Pure Science, originally published in 1967 and recently reissued by the University of Chicago Press, provided more than one generation of students with a fresh perspective on the relationship between science and government in the United States. It reviewed what was known at the time about the history of this peculiar relationship and analyzed in depth some of the mega-projects of the era, such as the ill-fated Mohole, which was supposed to drill a deep hole in the ocean floor but wound up drilling one in the National Science Foundation budget instead. Most memorably, the book provided pithy characterizations of its subject that still ring true. The scientific community, Greenberg wrote, evinces “chauvinism” (in favor of its craft), “xenophobia” (toward outsiders who might intrude on it), and “evangelism” (aimed at prompting those outsiders to share the chauvinism). Although always skeptical and sometimes ironic, The Politics of Pure Science nonetheless maintained a sense of humor and an appreciation for the good will that motivated even the most pathetic antics that it chronicled.

I would like nothing better than to report that Greenberg’s new book meets the high standard that he set in his first book and sustained throughout his career. But I cannot do so. Science, Money, and Politics is badly in need of an editor. Greenberg devotes many pages to minor episodes that divert attention from his main arguments. The book is repetitious as well. Worst of all, it is tendentious. Greenberg’s wit and tolerance of human foibles have been swallowed up by cynicism.

In spite of the book’s literary flaws, Greenberg’s admirable record of tilting at the conventional wisdom and breaking comfortable silences impels us to weigh the book’s substantive arguments carefully. The gadfly delivers a stinging three-count indictment of the contemporary scientific community and adduces a large body of evidence to support it.

The first count alleges (to put it even more baldly than Greenberg does) that scientists will do virtually anything for money. Underlying the insatiable demand for funding, he emphasizes, is the exponential growth of the number of would-be principal investigators. Science faculty members do not practice birth control in producing graduate students, in large part because they need graduate student labor in order to publish and not perish. As a result, each generation is larger and more desperate for support than the preceding one. Industrial sponsorship of research and more recently the prospect of massive equity payoffs have added fuel to the funding fire. Greenberg supplies some shocking anecdotes to support this claim, such as the MIT professor who was accused of using homework assignments as a method of corporate espionage. More important, he describes institutional failures to preserve scientific integrity, such as the forced departures of the top editors of the New England Journal of Medicine in the face of revenue-generating pressure from the Journal’s owner, the Massachusetts Medical Society.

This count of the indictment warrants further investigation. There is evidence that patients may be suffering and even dying because conflicts of interest are ignored. There is evidence that universities are pushing faculty to produce patents, most of which have no economic or scientific value. However, there are enormous disciplinary and institutional variations in these trends. Moreover, in the cases described by Greenberg in which concerns about “grubbing for money” were most acute, efforts were made to defend the traditional norms of science. One of the most important developments in science policy in recent decades is the emergence of what Rutgers professor David Guston has labeled “boundary organizations,” which attempt to mediate systematically between scientific organizations and their societal environments and to resolve conflicts that emerge where they intersect. Not all of these organizations are the miserable failures that Greenberg assumes them to be. University research administrators and technology transfer offices at their best, for instance, can shield faculty members from objectionable conditions that sponsors may try to impose. Members of the scientific community, particularly the leaders of its institutions, should examine the record closely to learn lessons, both positive and negative, from the diverse experiences that are accumulating.

Scare tactics

The second count of the indictment maintains that in their quest for public funding scientists regularly resort to scare tactics. The community’s “report industry,” Greenberg argues, can be counted on to produce volumes of justification tailored to suit any crisis. Dr. Grant Swinger never makes an appearance in Science, Money, and Politics, but his spirit hovers over it. Many episodes of opportunistic report writing appear in the book, most of which are best forgotten. He devotes nearly 40 pages, for instance, to a cascade of reports during the 1980s and 1990s claiming with little foundation that the U.S. would soon face a shortage of Ph.D.’s. Greenberg worries that such intellectual elasticity will ultimately trigger a public backlash. He finds, however, few traces of such a reaction. The evidence on this count may be strong, but the crime is little more than political jaywalking, taken in stride by the citizenry and their representatives in Washington.

Indeed, Greenberg shows that the well of public credulity with respect to science is dangerously deep. If the public will believe virtually anything, it hardly matters what is funded. Pork is as good as peer review. On this point, Greenberg’s journalistic acumen produces evidence that goes beyond the ordinary, most notably Clinton science advisor Jack Gibbons’ candid comparison of the superconducting supercollider (SSC) and the international space station. The space station, Gibbons confesses, was scientifically unjustifiable but politically unstoppable. The SSC, on the other hand, was “good science…but not that well connected to people or jobs.” Gibbons is at most a reluctant accomplice to the killing of that good science and the care and feeding of the white elephant that the space station has become. The perpetrators are politicians who approach science policy as just another way to bring federal dollars to their constituents.

The third count of the indictment charges scientists with abandoning their social responsibilities. Greenberg believes that scientists, as the creators of powerful and potentially dangerous knowledge, are obliged to help their fellow citizens make good decisions about its uses. He chronicles the attenuation and occasional silencing of some significant voices for responsible science, including the Federation of American Scientists, plagued by slumping membership, and MIT’s Technology Review, now reinvented in the gee-whiz mode of Popular Mechanics. Senior statesmen of science, he shows, now double as consultants. These are telling points, yet the case is not closed. The “greatest generation” was not entirely composed of paragons, as Jessica Wang has shown in her book on post-World War II anticommunism in science, and not all the causes of 1960s liberalism were worthy. Comparisons to more realistic historical standards might yield a somewhat kinder judgment than that found in this book.

This judgment is particularly and unduly harsh with respect to scientific leaders who have answered the call by serving the country in governmental advisory positions in recent decades. They have been “humbled,” “tamed,” and “politically neutered,” according to Greenberg. Politics, in short, triumphs over science. Yet in many of the cases cited in Science, Money, and Politics, there are scientists on both sides of the issue and enough wiggle room in the facts for all of them to make a plausible case that they have effectively upheld their principles. Would we prefer, in any case, that the experts be on top, rather than on tap? Greenberg sometimes leaves the impression that he would, as when he advocates that scientists become more involved in electoral politics, both as candidates and as supporters.

This position, so incongruous in the context of the indictment but nonetheless stated repeatedly, suggests that idealism lies beneath Greenberg’s crusty surface. Despite all he has witnessed, he believes in science with a capital S: unambiguous truth revealed by dint of human ingenuity and hard work. The shades of gray that pervade the borderlands where science meets society frustrate the true believer. Over the years, the abrasion of that idealism against Washington reality produced many enlightening sparks. One hopes that more gadflies will follow where Greenberg has led.


David M. Hart ([email protected]) is associate professor of public policy at the John F. Kennedy School of Government, Harvard University, and the author of Forged Consensus: Science, Technology, and Economic Policy in the U.S., 1921–1953 (Princeton University Press, 1998).

Book Review: Oil and war do mix

Resource Wars: The New Landscape of Global Conflict, by Michael T. Klare. New York: Metropolitan Books, Henry Holt and Co., 2001, 289 pp.

Richard A. Matthew

Throughout his career, Michael Klare has written engaging, thoughtful, and timely pieces on emerging security issues. His latest book, Resource Wars, is another very readable and remarkably well-timed work that ought to be a welcome addition to the desks, night tables, and reading lists of all those interested in contemporary world affairs.

In particular, chapters 2 and 3 should make this book immediately appealing to a broad audience. These chapters offer a concise and well-documented discussion of the links among oil, conflict, and national security that provides a useful framework for understanding the terrorist attacks of September 11. Klare makes it very clear just how important Persian Gulf oil is to the economies of the United States, the Middle East, and the rest of the world. For example, oil provides 39 percent of the world’s energy, a share that is not likely to decline by much over the next 20 years. Five Gulf states (Iran, Iraq, Kuwait, Saudi Arabia, and the United Arab Emirates) possess two-thirds of the world’s oil, completely dwarfing the nonetheless significant supplies that exist in the Caspian and North Seas, Venezuela, Mexico, Russia, Nigeria, and the United States. As long as economies depend on oil, the strategic value of this region is assured.

Klare explicitly situates the terrorist activities of Osama bin Laden and his associates in the context of U.S. efforts to forge a mutually beneficial relationship with the government of Saudi Arabia and to constrain the hostile actions of Iraq and Iran in order to protect access to Persian Gulf oil. He also provides details about the several-year effort to track and capture bin Laden that raise anew tough questions about the dramatic failures of U.S. intelligence.

Resource Wars thus makes a valuable contribution to current efforts to explain al Qaeda’s attacks on U.S. soil. But even were this not the case, the book would be of wide interest, for it takes a strong position on an issue that has been the subject of extensive and often acrimonious debate for over a decade.

Conflict after the Cold War

Since the end of the Cold War, scholars, policymakers, politicians, and others interested in world affairs have discussed the global prospects for war and peace with renewed vigor. On one side are the pessimists, who perceive a rapid, widespread, and probably unstoppable slide toward instability and violent conflict. They argue that the constraints on the use of force that existed during the Cold War have been removed. They contend that communication and transportation technologies have increased contact between cultures that are suspicious of or hostile toward each other. In particular, U.S. culture has swept across the planet in a flood of Marlboro cigarettes, fast foods, and Baywatch episodes that threatens to overwhelm the values and customs of many local cultures.

Pessimists also worry that the deepening of North-South inequalities and the cumulative effects of rapid population growth, economic failures, environmental stresses, and political corruption, all of which are predominant in the developing world, are producing a pool of underemployed, angry, and often well-armed youth, wildly trigger-happy and highly susceptible to various forms of fanaticism. Robert Kaplan sees signs of a “coming anarchy”; Samuel Huntington speaks of the “clash of civilizations”; Benjamin Barber writes about “Jihad versus McWorld”; and Thomas Homer-Dixon spotlights “environmental scarcity and diffuse civil violence.” In each case the message is clear: The next few decades are likely to be marked by great misery and conflict in the world.

These claims have been energetically contested by a remarkably optimistic group of analysts and commentators, including William McNeill, Francis Fukuyama, Thomas Friedman, and Michael Doyle. Encouraged by the expansion of democracy, trade, and human rights, they see many positive trends at the global level. The end of the Cold War, they suggest, has made it possible for the United Nations to play a more effective role in conflict resolution and peacemaking. New technologies have fostered information sharing, confidence building, and cross-cultural understanding, and they have enabled the creation of transnational coalitions aimed at saving the environment, ending the scourge of landmines, promoting education, and providing opportunities for development.

These analysts argue that the ideological rivalries that shook the world throughout the 20th century have largely come to an end, and democratization and trade are now laying the foundations for world peace and prosperity. The motivations for conflict are gradually being undermined as more people discover a stake in the international system and experience at first hand the benefits of, and the need for, extensive and permanent cooperation. The conflicts evident today are not a foretaste of global collapse but rather the last barrage from pockets of resistance to world peace, or the anxious outbursts of groups not yet satisfactorily integrated into the world system. The former groups must be uprooted through international coalitions; the latter must be given aid and encouragement.

Klare’s take on this debate is clearly presented in the final pages of Resource Wars. “Whereas international conflict was until recently governed by political and ideological considerations, the wars of the future will largely be fought over the possession and control of vital economic goods–especially resources needed for the functioning of modern industrial societies.” Indeed, “resource wars will become, in the years ahead, the most distinctive feature of the global security environment” as demand, boosted by population growth and economic development, increasingly exceeds nature’s capacity to supply many essential commodities.

One of the implications of this thesis is that “regions that once occupied center stage, such as the east-west divide in Europe, will lose all strategic significance, while areas long neglected by the international community, such as the Caspian basin and the South China Sea, will acquire expanded significance.” Unfortunately, these resource-rich areas are generally unstable places in which competing interests are quick to use force.

In Klare’s worldview, Africa, with its vast stores of untapped energy, timber, and mineral wealth, emerges as the hot spot of the future. He predicts that conflict there and elsewhere will frequently take the form of civil strife, often amplified by outside interests that are increasingly willing to send in private military companies, such as Executive Outcomes and Sandline International, to protect their holdings and keep resources flowing as freely as possible. At times, however, competition for natural resources will trigger interstate wars, especially over access to scarce transboundary supplies of water and oil.

At the conclusion of the book, Klare argues on behalf of a “resource-acquisition strategy based on global cooperation” and suggests that robust international institutions managing energy and other resources might be set up that would significantly reduce the incidence of violent conflict. But this brief discussion does little to change the foreboding tone of the rest of the book. Ultimately, Klare must be aligned with the pessimists who worry that the world lacks the will and the capability to prevent widespread conflict in the decades ahead. The emphasis on the growing importance of the link between natural resource scarcity and interstate warfare is Klare’s particular contribution to this perspective.

The implications of environmental scarcity

Klare’s approach to the general debate over the present and future prospects for war and peace places his study in a literature about the security implications of environmental change that has mushroomed over the past 10 years. Proponents of this line of inquiry generally contend that the rate and magnitude of environmental change, largely due to human activities, are unprecedented in human history. They examine the social effects of this remarkable period of environmental change, especially insofar as national or human security is concerned. Highlights of this literature include: 1) claims by scholars such as Ronnie Lipschutz, Daniel Deudney, and Aaron Wolf that interstate resource wars are unlikely because developing substitutes, shifting to alternative commodities, or acquiring resources through trade are almost always more cost-effective approaches than the use of force; 2) arguments by writers such as Peter Gleick and Daniel Hillel that the threat of interstate resource wars, especially over water and oil, is in fact increasing; and 3) arguments exemplified in the work of Homer-Dixon and Norman Myers that environmental scarcity is most closely linked to subnational conflict and human insecurity.

Klare’s work falls mainly into the second category of analysis, although elements of his study also endorse Homer-Dixon’s well-known position. Klare constructs his argument that control of resources will be the primary motivation for future civil and interstate wars by considering, in turn, examples of conflict or potential conflict related to oil, water, and minerals and timber.

The four chapters on oil, which include a useful overview of the issue as well as detailed case studies of the Persian Gulf, the Caspian basin, and the South China Sea, are the strongest chapters in the book. The case of the Persian Gulf has received considerable scholarly attention, but the other two are less well known. Klare does not add a great deal of new information to the field, but he provides the reader with an excellent and very accessible analysis.

The two chapters on water are somewhat less comprehensive but still provide a useful overview of the challenges posed by growing water scarcity. Klare examines the potential for conflict in four shared river basins–the Nile, Jordan, Tigris-Euphrates, and Indus–that have been widely studied in recent years by scholars such as Gleick, Miriam Lowi, and Arun Elhance. The single chapter on minerals and timber largely reiterates Homer-Dixon’s thesis about the deepening linkages between environmental scarcity and diffuse subnational conflict. With its very brief discussions of resource-driven conflicts in Bougainville, Sierra Leone, and Borneo, this chapter appears to be almost an afterthought to the far more extended treatments given to oil and water.

The bottom line

Klare is rather parsimonious in acknowledging the work of other scholars in the field. This leads to three shortcomings. First, the bibliographic material is not very extensive, and the reader is rarely guided toward other key works. Second, although Klare makes some allusions to the literature on geopolitics, he does not build on this or on recent treatments of it. Consequently, an obvious question–to what extent does his vision of the future suggest that the world is revisiting the violent competition for resources that characterized the colonial era before the 20th century–is left unanswered.

Third, and more seriously, counter-arguments receive virtually no consideration at all. One would have liked Klare to take on directly the arguments of Lipschutz, Deudney, Homer-Dixon, Wolf, and many others. These authors contend that there is little empirical evidence supporting a causal relationship between resource scarcity and interstate war. Moreover, they believe that the factors that make interstate resource wars unlikely will probably continue to dominate, even in tense settings where shared river basins cannot meet the demands of all riparian states or the rights to lucrative oil fields are contested. Specifically, they say that there are almost always alternatives to war that can be pursued more cheaply. Responding to these concerns would have strengthened Klare’s case.

But ultimately, these are minor flaws in an important and generally well-crafted book. In fewer than 300 pages, Klare provides a very well-written introduction to key strands of the environmental security literature, makes an important contribution to the debate over the worldwide prospects for war and peace in the coming decades, and provides a framework for understanding some of the motivation for al Qaeda’s terrorist actions.


Richard A. Matthew ([email protected]) is assistant professor of international and environmental politics in the Schools of Social Ecology and Social Science at the University of California at Irvine, and director of the school’s Global Environmental Change and Human Security Research Office.

Book Review: Loose numbers

Damned Lies and Statistics: Untangling Numbers from the Media, Politicians, and Activists, by Joel Best. Berkeley and Los Angeles: University of California Press, 2001, 190 pp.

It Ain’t Necessarily So: How Media Make and Unmake the Scientific Picture of Reality, by David Murray, Joel Schwartz, and S. Robert Lichter. Lanham, Maryland: Rowman & Littlefield Publishers, 2001, 248 pp.

David S. Moore

“Report deplores science-media gap,” declared a 1998 headline in Science magazine. The article noted a sample survey disclosing that journalists think that scientists are arrogant, whereas scientists think that journalists are ignorant. One result relevant to the books under review: 82 percent of the scientists agreed that the “media do not understand statistics well enough to explain new findings” in medicine and other fields.

Here are two more books, both competent, that explore for general readers the interactions among science, social activism, the media, and the loose numbers that often result from statistical studies of complex and vaguely defined problems. They have much common ground. They overlap in their discussion of prevalent abuses of data: vague and varying definitions, imperfect measures, partial results reported without adequate context, and so on. Both abound in amusing or infuriating examples. Both point to the way in which “good causes” tend to attract bad statistics. Neither is systematic in describing the weaknesses that occur in the use of statistics in public discourse or in the standards needed for good practice. Despite these similarities, they are quite different books by authors with different backgrounds. I found it remarkable how few of the same examples appear in both books.

Joel Best is a sociologist, and he concentrates on social statistics, or more precisely bad social statistics and why they won’t go away. The greatest strength of Damned Lies and Statistics is its consistent presentation of the sociological context of bad statistics. Best notes that social problems are “constructed” in the sense of being singled out for attention and promoted as serious until the public, previously indifferent, comes to regard hate crimes or child abuse, for example, as self-evidently major problems requiring action. Statistics, even the professional products of the Census Bureau and the Bureau of Labor Statistics, also reflect social construction. Those who doubt this should ponder the official definition of “unemployed,” which bears little relation to the everyday meaning of the word.

Best describes the sociology of activist groups, reminding us that dedication to a just cause insufficiently acknowledged by the public at large justifies (in the eyes of the dedicated) what skeptics regard as abuses. What is more, the activists have a point: Social phenomena have “dark figures,” unrecorded occurrences that may outweigh reported cases. But when activists believe that their cause is just and are certain that the dark figure must be large, the slippery slope awaits: Overly broad definitions expand the problem, and shocking examples suggest without actually asserting that all the cases covered by the broad definition are as horrifying as the examples. Because activists believe that no one understands the problem as well as they do, they are entitled to give “estimates” when reporters want numbers. Even sound data give rise to “mutant statistics,” which are simpler or more compelling than their ancestors and so well adapted to the ecology of the press and the public that they drive out more accurate but less dramatic numbers. These themes are nicely explained and even more nicely illustrated by numerous examples. We see, for instance, how “an estimate that perhaps 6 percent of priests in treatment were at some point attracted to young people was transformed into the ‘fact’ that 6 percent of all priests had had sex with children.”

Damned Lies and Statistics alludes in passing to the needs and practices of the media. It Ain’t Necessarily So, which cites Best’s other work several times, focuses on media reporting of research studies that are judged to be of public interest. Although the book is not ostensibly about statistics, statistical issues are pervasive. Indeed, roughly half the book is devoted to demonstrating that press reports regularly overlook statistical weaknesses in research.

Like Best, David Murray, Joel Schwartz, and S. Robert Lichter bring a social science perspective to their subject: Two were trained as political scientists, one as a social anthropologist. Unlike Best, they inform their readers at length about the culture of the press and more briefly about the culture of science. They show how scientists seeking an audience for their work collaborate with the media in oversimplifying conclusions, omitting the context of other studies, and emphasizing the interest of the findings while neglecting weaknesses in the evidence. Although activists are prominent in the book, scientists and especially journalists occupy center stage.

The great strength of It Ain’t Necessarily So lies in the many carefully documented case studies of exactly what different media outlets reported or failed to report about specific issues and of the quality of follow-up reporting in instances in which a scientific consensus eventually emerged. The authors place these case studies in an explanatory framework that emphasizes that “news” is a manufactured product, and they attempt to clarify the process by which events become news. Most of their examples are drawn from the print media, where it is easier to document exactly what was reported. It will surprise no one that the New York Times and the Wall Street Journal often differ in what their staffs consider newsworthy, as well as in the details they choose to publish or omit. That the authors’ own judgments can sometimes be criticized is no argument against their theme.

The authors’ portrait of journalists has something in common with Best’s picture of activists. Journalists are motivated by a noble desire to right wrongs, unmask hidden evils, and uncover ulterior motives. They are often partial to a simplified “villain, victim, hero” narrative and to an adversarial style that leads, in the case of science reporting, to the Food Marketing Institute’s Tim Hammonds’s wry dictum that “for every Ph.D., there is an equal and opposite Ph.D.” Journalists like and respect data, which seem solid, but dislike the qualifications and uncertainties that scientists accept and expect. Of course, journalists face a difficult task in summarizing a complex world in a finite number of words. The authors are careful not to cast them as villains or incompetents and appear to be trying to clarify journalistic practices for the rest of us. I would guess that most scientists and all scientific societies have already heard the authors’ warning that “other players will shape and construe the results and carry them to the public in partial form. Moreover, all findings will be apprehended in terms of cultural understandings that the media bring to bear on any individual story. It follows that developing a more sophisticated appreciation of the media’s interaction with public policy should be a high priority for scientists.” Neither Murray et al. nor Best offer suggestions for change that go beyond “be aware and be critical.” As social scientists, they describe rather than prescribe.

Nuggets aplenty

When I was a program director at the National Science Foundation long ago, we were urged to be on the lookout for “nuggets” in the work of the investigators we supported: stories that could help justify research to Congress and the public. These books abound in nuggets for those who think, write, or teach about social science, statistics, or the media. Moreover, each places the nuggets in a matrix of sorts. Both books are informative. Neither is highly original or provocative.

What do I miss in these books? Examples of sound statistics wisely used, for one thing. Both note that some data are much more reliable than others, and Murray et al. point to and applaud examples of accurate reporting, but both share the predilection of journalists for picturesquely negative stories. Not all data-based reasoning in messy settings is misleading; accounts of successes would sharpen the condemnation of shoddy work. For example, they could have mentioned the Tennessee STAR project, a four-year randomized comparative experiment that clearly demonstrated the beneficial effects of small class sizes on learning in the primary grades. Most of all, I think that any account of the many and dire misuses of statistics should also support the superiority of data–even partial and imperfect data–over anecdotes. No news story, it seems, can begin without a human interest anecdote, and many go no further.

Let us suppose that we wanted, in 1996, to make a case that corporate downsizing was demoralizing the American worker, even to the point of encouraging political extremism. This might seem like a tough assignment at the beginning of the great economic boom. Fear not. With anecdotes, anything is possible. Visit Dayton, Ohio, where the woes of NCR, the former National Cash Register, are eroding the community’s prosperity. Avoid Redmond, Washington, where Microsoft is hiring and passing out stock options. Poll the members of the class of 1970 at an expensive private university to learn that they worry that their children will not do as well as they have. Don’t interview Korean immigrants, who would have a different estimate of their children’s future. The New York Times had no difficulty pointing to the devastation caused by downsizing in a seven-part series in March 1996. Dayton, Ohio, and the Bucknell class of 1970 each received full-page treatment. As the Harvard statistician Frederick Mosteller has said, “It is easy to lie with statistics. But it is easier to lie without them.”


David S. Moore ([email protected]) is Shanti S. Gupta Distinguished Professor of Statistics at Purdue University and the author of Statistics: Concepts and Controversies (5th edition, Freeman, New York, 2001).

Weighing Our Woes


The horror of September 11 is difficult to absorb. We all looked on in disbelief as the tape of the buildings collapsing was played over and over and over again. We watched thinking that if we saw it often enough, perhaps we could feel the magnitude of the loss. For more than three months, the New York Times ran biographical sketches of the people who were killed that day in an effort to help us slowly come to understand the scale of this mind-numbing tragedy.

On November 2, the Department of State, in cooperation with the National Academies, sponsored an all-day meeting on a human disaster of even greater magnitude–the spread of infectious disease in the developing world. The day that 3,000 people died at the World Trade Center and the Pentagon, more than 8,000 people died of AIDS, 5,000 people died of tuberculosis, and several thousand more died of malaria. Of course, these deaths are different, because disease is not murder. Nobody wanted these deaths to occur; no one made them happen.

Yet these deaths are not exactly the same as deaths from disease in the developed world. With the exception of AIDS, infectious diseases are not taking the lives of young people in the rich countries, because most diseases can be prevented or treated relatively inexpensively, and even AIDS is being contained by prevention efforts. In Africa infectious diseases are the cause of almost 70 percent of all deaths. What makes the enormous toll of death in the developing world not just the way of all flesh is that we know very well what has to be done to prevent most of these deaths. But we don’t act. And by not acting, we know that we are signing an early death warrant for tens of millions of people.

The nation’s leaders are not insensitive to the seriousness of this neglect. Even as the nation reeled in shock from the events of September 11, Sen. Bill Frist (R-Tenn.) and State Department science advisor Norman Neureiter told participants at the meeting how important it is to address global health problems. Secretary of State Colin Powell was scheduled to speak but was called away to meet with congressional leaders. His prepared remarks, which were read at the meeting, indicate that he understands the severity of the problem and is looking for ways to take effective action.

The most comprehensive description of the problem and what will be needed to fix it came from Barry Bloom, dean of Harvard University’s School of Public Health. Using data from the World Health Organization, he painted a devastating picture of human suffering and economic disaster.

Tuberculosis infects 8.4 million people per year and results in 2 million deaths, virtually all in the developing world. About one-third of the deaths of AIDS patients in Africa are attributed to tuberculosis. The disease results in $1 billion in lost income from people too sick to work, $11 billion in future lost income from those who die, and $4 billion in diagnosis and treatment costs. Malaria infects 400 million to 900 million people a year and results in 0.7 million to 2.7 million deaths. More than 36 million people are living with AIDS, and 25 million of them are in sub-Saharan Africa. Of the roughly 3 million AIDS-related deaths that occurred last year, about 2.4 million were in sub-Saharan Africa. Of the 5.3 million new infections in 2000, about 3.8 million were in sub-Saharan Africa.

The pain and suffering caused by infectious disease do not end with the infected individuals and their families. Bloom explained that the economic repercussions touch everyone in the developing countries and deepen the cycle of poverty that is the breeding ground of disease. Too many people are dying before they can use their education to contribute to society through work. WHO estimates that the gap between life expectancy in the least developed countries (49 years) and in the industrialized world (77 years) translates into an annual economic growth deficit of 1.6 percent, a shortfall that compounds into an enormous difference over time.
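
To see how quickly such a deficit compounds (a back-of-the-envelope illustration, not a figure from the WHO analysis), compare two otherwise identical economies whose growth rates differ by 1.6 percent per year:

\[(1.016)^{25} \approx 1.49, \qquad (1.016)^{50} \approx 2.21\]

After 25 years the healthier economy is roughly half again as large as the other; after 50 years it is more than twice as large.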

Opportunities for action

Although these numbers are daunting enough to lead to despair, Bloom sees plenty of opportunity for effective action, even with AIDS. He cites a vivid example of how effective prevention efforts can be. In 1990, AIDS infection rates were about 1 percent in South Africa and slightly lower in Thailand. Thai officials recognized how devastating an AIDS epidemic could be and instituted an ambitious AIDS education and prevention program. In 2000, HIV prevalence in Thailand had increased, but it was still below 3 percent. By contrast, South Africa did little to control the spread of the disease, and the infection rate is now close to 25 percent.

The prospects for improvement are much better with malaria and tuberculosis, because effective tools for prevention and treatment already exist. The solution is simple–money. Bloom explains that a billion people live on less than $1 a day. In the 44 countries with average per capita income of less than $500 per year, the average health expenditure is $12 per person per year. Bloom has worked with other public health experts to help WHO develop a plan that would dramatically decrease the incidence of deadly infectious diseases at a cost that the developed countries could easily afford.

The experience of countries that have implemented programs for the early detection and treatment of tuberculosis indicates that extending this effort worldwide would cost about $900 million per year and yield an estimated economic return of $6 billion per year through increased worker productivity. With malaria, the key is more aggressive prevention to stop mosquitoes from biting people. Actions would include increased insecticide treatment of existing mosquito netting, the purchase of additional nets and insecticide, increased spraying of breeding areas, and chemoprophylaxis for children. Economic analysis indicates that a 10 percent reduction in the incidence of malaria could lead to a 0.3 percent increase in annual economic growth. Bloom estimates that implementing an AIDS prevention program such as the one used in Thailand would yield an economic return of 37 to 55 percent through averted income losses and medical expenditures.
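
A rough benefit-cost reading of the tuberculosis figures (an illustrative ratio computed from the numbers above, not a calculation presented by Bloom) makes the case starkly:

\[\frac{\$6\ \text{billion in annual economic return}}{\$0.9\ \text{billion in annual program cost}} \approx 6.7\]

That is, every dollar spent on worldwide early detection and treatment would return more than six dollars in income that would otherwise be lost.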

A comprehensive program for addressing the related economic and health problems of developing countries was released by the WHO-appointed Commission on Macroeconomics and Health in December 2001. The report (available online at www.who.int) contains a detailed analysis of how annual expenditures of about $34 per capita aimed at reducing the harm caused by HIV/AIDS, malaria, tuberculosis, childhood infectious diseases, maternal and perinatal conditions, micronutrient deficiencies, and tobacco-related illnesses could prevent 8 million premature deaths per year. Money for the program would come from increased spending by the developing countries themselves and a significant increase in support from the wealthy countries. Donor spending would reach $27 billion in 2007 and $38 billion in 2015. About a third of the donor funding would go to the Global Fund to Fight AIDS, Tuberculosis and Malaria. Estimated economic benefits would be in the hundreds of billions of dollars and would be reflected not only in better health but also in more robust economic growth that would eventually make it possible to reduce the need for assistance from the wealthy nations.

With the war on terrorism absorbing increased government spending and the economy in recession, it might seem an inopportune time to talk about additional government expenditures. But this is a time when people are willing to think outside their individual needs and concerns and when they are painfully aware that the well-being of all the world’s people is of direct importance to the United States. The humanitarian and economic reasons to take action against global infectious disease have never been more compelling.

Forum – Winter 2002

The Kyoto Protocol

I have to congratulate you on publishing Richard E. Benedick’s essay on Kyoto and its aftermath (“Striking a New Deal on Climate Change,” Issues, Fall 2001). Many of us already knew that Benedick was an accomplished scholar and diplomat. What I didn’t know was that he could write with such style and humor. Nobody can do justice to, or make sense of, the Kyoto affair without humor.

Imagine respectable governments willing to actually pay money, or make their domestic industries pay money, to an ailing former enemy, in the guise of a sophisticated emissions-trading scheme, for the dual purposes of bribing the recipient to ratify a treaty and providing the “serious” governments a cheap way to buy out of emissions commitments. All under the pretense that it serves somehow to reduce emissions.

Benedick was rightly protective of the U.S. position. Maybe a little too protective. President Bush may have had some choice, and didn’t make the best choice; but one choice he didn’t have: to submit the protocol for ratification. The U.S. “commitment” was almost certainly infeasible when the Clinton administration signed the protocol in 1997. Three and a half years later, with no action toward reducing emissions, no evidence of any planning on how to reduce them, and no attempt to inform the public or the Congress about what the country might have been committed to, what might barely have been possible in 13 years–1997 to 2010–had become ridiculous. No Senate would ratify the treaty without any knowledge of what the commitment was, and no president could answer that question without a year’s preparation. No such preparation appears to have been done in the Clinton administration. President Bush at least avoided hypocrisy.

The argument for staying with Kyoto was, according to Bonn conference president Jan Pronk: “It’s the only game in town.” Benedick suggests that the game can be changed. It has to be, or a new game introduced. The United States suffered ignominy in the spring of 2001; Kyoto champions look no better as the year comes to a close.

The world stage has been transformed since September 11. Conformist critics of America have been silenced by a new need to face a more immediate challenge. The United States, a “renegade” in March, became a leader in October. Perhaps, behind the glare of international terrorism, Kyoto can take advantage of the shadow and find a new, and serious, approach to the biggest environmental problem of the new century.

THOMAS C. SCHELLING

School of Public Affairs

University of Maryland

College Park, Maryland


After a week of hard bargaining, negotiators in the Moroccan city of Marrakech finally agreed on the details of how the Kyoto Protocol will operate. The Marrakech accord, which completes the Bonn agreement from earlier this year, is exactly what the United States pressed to achieve before it repudiated the whole process. It provides unrestricted emission trading, large-scale experiments with carbon sinks, and unprecedentedly stiff international rules for compliance. Moreover, it de facto recognizes the need of several industrialized countries, notably Japan, for effectively lower targets. Japan, like the United States, experienced larger-than-expected increases in carbon dioxide emissions in the 1990s and faces a sluggish economic outlook.

The U.S. decision to drop out of Kyoto has made Japan, along with Russia, crucial for ratification. If either decides not to take part, the whole process will collapse. Both countries did take advantage of this exceptional bargaining position (as the United States could have done). But do these bazaar-like negotiations imply that the Kyoto Protocol is fundamentally flawed, as Richard E. Benedick assumes? I don’t think so. Most successful international regimes have experienced deviations from agreed targets. Examples include Russia and other Eastern European parties, which failed to comply with the ozone regime, and Norway, which failed to comply with the North Sea commitments.

Admittedly, the effectively agreed-on reductions of 3 percent will be only a tiny amount compared with what climatologists say is needed, and a far cry from the “technological revolution” that Benedick preaches. But let me borrow from his famous Ozone Diplomacy (p. 328): “A target, any target, will provide experience and can always be adjusted. It is essential to send unambiguous signals to the market in order to stimulate competition, innovation, and investment in less carbon-intensive technologies.” And there are promising indications that the signal from Bonn was well understood by industry. Only one day after the Bonn deal was struck, the market value of renewable energy companies in Spain rose by 5 percent. From my point of view, there is no need for a government lead or for obscure carbon taxes; the technology is there, or it will be developed if we send the right signals. Everyone involved in the process acknowledges that Kyoto is a necessary start, but everyone also agrees that it will take progressive cuts in the future to get it right.

REIMUND SCHWARZE

Technische Universität

Berlin, Germany


Richard E. Benedick provides an excellent account of the recent climate change negotiations. As he notes, the inadvertent hero of Kyoto’s revival has been President Bush, whose rejection of Kyoto produced a backlash that breathed new life into the negotiations. Countries drew together to adopt the Bonn Agreement in July 2001 and the recent Marrakesh Accords, which resolved the remaining issues relating to Kyoto’s implementation and now put countries in a position to ratify the protocol and bring it into force.

In many respects, the Bonn Agreement was not significantly different from the deal almost reached in The Hague in November 2000. In The Hague, it was clear that the European Union (EU) would give up its insistence that countries achieve a specific percentage of their required emission reductions at home rather than through emissions trading (the so-called “supplementarity” issue) and would allow countries to receive significant credits for carbon absorption by forests and farmlands (carbon sinks). The failure at The Hague was due less to insurmountable differences than to the fact that countries began negotiating too late and simply ran out of time. Thus, although Japanese diplomacy was certainly adroit in Bonn as Benedick notes, the main new concession Japan obtained from the EU did not concern the supplementarity and sinks issues as he suggests, but rather compliance, where Japan succeeded in postponing a decision as to whether the compliance procedure would be legally binding. Moreover, it is not clear whether the sinks deal really solves Japan’s problems as Benedick suggests, since even with the new sinks credits allowed under the Bonn Agreement, Japan will still need to reduce its emissions very substantially or else buy credits from countries with a surplus such as Russia.

As Benedick notes, the United States would have significant negotiating leverage if it chose to reengage in the negotiations. Thus far, however, it has shown no sign of wanting to do so. Even before September 11, the Bush administration–to the extent that it was doing anything at all–appeared to be focused on domestic and possibly regional measures, not a new global agreement. Now, credible action appears even more unlikely, at least in the short term, both because September 11 has pushed issues such as climate change off the radar screen of high-level officials and because it has largely eliminated public and international pressure to act.

In the long term, however, if the scientific evidence regarding global warming continues to build, then pressure to take action will revive. Benedick proposes a technology strategy, which he suggests would be “far less costly and more productive” than Kyoto’s market-based approach. But although an emphasis on technology is certainly warranted, its superiority over Kyoto has not been established. Contrary to Benedick, Kyoto is not a “short-term perspective on a century-scale problem.” It establishes a long-term architecture to address climate change that relies on market-based instruments such as emissions trading, which have proved highly effective and efficient in other contexts. Emission targets for its first commitment period, from 2008 to 2012, are clearly inadequate to address climate change. But they are only the first of a series of targets, progressing toward the Framework Convention’s ultimate objective of stabilizing greenhouse gas concentrations at a safe level.

Whether Kyoto will be effective in combating climate change remains unclear, despite the breakthroughs in Bonn and Marrakesh. But the potential pitfalls relate to its practical workability, not its aspirations.

DANIEL BODANSKY

University of Washington School of Law

Seattle, Washington


Soon after President Bush pronounced the Kyoto Protocol dead, Richard E. Benedick told me that the president might have inadvertently secured Kyoto’s survival. I thought otherwise. I thought the other Kyoto signatories might use the opportunity to let Kyoto die and to blame the United States for its demise, thereby securing a rhetorical victory. In the event, Benedick was right and I was wrong. Today, the prognosis for Kyoto entering into force looks pretty good.

This experience makes me hesitate before disagreeing with Ambassador Benedick again, but I find his suggestion that the United States might still join a revised Kyoto implausible. It certainly seems unlikely after Marrakesh (and Benedick’s article was written before that meeting). In any event, my view is that renegotiating Kyoto’s targets would be a waste of time. The essential flaw in the Kyoto approach is that it incorporates specific targets and timetables without backing this up with effective enforcement. This is a narrowly directed criticism of the treaty, but one that finds agreement with Benedick’s assertion that Kyoto may yet prove unworkable.

Enforcement is needed to promote both participation and compliance, but Kyoto provides for neither. Its minimum participation clause is set at such a low level that the agreement can enter into force while covering countries that account for less than a third of global emissions. This will not suffice to mitigate climate change. Moreover, the compliance mechanism, negotiated years after the emission limits were agreed, essentially requires that noncomplying countries punish themselves for failing to comply–a provision that is unlikely to influence behavior. Most astonishingly, Kyoto specifically prohibits compliance mechanisms with “binding consequences” unless approved by an amendment.

The consequences of this approach seem clear: Kyoto will either fail to enter into force, or it will enter into force but not be implemented, or it will enter into force and be implemented but only because it requires that countries do next to nothing about limiting their emissions (and in Marrakesh the treaty was watered down even more to make it acceptable to Russia, Japan, and other countries). These weaknesses cannot be remedied by a minor redesign of the treaty. The basic problem stems from the requirement that countries agree to, and meet, emission limitation ceilings: the most central element of the Kyoto Protocol.

Where to go from here? Benedick proposes a technology strategy, and I agree with him wholeheartedly. Let me just add a twist to his proposal.

My suggestion is for the United States to leave Kyoto as it is and propose new protocols under the umbrella of the Framework Convention on Climate Change. These should include a protocol for joint R&D and a series of protocols establishing technology standards, including, for example, standards for automobiles requiring, say, the use of the new hybrid engines or fuel cells. Economists normally reject the setting of technology standards in a domestic setting, but standards have a strategic advantage in an international treaty: As more countries adopt a standard, it becomes more attractive for other countries to adopt the same standard. Standards create carrots (the promise of selling your product in more markets, for example) and sticks (standards create automatic trade restrictions, which are easy to enforce and are permitted by the World Trade Organization). These kinds of incentives are lacking in the Kyoto agreement. Moreover, the proposal is eminently simple and practical: A multilateral treaty for automobile standards already exists.

There are, of course, problems with the standards approach. But an ideal remedy is not achievable for global climate change because of the problems with international governance. We need to be thinking of the best second-best remedy.

SCOTT BARRETT

Professor of Environmental Economics and International Political Economy

Paul H. Nitze School of Advanced International Studies

Johns Hopkins University

Washington, D.C.


Richard E. Benedick’s welcome article presents a coherent critique of the Kyoto Protocol, adding to the criticisms others of us have made of that treaty. His call for a positive U.S. initiative strikes the right note by arguing for a technology-based treaty instead of one based on quantified emissions reductions. His depiction of the Bonn negotiations shows how such emissions targets will lead participating countries into perpetual debates about how to measure and assess emissions reductions, drawing time and money away from actually doing something positive about the problem.

I would add a couple of items to his suggested policy initiative: things that the United States can do both unilaterally and in collaboration with other countries. Sweeping changes in our energy technology system will require more than the increases in R&D spending that Benedick advocates. First, such change requires regulatory reform. Numerous policies, standards, and practices make it difficult for renewable energy and energy efficiency technologies to penetrate the market. From building codes to housing development covenants to interconnection standards for distributed generation to certification for photovoltaic installers, these obstacles dramatically increase the transaction costs of making greater use of efficiency and renewable technologies. Such costs will discourage their use even when their nominal price goes down.

Second, changing the technological system requires targeted education programs. Many people and institutions make decisions that influence how easily consumers and businesses can adopt new energy technologies. Contractors, mortgage lenders, engineers, public utility commissioners, and many others could greatly promote or impede the diffusion of these technologies, yet they often know very little about them. Government policy could fund educational programs for these groups that provide them with information tailored to their particular needs.

FRANK N. LAIRD

Graduate School of International Studies

University of Denver

Denver, Colorado


The new economy

Dale W. Jorgenson’s excellent paper (“U.S. Economic Growth in the Information Age,” Issues, Fall 2001) finds that the fall in information technology (IT) prices helps explain the surge in U.S. growth in the 1990s. Research by the Organization for Economic Cooperation and Development (OECD) shows that the United States is not alone in this; IT plays an important role in explaining growth differentials in the OECD area in the 1990s. Rapid technological progress and strong competitive pressure in the production of IT have led to a steep decline in IT prices across the OECD, encouraging investment in IT. The available data for OECD countries show that IT investment rose from between 5 and 15 percent of total nonresidential investment in the business sector in 1980 to between 15 and 30 percent in 2000.

Although IT investment accelerated in most OECD countries, the pace of that investment and its impact on growth differed widely. For the countries for which data are available, IT investment accounted for between 0.3 and 0.9 percentage points of the growth in gross domestic product over the 1995-2000 period. The United States, Australia, and Finland received the largest boost; Japan, Germany, France, and Italy the smallest, with Canada and the United Kingdom taking an intermediate position. Software accounted for up to a third of this contribution.
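
For readers curious about where such estimates come from, they rest on the standard growth-accounting decomposition; a stylized version of the identity (a sketch of the general method, not necessarily the OECD’s exact specification) is

\[\Delta \ln Y \;=\; s_{IT}\,\Delta \ln K_{IT} \;+\; s_{K}\,\Delta \ln K_{other} \;+\; s_{L}\,\Delta \ln L \;+\; \Delta \ln \mathit{MFP},\]

where Y is output, the K and L terms are capital and labor inputs, the s terms are their shares of national income, and MFP is multifactor productivity. The contributions of 0.3 to 0.9 percentage points cited above correspond to the IT capital term.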

IT has played two other roles in growth, however, through its impact on the overall efficiency of capital and labor, or multifactor productivity (MFP). First, in some countries, such as the United States, MFP growth reflects technological progress in the production of IT. This progress has enabled the number of transistors packed on a microprocessor to double every 18 months since 1965, and even more rapidly since 1995. Although OECD statistics show that the IT sector is relatively small in most countries, it can make a large contribution to growth if it expands rapidly.

The other IT-related driver of MFP is linked to its use by firms. Firm-level studies show that IT can help to improve the overall efficiency of capital and labor, in particular when combined with organizational change, better skills, and strong competition. Moreover, certain services that have invested heavily in IT, such as wholesale and retail trade, have experienced a pickup in MFP growth in recent years, in the United States, Australia, and Finland, for example. And countries where IT diffused more rapidly in the 1990s have typically had a more rapid pickup in MFP growth than countries where the diffusion process was slower.

The above does not imply that IT is the only factor explaining growth differentials in the OECD area. The OECD work shows that other factors, such as differences in labor use, are also important. Growth is not the result of a single factor or policy; it depends on an environment conducive to growth, innovation, and change.

DIRK PILAT

Organization for Economic Cooperation and Development

Paris, France

http://www.oecd.org/growth


Dale W. Jorgenson has given us an exceedingly careful analysis of the sources of the revival in the growth of total factor productivity and of gross national product (GNP) in the U.S. economy since the mid-1990s. He attributes the growth acceleration to dramatic growth in investment and technical change in the information technology (IT) industries. Although the IT industries account for less than 5 percent of GNP, they accounted for approximately half of the productivity bubble of the late 1990s.

Jorgenson is skeptical that these high growth rates are sustainable. One reason is a slowing of growth in labor inputs: both numbers of workers and hours worked. A second is slower technological change in the IT-producing industries, associated with an anticipated lengthening of the semiconductor product cycle.

I am even more skeptical than Jorgenson about the capacity of the U.S. economy to sustain the growth rates of the late 1990s. During the periods covered by Jorgenson’s productivity growth data–high (1948–73), slow (1973–90), and resurgent (1995–99)–growth has been sustained by substantially higher rates of productivity growth in the goods-producing sectors (agriculture, mining, and manufacturing) than in the rest of the economy. And within the manufacturing sector, rapid productivity growth has been highly concentrated in a few industries such as industrial machinery and equipment and electronic and electric equipment.

By the late 1990s, the goods-producing sectors accounted for less than 20 percent of U.S. GDP. It is not unreasonable to anticipate that during the second decade of the 21st century, the share of goods-producing industries will decline to somewhere in the range of 10 percent. This means that the burden of maintaining economy-wide productivity growth will fall almost entirely on the service sector.

Jorgenson has presented data elsewhere suggesting that the service sector’s contribution to total factor productivity growth during 1958-96 was negative. It may be objected that service sector output and productivity growth are particularly difficult to measure and are underestimated in the official productivity accounts. Some service sector industries that have been able to make effective use of IT, such as financial services, have achieved relatively high rates of productivity growth.

My own sense, however, is that there are few significant industries in the service sector where substantial productivity gains can be readily anticipated. Some of these industries, such as entertainment, will be particularly subject to what Baumol long ago termed the service sector cost disease–characterized by some combination of increasing costs and lower-quality output. It will take some very creative growth accounting to avoid the conclusion that the “new economy” growth rates of the late 1990s are not sustainable, either in the short run or into the second decade of the 21st century.

VERNON W. RUTTAN

Department of Applied Economics

University of Minnesota

Minneapolis, Minnesota


Genetics and medicine

In “From Genomics and Informatics to Medical Practice” (Issues, Fall 2001), Samuel C. Silverstein accurately captures the extraordinary excitement and potential of medical research emerging from the disciplines of genomics and informatics. What is possible is nothing less than the unraveling of the mysteries of many medical illnesses, together with a clarification of the links between basic causes and pathophysiology. This would facilitate progress in our ability to develop real prevention methods, match better treatments to pathologies, and in general enhance the health care of the nation.

Our ability to fully exploit the advantages of this exciting research, however, could be compromised by rigid regulations that are emerging in the arena of information privacy, regulations that emanate from legitimate concerns about the confidentiality of people’s health information. It is certainly important for people to be protected against violations of confidentiality that might in any way compromise their work status or their ability to secure insurance. But it is also important that these protective regulations be formulated in such a way that they do not become major obstacles to the nation’s ability to reap the benefits of research and do not hobble the ability of our medical institutions to provide effective patient care.

In the spirit of strong privacy control, some groups have encouraged the development of regulations under the Health Insurance Portability and Accountability Act that have unintended and problematic side effects. An estimated 1,600 pages of regulations are about to descend on the health care system as a result of overextending the intention to protect privacy. As currently formulated, they are as much an obstacle to the delivery of high-quality, efficient, and cost-effective care as they are to the conduct of the new research, and they will pose an extraordinary burden for the nation’s hospitals. It seems appropriate to reconsider these regulations and delay their implementation in order to strike a healthier balance between legitimate privacy concerns and the needs of the nation’s health care system and research programs.

Silverstein points out that a partnership between academic health centers and industry, facilitated by the government, would enable the nation to take advantage of the medical research opportunities now made possible by the rapid development of genomics and informatics. The result can be a nation far less compromised by illness and with substantial reductions in pain, time and productivity loss, and all the other negatives that accompany disease and poor health.

Let us hope that informed policymakers will revisit the proposed privacy regulations, modifying them to provide the appropriate protections for individual privacy while allowing the country and its population to benefit from a very hopeful vision for medical research and care.

HERBERT PARDES

President and Chief Executive Officer

New York-Presbyterian Hospital

New York, New York


U.S.-Russian cooperation

Kenneth N. Luongo’s “Improving U.S.-Russian Nuclear Cooperation” (Issues, Fall 2001) makes a convincing case for the need to renew the partnership with Russia to improve nuclear security. The impressive achievements cited by Luongo occurred mostly in the first half of the 1990s and resulted from a partnership established to meet common national security objectives. He correctly points out that an “undercurrent of political mistrust and resentment” curtailed additional progress by the end of the decade.

To make progress now, it is important for U.S. policymakers to realize just how broken this relationship is. Over the past three years, several key cooperative nuclear programs have effectively come to a halt. The U.S. side has had no clearly articulated strategic vision and no overarching strategy to guide the myriad of federal agencies or Congress in developing programs that enhance our national security while concurrently helping Russia deal with the vestiges of the huge Soviet nuclear complex. There has been little high-level U.S. attention paid to ensuring a constancy of purpose and continuity in implementing key cooperative programs.

Some programs pushed by the U.S. side ran contrary to Russia’s own national security interests or energy strategy. Other programs, such as upgrading the security of Russian weapons-usable fissile materials, were redirected by the U.S. side away from a partnership to a unilateral approach that insisted on intrusive and unnecessary physical access to sensitive Russian facilities in exchange for U.S. financial support. Such actions, along with political tensions caused by NATO expansion; the bombing of Serbia; disagreements over Iran, Iraq, and Chechnya; and the U.S. push for a national missile defense depleted the bank account of trust and good will built up in the early 1990s and inhibited further progress.

On the Russian side, the early cooperative spirit demonstrated by Russian military and scientific personnel was reined in gradually by a re-energized Russian government bureaucracy and re-empowered security services. Russia’s dire financial situation prompted it to aggressively export nuclear technologies worldwide (especially to Iran) over U.S. objections. Russia’s plea for help to downsize and convert its huge nuclear military complex to civilian applications did not receive strong U.S. support. The United States focused too narrowly on the “brain drain” of Russian nuclear scientists instead of tackling the root causes. Such programs should be directed at downsizing the vastly oversized Soviet complex safely and securely to reflect current requirements and at keeping the remaining Russian nuclear institutions and their people focused on the West, rather than selling their knowledge and technologies to less desirable states or groups.

Before September 11, the new administration, like its predecessor, appeared slow to take advantage of the historic opportunity to work with Russia to construct a new nuclear security framework. Now, the new Bush-Putin spirit of cooperation should enable a much broader common strategy to guide what is to be done. Luongo’s advice is both timely and on target. I strongly endorse most of his specific recommendations. They are quite similar to ones I make in Thoughts about an Integrated Strategy for Nuclear Cooperation with Russia. In addition, I agree with Luongo that we must also focus on how to get things done. The critical element is restoring the partnership; without it, additional U.S. funds will be ineffective.

SIEGFRIED S. HECKER

Senior Fellow

Los Alamos National Laboratory

Los Alamos, New Mexico


Nuclear cooperation with Russia is an expensive and long-term proposition with uncertain payoffs for U.S. security interests. Current U.S. programs are fraught with technological and conceptual gaps that could easily be exploited by determined adversaries, whether hostile states, criminal organizations, or terrorists.

Take, for example, the Department of Energy (DOE)-funded effort to improve materials protection, control, and accounting (MPC&A) at former Soviet nuclear facilities. This is touted as “the nation’s first line of defense” against the threat of proliferation from unsecured Russian stockpiles. Yet as of 2001, 10 years after the Soviet collapse and after the expenditure of approximately $750 million, less than 40 percent of the 600-odd tons of at-risk weapons material is protected in some fashion by MPC&A. Security upgrades will not be extended to the remainder until 2010 and possibly beyond, according to DOE projections. But opportunistic nuclear criminals would not obligingly wait until all facilities are MPC&A-ready before orchestrating a major diversion, so the strategic rationale for the program diminishes as the time frame for completing it lengthens. Increased funding and tighter management might accelerate the timetable, but by then some proliferation damage may already have occurred.

Furthermore, insider corruption and economic hardship in Russia erode the deterrent value of even the advanced safeguards being installed. MPC&A systems depend on the diligence, competence, and integrity of the people tending them. They are not designed to defend against high-level threats, such as a decision by senior plant managers to sell off stocks of fissile materials to nuclear-prone Middle Eastern customers. Willing suppliers of strategic nuclear goods might well abound in Russia’s formerly secret cities, where average pay hovers at $50 per month and where some 60 percent of nuclear specialists feel compelled to supplement their regular salaries by moonlighting.

Washington is also building other lines of defense against nuclear smuggling by training and equipping former Soviet customs officials to intercept radioactive contraband at airports, ports, and border crossings. Yet Russia’s frontiers with Georgia, Azerbaijan, and Kazakhstan alone–the most likely conduits to Middle Eastern states and groups of concern–run more than 7,800 kilometers, partly through terrain where banditry and narcotics smuggling traditionally have flourished. A few radiation monitors installed here and there across Russia’s vast southern tier would do little to deter savvy smugglers adept at deceiving or avoiding representatives of the state.

Further complicating the security picture is Russia’s international behavior in the nuclear realm, especially the wide-ranging technical and commercial relationship with Iran. Iran, which now makes no secret of its intentions to acquire weapons of mass destruction (WMD), can easily leverage networks of official contacts to gain access to Russia’s nuclear suppliers. How much fissile material has escaped from Russia under the umbrella of ostensibly legitimate business deals is anyone’s guess.

Clever adversaries and their inside collaborators can simply find too many ways to defeat or circumvent the technical fixes, export controls, and other containment measures being introduced under the cooperative programs. Certainly the programs themselves should not be defunded, but U.S. security policy must go beyond containment to focus attention on the demand side of the proliferation equation: on the main adversaries themselves. In the near term, this means deciphering adversaries’ military procurement chains (how they are organized and financed and what front companies and other intermediaries are used, for example) and disrupting nuclear deals in the making when possible. It means monitoring the status of their nuclear programs and assessing the threats emanating from them. Such tasks must necessarily be intelligence-based, requiring a wider deployment of human collection resources in proliferation-sensitive zones in Soviet successor states and in the Middle East than is now the case.

Since nonproliferation cannot be pursued as though in a political vacuum, Washington must strive to fashion a demand-reduction strategy, exploring new options for curbing the international appetite for nuclear weapons. Demand engenders supply, as with the illicit drug trade. If adversaries are already stockpiling fissile material (which is not beyond the realm of possibility by now), the challenge is to influence them not to build or deploy weapons. Various economic, diplomatic, and military options might come into play here, but implementing them will require a more nuanced and differentiated vision of aspiring nuclear actors and of the security concerns driving their WMD programs.

RENSSELAER LEE

McLean, Virginia

Lee is the author of Smuggling Armageddon: The Nuclear Black Market in the Former Soviet Union and Europe (St. Martin’s, 2000).


Workforce productivity

The current downturn in the economy, which has been exacerbated by the events of September 11, is raising doubts and causing uncertainty about the future of the United States in an increasingly competitive and hostile world. In “The Skills Imperative: Talent and U.S. Competitiveness” (Issues, Fall 2001), Deborah van Opstal does an excellent job of addressing many of the major issues confronting the United States, including changing demographics; the disproportionately small number of female and minority scientists and engineers; and the failure of our nation to provide every American with the skills and education needed to foster U.S. competitiveness in the global economy.

During the 1990s, the psychological sciences community developed a national behavioral science research agenda, the Human Capital Initiative, which views human potential as a basic resource that can be maximized through an understanding of the brain and behavior. The initiative identified several problem areas facing the nation, including some mentioned by van Opstal, such as aging and literacy, and some not, such as substance abuse, health (including mental health), and violence. Each of these factors has profound effects on workforce productivity and is amenable to research and intervention.

The 1990s were also the Decade of the Brain, reflecting the beginning of a revolution in the brain and behavioral sciences. It is my belief that neurobehavioral technologies can be harnessed to power a second productivity explosion, similar to the one fueled by information technology, and indeed the two may meet at the human-machine interface. We can and must “apply and extend our knowledge of how people think, learn, and remember to improve education” (testimony of Alan Kraut on the fiscal year 2002 budget of the National Science Foundation). We also can and must apply and extend our knowledge about the prevention and treatment of substance abuse and mental illness to improve job performance, about group dynamics and interpersonal conflict to prevent violence, and about preventing the cognitive decline that occurs with aging to increase the productivity of older Americans, who will become an increasingly large and critical segment of our nation’s workforce and economy.

Finally, I suggest that we not lose sight of a potential national resource that is often overlooked: gifted children, those with special intellectual, artistic, or leadership talent. Recognizing and nurturing gifted students is in the national interest just as much as recognizing and nurturing at-risk populations.

DIANA MACARTHUR

Chair and Chief Executive Officer

Dynamac Corporation

Rockville, Maryland


Advanced Technology Program

Glenn R. Fong’s proposed Advanced Technology Program (ATP) reforms are not new and would dramatically move the program away from its original intent (“Repositioning the Advanced Technology Program,” Issues, Fall 2001). In 1988, the statute creating the ATP stated: “There is established . . . an Advanced Technology Program . . . for the purpose of assisting U.S. businesses in creating and applying the generic technology and research results necessary to commercialize significant new scientific discoveries and technologies rapidly.” The intention was to address problems with U.S. industrial competitiveness, and the program was directed at industry rather than at “institutions that are further back in the innovative pipeline,” such as universities.

One of the strengths of the U.S. system of innovation is the richness and diversity of institutions that support technological advancement. Our university science and national labs are preeminent in the world, but alone they have not been sufficient to sustain competitiveness and economic growth. The ATP, as it is currently working, plays a valuable role by providing resources and incentives for innovative companies to develop early-stage, high-risk, enabling technologies that are defined as priorities by industry.

Internal and external economic assessments have been a major program component at the ATP from its inception and have led to experience-based modifications to the program. As a result, political attacks on the program have generally been philosophical rather than substantive. The prior lack of political support does not imply that there is something wrong with the program.

Maryellen Kelley and I studied the ATP’s 1998 applicants to see how award-winning firms differed from firms that did not receive awards. We then examined both groups of firms one year later to see whether the ATP made a difference. We concluded that the ATP made awards for high-risk, potentially high-payoff research projects in technical areas that were new to the firms. In addition, ATP awards led to new R&D partnerships, to more extensive linkages to other businesses, and to wider dissemination of research results, whereas nonwinners overwhelmingly did not proceed with their projects. The ATP funded the types of risky projects that firms are unlikely to pursue without government incentives and that have characteristics that economists expect to yield broad-based economic benefits. The pejorative term “corporate welfare” is very far off target.

We also found that an ATP award created a “halo effect” that attracted additional funding to ATP winners. With its rigorous independent review process, the ATP certifies that a company is a worthy investment. The ATP had become a political football; it now shows every prospect of gaining the full bipartisan support that it deserves.

MARYANN FELDMAN

Johns Hopkins University

Baltimore, Maryland


I fully support Charles W. Wessner’s conclusion in “The Advanced Technology Program: It Works” (Issues, Fall 2001) that the ATP has proven its success and justifies ongoing stable support from Congress and the president. However, there is far more to the program than simply helping to fill the “valley of death” with funding for applied research, as proposed by Glenn R. Fong.

I have tracked ATP awards to Industrial Research Institute (IRI) member companies since the program began in 1990. The record shows that 69 IRI members received awards for just over 200 projects worth nearly $1 billion. This means that they and their partners have contributed at least another billion dollars of their own funds toward the work. In general, the larger and more technology-intensive firms have applied for and received the most awards. For example, General Electric received 12 awards; IBM, 10; General Motors, Honeywell, and 3M, 8 each; and Du Pont, 7. Each of these companies invests at least $1 billion a year in R&D, some of them three to seven times that amount. They would not make the effort to apply for an ATP award unless the work was particularly significant at the margin; that is, work that they might not have funded on their own but would undertake with shared funding for what are, in most cases, higher-risk studies, a point made by Wessner.

Fong is correct in saying that applied research, growing at a rate of 4.7 percent in the late 1990s, was the lagging category in the total R&D effort. However, industry invested over $35 billion in applied research in 2000, more than two orders of magnitude greater than what was spent on the ATP in the same year. Clearly, the ATP does help to fill the valley of death, but justifying its continuation largely on that basis seems to be a stretch.

CHARLES F. LARSON

President Emeritus

Industrial Research Institute

Washington, D.C.


Fisheries management and fishing jobs

In “A New Approach to Managing Fisheries” (Issues, Fall 2001), Robert Repetto has made an extremely useful contribution to the field of fishery management, with his direct comparison between the U.S. and Canadian sea scallop fisheries. I am concerned, however, that Repetto may have left the impression that the benefits that will be obtained by the fishing industry through rights-based fishery management are likely to come at some substantial cost to fishing communities.

As Repetto points out, “there has never been an evaluation of actual experience in all ITQ [individual transferable quota] systems worldwide using up-to-date data and an adequate, comparable assessment methodology.” My own study of rights-based fishery management leads me to question the prevailing belief that fishing communities will suffer under a system that encourages efficiency.

The “speculative and heated debate” to which Repetto refers has reached the point where many fishery stakeholders consider “efficiency” a dirty word. But the converse of efficiency is waste. And no one forthrightly defends waste. It is easy to demonstrate that efficient resource use can improve the standard of living of the people who rely on those resources, whether they are a family, a community, a nation, or the world.

Repetto puts wasteful fishery management in the context of business profits, suggesting that “if the U.S. scallop fishery were a business, its management would surely be fired, because its revenues could readily be increased by at least 50 percent while its costs were being reduced by an equal percentage.” What makes this analogy important to the average citizen is the fact that our fishery resources are public resources: poor management of fishery resources reduces the standard of living of every citizen by reducing the economic benefits that we receive from our fisheries. Redundant inputs (excess costs) used to overfish could be used elsewhere to improve medical care, education, housing, etc.

The key issue in this context, and the crux of the debate, is the distribution of the benefits of efficient resource use in both the short and long terms. Essential to this question are one’s beliefs about the role of government, as compared with that of the free market, in allocating scarce resources. If a government policy of rights-based fishing leads to profits in the fishing industry, should the government tax those profits away and use them for the benefit of all citizens, or should we rely on the free market to reinvest those profits for the betterment of society? Through which mechanism are local communities more likely to benefit?

Both theory and practical experience demonstrate that rights-based fishing can generate substantial economic benefits. What we need now are empirical case studies that follow the flow of benefits from efficient fisheries through their communities and the broader economy. With that knowledge we can design rights-based fishery management programs that achieve their expected benefits while accommodating legitimate concerns.

RICHARD B. ALLEN

Fishing Vessel Ocean Pearl

Wakefield, Rhode Island


Redesigning food safety

I commend Michael R. Taylor and Sandra A. Hoffmann for their thought-provoking “Redesigning Food Safety” (Issues, Summer 2001). I fully concur that the government needs a more coordinated and structured approach to determine the most productive uses of its budget and resources for addressing food safety problems. This approach should be well founded in both the natural and social sciences and provide a framework for ranking issues according to their importance to human health.

Risk analysis is an excellent descriptor for this strategy. What is most needed is a well-conceived model for conducting a risk analysis of food safety issues. The development of such a model will require the input of a broad cast of strategists representing a variety of disciplines, including public health, sociology, infectious diseases, microbiology, economics, and public policy. Considering the growing frequency with which previously unrecognized food safety issues are confronting today’s regulators, the model must be designed to allow updating as new issues surface. Properly done, risk analysis should be a work in progress.

However, all the best efforts to formulate a well-designed food safety risk analysis model for government decisionmaking will be in vain if many of the archaic food safety laws presently in place are not rescinded and replaced with new policies focused on today’s food safety issues. As Taylor and Hoffmann point out, current statutory mandates for specific modes of regulation skew the allocation of resources in ways that may not be optimal for public health and the government’s ability to contribute to risk reduction. It is ironic that some U.S. laws, such as those mandating an outdated inspection system, are an impediment to government agencies’ efforts to address today’s most pressing food safety issues.

It is time we recognize the weaknesses of government food safety programs and bring government decisionmaking in line with the food safety priorities of today. It is a matter of public health.

MICHAEL P. DOYLE

Director, Center for Food Safety

University of Georgia

Griffin, Georgia

www.griffin.peachnet.edu/cfsqe


Regulating genetically engineered foods

In “Patenting Agriculture” (Issues, Summer 2001), John H. Barton and Peter Berger describe how a few big agricultural biotechnology companies are increasingly consolidating control over the application of advanced molecular technologies in crop breeding, to the detriment of public-sector research programs with responsibility for genetic improvement of food staples in developing countries. They appropriately charge the narrow, money-motivated intellectual property rights (IPR) licensing policies of advanced research universities with causing part of the problem. And they offer promising strategies whereby the public sector could do a better job of managing its IPR to generate both public goods and income from its research.

However, poor management of IPR is only one of the ways in which the public sector is handing over control of this technology to the big multinational corporations. Increasingly onerous and expensive biosafety regulations are also a major cause. In the United States, the cost of obtaining regulatory approval for a new crop variety with a transgenic event can easily reach $30 million. Even the big companies are abandoning research programs for which the size of the market does not warrant this level of investment. Small seed and biotechnology companies are essentially priced out of the market unless they partner with the multinationals, and the public sector may be left out as well. If developing countries put in place biosafety regulations that are equally onerous, they too are likely to find themselves highly dependent on multinational corporations as their primary source of advanced new crop varieties. As with IPR, the public sector needs to find better and less expensive ways of addressing legitimate regulatory concerns, if it is to continue to play an important role in producing new crop varieties for the hundreds of millions of small-scale farmers who will not be served by the big companies.

GARY H. TOENNIESSEN

Director, Food Security

The Rockefeller Foundation

New York, New York


I read with interest Patrice Laget and Mark Cantley’s “European Responses to Biotechnology: Research, Regulation, and Dialogue” (Issues, Summer 2001). In particular, I note the comment that critical and apprehensive spectators can generate “what if” questions faster than they can be answered.

But this is absolutely right. It is essential that processes remain open to question and debate. The public attitude toward genetically modified organisms (GMOs) has shifted incrementally since the first releases in the United Kingdom, raising questions not only about safety, moral, and ethical concerns; the right to consumer choice; and the apparent speed of advancement toward commercialization, but also about whether this technology offers real benefits.

In the United Kingdom, we feel it is time for public debate. We already have a new directive governing the release of GMOs, strengthening and clarifying the existing rules and increasing openness and transparency. It is time to build on this further and consider the many questions outstanding. These include not only a reassessment of the risks in light of new scientific thinking and the implications of the latest research, but also the provision of consumer choice through comprehensive labeling and traceability requirements and workable thresholds for the adventitious presence of GM material. We will need rules covering the cultivation of GM crops, incorporating the separation distances necessary to permit the coexistence of different types of agriculture, as well as strong liability regimes to protect those adversely affected. Strict rules on seeds must also be considered.

It is only right that regulatory mechanisms be open to development and improvement, in order to remain not only highly effective but also trusted. A science-based regulatory regime can only function within the wider context, in which the issues of morals and ethics must also be taken into account. In this respect, we are leading the way in addressing the public’s questions and have set up the Agriculture and Environment Biotechnology Commission especially to consider these issues. In the United States, there is also increasing public awareness of GMOs, and it is important to be unafraid in answering the questions that may be raised or in reassessing existing systems. As an indication of this, the U.S. Food and Drug Administration has already circulated draft guidance for voluntary labeling of GM food.

In the United Kingdom, the uncertainty regarding GMOs runs deep, and we are only at the beginning of the process of addressing all the issues involved. Doing so properly will take time, but it will build a firm foundation on which GMOs can be used safely while preserving freedom of choice. Biotechnology comprises many enabling technologies, only one branch of which uses genetic modification as its core. This in turn will be only one component of a program of sustainable development for agriculture, which the United Kingdom and indeed the world must now address, and the role of GM technology in that sustainable development remains to be assessed.

MICHAEL MEACHER

Member of Parliament

Minister for the Environment

Department for Environment, Food, and Rural Affairs

London, England


Biological invasions

The spread of invasive species is, together with climate change, one of the most serious global environmental changes underway. In “Needed: A National Center for Biological Invasions” (Issues, Summer 2001), Don C. Schmitz and Daniel Simberloff argue that current responses are “highly ineffective.” I could not agree more.

The interagency National Invasive Species Council laid out the federal government’s first invasive species management plan in January 2001. Less than 12 months later, the council estimates that agencies are already six to eight months late in implementing it. In the absence of timely federal leadership, states are fending for themselves. As a result, the policy response that Schmitz and Simberloff describe as “fragmented and piecemeal” is becoming more so.

Clearly, now is the time for bold ideas. Schmitz and Simberloff present one. They make a powerful case for a national center to coordinate efforts. This kind of coordination would be a major step forward, but it will not by itself solve our problems. Bad or overly lax policy, perfectly coordinated, is no solution. Additional ideas deserve a hearing.

Making the National Invasive Species Act (NISA) live up to its name is another bold idea, and one of the best. We must address invasive species more comprehensively. This means filling gaps in law, regulation, programs, and funding. For example, we need more resources to manage the relatively neglected nonagricultural invaders. We need to ensure that all intentionally imported species are effectively screened for invasiveness before import and that those known or highly likely to be harmful are kept out. In the long run, we will need new and more helpful legislation. Now, though, the 2002 reauthorization of NISA gives us a chance to improve efforts considerably.

Although useful, NISA addresses only a subset of invasive species problems: largely those related to organisms that arrive inadvertently in ships’ ballast water. Also, NISA affects only a portion of international ship traffic. Its toughest requirements apply to ships with just a few destinations.

To its credit, NISA set in motion a series of policy experiments and technological innovations. We should strengthen this approach and apply it to more of the routes by which aquatic species travel. For example, mandatory ballast water management should replace the voluntary program, which has a woefully inadequate rate of compliance. States should receive additional help for implementing their own management plans and making them more complete. We should better address the potentially devastating impacts of intentional aquatic introductions, especially those by the aquarium, aquaculture, and nursery industries.

In many ways, invasive species policy is in its infancy. As Schmitz and Simberloff show, the time is ripe to borrow the best and brightest ideas from other areas of environmental protection. Myriad possibilities remain untapped. Now we must be bold enough to try them.

PHYLLIS N. WINDLE

Senior Scientist, Global Environment Program

Union of Concerned Scientists

Washington, D.C.

Revamping the CIA

The terrorist attacks have once again exposed wide-ranging flaws in the agency’s operations.

One week after the terrorist attacks on the Pentagon and the World Trade Center, national security adviser Condoleezza Rice told the press: “This isn’t Pearl Harbor.” No, it’s worse. Sixty years ago, the United States did not have a director of central intelligence and 13 intelligence agencies with a combined budget of more than $30 billion to produce an early warning against our enemies.

There is another significant and telling difference between Pearl Harbor and the September 11, 2001, attacks: Less than two weeks after Pearl Harbor, President Franklin D. Roosevelt appointed a high-level military and civilian commission to determine the causes of the intelligence failure. After the recent attacks, however, President George W. Bush, Director of Central Intelligence George Tenet, and, surprisingly, the chairmen of the Senate and House intelligence committees adamantly opposed any investigation or post mortem. Sen. Bob Graham (D-Fla.), chair of the Senate Select Committee on Intelligence, said it would not be “appropriate” to conduct an investigation at this time; his predecessor, Sen. Richard Shelby (R-Ala.), agreed that any investigation could wait another year. The President’s Foreign Intelligence Advisory Board normally would request such a study, but the board currently has only one member, because the president has not yet replaced members whose terms have expired. The president’s failure to appoint a statutory inspector general at the Central Intelligence Agency (CIA) deprives the agency of the one individual who could have requested an investigation regardless of the CIA director’s views. Overall, the unwillingness to conduct an inquiry increases the suspicion that there may have been indicators of the attacks that went unheeded.

The failure to anticipate the attacks is merely the latest in a series of CIA failures during the past 10 years. The CIA spent nearly two-thirds of its resources on the Soviet Union but did not foresee the Kremlin’s collapse. Yet there was no investigation or post mortem of what went wrong in the CIA’s directorate of intelligence, nor were there major changes in the CIA’s analytical culture.

There was also the incredible but true saga of Aldrich Ames, the CIA officer who spied for the Soviet Union and the Russian Federation for nearly a decade, flaunting his KGB-supplied wealth and betraying the entire U.S. spy network inside Moscow. The Ames saga did lead to a 1994 study of the CIA’s clandestine culture that concluded, in the words of then-director James Woolsey, “It is a culture where a sense of trust and camaraderie within the fraternity can smack of elitism and arrogance.” A year later, in fact, then-director John Deutch learned that the CIA payroll included a Guatemalan colonel implicated in the murder of a U.S. citizen and, as a result, initiated efforts to reform the directorate of operations and to remove the thugs from the payroll. Predictably, the old boy network rallied in the name of the directorate and tried to stymie Deutch’s efforts.

Demilitarize intelligence gathering

Previous directors, particularly Deutch and Robert Gates, have done great harm to the CIA and the intelligence community by deemphasizing strategic intelligence for use in policymaking and catering instead to the tactical demands of the Pentagon. The CIA began to produce fewer national intelligence estimates and assessments that dealt with strategic matters and placed its emphasis on intelligence support for the war fighter. Gates, moreover, ended CIA analysis of key order-of-battle issues in order to avoid tendentious analytical struggles with the Pentagon; Deutch’s creation of the National Imagery and Mapping Agency (NIMA) at the Department of Defense (DOD) enabled the Pentagon to be the sole interpreter of satellite photography. This is particularly important because the Pentagon uses imagery analysis to justify the defense budget, to gauge the likelihood of military conflict around the world, and to verify arms control agreements. In creating NIMA, Deutch abolished the CIA’s Office of Imagery Analysis and the joint DOD-CIA National Photographic Center, which often challenged the Pentagon’s analytical views.

In its short history, NIMA has been responsible for a series of major intelligence disasters, including the failure to predict Indian nuclear testing in 1998, the bombing of the Chinese embassy in Belgrade in 1999, and more recently the exaggeration of the missile programs in North Korea and Iran. The failure to anticipate and record Indian nuclear testing stemmed from the Pentagon’s downgrading of South Asian intelligence collection and DOD’s low priority for counterproliferation. Open sources did a far better job of predicting the nuclear tests than did the U.S. intelligence community. To make matters worse, CIA Director Tenet told the Senate that the CIA could not monitor and verify the Comprehensive Test Ban Treaty and, for the first time in 80 years, the Senate failed to ratify a major international treaty.

The bombing of the Chinese embassy was attributed to the faulty work of NIMA as well as the inability of the CIA to conduct operational targeting for the Pentagon. Consequently, when the crew of a U.S. B-2 Stealth bomber skimmed over Yugoslavia and dropped three bombs on a building in downtown Belgrade, it actually believed that it had made a direct hit on the country’s arms procurement headquarters. Instead, three people were killed and 20 wounded, creating a diplomatic crisis with Beijing and key members of the NATO coalition. The CIA had never been responsible for operational targeting before, and as a result of the Belgrade disaster, Tenet has made sure that the agency stays out of the targeting business.

Leaving imagery analysis in the Pentagon’s hands allows the military to exaggerate strategic threats to the United States. Throughout the Cold War, military intelligence consistently exaggerated Soviet strategic power, particularly the quantity and quality of Soviet strategic forces and the capabilities of key weapons systems. The Air Force was particularly guilty of exaggerating Soviet missile forces, presumably in order to gain additional resources for U.S. missile deployment. At the same time, the uniformed military was not enamored with the intelligence capabilities of satellite photography and such surveillance aircraft as the U-2, and if it had not been for lobbying by the CIA and civilian scientists, the United States would not have had access to such technology until much later. When the CIA tried to create its own Foreign Missile and Space Analysis Center in 1963 to provide detailed intelligence information on offensive missile systems, senior Air Force generals unsuccessfully tried to stop it.

New intelligence priorities

Although the collapse of the Soviet Union and its Eastern European empire fundamentally altered the strategic environment, there has been no major effort to redefine U.S. national security and intelligence needs. The Soviet collapse created new areas of instability and policy challenges in the Caucasus, central Asia, and southeastern Europe, where the United States and the intelligence community possess few intellectual resources. And nontraditional security problems, which will define U.S. policy choices in the 21st century, have been given short shrift. These problems include water scarcity in the Middle East, social migration caused by coastal flooding in South Asia, infectious diseases in Africa and Russia, and contamination caused by nuclear and chemical weapons stored in the former Soviet Union.

The nontraditional national security problems that confront the United States could give the CIA a competitive advantage because of its data storehouse on oil reserves, demographics, and water supply. The CIA is in a position to provide information on a variety of environmental issues, using baseline data from satellite photography documenting global warming, ozone depletion, and environmental contamination. Spy satellites already provide key environmental data on Earth’s diminishing grasslands, forests, and food resources. Yet the CIA has not been forthcoming with its data, and the only politician who has ever made a serious effort to obtain such data and analysis–former vice president Al Gore–is on the sidelines. To make matters worse, there is a satellite sitting on the ground that is designed to collect such data, but the Bush administration will not pay to launch it.

The major intelligence collection agencies–NIMA, NSA, and NRO–must be removed from military control.

With the proliferation of international peacekeeping missions, the intelligence community is a natural resource for providing political and military data to peacekeepers in places such as Afghanistan, Bosnia, Cambodia, and Somalia. The CIA should have assisted the United Nations (UN) monitoring programs in Iraq rather than running its own operations against Saddam Hussein. War crimes tribunals also require funds and expertise for collecting data on political and military officials, which would be a less difficult task if the political and biographic assets of the CIA could be used. And it is unlikely that global institutions such as the International Atomic Energy Agency can successfully monitor strategic weapons production in North Korea, Iraq, and Pakistan without support from the CIA.

Unfortunately, the CIA has shown little inclination to take on these tasks. Woolsey was lukewarm at best to the idea of sharing intelligence with international agencies. Deutch was stubbornly opposed to providing information to the UN, even though it would have been helpful in peacekeeping situations. And current director Tenet also does not have much interest in these activities.

Problems with covert action

There is no absolute political and ethical guideline delineating when to engage in covert action. However, Cyrus Vance, secretary of state in the Carter administration, articulated a standard two decades ago when he recommended covert action only when “absolutely essential” to the national security of the United States and when “no other means” would do. The CIA observed this standard in the breach when it placed world-class criminals such as Panama’s General Manuel Noriega, Guatemala’s Colonel Julio Alpirez, Peru’s intelligence chief Vladimiro Montesinos, and Chile’s General Manuel Contreras on its payroll. The CIA’s favorite “freedom fighter” in Afghanistan in the 1980s, Gulbuddin Hekmatyar, was also the country’s chief drug lord.

In addition to playing a role in overthrowing the democratically elected government of Chile in the 1970s, the CIA hired and protected Contreras despite his involvement in assassination plots in South America and the United States, including the car bombing in the nation’s capital of former Chilean Ambassador Orlando Letelier and his U.S. associate, Ronni Karpen Moffitt. Recently released documents demonstrate that the CIA placed Contreras on its payroll despite its acknowledgement that he was the “principal obstacle to a reasonable human rights policy” in Chile.

These unsavory assets had nothing to do with the collection of sensitive intelligence but were important to the CIA for the conduct of covert actions in South America that usually were counterproductive to the interests of the countries involved as well as to the United States. Montesinos, for example, was responsible for two decades of human rights abuses in Peru. Yet the CIA helped him flee the country in September 2000 to avoid standing trial for crimes that included the massacre of innocent civilians in the early 1990s. The CIA station in Amman approved an arms deal between Jordanian officials and Montesinos, although he was involved in a 1998 transfer of arms from Jordan to leftist guerrillas in Colombia, perhaps Washington’s most notorious enemies in Latin America. There is probably no stronger evidence of the ineptitude of the CIA’s directorate of operations.

We learned in 1999 that the United States and the CIA used the cover of the UN and the UN Special Commission (UNSCOM) to conduct a secret operation to spy on Iraqi military communications as part of an effort to topple Saddam Hussein. Neither the UN nor UNSCOM had authorized the U.S. surveillance, which Hussein cited as justification for expelling the UN operation. As a result, the most successful effort to monitor and verify Iraq’s nuclear, chemical, and biological programs was lost, and the credibility of multilateral inspection teams around the world was compromised.

Separating intelligence and operations

Any reform of the role and missions of the CIA must recognize that the agency performs two very different functions. The CIA’s clandestine operations, particularly covert action, are part of the policy process. Yet when paid agency assets are also the sources of intelligence reporting, the finished reports may be seriously flawed. The CIA’s covert operations are approved and often designed by the White House and the State Department to support specific policies. The Bay of Pigs in 1961, which the inspector general of the CIA described as the “perfect failure,” and Iran-Contra in the 1980s, which violated U.S. law, demonstrated the ability of the directorate of operations to corrupt the analysis of the directorate of intelligence.

The CIA’s intelligence analysis, including national estimates and current reporting, must provide both an objective exploration of the situation for which policy is required and an impartial assessment of alternative policy options. Intelligence should play a role in setting the context for policy formulation, but it should never become an advocate for a specific policy. CIA Director William Casey and his deputy for intelligence, Robert Gates, slanted intelligence reporting in the 1980s to support operational activity in Central America and southwest Asia. In his memoirs, former Secretary of State George Shultz charged that the CIA’s operational involvement “colored” the agency’s estimates and analysis. The CIA’s distortion of Soviet strategic policy skewed the public debate on the Star Wars program in the 1980s, and similar distortions of the strategic capabilities of so-called rogue states have factored into the debate on national missile defense.

The decline of wizardry

During the worst days of the Cold War, the strategic position of the United States was enhanced by the scientific and technological successes of the CIA, which designed and operated some of our most important spy satellites as well as the U-2 spy plane. The CIA was heavily involved in the collection of signals intelligence and helped pioneer the technical analysis of foreign missile and space programs. Secret CIA installations eavesdropped on Soviet missile tests and gathered intelligence that was crucial to the success of arms control negotiations in the 1970s and 1980s. As a result, the CIA had advance knowledge of every Soviet strategic weapons system and up-to-date intelligence on the capabilities of these systems.

Unfortunately, the technological frontier has moved from Langley, Virginia, to Silicon Valley, and as a result, the CIA has lost much of its technological edge. In 1998, the CIA abolished its Office of Research and Development (ORD), which had been responsible for much of the agency’s success in the fields of technical collection and analytical intelligence. The CIA will no longer be on the cutting edge of advanced technology in the fields of clandestine collection and satellite reconnaissance and will be heavily dependent on the technology of outside contractors. ORD led the way in major breakthroughs in the area of overhead reconnaissance, including optics and imagery interpretation, which presumably are paying dividends in Afghanistan. Previous ORD technology, such as sophisticated facial recognition, will help in the war against terrorism but only if that technology is shared with the Immigration and Naturalization Service (INS), the FBI, and the Drug Enforcement Administration.

In addition to the weakening of the CIA in important areas of science and technology, the National Security Agency (NSA), which is responsible for collecting and interpreting signals and communications intelligence from around the world, has been weakened by a series of management decisions that have created serious problems. The NSA has been caught off guard by a series of new communications technologies that have compromised its intercept capabilities, including fiber optic cables that cannot be tapped, encryption software that cannot be broken, and cell phone traffic that is too voluminous to be processed. There is no question that a managerial revolution needs to take place throughout the intelligence community.

A new intelligence infrastructure

What the CIA and the intelligence community should be, what they should do, and what they should prepare to do are all less clear than at any time since the end of World War II and the beginning of the Cold War. Throughout the Cold War, the need to count and characterize Soviet weapons systems against which U.S. forces might find themselves engaged, as well as the search for indications of surprise attack, focused the CIA’s efforts. Such clarity disappeared with the fall of the Soviet Union. The following steps are needed in order to design an intelligence infrastructure to deal effectively with the new and emerging national security problems.

Demilitarize the intelligence community. The mismatch between the tools of the past and the missions of the future has given rise to an increased militarization of the various intelligence agencies and an excessive reliance on CIA support for the war fighter. It is essential that the major intelligence collection agencies–NIMA, NSA, and the National Reconnaissance Office (which designs spy satellites), with their collective budget of at least $10 billion–be taken from DOD and transferred to a new office that reports to the director of central intelligence. This move would allow more leeway for spending the intelligence budget on analysis and sharing of information gathered by satellites, rather than the current emphasis on building satellites and other data collectors. According to press reports, retired general Brent Scowcroft, who is conducting a comprehensive review of the intelligence community for President Bush, favors such a transfer of authority, but Secretary of Defense Donald Rumsfeld and high-ranking members of the Senate Armed Services Committee oppose it.

Revive oversight. The decline of the CIA during the past decade coincides with reduced oversight of the intelligence community by the Senate and House intelligence committees. Beginning with the chairmanship of Senator Shelby in 1994, the Senate committee has become less effective in providing oversight and in advancing much-needed reform. It is unusual to have more than two or three senators present at any given time, even at important hearings, and Senate committee members are limited to an eight-year term. (The House has a six-year term limit.) The number of open intelligence oversight hearings has dropped significantly, as has the number of nongovernmental witnesses invited to testify. Because the authorization bill for the intelligence community is embedded in the defense budget, the Senate Armed Services Committee is able to significantly modify the authorizations of the intelligence committee. The system worked when former Senators Sam Nunn and David Boren, who were close colleagues, chaired the armed services and intelligence committees, respectively, in the 1980s, but it broke down in the 1990s. The House intelligence committee chair, Rep. Porter Goss (R-Fla.), is a former CIA case officer who has acted as an advocate for the intelligence community rather than as a reformer.

Unless the CIA’s operational activity is separated from its analytical work, there will be a continued risk of tainted intelligence.

There has also been an astonishing exchange of personnel between intelligence committee staffs and the agencies they oversee. Tenet and his chief of staff formerly served as the majority and minority staff chiefs, respectively, of the Senate intelligence committee. Other staff members went on to serve in a variety of other CIA posts: inspector general, chief of the legislative counsel’s office, chief of the Foreign Broadcast Information Service, deputy director of the Counter-Proliferation Center, and director of resource management for the directorate of operations. The current head of the NRO and the NRO’s inspector general both came from the Senate intelligence committee, as did the deputy director of intelligence programs at the National Security Council. It is unprecedented for one congressional committee to supply staff to so many senior positions at a major executive agency, which raises a disturbing question: Who will oversee the overseers?

Reduce covert action. Covert action could be radically reduced without compromising national security. CIA propaganda has had little effect on foreign audiences and should end immediately. The CIA should never be allowed to interfere in foreign elections.

Many problems that have been considered candidates for covert action were ultimately addressed openly by unilateral means or cooperatively through international measures, both of which are preferable to clandestine operations. Nuclear proliferation problems created by missile programs in Iraq and North Korea in the 1990s led to congressional calls for covert actions, but in both cases overt multilateral activity with the United States in a pivotal role contributed to denuclearization. The U.S. military was successfully involved in secret denuclearization of the former Soviet Union, clandestinely removing strategic weapons and nuclear materials from Georgia, Kazakhstan, and Moldova in the 1990s.

Separate operations and analysis. It is time to debate whether it is preferable to separate the CIA’s operational activity from its analytical work or continue running the risk of tainted intelligence. The issue is one of advocacy, ensuring that the provider of intelligence is not in a position to advance its own point of view in the policy process. The CIA’s heavy policy involvement in the war on terrorism will certainly call into question the worst-case views of the directorate of intelligence on terrorist threats at home and abroad.

Because there are few institutional safeguards for impartial and objective analysis, the intelligence community ultimately depends on professional personnel of the highest intellectual and moral caliber. Yet Walter Lippmann reminded us more than 70 years ago that it is essential to “separate as absolutely as it is possible to do so the staff which executes from the staff which investigates.” If Washington is serious about “reinventing government,” Lippmann’s admonition is a good place to start for the intelligence community.

The intelligence directorate has become far too large and unwieldy and, because of its failures during the past decade, is permeated with the fear of being wrong or second-guessed. Hiring smarter, more informed people would help. In recent years, the CIA’s rigorous security standards have often filtered out analysts who have traveled and lived abroad and have collegial relations with their foreign counterparts. Not surprisingly, the intelligence directorate thus lacks people with the language skills and the regional expertise needed for dealing with today’s intelligence challenges.

The operations directorate also needs to be revamped. Its modus operandi is based on placing relatively junior people abroad, working out of U.S. embassies with State Department cover. Yet the directorate will not be able to substantially increase the amount of crucial information it collects unless it is willing to take greater risks by assigning experienced people abroad without diplomatic cover. Only then would intelligence personnel have the wherewithal to encounter the unsavory people who threaten our interests. In addition, the operations directorate must rely more heavily on foreign liaison services that have access to sensitive intelligence on terrorism and criminal activities abroad. Doing so would allow the CIA to concentrate clandestine collection efforts on countries where no access currently exists, such as Somalia, Sudan, and Yemen.

Just as the U.S. military could be used to perform clandestine actions in wartime, State Department foreign service officers could collect intelligence more effectively than their clandestine counterparts. However, recent budget cuts have seriously eroded the department’s capabilities. At the same time, the demands of an unstable and fractious world have created additional demands on the department, which must supply an ambassador and staff to 192 independent countries. Because of budget cutbacks, the department has been forced to close important posts in Zagreb, Medellin, Lahore, Alexandria, and Johannesburg, to name just a few, and has had to post political amateurs with deep pockets to key embassies in Europe and Asia. The staffs of most of these embassies could collect intelligence openly and less expensively than could their CIA counterparts, freeing the agency to concentrate on the collection of intelligence on terrorist networks, technology, and weapons of mass destruction in closed areas. One of the CIA’s first and most prestigious directors, Allen Dulles, emphasized that “the bulk of intelligence can be obtained through overt channels” and that if the agency grew to be a “great big octopus” it would not function well. The CIA has about 16,000 employees–more than four times as many as the State Department.

Increase intelligence sharing. The CIA must strengthen links across the intelligence community in order to share intelligence. Today, information tends to move vertically within each of the 13 intelligence agencies instead of horizontally across them. The CIA’s emphasis on the compartmentalization of intelligence and the need to know also serve as obstacles to intelligence sharing. In addition, the CIA must become more generous in sharing information with organizations that will be on the front lines in the war against terrorism, including the INS, the Federal Aviation Administration, the Border Patrol, and the Coast Guard.

The intelligence community, particularly the CIA, faces a situation comparable only to that of 55 years ago, when President Truman created the CIA and the National Security Council. As in 1947 and 1948, the international environment has been fundamentally recast and the threats have been fundamentally altered. The institutions created to fight the Cold War must be redesigned. This is exactly the task that the new FBI director, Robert Mueller, has established for himself and his agency, and a failure to do so at the CIA could mean a repeat of the intelligence failures of September 11, 2001, and an additional erosion of CIA credibility. A reconstituted directorate of operations and directorate of intelligence could be the linchpin of a reform process that will restore a central and valued role to intelligence in the making of national security policy.


Melvin A. Goodman ([email protected]) is professor of international security at the National War College, senior fellow at the Center for International Policy, and author of The Phantom Defense: America’s Pursuit of the Star Wars Illusion (Praeger, 2001) and The Wars of Eduard Shevardnadze (Brassey’s, 2001). From 1966 to 1990, he was senior Soviet analyst at the CIA and the Department of State’s Bureau of Intelligence and Research.

From the Hill – Winter 2002

Marburger confirmed as science advisor; OSTP moves questioned

New science advisor John H. Marburger III, confirmed by the Senate on October 23, has taken steps that have increased anxiety among some members of the science community about the Bush administration’s interest in science.

Marburger, who will be director of the Office of Science and Technology Policy (OSTP) but who will not have the position of assistant to the president as previous advisors did, said he would eliminate two of OSTP’s four associate directors. The environment and national security positions will be incorporated into either the science or the technology directorate.

Science policy experts said that with the new war on terrorism, Marburger should be forging ties between the scientific and national security communities. In addition, dropping the environment directorate, they said, reinforces the view that the administration has little interest in issues such as climate change.

Some members of the science community are also concerned about the nomination of Richard Russell, currently OSTP chief of staff and a former staff member of the House Science Committee, to serve as chief of the technology directorate. Unlike most of his predecessors, Russell does not have an advanced science degree or extensive industry experience.

“This is not an academic appointment, and dealing with academic aspects of technology is only part of what we do,” said Marburger, according to a report in the November 2, 2001, Science. In the same article, Marburger, formerly the director of Brookhaven National Laboratory, rejected speculation that the White House dictated the changes at OSTP. In addition to Russell, Kathie Olsen, science advisor at the National Aeronautics and Space Administration, has been nominated to head the science directorate.

In testimony at his October 9 confirmation hearing before the Senate Commerce Committee, Marburger highlighted the economic and national security challenges facing the United States and provided insights into how these challenges could affect the scientific community. He opened his remarks by reinforcing the importance of science and technology (S&T), stating that it has “provided us with increased security, better health, and greater economic opportunity and will continue to do so for many generations to come.” However, he said, “We must make important choices together, because we have neither unlimited resources nor a monopoly of the world’s scientific talent. While I believe that we should seek to excel in all scientific disciplines, we must still choose among the multitudes of possible research programs. We must decide which ones to launch, encourage, and enhance and which ones to modify, reevaluate, or redirect in keeping with our national needs and capabilities.”

Marburger outlined “grand challenges” in four areas: national security, environment, health care, and education. He noted that S&T could assist in developing innovative technologies and vaccines as well as traditional weapons for U.S. soldiers. He said that scientific advances hold promise for the “creation of a sustainable future in which our environmental health, our economic prosperity, and our quality of life are mutually reinforcing.” And he pointed out that genetic medicine offers the “greatest hope” but also raises important ethical, legal, and social issues. Although the United States should pursue the latest technologies, he said, we should also be sure to “incorporate our oldest and most cherished human values.” Marburger also noted that achieving diversity in the S&T workforce presents a “formidable challenge.”

Rep. Sherwood Boehlert (R-N.Y.) praised Marburger’s scientific and administrative experience. “No one can spend time around Jack Marburger without being impressed,” he said. “He is thoughtful, articulate, and straightforward–traits all too rare around this town.” He noted the important role that the science advisor would play to “marshal our public and private research resources in service of the effort to protect our citizens and prosecute the war against terrorism.”

Indeed, Marburger’s first test in the realm of science policy will be how well he incorporates scientific research and education into terrorism-related programs. He was immediately tapped to work with the Foreign Terrorist Tracking Task Force, part of the new Homeland Security Council, to address the issue of monitoring nonimmigrant student visitors and using innovative technologies to enforce immigration policies.

Colleges drop opposition to more tracking of foreign students

In the wake of reports that one or more of the terrorists involved in the September 11 attacks entered the United States on student visas, the academic community has reversed its staunch opposition to electronic tracking of foreign students at U.S. colleges. Universities are now emphasizing that their objections apply only to the fee structure associated with the new system, not the system as a whole.

Nevertheless, because of the importance of attracting international students to study in the United States and because of a controversial moratorium on student visas briefly proposed by Sen. Dianne Feinstein (D-Calif.), concerns about the issue remain.

Currently, colleges and universities are required to keep track of information such as program end date, field of study, credits completed per semester, and student employment. Schools must provide this information to the Immigration and Naturalization Service (INS) upon request, but the system is paper-based, meaning that requiring frequent reports of such information would generate a huge quantity of paperwork.

Feinstein has strongly criticized the current system. “Today, there is little scrutiny given to those who claim to be foreign students seeking to study in the United States,” she said in a September 27 press release. “In fact, the foreign student visa program is one of the most unregulated and exploited visa categories.”

Education groups dispute this claim, arguing that the existing tracking requirements, scant as they may be, still place foreign students among the most scrutinized groups of temporary visa holders. They point out that, according to INS statistics, only 1.8 percent, or 567,000, of the 31 million nonimmigrant visas issued in fiscal 1999 were issued to students.

The electronic tracking system under development by the INS is called the Student and Exchange Visitor Information System (SEVIS). Although the system was mandated by the 1996 immigration law, its implementation has lagged behind schedule because of a lack of funds. Congress intended the system to be supported by fees collected from visiting students, but a fee structure has yet to be worked out.

INS first proposed a fee structure in 1999 that would have required colleges and universities to collect a $95 fee from each foreign student. But university officials objected to this proposal on the grounds that it imposed an undue burden. They particularly criticized INS for proposing to collect fees before the tracking system was operational in order to cover the system’s development costs. The INS then proposed a fee collection system requiring students to pay directly before entering the country, but this proposal raised new questions about students’ ability to pay.

Several proposals have been made on the Hill to hasten the implementation of SEVIS. The antiterrorism USA-Patriot Act, signed into law on October 26, authorizes the INS to spend $36.8 million on the system through the end of 2002. With this money in hand, the INS would not have to collect student fees to pay for the system’s development, although fees would still be necessary to cover maintenance costs. The antiterrorism act also expands SEVIS to include students at flight, language, and vocational schools.

Although Feinstein has dropped her idea for a moratorium on student visas and says she is now confident that the education community will cooperate with implementation of SEVIS, Reps. Michael Bilirakis (R-Fla.) and Marge Roukema (R-N.J.) are not so sure. They have introduced a bill (H.R. 3221) that would impose a nine-month moratorium on student visas.

Several other legislative proposals are in the works. One is a bill (S. 1627) authored by Feinstein and Sen. Jon Kyl (R-Ariz.) that would require implementation of SEVIS by January 1, 2003; require the State Department to impose an application fee on anyone applying for a student visa in order to fund SEVIS; require quarterly reports to be filed by any university hosting foreign students; and prohibit anyone from terrorist-supporting states such as Iran, Iraq, Sudan, and Libya from obtaining a student visa.

A border security bill (S. 1618, H.R. 3205) proposed by Sens. Edward M. Kennedy (D-Mass.) and Sam Brownback (R-Kan.) and Rep. John Conyers (D-Mich.) includes sections that would expand data collection and reporting requirements under SEVIS and mandate periodic reviews by the INS of institutions certified to host foreign students.

Another bill (S. 1518, H.R. 3077), offered by Sen. Kit Bond (R-Mo.) and Rep. Michael Castle (R-Del.), would expand SEVIS to include information on any dependent family members accompanying a student.

The White House has also taken an interest in the issue. The newly created Homeland Security Council’s Foreign Terrorist Tracking Task Force has as one of its main goals a “thorough review of student visa policies.” The task force has been asked to “institute tighter controls and ensure that student visas are being issued appropriately.” A goal of the program is to prohibit the education and training of foreign nationals “who would use their training to harm the United States and its allies.”

President appoints science and technology advisors

On December 12, President Bush named the members of the President’s Council of Advisors on Science and Technology (PCAST), which has the job of providing the president and the Office of Science and Technology Policy with independent expert guidance on science and technology issues. Although PCAST has not been particularly influential in federal policy, appointment does carry prestige, and the choice of members provides some indication of the president’s priorities and interests.

In comparison with recent PCAST rosters, President Bush’s council is drawn more heavily from business, particularly the information technology sector, than from the laboratory. Even the university representatives are almost all administrators rather than researchers. In fact, Arizona State University plant biologist Charles J. Arntzen is the only full-time lab scientist.

The full PCAST roster is:

  • Charles J. Arntzen, chairman, department of plant biology, Arizona State University.
  • Norman R. Augustine, former chairman and chief executive officer, Lockheed Martin Corporation.
  • Carol Bartz, chairman and chief executive officer, Autodesk Inc.
  • M. Kathleen Behrens, managing director, Robertson Stephens & Company.
  • Erich Bloch, corporate research-and-development management consultant, Washington Advisory Group, and former National Science Foundation director.
  • Stephen B. Burke, president, Comcast Cable Communications.
  • Gerald W. Clough, president, Georgia Institute of Technology.
  • Michael S. Dell, chairman and chief executive officer, Dell Computer Corporation.
  • Raul J. Fernandez, chief executive officer, Dimension Data of North America.
  • Marye Anne Fox, chancellor, North Carolina State University.
  • Martha Gilliland, chancellor, University of Missouri at Kansas City.
  • Ralph Gomory, president, Alfred P. Sloan Foundation.
  • Bernadine P. Healy, former president and chief executive officer of the American Red Cross and former director of the National Institutes of Health.
  • Robert J. Herbold, executive vice president, Microsoft Corporation.
  • Barbara Kilberg, president, Northern Virginia Technology Council.
  • E. Floyd Kvamme, partner, Kleiner Perkins Caufield & Byers.
  • John H. Marburger III, director, White House Office of Science and Technology Policy.
  • Walter E. Massey, president, Morehouse College, and former National Science Foundation director.
  • Gordon E. Moore, chairman emeritus, Intel Corporation.
  • E. Kenneth Nwabueze, chief executive officer, SageMetrics.
  • Steven G. Papermaster, chairman, Powershift Group.
  • Luis M. Proenza, president, University of Akron.
  • George Scalise, president, Semiconductor Industry Association.
  • Charles M. Vest, president, Massachusetts Institute of Technology.

“From the Hill” is prepared by the Center for Science, Technology, and Congress at the American Association for the Advancement of Science (www.aaas.org/spp) in Washington, D.C., and is based on articles from the center’s bulletin Science & Technology in Congress.

The Ethanol Answer to Carbon Emissions

When the United States gets serious about the threat of global climate change, it should turn to ethanol to power cars.

The moment is fast approaching when the United States will have to face up to the need to reduce greenhouse gas emissions. The Intergovernmental Panel on Climate Change is finding growing scientific evidence that human activities are forcing a gradual warming of the planet, and recent international negotiations in Kyoto, Bonn, and Marrakech have demonstrated that the world’s political leaders are taking the threat more seriously. The United States opted out of the most recent rounds of negotiations, and many analysts have pointed out weaknesses in the Kyoto Protocol. Although there are large problems with the Kyoto agreement and its focus on unrealistic near-term targets for U.S. emissions reductions, the United States cannot ignore the need to reduce carbon emissions over the long term. The November 2001 Marrakech meeting concluded that the United States and other developed nations should reduce their emissions to 95 percent of the 1990 level in approximately 10 years. Although the United States rejected that goal, eventually it will have to reduce its emissions even further.

The United States is responsible for a quarter of the world’s total carbon emissions, and Americans’ per capita emissions are five times the world average. A major source of carbon emissions is the U.S. personal transportation system. Light-duty vehicles–cars, sport utility vehicles (SUVs), minivans, and other light trucks–are prolific CO2 emitters, producing 20 percent of total U.S. emissions.

The fuel economy of the average new vehicle has been declining because of the increasing market share of SUVs and other light trucks. Since 1990, gasoline consumption (which is proportional to vehicle CO2 emissions) has increased 19 percent because of the change in vehicle mix, an increase in the number of vehicles, and increases in vehicle miles traveled. If these trends continue, major reductions in CO2 emissions from cars and light trucks will be impossible; even attaining the Marrakech goals will be difficult.

A recent National Research Council (NRC) committee concluded that fuel economy could be improved by 55 percent through a series of small steps, without changing vehicle size or performance; the increased manufacturing costs would be offset by the fuel savings over the lifetime of the vehicle. However, the committee found that the gains from phasing in these more fuel-efficient vehicles would be counterbalanced by the increase in fleet size and the shift toward larger, more powerful vehicles. Thus, taking advantage of the technologically and economically feasible efficiency options would result in little or no reduction in gasoline consumption.

Americans are not unique in wanting large vehicles with powerful engines. High-income consumers in Europe and Japan also want these vehicles, despite gasoline prices of more than $4 per gallon and high taxes on fuel-hungry vehicles. Comparing European and U.S. fuel prices and average fuel economies, it appears that roughly tripling the price of fuel is associated with about a 30 percent increase in fuel economy. Assuming that the demand response is proportional to price, this experience suggests that a further tripling of price would be required to induce drivers to choose a vehicle mix that averaged 50 miles per gallon. We doubt that a democratic government would be able to increase fuel taxes to a level that would raise the price of gasoline to the $13 per gallon range. Since the early 1980s, low U.S. fuel prices have induced Americans to ignore efficiency in their vehicle choices. The federally mandated Corporate Average Fuel Economy (CAFE) standards have forced automakers to offer more small and fuel-efficient cars, but consumers have flocked to SUVs, minivans, and light trucks that have a less stringent CAFE standard. With these larger vehicles now accounting for more than half of new vehicle sales, the fuel economy of the average new vehicle has been declining. Even if we could reverse this trend, the NRC study indicates that it would still be difficult to achieve a substantial reduction in total gasoline use and carbon emissions. The only practical path to achieving significant emissions reductions is to find an alternative to gasoline as a fuel.
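
As a rough, back-of-the-envelope sketch of this extrapolation, the short calculation below simply applies the rule of thumb described above: each tripling of the fuel price is taken to bring about a 30 percent improvement in average fuel economy. The starting price of $1.50 per gallon, the baseline of 30 miles per gallon, and the assumption that the response repeats with each tripling are illustrative choices, not measured values.

    # Illustrative only: repeated triplings of the fuel price, each assumed to
    # yield roughly a 30 percent gain in average new-vehicle fuel economy.
    # The baseline price and fuel economy are assumptions chosen for illustration.
    price_per_gallon = 1.50   # assumed baseline U.S. gasoline price (dollars)
    fuel_economy_mpg = 30.0   # assumed baseline average fuel economy (mpg)

    for _ in range(2):        # two successive triplings
        price_per_gallon *= 3
        fuel_economy_mpg *= 1.3
        print(f"~${price_per_gallon:.2f} per gallon -> ~{fuel_economy_mpg:.0f} mpg")

    # Under these assumptions, the second tripling lands near $13.50 per gallon
    # and roughly 50 miles per gallon, the figures discussed above.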

Several choices

Three technologies have the potential to power motor vehicles with no net CO2 emissions. The first is batteries, where the electricity to charge the batteries comes from renewable energy or nuclear power. The second is fuel cells, where the hydrogen is produced by renewable energy or nuclear power. The third is an internal combustion engine using ethanol from cellulosic biomass that is grown and processed with no fossil fuels.

Battery-powered cars are expensive and a potential public health menace. To get even a 100-mile range, about 1,100 pounds of batteries are required for a two-passenger car. Making and recycling these batteries is expensive, leading to large increases in the cost of driving. Mining and smelting the heavy metals for the batteries, as well as making and recycling the batteries, would discharge large quantities of heavy metals into the air, water, and landfills. If the current U.S. fleet of 200 million vehicles were run on current lead acid, nickel cadmium, or nickel metal hydride batteries, the amount of these metals discharged to the environment would increase by a factor of 20 to 1,000, raising vast public health concerns. Unless there are major breakthroughs in electrochemistry, this is not an attractive strategy.

A great deal of attention has gone to fuel cells, which emit nothing but water vapor and are much more efficient than an internal combustion engine. Unfortunately, the current reality is that fuel cells are extremely expensive, and they cannot match the driving performance of current engines. Major technological breakthroughs are required to make fuel cells attractive for light-duty vehicles. The environmental implications of fuel cells cannot be known until we know what materials and processes will be used and how the hydrogen will be produced.

The technology that is most attractive and available is ethanol made from grasses and trees. Because 95 percent of the ethanol currently produced in the United States is made from corn, it is critical to understand how this process differs from what we are recommending. The corn kernel is only a small part of the corn plant and contains less than half of the plant’s total energy. Although the kernel is easier to process than the rest of the plant, its ethanol yield is small. The net energy from producing ethanol from corn is perhaps only 25 percent of the energy in the ethanol, with most of the energy used in processing coming from petroleum and natural gas. No one who set out to produce ethanol would first grow food. Only someone who was interested primarily in subsidizing U.S. farmers would consider corn as a source of energy.

The amount of biomass, and thus the amount of ethanol, that can be produced per acre of land is much greater than the amount of corn that can be produced. Furthermore, low-grade land unsuitable for producing corn can produce biomass. Corn requires fertilizer and pesticides, and often irrigation as well; other forms of biomass can thrive without these inputs. Current farming practices result in large soil losses; growing biomass would essentially end soil loss, because once the first crop was planted, the soil would almost never be uncovered. If the biomass itself is used to power the production process and make any fertilizer, no fossil fuels need be used. Net energy would be about 75 percent of the gross energy produced.

U.S. ethanol production from corn is cost-effective now because of a $0.55 per gallon tax subsidy and a high value for byproducts from the process. Dry milling of corn produces an animal feed supplement as a byproduct, and wet milling produces starch sweeteners, gluten feed and meal, and corn oil as byproducts. Sale of these byproducts is an important source of income, but the market for these byproducts is likely to be saturated once total annual ethanol production reaches 5 billion gallons. When the byproducts are no longer valuable, cellulosic ethanol is predicted to be cheaper to produce than ethanol from corn.

Ethanol production from lignocellulosic feedstocks is developing rapidly, with pilot and prototype full-scale plants under way. The main difference between corn processing and using cellulosic feedstocks is that the fermentable sugars in cellulose are more tightly bound. Freeing the sugars to permit fermentation requires more intense processing than is needed to remove starch from corn. However, an advantage of cellulose processing is that it also yields lignin, which can be burned to provide energy to run the process and to generate electricity that can be sold.

Growing corn, wheat, rice, and sugarcane produces large amounts of agricultural wastes, some of which are burned, degrading air quality. In the production of cellulosic ethanol, the bulk of the biomass would become a valuable source of energy rather than a waste product. In fact, municipal solid waste (MSW) includes a large volume of cellulosic material that has the potential to be converted to ethanol. Because this material would have to be separated from other parts of the MSW, it is more expensive than energy crops. However, cities in the Northeast such as New York and Philadelphia have paid as much as $150 per ton to dispose of their MSW. At this price, it would be worth sorting the MSW to remove the cellulose and other materials.

To grow enough biomass to enable ethanol to replace gasoline would require an enormous amount of land. To provide sufficient ethanol to replace all of the 130 billion gallons of gasoline used in the light-duty fleet, we estimate that it would be necessary to process the biomass growing on 300 million to 500 million acres, which is in the neighborhood of one-fourth of the 1.8-billion-acre land area of the lower 48 states. Most U.S. land is now grassland pasture and range (590 million acres), forest (650 million acres), or cropland (460 million acres). The remaining acreage is used for human infrastructure, parks and wildlife areas, and marsh and wetlands. The 300 million to 500 million acres could be supplied from high-productivity land (39 million acres of idled cropland), from land currently used to grow grain that is sold below production cost (approximately 45 million acres), and from pasture and forestland that are not associated with farms. No land from national parks, wilderness areas, or land for buildings, highways, or other direct human use would be required.
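
As a back-of-the-envelope check on these magnitudes, the sketch below (ours, using only the acreage figures cited above) totals the major land-use categories and computes what share of the lower-48 land area the 300 million to 500 million acres would represent.

```python
# Back-of-the-envelope check of the land-area figures cited in the text.
LOWER_48_ACRES = 1.8e9          # total land area of the lower 48 states

land_use = {                    # major land-use categories, in acres
    "grassland pasture and range": 590e6,
    "forest": 650e6,
    "cropland": 460e6,
}
other = LOWER_48_ACRES - sum(land_use.values())   # infrastructure, parks, wetlands, etc.

low, high = 300e6, 500e6        # acreage range estimated for energy crops
print(f"Land in the three major categories: {sum(land_use.values())/1e6:.0f} million acres")
print(f"Remaining for other uses:           {other/1e6:.0f} million acres")
print(f"Energy-crop share of lower 48:      {low/LOWER_48_ACRES:.0%} to {high/LOWER_48_ACRES:.0%}")
```

The energy-crop requirement works out to roughly 17 to 28 percent of the lower-48 land area, consistent with the "neighborhood of one-fourth" figure above.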

If the goal is to minimize environmental intrusion, the emphasis should be on using a very large land area with a diverse array of trees and grasses that would not have to be harvested annually. With current technology, trees would be harvested by means of common forestry practices, and grasses would be cut and baled like hay. Research is under way, and should continue, to develop harvesting practices for biomass feedstock that are economically effective and environmentally benign.

The United States is not the only country where ethanol production would make sense. Nations such as Brazil, Argentina, and Canada could grow enough biomass to produce ethanol for their own needs and for export. Indeed, Brazil has long produced ethanol as an auto fuel. Biomass ethanol could fuel a major proportion of the world’s automobiles, leading to a considerable reduction in CO2 emissions, making resource use more sustainable and reducing soil loss.

Possible objections

Although the United States could produce sufficient ethanol from energy crops to run all its cars and light trucks, this does not mean that it necessarily should do so. We need to look closely at the economic and environmental considerations.

The economic case against making a massive commitment to ethanol is that cellulosic ethanol currently costs too much to compete with gasoline. The refinery gate price of gasoline is about $0.80 per gallon; transportation, storage, and retailing add about $0.40 per gallon; and taxes raise the price at the pump to roughly $1.50 per gallon. Producing cellulosic ethanol costs about $1.20 per gallon ($1.80 per gallon, gasoline equivalent, since a gallon of ethanol has two-thirds of the energy of a gallon of gasoline). Assuming that the per-gallon distribution costs are the same for ethanol and holding total tax revenue constant, ethanol would sell for $1.80 per gallon at the pump. However, because it takes 1.5 gallons of ethanol to deliver as much energy as a gallon of gasoline, this is equivalent to $2.70 per gasoline-equivalent gallon. Technology improvements promise to reduce this cost, but it is unlikely to fall below the cost of producing gasoline.
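
The pump-price comparison can be reconstructed from the per-gallon figures above. The sketch below is our own reconstruction, not the authors’ calculation: it takes the gasoline tax of roughly $0.30 per gallon implied by the $0.80 + $0.40 + tax ≈ $1.50 breakdown, and it reads "holding total tax revenue constant" as taxing ethanol at the same rate per unit of energy.

```python
# Reconstruction of the gasoline vs. cellulosic-ethanol price comparison.
# Inputs inferred from the article's figures (our reading, not quoted from it):
GASOLINE_REFINERY = 0.80        # $/gal, refinery gate price of gasoline
DISTRIBUTION      = 0.40        # $/gal, transport, storage, and retailing
GASOLINE_TAX      = 1.50 - GASOLINE_REFINERY - DISTRIBUTION   # ~$0.30/gal implied
ENERGY_RATIO      = 2 / 3       # energy in a gallon of ethanol vs. a gallon of gasoline

ETHANOL_PRODUCTION = 1.20       # $/gal, cellulosic ethanol production cost
ethanol_tax  = GASOLINE_TAX * ENERGY_RATIO          # same tax per unit of energy (assumption)
ethanol_pump = ETHANOL_PRODUCTION + DISTRIBUTION + ethanol_tax
gasoline_equivalent = ethanol_pump / ENERGY_RATIO   # price per gasoline-equivalent gallon

print(f"Ethanol at the pump:       ${ethanol_pump:.2f} per gallon")         # ~$1.80
print(f"Gasoline-equivalent price: ${gasoline_equivalent:.2f} per gallon")  # ~$2.70
```

Under these assumptions the numbers close: $1.80 per gallon of ethanol at the pump, or $2.70 per gasoline-equivalent gallon.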

Motorists will not switch to ethanol unless the price of gasoline is at least $2.70 per gallon. Although this price will seem astronomical to U.S. drivers, it is actually much lower than the price that would be required to convince Americans to buy 50-mile-per-gallon cars or the price that Europeans and Japanese are paying now. If the United States decides that the motor vehicle sector must reduce its carbon emissions, it will be much easier to convince drivers to switch to ethanol fuel, even at $2.70 per gallon, than to convince them to drive smaller cars or cars with smaller engines.

Growing its own motor vehicle fuel would also make the United States less dependent on imported oil, which would enhance its political independence. It would be a gift to future generations to bequeath to them a stable fuel supply that is not subject to wars, civil unrest, or global politics. Farmers would be major beneficiaries. They could stop growing grains that fetch prices below their production costs. Growing energy crops would generate more than $100 billion in revenue to farmers and more than $80 billion in revenue to ethanol producers located within 30 miles of where the energy crops are grown. Since the workers hauling the energy crops to the ethanol plants and those working in the plants would be rural workers, this program would contribute $100 billion to $180 billion to rural and farm revenue.

Potentially more formidable than the economic barriers to ethanol would be the public reaction to using roughly a quarter of U.S. land for energy crops. Using land that is not currently being cultivated for crops is likely to raise the hackles of environmentalists and hunters, who will argue that the “natural” ecology is being destroyed. However, this milk has already been spilled. Little U.S. land has been spared from human activity. Almost all of the 460 million acres of cropland and the 590 million acres of grassland pasture and range have been altered from their native state as wetlands, forest, grassland, or other natural ecosystems.

The principal energy crops would be grasses such as switchgrass, which is a native prairie grass, and hybrid trees such as poplars or willows. A well-planned and thoughtful bioethanol program could return much of that land closer to its native state, enhancing the environment, as well as bringing the benefits of a renewable and sustainable fuel. Properly managed, the energy crops could help endangered species and enhance recreational opportunities. This proposal amounts to restoring much of the Great Plains to tall grasses. To be sure, the grass would be mowed annually, but there would still be plenty left to feed roaming bison, deer, and elk. Certainly, these grasslands and forests would create habitats for birds and other creatures, as well as land for hiking and other recreation. Providing environmental benefits such as these will be essential to making this fundamental shift in land use politically palatable.

Minimizing the land required to replace gasoline requires dense plantings of energy crops. Combinations of other plant species (whether used for energy or not) would provide species diversity, encouraging animal diversity. Native plants have already demonstrated that they can thrive without human inputs such as water and fertilizer. Strategic placement of plantings could provide habitat connectivity that extends a pressured species’ range. This more diverse approach would require more land than a strategy that emphasized planting only the most energy-rich species, but that might be a tradeoff that the U.S. public would be willing to make.

Staging the transition

If the United States decides to make the switch from gasoline to cellulosic ethanol, implementing that decision will pose formidable difficulties. The most appealing fuel appears to be E85, a mixture of 85 percent ethanol and 15 percent gasoline. Pure ethanol has an extremely low vapor pressure, which would make it difficult to start the engine on cold days. Adding a small amount of gasoline would overcome the problem.

E85 cannot be used in most of the cars on the road. Changes in the engine would be necessary. Congress already encourages the use of alternative fuels by giving a substantial fuel economy credit to flexibly fueled vehicles (FFVs). These light-duty vehicles, which cost about $250 more to manufacture, are capable of using gasoline-ethanol blends up to E85. The federal subsidies enable the manufacturers to sell FFVs for less than the cost of conventional vehicles. The attractive price has resulted in the sale of about four million FFVs (a combination of cars and light trucks), but almost none of them use E85 because the price is too high.

Before automakers would produce vehicles optimized for E85 and before customers would buy them, there would have to be a guarantee that there would be a substantial supply of this fuel universally available at an attractive price. Before large investments would be made in producing cellulosic ethanol, farmers and ethanol processors, distributors, and retailers would have to be assured that there will be a considerable demand for this fuel at a price that promises an attractive return. Thus, there is a chicken and egg problem: Which comes first, the investment in cellulosic ethanol or the investment in motor vehicles?

We think that increasing the supply of cellulosic ethanol should come first. All cars can use E10, a mixture of 10 percent ethanol and 90 percent gasoline. With a little modification, today’s vehicles could use fuel mixtures with up to 22 percent ethanol. Since the light-duty fleet uses 130 billion gallons of gasoline annually, mandating that all gasoline be E10 would require 13 billion gallons of ethanol, roughly six times the current production of fuel ethanol. If E22 were mandated, roughly 30 billion gallons of ethanol would be required. These are large levels of demand for a fledgling industry that produces almost no ethanol from cellulosic biomass at present.
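
The ethanol volumes implied by these blending levels follow directly from the size of the gasoline market. A minimal sketch (ours, using the 130-billion-gallon and 2-billion-gallon figures cited in this article):

```python
# Ethanol demand implied by blending mandates, using the article's figures.
GASOLINE_GALLONS = 130e9        # annual light-duty gasoline use, gallons
CURRENT_ETHANOL  = 2e9          # current annual fuel-ethanol production, gallons

for blend, share in (("E10", 0.10), ("E22", 0.22)):
    needed = GASOLINE_GALLONS * share
    print(f"{blend}: {needed/1e9:.0f} billion gallons of ethanol "
          f"({needed/CURRENT_ETHANOL:.1f}x current production)")
```

E10 alone would require about 6.5 times today’s fuel-ethanol output, and E22 roughly 29 billion gallons, consistent with the roughly 30 billion gallons cited above.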

Requiring that all gasoline be E10 is not feasible when current ethanol production is only 2 billion gallons. Regulators cannot simply order more ethanol to be added to gasoline. A more efficient way of increasing ethanol production would be a plan that increases the tax on gasoline in order to subsidize the production of ethanol. The tax and subsidy would be calculated to maintain the current government tax revenue and to subsidize ethanol by at least its cost premium of $0.80 per gallon ($1.20 per gallon of gasoline equivalent). The price structure could be set so that the higher the percentage of ethanol in the fuel mix, up to E85, the lower the price per gallon of gasoline equivalent. In that way, consumers would be likely to move immediately to E10, then to pay for modifications to their engines so that they could use E22, and eventually to buy FFVs so that they could take advantage of the savings from using E85 fuel. The transition would occur over time, so that producers, distributors, and service stations would have time to scale up and make other adjustments as demand grew. The government subsidy could be reduced gradually as the cost of producing ethanol declines because of advances in technology and economies of scale. However, since ethanol is likely to continue to be more expensive than gasoline, the cost of E85 will be higher than the cost of straight gasoline, unless there are higher taxes on gasoline.

The dynamics of increasing cellulosic ethanol production are important. Since there are no commercial cellulosic ethanol plants now, there is a great deal to be learned. By increasing production gradually, we will be able to learn from experience and quickly incorporate insights into the design of new plants that will be coming online steadily. This deliberate approach slows the increase in production but lowers costs. At some stage, the process technology would be optimized and new plants would be built in parallel to increase capacity rapidly.

The combination of the federal subsidy for ethanol production and the growth in demand would be a magnet for R&D and infrastructure investment. It would not be hard to generate private-sector interest in what could become a $200 billion industry.

One way of looking at the program is that in addition to achieving its primary goal of reducing greenhouse gas emissions, it would move resources used to protect our oil supplies to producing fuel at home, and it would eliminate the need for tens of billions of dollars a year in farm subsidies.

Looked at in strictly technocratic terms, this approach is compelling and far more reliable than the alternatives of switching to cars powered by electric motors or hydrogen-based fuel cells. However, two political contingencies overshadow any technological quibbles. The first is whether (and when) the United States will make a commitment to reduce greenhouse gas emissions. Without an agreement that the nation will commit extensive resources to achieving this goal, there is no chance that the public will support a massive subsidy for ethanol production. And unless the public is willing to consider an unprecedented change in the way that land is used, producing the biomass necessary for large-scale ethanol production would be impossible.

Although these contingencies are formidable, they shrink to manageable size beside the potential devastation that could accompany rapid global climate change. And although a transition to ethanol-based vehicles is a daunting challenge, it is actually less difficult to envision than a switch to a completely different power system for vehicles.

Recommended reading

Haroon S. Kheshgi et al., “The Potential of Biomass Fuels in the Context of Global Climate Change: Focus on Transportation Fuels,” Annual Review of Energy and the Environment 25 (2000): 199–244.

Charles E. Wyman, “Biomass Ethanol: Technical Progress, Opportunities, and Commercial Challenges,” Annual Review of Energy and the Environment 24 (1999): 189–226.


Lester B. Lave ([email protected]) is the James H. Higgins Professor of Economics and University Professor at the Graduate School of Industrial Administration at Carnegie Mellon University in Pittsburgh. W. Michael Griffin is the executive director of the Green Design Initiative at Carnegie Mellon. Heather MacLean is assistant professor of civil engineering at the University of Toronto.

Preparing for and Preventing Bioterrorism

Strengthening the U.S. public health infrastructure is the key to enhancing the nation’s safety.

The tragic events of September 11th, followed by the recent anthrax incidents, have made us painfully aware of our nation’s vulnerability to terrorism, including bioterrorism. Once considered remote, the possibility that a biological agent might be intentionally used to cause widespread panic, disease, and death is now a common concern. Whether the event involves an unsophisticated delivery system with a limited number of true cases, as we have seen with the current anthrax scare, or a carefully orchestrated attack with mass casualties, the prospects are frightening. As the United States mobilizes to address an array of overlapping foreign policy, infectious disease, and national security threats, it must make sure that a comprehensive program to counter and prevent bioterrorism ranks high on the priority list.

The threat of bioterrorism is fundamentally different from other threats we face, such as conventional explosives or even a chemical or nuclear weapon. By its very nature, the bioweapons threat, with its close links to naturally occurring infectious agents and disease, requires a different strategy. Meaningful progress against this threat depends on understanding it in the context of epidemic disease. It requires different investments and different partners. Without this recognition, the nation’s preparedness programs will be inadequate, and we may miss critical opportunities to prevent such an attack from occurring in the first place.

Biological terrorism is not a “lights and sirens” kind of attack. Unless the release is announced or a fortuitous discovery occurs early on, there will be no discrete event to signal that an attack has happened, and no site that can be cordoned off while authorities take care of the casualties, search for clues, and eventually clean up and repair the damage. Instead, a bioterrorism event would most likely unfold as a disease epidemic, spread out in time and place before authorities even recognize that an attack has occurred. That recognition would come only when people began appearing in doctors’ offices or emergency rooms with unusual symptoms or inexplicable disease. In fact, it may prove difficult to ever identify the perpetrators or the site of release–or even to determine whether the disease outbreak was intentional or naturally occurring.

The first responders to a bioterrorism event would be public health officials and health care workers. Unfortunately, in many scenarios, diagnosis of the problem may be delayed, because medical providers and labs are not equipped to recognize and deal with the disease agents of greatest concern. What is more, effective medical interventions may be limited, and where they exist, the window of opportunity for successful intervention would be narrow. The outbreak is likely to persist over a prolonged period–months to years–because of disease contagion or continuing exposure. The speed of recognition and response to an attack will be pivotal in reducing casualties and controlling disease.

Not only are biological weapons capable of causing extraordinary devastation, but they are relatively easy to produce, inexpensive, and capable of causing significant damage even when small quantities are delivered by simple means. In addition, information about how to obtain and prepare bioweapons is increasingly available through the Internet, the open scientific literature, and other sources. Opportunities for access to dangerous pathogens can be fairly routine; some of these organisms are commonly found in nature or are the subject of legitimate study in government, academic, and industry labs. Furthermore, bioweapons facilities can be hidden within legitimate research laboratories or pharmaceutical manufacturing sites.

Developing a response

Although there are enormous challenges before us, many of the elements of a comprehensive approach are relatively straightforward. Some of the necessary activities are already under way, though they may need to be expanded or reconfigured; other programs and policies still need to be developed and implemented.

Perhaps most fundamental to an effective response is the understanding that public health is an important pillar in the national security framework and that public health professionals must be full partners on the U.S. security team. In fact, the president should appoint a public health expert to the National Security Council, and Governor Ridge must include public health experts among his key staff in his new Office of Homeland Security.

Today, experts agree that there is an urgent need to increase the core capacities of the public health system to detect, track, and contain infectious disease. State and local public health departments represent the backbone of our ability to respond effectively to a major outbreak of disease, including a bioterrorist attack. Yet these public health agencies have never been adequately supported or equipped to fulfill this mission. In fact, many hesitate to call the array of health structures at the state, county, and local level a public health “system,” because years of relative neglect and underfunding have left them undercapitalized, fragmented, and uncoordinated.

Upgrading current public health capacities will require significantly increased and sustained new investments. First and foremost, this means providing resources to strengthen and extend effective surveillance systems that can rapidly detect and investigate unusual clusters of symptoms or disease. This will entail expanding and strengthening local epidemiologic capabilities, including trained personnel, and increasing laboratory capacity to rapidly analyze and identify biological agents. In addition, communication systems, including computer links, must be improved to facilitate collection, analysis, and sharing of information among public health and other officials at local, state, and federal levels. Beyond these critical domestic needs, successful strategies must also include a renewed commitment to improving global public health.

To improve detection, it is essential that physicians and other health care workers be trained to recognize unusual disease or clusters of symptoms that may be manifestations of a bioterrorist attack. This must also include strengthening the relationship between medicine and public health so that physicians understand their responsibility to report disease or unusual symptoms to the public health department. Physicians must know whom to call and be confident that their call will contribute to the overall goal of providing information, guidance, and support to the medical community. Health care professional organizations, academic medical institutions, and public health officials must come together to develop appropriate training curricula, informational guidelines, and most important, the working partnerships that are critical to success.

Those same partnerships will be very important in addressing another critical concern: the urgent need to develop emergency plans for a surge of patients in the nation’s hospitals. We must enhance systems to support mass medical care and develop innovative strategies to deliver both protective and treatment measures under mass casualty and/or exposure conditions, especially when there may be an additional set of very difficult infection-control requirements as well. This will require careful advance planning since most hospitals are operating at or near capacity right now. Systematic examination of local capabilities and how they can be rapidly augmented by state and federal assets must be part of this effort.

Federal health leadership will be important in this effort to define needs and provide model guidelines and standards; federal resources may also be essential to support planning efforts and to create the incentives necessary to bring the voluntary and private health care sector fully on board. However, the final planning process must be undertaken on the local or regional level, engaging all the essential community partners and capabilities. It is critical to remember that the front line of response, even in a national crisis, is always local. Thus, across all these domains of activity, we must make sure that we have adequate capacity locally and regionally, which can then be supplemented as needed.

Another important example of this involves access to essential drugs and vaccines. A large-scale release of a biological weapon may require rapid access to quantities of antibiotics, vaccines, or antidotes that would not be routinely available in the locations affected. Given that such an attack is a low probability and unpredictable event in any given place, it would hardly be sensible or cost effective to stockpile supplies at the local level.

As we ramp up our public health and medical capacity to respond to bioterrorism, we should continue to strengthen our national pharmaceutical stockpile so that vital drugs and equipment can be rapidly mobilized as needed. The federal Centers for Disease Control and Prevention (CDC) has the responsibility to maintain and oversee use of this stockpile, which currently represents a cache of supplies located in strategic locations across the country that can be delivered within 12 hours to any place in the nation. Current concerns make it clear that the nature and quantities of materials maintained in the stockpile must be enhanced, and the stockpile contents should be periodically reviewed and adjusted in response to intelligence about credible threats. New investments in the stockpile should also include contractual agreements with pharmaceutical manufacturers to ensure extra production capability for drugs and vaccines in a crisis, as well as heightened security at the various storage and dispersal sites.

Beyond simply having the drugs and vaccines available, we must develop plans for how those critical supplies will be distributed to those who need them. CDC needs to provide strong leadership and support for state and local health departments to undertake contingency planning for distribution. We must also think about the broader mobilization of essential drugs, vaccines, or other materials in the event that they are needed outside the United States. Although this may raise complex diplomatic issues, especially when the necessary pharmaceutical is in short supply, addressing potential global need is essential for political and disease-control reasons.

To make sure that the United States can remain strategically poised, further investments must be made in biomedical research to develop new drugs, vaccines, rapid diagnostic tests, and other medical weapons to add to the arsenal against bioterrorism. We must learn more about the fundamental questions of how these organisms cause disease and how the human immune system responds so that we can develop better treatments and disease-containment strategies. It is also essential that we improve technologies to rapidly detect biological agents from environmental samples and develop new strategies and technologies to protect the health of the public.

Scientists will need the full support and encouragement of the public and the government to confront this threat. Success will entail research endeavors and collaboration involving numerous government agencies, universities, and private companies. Looking to the future, an effective, well-funded research agenda may give us the tools to render the threat of biological weapons obsolete.

An ounce of prevention

Stopping a biological attack before it happens is obviously the most desirable way to avoid a crisis. The first step in blocking the proliferation and use of biological weapons is to significantly bolster our intelligence. The intelligence community could use additional scientific and medical expertise to help enhance the quality of data collection and analysis. This will require greater partnership and trust between the intelligence community, law enforcement, and public health and biomedical science. These disciplines do not routinely work together, and their professional cultures and practices are not easily merged. Nonetheless, greater coordination of effort is very important to our national defense and must be an element of our nation’s developing homeland security strategy.

Sadly, we must recognize that the possibility of bioweapons threats emerging from legitimate biological research is certainly real and embedded in the very science and technology that we herald in laboratories around the world. Vigilance is needed to ensure that the tools of modern genomic biology are not used to create new and more dangerous organisms. This is a complex challenge, for no one would want to impede the progress of legitimate and important science. However, we also have a responsibility to face up to a very real set of concerns. With leadership from the scientific community, we must begin to examine what opportunities may exist to constructively reduce this threat.

Related to this, we must continue to reduce access to dangerous pathogens by helping the scientific community improve security and ensure the safe storage and handling of these materials. Over the past five years, new regulations and requirements have tightened access to biological materials from culture collections in the United States and strengthened the government’s ability to monitor the shipping and receipt of dangerous pathogens through a registration process, which also requires disclosure of the intended use for the agents. These are important steps, but more can and should be done to assure that our nation’s laboratories have adequate oversight of the use and storage of these materials.

International cooperation will be essential to achieving these goals. The safety and control methods developed for domestic use must be extended across the globe if they are to make a real and enduring difference. Coupled with this, we should enhance efforts to provide socially useful research opportunities to scientists who had been employed in the Soviet Union’s bioweapons program. Many of these scientists are under- or unemployed, and it is in our interest to see that economic need does not drive them to peddle their knowledge to potential terrorists. We must also support efforts to help them secure or destroy potentially dangerous materials. The U.S. government has supported such efforts through the Cooperative Threat Reduction (CTR) program, but these programs desperately need to be strengthened and expanded. Opportunities to extend the reach of the program to include university and industry R&D collaborations will also be essential to long-term success.

In the final analysis, it may prove impossible to prevent future bioweapons attacks from occurring, but planning and preparation could greatly mitigate the death and suffering that would result. As a nation, we need comprehensive, integrated planning for how we will address the threat of bioterrorism, focusing both on prevention and response. We need to define the relative roles and responsibilities of the different agencies involved, and identify the mechanisms by which the various levels of government will interact and work together. The new Office of Homeland Security is well situated to take on this task. Congress and the president must give this office the resources and authority necessary to develop and implement protective measures. Likewise, federal officials must vigorously pursue international cooperation in this effort.

The United States has always been willing to meet the requirements and pay the bills when it came to our defense systems and security needs. We must now be willing to do the same when it comes to funding critical public health needs. Public health has too often received short shrift in our planning and public funding. This must change. Congress and the public need to understand that strengthening disease surveillance, improving medical consequence management, and supporting fundamental and applied research will be essential in responding to a biological weapons attack in this nation or anywhere in the world. These investments will also enhance our efforts to protect the health and safety of the public from naturally occurring disease. We have a chance to defend the nation against its adversaries and improve the public health system with the same steps. We cannot afford not to do this.


Margaret A. Hamburg is vice president for biological programs at the Nuclear Threat Initiative in Washington, D.C. She was assistant secretary for planning and evaluation in the Department of Health and Human Services during the Clinton administration and before that New York City commissioner of health.

Real Numbers: The Uninsured: Myths and Realities

Much of what Americans think they know about people without health insurance is wrong. National polling data and market research reveal the popular wisdom: that the number of uninsured people is small and consists largely of healthy young adults who voluntarily forgo coverage or are unemployed, that recent immigrants account for much of the increase in that number, and that the uninsured somehow manage to get the medical care they need. Peer-reviewed findings from health services research, economics, and the clinical literature paint a markedly different picture: The United States has a longstanding, sizable, and growing uninsured population of about 40 million people, roughly one out of every seven Americans, and being uninsured can have serious medical and economic consequences, not only for individuals but for their families as well.

The uninsured are less than half as likely to receive needed care for a serious medical condition. They have fewer visits annually and are more than three times as likely to lack a regular source of medical care as are those with either private or public health insurance. Uninsured persons receive fewer preventive services and less care for chronic conditions than do the insured.

Informing the public and policymakers is an essential first step in addressing the problem. The Institute of Medicine’s Committee on the Consequences of Uninsurance has begun a comprehensive effort to refocus policy attention and stimulate public debate on the issue. The data presented here are taken from Coverage Matters: Insurance and Health Care (National Academy Press, 2001, www.iom.edu/uninsured), the first of six scheduled reports.

The size of the uninsured population

MYTH: The number of uninsured Americans is not particularly large and has not been increasing in recent years.

REALITY: During 1999, about 15 percent of the population was uninsured. Almost three out of every ten Americans–more than 70 million people–were uninsured for at least a month over a 36-month period from 1993 to 1996. Although the uninsured population decreased slightly in 1999, the long-term trend has been a growing uninsured population. Without substantial restructuring of the opportunities for coverage, this trend is likely to continue.

Members of working families

MYTH: Most of the uninsured don’t work, or they live in families where no one works.

REALITY: More than 80 percent of uninsured children and adults under age 65 live in working families. Although working improves the chances that the worker and his or her family will be insured, even people in families with two full-time wage earners have almost a one-in-ten chance of being uninsured.

Immigrants and the uninsured population

MYTH: Recent immigration has been a major source of the increase in the uninsured population.

REALITY: Between 1994 and 1998, over 80 percent of the net increase in the size of the uninsured population consisted of U.S. citizens. Immigrants who arrived within the past four years are nearly three times as likely as members of the general population to be uninsured, but they make up only about 6 percent of the whole uninsured population, and the uninsured rate for immigrants declines with increasing length of residency.

Who declines coverage and why

MYTH: People without health insurance are young, healthy adults who choose to go without coverage.

REALITY: Young adults aged 19 to 34 are uninsured more often than other age groups largely because they are ineligible for workplace health insurance. They are often too new in their jobs or work in firms that do not provide coverage to employees. Only 4 percent of all workers ages 18 to 44 (roughly 3 million people) are uninsured because they decline available workplace health insurance, and many do so because they cannot afford the cost. Another 15 percent (11 million) of workers in this group are uninsured because they are not offered health insurance at work and do not obtain it elsewhere. For some in this group, poor health can be a barrier to purchasing health insurance outside of work. The cost can be too high, preexisting conditions might not be covered, or an insurance company could refuse to cover them.


Lynne Page Snyder was program officer for the Institute of Medicine report Coverage Matters: Insurance and Health Care (National Academy Press, 2001).

Keeping National Missile Defense in Perspective

If we’re going to pursue this strategy, let’s do so in a realistic way that minimizes the economic and political costs.

The United States is in the midst of its third major debate on nationwide ballistic missile defense–the first culminating in the 1972 ABM Treaty and the second sparked by President Reagan’s “Star Wars” speech in 1983. This time the Cold War is over, the objectives for the defense are limited, and technology has advanced to the point where some options may be technically feasible.

However, intercontinental ballistic missiles (ICBMs) are not the primary threat to the United States, as events since September 11 demonstrate. Other homeland defense programs, especially civil defenses against bioterrorism, are more important. Yet emerging missile states may acquire ICBMs some day. To the extent that this is a concern, diplomatic efforts can limit the spread of ballistic missiles, and deterrence can dissuade their use. National missile defense (NMD), then, is insurance against the relatively unlikely event that ICBMs will be launched against the United States.

If the United States decides to deploy a limited NMD, the questions become what type and how much? A midcourse NMD system (one that attempts to intercept missile warheads as they fall through outer space) of the sort proposed for deployment in Alaska is the most technically mature option and would probably work well enough against emerging ICBM threats to justify limited deployment, assuming that the threat materializes. However, such a defense should contain only about 20 interceptors to minimize adverse political reactions from Russia and China. Over the long run, midcourse defenses may be vulnerable to sophisticated countermeasures. Therefore, the United States should place greater emphasis on land, naval, and air-based boost-phase intercept options (defenses that attempt to intercept the ballistic missile while its rocket motors are still burning) because they are more robust to countermeasures and they pose relatively little threat to Russia and China. Space-based boost-phase NMD systems have the advantage of global coverage; however, they are technically more challenging, probably more expensive, and more destabilizing.

How serious is the threat?

Ballistic missiles, predominantly single-stage missiles with ranges of less than 1,500 kilometers, are spreading. Indigenously produced variants of the former Soviet Scud B and Scud C missiles are the most common. Missiles with ranges greater than about 1,500 kilometers require two-stage boosters. North Korea has the most advanced missile program of the emerging missile states, and it has been willing to sell ballistic missiles and related technologies abroad. In the past decade, North Korea produced the No Dong missile (with a range of approximately 1,300 kilometers) and has exported components to Pakistan and Iran to help them develop the Gauri and Shahab-3 variants, respectively. On August 31, 1998, North Korea launched a three-stage missile in an attempt to put a small satellite weighing approximately 15 kilograms into orbit. The launch was a failure. However, the first two stages worked and are believed to have been the Taepo Dong-1 missile with a range of approximately 2,000 kilometers. Intelligence estimates project the appearance of a larger Taepo Dong-2 missile with a range of approximately 4,000 kilometers.

Currently, Russia and China are the only states that can threaten the U.S. homeland with long-range ballistic missiles. However, the Rumsfeld Commission report, released in July 1998, and a subsequent 1999 U.S. National Intelligence Estimate on ballistic missile threats argued that North Korea could threaten the U.S. homeland with ICBMs within five years of a decision to do so by deploying a third stage on a Taepo Dong-2 missile. These reports cited Iran as a possible ICBM threat within 5 years and perhaps Iraq within 10 years. They also noted threats from shorter-range missiles launched from ships or territories close to the United States. Little evidence has emerged in the open literature since the publication of these reports to suggest that new ICBM threats will appear in the next few years. Nevertheless, they remain a hypothetical possibility.

ICBM proliferation is a serious concern only when coupled with nuclear weapons. Conventional ICBMs do not pose a serious threat, at least not one that justifies large expenditures on NMD. Chemical warheads do not approach the lethality of nuclear or biological warheads, because the amount of chemical agent that can be carried by an ICBM is too small to cause widespread effects. In fact, under some meteorological conditions, they may be less lethal than conventional explosives. Biological payloads can be as lethal as nuclear weapons (under some circumstances) and, if released as submunitions, can easily overwhelm midcourse ballistic missile defenses. However, biological weapons are better suited for covert delivery because they are odorless and invisible, and the incubation time before disease symptoms become manifest (typically several days) allows the perpetrators to escape and possibly to elude identification altogether, as the October 2001 anthrax attacks via the U.S. Postal Service have so far illustrated. Ballistic missile delivery, by contrast, reveals the time and location of the biological agent’s release. This improves the efficacy of medical treatment because it can begin shortly after exposure, which considerably increases the chance that exposed individuals will survive. Knowing the territory from which an ICBM is launched also makes U.S. threats to retaliate more effective, thereby reducing the likelihood of such attacks in the first place. Consequently, ballistic missile delivery is neither the most likely nor the most effective delivery mode for biological weapons. Covert biological delivery is a far more serious threat. If the United States develops effective civil defenses to protect against the latter, an important priority in the wake of the recent anthrax letters, the former is a less serious concern. Therefore, ballistic missiles armed with nuclear weapons are the most serious proliferation concern.

Accidental or unauthorized Russian missile launches are another possible threat. Chinese accidental or unauthorized missile launches are thought to be less serious, because China does not place warheads on its missiles in peacetime. This, of course, could change, as China deploys mobile ICBMs. Finally, accidental or unauthorized attacks may be a concern with emerging ballistic missile states because their command and control systems are likely to be rudimentary. The problem with these threats as a rationale for NMD is that one doesn’t know their likelihood, leading one to wonder whether a defense against large asteroids on a collision course with Earth–an event the probability of which can be determined with reasonable accuracy–should take precedence.

Coping with ballistic missile proliferation

Diplomacy, deterrence, and defense are three complementary approaches for coping with ballistic missile proliferation, although tensions exist between them. Diplomatic initiatives can help prevent the spread of ballistic missiles (and nuclear weapons), thereby eliminating the problem at its source. Moving beyond traditional arms control (such as the Missile Technology Control Regime and the Nuclear Non-Proliferation Treaty), diplomatic efforts should focus on specific states of concern. For example, in 1999 the Clinton administration came close to negotiating a freeze on North Korea’s ballistic missile program in exchange for a gradual normalization of relations. Although the deal fell through, North Korea continues to adhere unilaterally to a missile flight test moratorium. Unfortunately, the Bush administration has not pursued this opportunity. The U.S.-Russian Cooperative Threat Reduction Program is another example, in this case aimed primarily at preventing nuclear weapons, nuclear material, and weapon design expertise from leaking out of the former Soviet Union. Parallel efforts should be explored regarding missile technology (and perhaps biological weapons). The United Nations Special Commission charged with dismantling Iraq’s weapons of mass destruction and ballistic missiles after the 1991 Gulf War is a third example, albeit one that illustrates the weakness of diplomatic efforts if they lack international consensus. In any case, the potential gains of creative diplomacy are too great for the United States to relegate this approach to the back burner.

Despite the best diplomatic efforts, ICBMs may still spread. The question then becomes whether they ever will be launched against the United States. This is a question of deterrence. The United States relied on deterrence throughout the Cold War to dissuade the former Soviet Union from launching a nuclear attack. Some people question the efficacy of deterrence against emerging missile states because, so the argument goes, their leaders are irrational and hence cannot be dissuaded by retaliatory threats. This argument distorts the character of regional leaders. They may be ruthless, unsavory characters with little regard for their civilian population. However, they are not suicidal. Effective deterrence depends on the capability and the resolve to carry out retaliatory threats that have been clearly communicated to an opponent. The United States has tremendous retaliatory capability in its conventional military forces. In addition, nuclear response options cannot be ruled out, although the emphasis clearly should be on conventional retaliation. There should be little doubt about U.S. resolve to retaliate after being attacked with an ICBM armed with nuclear (or biological) weapons, especially since the attacker’s identity will be known. Therefore, deterrence can dissuade ICBM attacks against the United States under a wide range of circumstances. Rather than eschewing the “grim premise” of deterrence, as President Bush put it, the United States should reformulate deterrence to make it more effective against authoritarian regimes armed with ballistic missiles. Failure to do so ignores an existing tool the United States can wield with considerable effect.

Nevertheless, deterrence can fail, not because the opponent is irrational but because leaders may find themselves in situations where they have nothing left to lose. Deterrence can also fail through misperception, misunderstanding, and miscommunication between emerging missile states and the United States, a realistic concern because regional opponents often misgauge U.S. resolve and vice versa. If one is concerned about deterrence failure, one naturally turns to defense. As insurance against the failure of diplomacy and deterrence, one must ask whether NMD can work and, if so, whether the benefits of deployment outweigh the costs.

Can NMD work?

The question of whether NMD will work is easy to ask but difficult to answer. The answer is neither binary (yes or no) nor static. Nor can an answer be given in the abstract. The NMD architecture and the opponent against whom it is to be effective must be specified. The defense performance criterion is also important, because technical feasibility is inversely correlated with expected performance. Defenses may not have to be perfect to be of value. However, they should be quite effective if they are to provide meaningful protection against nuclear attack (for example, a probability of 0.80 that no warheads leak through the defense for attacks containing fewer than 10 warheads).
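
To see what such a criterion implies for individual intercepts, consider the sketch below. It is our own illustration, not the article’s analysis: it takes a 10-warhead attack as the bounding case and assumes that each incoming warhead is engaged independently; the single-shot kill probability of 0.8 is purely illustrative.

```python
# What a "0.80 probability that no warheads leak through" criterion implies,
# taking a 10-warhead attack as the bounding case and assuming each warhead is
# engaged independently (our simplifying assumptions, not the article's).
N_WARHEADS = 10
P_NO_LEAK = 0.80

# Per-warhead intercept probability needed: p ** N_WARHEADS >= P_NO_LEAK
p_required = P_NO_LEAK ** (1 / N_WARHEADS)
print(f"Required per-warhead intercept probability: {p_required:.3f}")  # ~0.978

# With interceptors of single-shot kill probability q, firing k shots per warhead
# gives a per-warhead intercept probability of 1 - (1 - q) ** k.
q = 0.8  # illustrative single-shot kill probability (an assumption)
for k in (1, 2, 3):
    p = 1 - (1 - q) ** k
    print(f"{k} shot(s) at q={q}: per-warhead p = {p:.3f}, "
          f"P(no leakage vs. {N_WARHEADS}) = {p ** N_WARHEADS:.2f}")
```

The point of the exercise is that a seemingly modest system-level goal demands per-warhead intercept probabilities near 0.98, which in practice means firing several interceptors at each warhead.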

The question of whether NMD will work cannot be addressed unless the architecture is specified in detail. This includes the type of defense [such as boost phase, midcourse, or terminal (the latter attempts to intercept warheads as they reenter the atmosphere)]; the lethal mechanism (such as hit-to-kill interceptors or lasers); the basing mode (ground-based, naval, airborne, or space-based); and especially the sensor architecture (such as early-warning radar, X-band tracking radar, or long-wave infrared sensors in space). Moreover, any NMD architecture can and will evolve with time, as will the missile threat.

Whether a given defense architecture works depends on the technical sophistication of the United States relative to that of emerging missile states. The prior question of whether defenses will work on the test range is not the most important issue, despite the political fanfare that surrounds such tests. The real question is whether an NMD system will be effective against a reactive opponent that deploys countermeasures to degrade the defense. This is primarily an issue of sensor architecture performance.

Specifying the opponent is obviously important, because this determines the size of the threat, its technical sophistication, and the economic resources available for the offense-defense competition. Emerging missile states such as North Korea, Iran, and Iraq are appropriate targets for NMD. Such states will have limited arsenals and relatively unsophisticated payloads, at least initially. Moreover, the financial resources available to the United States greatly exceed those available to emerging missile states, suggesting that the United States would be better able to engage in an offense-defense competition even if offensive forces are cost-effective at the margin, as is often the case.

Should a U.S. NMD system be designed against Russia or China? Russia is no longer a mortal enemy of the United States and, in any case, its arsenal is too large for a limited NMD system to be of much use against intentional attacks. A limited defense against accidental or unauthorized Russian missile launches is problematic because it must be effective against Russian countermeasures, thereby posing technical challenges and raising Russian suspicions of U.S. intentions. Consequently, the latter is better addressed by means other than missile defense; for example, by sharing early warning data, detargeting missiles and reducing their alert status, and improving transparency with respect to strategic command and control. Therefore, Russia should not be the focus of U.S. NMD efforts.

China is more complex. Whether China will become a hostile military power or an economic competitor with common interests in regional stability is one of the most important emerging U.S. foreign policy debates. It is premature to assume that military confrontation is inevitable. Moreover, the United States has many long-term economic, political, and strategic interests in common with China: for example, by promoting China’s transformation to a more democratic society based on free markets, combating terrorism, preventing the spread of weapons of mass destruction, and avoiding regional conflicts. Therefore, a U.S. NMD system should not be directed against China–not out of deference to a strategy based on mutual assured destruction (the Sino-U.S. strategic relationship has never been one of “mutual” assured destruction) but rather because it may undermine U.S. long-term interests. China may react by building a larger missile force than currently planned, which in turn will pose a greater threat to China’s neighbors, specifically India and Japan. If India responds by building a larger, more overt nuclear arsenal, Pakistan will feel pressured to follow suit. This would not promote stability in South Asia. Japanese concerns may reinvigorate a debate about Japan’s role in regional security, in particular the wisdom of an independent nuclear option. Besides, a limited U.S. NMD system against China would not remain limited for long, raising the prospect of a long-term offense-defense competition with China. The irony is that if China is a “peer” competitor, this competition will be costly and the end result probably will not be an effective defense. On the other hand, if China remains militarily weak, NMD may be more effective but less necessary. In short, the United States should not deploy an NMD system specifically against China.

The level of intelligence each side has about the other’s capabilities is also important because it allows each side to adapt to the other. However, one must beware of the fallacy of the last move: assuming that one side will have the last opportunity to adapt to the other’s system. Frequently, the offense will have the last move because it can adapt to a defense that has been fielded many years before. But this may not always be the case if countermeasures are flight-tested years before the missiles are used in war.

For example, emerging missile states lack instrumented test ranges, much less precision X-band radar with range resolutions below 10 centimeters and long-wave infrared (LWIR) sensors with which to view their tests in midcourse. In fact, these states will have little knowledge of LWIR signatures for objects in space because the atmosphere precludes LWIR observations from Earth’s surface. Cryogenically cooled LWIR focal plane arrays in space are beyond the capability of all but the most advanced spacefaring nations. Laboratory measurements alone are inadequate. Consequently, the United States may learn more about how to defeat countermeasures from an opponent’s flight tests than the latter learns about their effectiveness. For this reason, emerging missile states have little incentive to conduct flight tests. Yet without testing they will have little confidence that their countermeasures will work, unless they are purchased fully tested and ready to deploy from more advanced states–a questionable proposition. Therefore, despite the shortcomings in U.S. intelligence capabilities regarding emerging missile threats noted in the Rumsfeld Commission report, U.S. defenses may be able to adapt more quickly to the offense than the other way around.

Midcourse NMD

The Clinton administration originally proposed a midcourse defense with 20 interceptors based in Alaska by 2005, 100 to 125 interceptors by 2010, and a second site with 100 to 125 interceptors deployed by 2011. These ground-based interceptors use kinetic-kill vehicles (KKVs) that home in on warheads as they fall through outer space, using LWIR sensors. To date, three out of five midcourse NMD flight tests have been successful: an impressive technical achievement, appropriately dubbed “hitting a bullet with a bullet.” The NMD sensor architecture consisted initially of one X-band tracking radar located on Shemya Island in Alaska, five upgraded early warning radars, and the Space-Based Infrared System–High Earth orbit (SBIRS-High) satellites to replace the Defense Support Program ballistic missile early warning satellites. Up to eight additional X-band radars were to be added later, along with Space-Based Infrared System–Low Earth orbit (SBIRS-Low) satellites to track objects in space with LWIR sensors. A variant of this midcourse NMD system is still under consideration by the Bush administration, which has yet to articulate a clear NMD architectural preference.

The outcome of the technical competition between U.S. midcourse defenses and emerging missile threats is not easy to assess. At a rhetorical level, the argument is often made that any state that can deploy a crude unreliable ICBM can deploy countermeasures that can defeat midcourse NMD architectures. This is not obvious. ICBM development largely involves chemical engineering for propellants and mechanical engineering for structural design of the missile body, rocket motors, and reentry vehicles. On the other hand, effective countermeasures depend on knowledge of radar and optical sensor design, signal processing, discrimination signatures, and discrimination algorithms–branches of engineering in which emerging missile states may have little competence. Knowing an object’s signature at different wavelengths in outer space requires extensive test experience or access to data collected by more advanced nations, as noted above for LWIR signatures. As a general proposition, it becomes increasingly difficult to mimic warhead signatures in multiple spectral bands at different viewing angles, as would be required to defeat a sensor architecture with multiple X-band radars and infrared sensors. Therefore, without a closer engineering analysis, it is not obvious that emerging missile states could readily defeat a U.S. midcourse NMD system.

Even if sophisticated countermeasures such as anti-simulation techniques (whereby warheads are made to look like decoys) could be devised, it is still possible that the defense could adapt to defeat them. For example, mass is the fundamental discriminant for anti-simulation countermeasures. The defense might be able to detect the subtle differences in motion, caused by random forces during decoy release, that distinguish light objects from heavy ones; to accurately track the payload’s center-of-mass trajectory before decoy release and then select the object closest to that trajectory, which will be the massive warhead; or to apply an external force and observe the resulting motion. The very high range resolution of modern X-band radar makes these discrimination techniques worth exploring. The point is not to suggest that midcourse defenses can necessarily defeat sophisticated countermeasures. Rather, the offense-defense competition is dynamic, and the outcome is difficult to predict using arguments from elementary physics alone. Engineering details matter. Without access to experimental data, if not classified information, it is difficult to determine the outcome of this competition with any degree of rigor. Ultimately, flight test data against a range of plausible countermeasures must be collected to shed light on the likely outcome of the offense-defense competition. Clearly, this should occur before any decision on midcourse NMD deployment is made.
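The center-of-mass idea can be illustrated with a toy calculation. The sketch below is purely illustrative and is not an actual discrimination algorithm; the masses, impulse magnitudes, and flight time are hypothetical assumptions chosen only to show why the heavy object drifts least from the center-of-mass trajectory.

```python
import numpy as np

# Toy illustration of center-of-mass discrimination (not an actual NMD
# algorithm). Assumption: warhead and decoys separate with release impulses
# of similar magnitude, so the heavy warhead's velocity change is smallest
# and it drifts least from the ensemble's center-of-mass trajectory.
rng = np.random.default_rng(0)

masses_kg = np.array([500.0, 2.0, 2.0, 2.0, 2.0])  # hypothetical warhead + 4 light decoys
labels = ["warhead", "decoy 1", "decoy 2", "decoy 3", "decoy 4"]

# Random release impulses (kg*m/s) of comparable size for every object.
impulses = rng.normal(scale=5.0, size=(len(masses_kg), 3))

delta_v = impulses / masses_kg[:, None]              # velocity change of each object
cm_delta_v = impulses.sum(axis=0) / masses_kg.sum()  # velocity change of the ensemble's center of mass

# After t seconds of free flight, each object's offset from the center-of-mass
# trajectory grows linearly (uniform gravity affects all objects equally).
t_s = 600.0
offsets_m = np.linalg.norm((delta_v - cm_delta_v) * t_s, axis=1)

for name, off in zip(labels, offsets_m):
    print(f"{name}: {off:9.1f} m from the center-of-mass trajectory")
print("closest object:", labels[int(np.argmin(offsets_m))])
```

Under these assumed conditions the warhead ends up orders of magnitude closer to the extrapolated center-of-mass trajectory than the light decoys, which is the effect the discrimination technique would try to exploit.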

Creative diplomacy can help prevent the spread of ballistic missiles, and deterrence can dissuade their use under a wide range of circumstances.

Midcourse NMD systems probably will work against simple threats with unsophisticated countermeasures; however, their performance against sophisticated countermeasures remains to be determined. This implies that a decision on NMD deployment should rest largely on whether the benefits outweigh the financial and political costs. The principal benefit of a limited NMD system is to reduce the risks associated with regional intervention against states armed with nuclear-tipped ICBMs, especially if these conflicts turn into wars to topple the opponent’s regime, because deterrence is apt to fail under these circumstances. In this regard, NMD is important for an interventionist U.S. foreign policy.

The financial costs associated with midcourse NMD systems of the sort proposed by President Clinton range between $25 billion and $70 billion (20-year life cycle costs), depending on the number of sites and the number of interceptors deployed. The larger figure represents an average annual expenditure of $3 billion to $4 billion. The United States currently spends $5.4 billion annually on national and theater missile defense. This was increased to $8.3 billion in fiscal year 2002. It is debatable whether this higher spending level is prudent in light of other important defense needs (for example, improved homeland security and conventional force modernization); however, it is not obvious that annual NMD expenditures of $3 billion to $4 billion are unaffordable. More robust architectures, including space-based weapons, may not be affordable.
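As a quick sanity check on those figures, the short sketch below simply annualizes the cited 20-year life-cycle cost range and sets it beside current missile defense spending; every number is taken from the text above, and nothing here is a new estimate.

```python
# Annualize the 20-year life-cycle cost range for a Clinton-style midcourse
# NMD system and compare it with current missile defense appropriations.
# All figures come from the text above; nothing here is a new estimate.

low_b, high_b, years = 25.0, 70.0, 20          # $ billions, life-cycle cost range
annual_low, annual_high = low_b / years, high_b / years

current_spend_b = 5.4                          # annual national + theater missile defense
fy2002_spend_b = 8.3                           # FY 2002 appropriation

print(f"Annualized NMD life-cycle cost: ${annual_low:.2f}B to ${annual_high:.1f}B per year")
print(f"Current missile defense spending: ${current_spend_b}B, rising to ${fy2002_spend_b}B in FY 2002")
```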

The geopolitical costs are more important. Russian and Chinese opposition to U.S. midcourse NMD deployment, especially in light of the Bush administration’s decision to unilaterally abrogate the ABM Treaty, will strain relations with these major powers, undermining cooperation in other areas that affect U.S. security. Even some NATO allies have opposed unilateral U.S. NMD deployment. Russia’s objections are less pronounced than China’s for the simple reason that such a defense poses very little threat to the current and projected Russian strategic nuclear force. Russia’s concern with NMD breakout can be mitigated by U.S. transparency measures and by allowing Russia to retain warheads in its stockpile as a hedge. The Russian strategic bomber force also acts as a hedge. Nevertheless, Russia remains opposed to a U.S. midcourse NMD system. Thus, unilateral deployment over Russia’s objections may hinder cooperation on counterterrorism, weapons proliferation, and regional security issues extending from Europe to the Far East.

China’s opposition is stronger because its strategic arsenal is small, currently consisting of approximately 20 DF-5 ICBMs. China is modernizing its force with the addition of the DF-31 and DF-41 solid-propellant mobile ICBMs and a new submarine carrying longer-range JL-2 SLBMs. The future size of this arsenal is unknown, although estimates between 100 and 200 warheads seem reasonable. Even with this modernized force, a limited NMD system could substantially reduce the effectiveness of China’s deterrent. Therefore, China may increase the size of its strategic arsenal beyond current plans. This could have an adverse impact on India and Japan, as noted above. Sino-Russian military cooperation may increase and China too may become less cooperative on a range of regional and global security issues of interest to the United States. Ultimately, Sino-U.S. relations may become dominated by military competition to the exclusion of political and economic cooperation.

Therefore, deploying a limited midcourse NMD system as insurance against threats from small powers risks alienating the world’s major powers. Russia and China should not have a veto over U.S. NMD deployment. However, the long-term security consequences should be carefully weighed before the United States proceeds with deployment. A midcourse NMD system, if deployed, should be limited to about 20 interceptors–enough to handle a few emerging ICBMs. Beyond this, NMD alternatives should be considered that have fewer political, if not economic, costs.

Boost-phase alternatives

Land-, sea-, and air-based boost-phase interceptors have been suggested as alternatives to a midcourse NMD system because they are less vulnerable to countermeasures and they have fewer geopolitical costs. Boost-phase interceptors attempt to destroy their target while the ballistic missile is still in powered flight, using a KKV that homes in on and collides with the booster seconds before missile burnout. The size of such interceptors is determined by the KKV mass and the interceptor flight speed. Boost-phase KKVs probably can be built with a mass between 25 and 50 kilograms. This implies that all three terrestrial options are feasible. For example, a two-stage airborne interceptor weighing approximately 850 kilograms and traveling 4 to 5 kilometers per second should have ICBM intercept ranges on the order of 450 to 600 kilometers. A two-stage naval interceptor that fits in the existing vertical launch tubes of Aegis cruisers should also have flight speeds of 4 to 5 kilometers per second and ICBM intercept ranges between approximately 350 and 500 kilometers. Larger naval or ground-based interceptors weighing up to 10,000 kilograms and flying at speeds up to 8 kilometers per second could have ICBM intercept ranges between 700 and 1,000 kilometers. Therefore, a hypothetical North Korean ICBM could be intercepted by airborne or naval boost-phase interceptors launched from the Sea of Japan or by ground-based boost-phase interceptors cooperatively deployed with Russia at sites near Vladivostok. Although reliable cost estimates for terrestrial boost-phase options are not available, they may be comparable to those of land-based midcourse NMD systems, or possibly less.
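The rough relationship between interceptor speed and reach can be sketched with a back-of-envelope calculation. The 240-second burn time and 120-second detection-and-commit delay used below are illustrative assumptions, not figures from the article, and the model ignores interceptor acceleration, fly-out geometry, and Earth curvature.

```python
# Back-of-envelope boost-phase reach: interceptor speed multiplied by the time
# left between launch commit and ICBM burnout. The 240 s burn time and 120 s
# detection/commit delay are illustrative assumptions, not official figures.

def intercept_reach_km(speed_km_s: float,
                       icbm_burn_time_s: float = 240.0,
                       commit_delay_s: float = 120.0) -> float:
    """Crude reach estimate that ignores interceptor acceleration, fly-out
    geometry, and Earth curvature."""
    fly_time_s = icbm_burn_time_s - commit_delay_s
    return speed_km_s * fly_time_s

for speed in (4.0, 5.0, 8.0):
    print(f"{speed:.0f} km/s interceptor -> roughly {intercept_reach_km(speed):.0f} km of reach")
```

With these assumptions, the crude estimate lands near the 450-to-600-kilometer reach quoted above for 4-to-5-kilometer-per-second interceptors and near the 700-to-1,000-kilometer reach quoted for faster ones.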

Boost-phase interceptors are more resilient to countermeasures, because booster decoys are difficult to build, fast-burn solid-propellant ICBMs will not be readily available to emerging missile states, and maneuvering boosters may not be able to outmaneuver agile homing KKVs. More important, terrestrial boost-phase options are less threatening to Russia and China than midcourse defenses, because the interceptors cannot reach all possible ICBM and SLBM launch locations. Airborne interceptors are mobile; however, they lack the range to threaten ICBMs located deep within Russia or China. Russia and China also have extensive strategic air defenses. Naval boost-phase interceptors may pose a threat to Russian or Chinese SLBMs; however, they do not threaten either country’s ICBMs. Land-based boost-phase interceptors clearly cannot reach Russian or Chinese strategic missiles.

Deployment of a midcourse system should be limited to about 20 interceptors in Alaska until an ICBM threat becomes clear.

Terrestrial boost-phase options do not constitute a near-term NMD option, because several technical hurdles exist. First, KKVs with sufficient divert capability (high lateral thrust and sufficient fuel) to home in on an accelerating booster target must be designed and tested. A sensor architecture must also be designed to quickly and accurately track ICBM boosters. In flight, the KKV must switch from homing in on the ICBM rocket plume to homing in on the missile body, a difficult challenge because the plume is so much brighter than the infrared signature of the missile body. Intercepting a booster several seconds before burnout may cause the debris to land on allied or friendly territory. Although it is not obvious that a warhead can survive the collapse of a booster after intercept, it is also difficult to prove that it will be inert. To avoid having a live warhead reenter the atmosphere, KKVs can be designed to collide with the payload section of an ICBM. However, this places greater demands on KKV homing accuracy and lethality.

All three terrestrial boost-phase concepts are viable options in principle. Airborne interceptors have the advantage that they can perform theater missile defense, in addition to national missile defense, by flying over an opponent’s airspace. This matters because threats from theater-range ballistic missiles already exist. On the other hand, airborne interceptors have limited endurance, their design is inherently less robust to KKV mass increases because the interceptors are small, and one must ensure their survival against advanced air defenses. Naval boost-phase interceptors generally are not effective for theater missile defense, because naval platforms cannot get close enough to the launch sites. In some cases, naval platforms may even lack accessible waters for national missile defense. Moreover, they require protection from antiship cruise missiles and diesel attack submarines. On the other hand, naval boost-phase interceptors have substantial endurance and can accommodate heavier interceptors for heavier KKVs or higher interceptor speeds. Ground-based boost-phase interceptors cannot intercept theater-range missiles, and host nation support may not be forthcoming for some emerging ICBM threats, such as Iran. In addition, ground-based interceptors are potentially vulnerable to attack by short-range ballistic missiles, cruise missiles, or covert attack. On the other hand, ground-based interceptors have excellent endurance and can accommodate large interceptors for heavier KKVs and higher speeds. The principal drawback with all terrestrial boost-phase systems is that they offer no protection against accidental or unauthorized Russian and Chinese missile launches, although they may offer protection against such launches by emerging missile states.

In contrast, space-based interceptors (formerly known as “Brilliant Pebbles”) and space-based lasers offer global protection against accidental and unauthorized ICBM launches. However, they are technically more challenging because they must remain reliable for years in orbit, and they are more expensive. More important, space-based boost-phase systems threaten Russian and Chinese strategic missiles, thereby eliminating the geopolitical benefits associated with terrestrial boost-phase options, although proposals have been made based on orbital inclination and sparsely populated constellations to minimize this effect.

Balanced approach needed

In the wake of September 11 and the subsequent anthrax attacks, it is difficult to argue that ballistic missiles pose a more clear and present danger than terrorism, especially bioterrorism. Consequently, the United States should reevaluate the priority it assigns to long-range ballistic missile threats. To the extent that this threat is still of concern, creative diplomacy can help prevent the spread of ballistic missiles, and deterrence can dissuade their use under a wide range of circumstances. If new ICBM threats appear, midcourse NMD systems may be effective enough to warrant deployment as a form of insurance. However, concerns exist about their effectiveness against sophisticated countermeasures. Hence, early deployment should be discouraged until the test program provides greater confidence in the underlying technology. More important, deployment may undermine relations with Russia and especially China. Hence, deployment should be limited to about 20 interceptors in Alaska until the emerging ICBM threat becomes clear–thereby providing protection against a few ICBMs without threatening China. Beyond a very limited midcourse NMD system, greater emphasis should be placed on terrestrial boost-phase options because they are more resistant to countermeasures; they create a thin, layered defense when used in conjunction with a midcourse defense; and, most important, they pose little threat to Russia and China. An effective NMD against emerging missile states therefore need not come at the expense of relations with the major powers. Airborne boost-phase options are among the most attractive because they can perform national and theater missile defense simultaneously.

Recommended reading

The Federation of American Scientists’ Web site at www.fas.org/spp/starwars/program.

Geoffrey Forden, Budgetary and Technical Implications of the Administration’s Plan for National Missile Defense (Washington, D.C.: Congressional Budget Office, April 2000).

Jack Mendelsohn et al., White Paper on National Missile Defense (Washington, D.C.: Lawyers Alliance for World Security, Spring 2000).

Andrew Sessler et al., Countermeasures (Cambridge, Mass.: MIT/Union of Concerned Scientists, April 2000).

Michael O’Hanlon, “Star Wars Strikes Back,” Foreign Affairs 78, no. 6 (November/December 1999).

David Tanks, National Missile Defense: Policy Issues and Technological Capabilities (Cambridge, Mass.: The Institute for Foreign Policy Analysis, July 2000).

Bob Walpole, Foreign Missile Developments and the Ballistic Missile Threat to the United States Through 2015 (U.S. National Intelligence Council, Central Intelligence Agency, Langley, Va., September 1999).

Dean Wilkening, Ballistic Missile Defence and Strategic Stability (Adelphi Paper 334, International Institute for Strategic Studies, London, May 2000).


Dean A. Wilkening ([email protected]) is director of the science program at the Center for International Security and Cooperation at Stanford University, Stanford, California.

Regulatory Challenges in University Research

Federal regulations must be streamlined and coordinated so that society’s values can be upheld without impeding science.

The body of federal regulations designed to ensure that university research adheres to generally accepted societal values and ethics has grown rapidly in recent years, creating an administrative burden and a potential impediment to research. As publicly supported and publicly accountable institutions, universities are expected by force of regulation to develop effective procedures to deal with a number of research-related issues. Some of the most challenging sets of issues, which we focus on here, involve protecting human and animal research subjects, and detecting and managing scholarly misconduct. Other important issues include dealing with conflicts of interest among researchers and protecting the research environment for the benefit of both scientific workers and research subjects. The university research community has long accepted responsibility for these various tasks and recognized that reasonable regulations to achieve these goals are worthwhile and necessary, but the requirements imposed by the regulatory system are reaching the point where they may no longer be called reasonable.

The university community now finds itself trying to resolve the tension that has developed between two missions: fostering a robust research program and monitoring and regulating the activities that constitute this program. To remain accountable for the public funds that support research, universities must demonstrate that their adherence to regulations is unequivocal and visible. However, badly designed or poorly coordinated regulations can create unnecessary problems for universities. We outline some key issues in the regulation of research and point to possible national strategies for achieving compliance with these regulations without unduly hampering scholarly inquiry.

Protecting human subjects

The Federal Policy for the Protection of Human Subjects, generally referred to as the Common Rule, sets out requirements for the conduct of federally sponsored human research. The Food and Drug Administration (FDA) has developed additional, but quite similar, regulations for investigators and sponsors of clinical trials designed to bring drugs and devices to the marketplace.

These regulations embody the principles of the Belmont Report, which was published in 1979 by the National Commission for the Protection of Human Subjects of Biomedical and Behavioral Research. Three principles–respect for persons, beneficence, and justice–are accepted as the basic requirements for the ethical conduct of research involving human participants. Respect for persons demands that individuals be able to make their own decisions about participating in research and that they be fully informed about the project, its risks and benefits, and the voluntary nature of participation. Respect for persons also involves the protection and special consideration of individuals, such as children and the cognitively impaired, who are unable to provide their own independent informed consent. Beneficence requires that any harm associated with the research be minimized and that potential benefits be maximized. Justice denotes the expectation that participant selection will be equitable so as to ensure that no particular group or class of individuals will be unduly targeted for participation in or exclusion from the research.

These principles are widely accepted; in fact, colleges, universities, and academic medical centers commonly surpass regulatory requirements and extend these protections to all human research participants irrespective of the source of project funding or the goal of the research. Research institutions believe they are doing a good job of putting these principles into practice. But some segments of the public and government are less confident that this is the case. Indeed, we appear to have moved into an era of heightened public concern regarding the protection of human participants in research and of public questioning of the ethics and motives of researchers. This concern has been fueled, in part, by recent government sanctions, including temporary complete suspension of clinical research activities, at several leading research universities. For its part, the academic community views such sanctions, along with the institutions’ efforts to correct the problems or, in some cases, explain why the problems were more bureaucratic than scientific, as signs that the system is working. Some outside observers, however, have retained a darker view of events.

One current crisis facing institutions is the difficulty of complying with agency interpretations of regulations, many of which essentially constitute new rules imposed without the normal rulemaking process. Universities and academic medical centers have established committees called Institutional Review Boards (IRBs) to review experimental protocols involving humans. The institutions traditionally have regarded federal regulations as largely performance-based guidelines under which IRBs have significant discretion to act on a protocol-by-protocol basis. During the past several years, however, several academic IRBs, and with them clinical research, have been temporarily shut down, leaving administrators and researchers scrambling to protect subjects enrolled in trials and to move forward with the research.

A review of documents issued by the Department of Health and Human Services (HHS) Office of Human Research Protections to institutions whose research programs have been investigated suggests that federal regulatory agencies are now viewing the regulations in more strict and literal terms than previously applied at many institutions. The agencies are expecting to see much more written documentation in IRB files related to research projects. This shift has forced institutions to greatly expand staffs and committees. At the University of Iowa, we have had a nearly fourfold increase between 1995 and 2000 in the cost of the human subject protection process. The focus of many of these efforts is, unfortunately, on the documentation of the process to prevent regulatory sanction or closure, necessitating the use of resources that could otherwise be deployed to achieve more significant improvements in human participant protections. This disconnection becomes even more complex as politically charged issues, such as gene therapy and growth of human embryonic stem cells, enter the research arena.

In the area of protecting human research subjects, the federal government should rewrite all of its regulations so that there is a single set of rules.

At a time when IRBs are struggling under increased pressure to fully document all of their actions to satisfy new agency interpretations of the Common Rule and FDA regulations, the committees now must comply with yet another set of regulations, promulgated under the Health Insurance Portability and Accountability Act (HIPAA) and implemented by HHS. The HIPAA rules detail special requirements for accessing a patient’s medical records for use in research, but they largely reflect protections already afforded under the Common Rule. Requiring IRBs to develop additional documentation to satisfy new regulations that do little more than duplicate those already in existence–especially at the same agency–is a movement in the wrong direction.

Problems such as these are being addressed by a new consortium of organizations that is initiating an accreditation process for institutions. By offering a recognized seal of approval, accreditation could establish a measure of excellence sought by research organizations, raising the bar for all and laying a path for continuous improvement. Called the Association for Accreditation of Human Research Protection Programs (AAHRPP), the organization plans to begin operating in 2002 and to draw its expertise from seven nonprofit organizations representing the leadership of universities, medical schools, and teaching hospitals; biomedical, behavioral, and social scientists; IRB experts and bioethicists; and patient and disease advocacy organizations. The Department of Veterans Affairs has contracted for similar accrediting services from the National Committee for Quality Assurance. These standards will be most appropriate and widely accepted if they distinguish between requirements for accreditation based on regulations (that is, specific laws and statutes) and recommendations based on best practices (that is, methods that are generally accepted as being the best way to deal with a particular situation).

In what may prove to be an important first step in initiating dialogue between accrediting agencies, the Institute of Medicine (IOM) issued, in April 2001, a report titled Preserving Public Trust: Accreditation and Human Research Participant Protection Programs. Among its recommendations, the report urges that accrediting organizations be nongovernmental entities whose standards build on federal regulations, that participants in studies be more thoroughly integrated into the research oversight process, and that consideration be given to having pilot accreditation programs evaluated by the U.S. General Accounting Office and the HHS Office of the Inspector General. The IOM is now conducting a more comprehensive assessment of the overall system for protecting human research participants, with a report expected in 2002. This study will delve into issues such as improving the informed consent process, easing the burdens on IRBs, ensuring that investigators are educated about the ethics and practices involved in conducting research with humans, enhancing research monitoring, and bolstering institutional support and infrastructure.

Protecting animal subjects

Under the Animal Welfare Act, the U.S. Department of Agriculture (USDA) regulates the care and use of several species of vertebrate animals in all research. As with human research, additional requirements are imposed on institutions that receive federal funding. In particular, the Public Health Service (PHS) Policy for the Humane Care and Use of Animals applies to all live vertebrate animals used in PHS-sponsored research. To provide further guidance on the operation of animal care and use programs under PHS regulations, the Institute of Laboratory Animal Resources of the National Research Council has published the Guide for the Care and Use of Laboratory Animals. Most other funding agencies and private foundations also require that research comply with PHS policy. As a result, institutions almost universally apply PHS standards to all animal research. Under these regulations, institutions must establish Institutional Animal Care and Use Committees that function in much the same way that IRBs function to oversee human research.

Animal care and use programs have had the option for several years of voluntarily seeking accreditation by the Association for the Assessment and Accreditation of Laboratory Animal Care (AAALAC) International. By participating in this accreditation process, institutions are able to assess their level of compliance with federal regulations and to get help in interpreting various regulations that are not spelled out in detail. Not surprisingly, the group’s staff members have been active in planning the AAHRPP.

Although the use of domestic pet species (dogs and cats) and nonhuman primates in research has attracted the most public attention, 95 percent of vertebrate animals used in research are rats and mice. These species are covered under the PHS policy, but they are not covered under the Animal Welfare Act. In September 2000, USDA settled a lawsuit filed by the Alternatives Research and Development Foundation by agreeing to initiate the rulemaking process for inclusion of rats and mice, as well as birds, under the Animal Welfare Act. However, Congress temporarily halted this process the following month by including a provision in the 2001 Agricultural Appropriations Act that prohibited USDA from using appropriated funds to begin rulemaking on this issue during fiscal year (FY) 2001, and this prohibition has now been extended through FY 2002 as well. Numerous universities and professional societies have spoken against this expanded coverage of the Animal Welfare Act. Their objections, which we believe are valid, center not on the appropriateness of providing protections to these species, but on the redundancy and cost of compliance with such regulations. The same species already are covered under the PHS Policy, as well as under the accreditation guidelines used by the AAALAC. Supporters of the inclusion of rats, mice, and birds under the Animal Welfare Act often cite the need for regulation of commercial breeders and vendors. However, a review of AAALAC’s list of accredited organizations shows that most of the large laboratory animal supply companies already have sought and received voluntary accreditation for their programs, including programs involving these species.

Preventing scholarly misconduct

Regarding scholarly misconduct, there is good news but also potential for bad news. On the one hand, research universities have not been unduly burdened with regulatory requirements in this area. Moreover, the federal government has proposed a new definition of the term, to be implemented ultimately across enforcement agencies, that is nearly uniformly seen as a vast improvement over the previous definitions used by the various agencies. In particular, the old definitions of misconduct included an ambiguous category described as “other practices” that seriously or significantly differed from accepted scientific norms. The proposed policy restricts the definition of misconduct to “fabrication, falsification, or plagiarism in proposing, performing, or reviewing research or in reporting research results.” These terms are further defined as follows:

Fabrication is “making up data or results and recording or reporting them.”

Falsification is “manipulating research materials, equipment, or processes, or changing or omitting data or results such that the research is not accurately represented in the research record.”

Plagiarism is “the appropriation of another person’s ideas, processes, results, or words without giving appropriate credit.”

The proposed definition concludes, as did previous definitions, with the observation that honest error or differences of opinion do not constitute research misconduct.

This is a common sense approach with which few people can legitimately argue, in large part because the government, the public, and research universities share many of the same interests in regulating research misconduct. All three groups share an interest in the integrity and quality of research funded by public money and focused on progress in improving the collective public health, quality of life, and national security. Beyond that, it is vital to research universities, and to the national research enterprise of which they are a fundamental part, that these interests be fostered in a way that does not unnecessarily restrict or handicap research.

However, universities do see an important need to ensure that the costs they must bear in complying with these requirements are in direct relation to the risk of scholarly misconduct, which is deemed to be quite low. For example, the National Institutes of Health (NIH) in 1999 awarded nearly 13,000 grants and contracts, most involving multiple researchers. For that year, the PHS Office of Research Integrity (ORI) was called on to investigate 33 cases of alleged misconduct. Only 13 of these cases were sustained with a final finding of actual misconduct. Research universities also are concerned that they be able to maintain institutional responsibility and flexibility in investigating and adjudicating their own internal affairs. Here, too, there is optimism, as the proposed policy on misconduct includes maintaining local autonomy as a useful goal.
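To put those numbers in perspective, the sketch below computes the rates implied by the figures just cited; it is simple arithmetic on the numbers in the text, and because most awards involve multiple researchers, the per-investigator rate would be lower still.

```python
# Rates implied by the 1999 figures cited above: ORI investigations and
# sustained misconduct findings relative to NIH awards for that year.
# Because most awards involve multiple researchers, the per-investigator
# rate would be lower still.

nih_awards_1999 = 13_000
cases_investigated = 33
findings_sustained = 13

print(f"Investigations per award:      {cases_investigated / nih_awards_1999:.2%}")
print(f"Sustained findings per award:  {findings_sustained / nih_awards_1999:.2%}")
```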

Thus, if misconduct were all that the government intended to regulate, research universities would not have much cause to complain. However, “research misconduct” seems to have become synonymous in regulatory parlance with “research ethics.” Reflecting this development, the PHS issued its final “Policy on Instruction in the Responsible Conduct of Research” in December 2000. Although implementation of the policy has been and remains suspended, the policy as proposed would confront universities with the prospect of having to provide mandatory instruction in the “responsible conduct of research” for all staff involved in PHS-funded research, including research collaborators outside the university. The program of instruction includes nine core areas determined by the PHS to be “significant,” five of which have not previously been the focus of regulatory attention by the agency, including publication and authorship practices. The scope of the draft policy was somewhat revised in response to extensive criticism from the research community. For example, under the proposed policy it is no longer required, but still “recommended,” that the program of instruction be extended to administrative and other support staff. But it took an inquiry by a House of Representatives committee to stop, even temporarily, this far-reaching initiative.

At present, the PHS has suspended implementation of the policy while the agency responds to the committee’s concern that the ORI exceeded its legal authority in issuing “a final policy . . . that would impose new requirements on our nation’s research institutions.” If the PHS ultimately issues a policy that resembles the current suspended version, then the costs of compliance are anticipated to be staggering for research universities and far out of line with the low risk detected by the ORI’s own study.

Taking action

Heeding widespread complaints from the research community concerning the crippling effects of the federal regulatory structure, the House Committee on Appropriations requested in its budget for FY 1998 that agencies mount an effort to streamline duplicative and unnecessary regulations governing the conduct of extramural scientific research. In response, NIH established the Peer Review Oversight Group/Regulatory Burden Advisory Group to address major issues of increasing regulatory burden. We applaud this initiative and the changes that have been implemented, such as just-in-time IRB review (the review of human subject research only after the likelihood of funding is known). But few, if any, researchers and university administrators would say today that their regulatory burden has been greatly simplified or reduced since the initiative began. Therefore, we call on this group to look not at the fringes of the regulations for ways to streamline processes, but at the heart of the regulations and their redundancy across departments and agencies.

Although there can be no disagreement about the need to carefully and consistently ensure adherence by universities to regulations based on societal values, the details of bureaucratic implementation of the regulations are critical to the health of the nation’s university research enterprise. An obvious goal should be to streamline the regulatory apparatus and make more uniform the plethora of regulations that now exist.

In the area of protecting human research subjects, the federal government should rewrite all of its regulations so that there is a single set of rules, as well as explicit interpretations of those rules. The Common Rule, which covers nearly every type of situation, might serve as a solid foundation, with additions and refinements made as necessary. There is simply no need to have multiple agencies enforcing redundant regulations. The government also should identify a single agency responsible not only for the implementation and interpretation of the regulations, but also for conducting IRB audits and reviews. This agency could be either an existing one or a newly created administrative entity. Managed properly, this restructuring would not only relieve universities of unnecessary burdens but also lower regulatory costs for both the regulator and the regulated institutions.

We must find a way to ensure that well-documented costs for compliance activities are appropriately reimbursed.

To better protect animals used in research, the time has come as well to provide regulatory oversight for nonprofit research and educational institutions under a single agency’s umbrella. Again, either an existing agency or a new one could function in this role. Such consolidation would eliminate concerns about redundant regulations with potentially different reporting requirements and inspection parameters.

In the case of scholarly misconduct policies and training in the responsible conduct of research, any additional administrative burden on universities must be more narrowly tailored to the true nature and extent of the problem. Requiring thousands of federally funded researchers to receive formal (often additional) ethics training will not result in a wise use of resources, nor will it do much to reduce the already low risk that misconduct will occur. The goal must be to develop policies that truly can make a difference, rather than merely make investigators (and support staff, in some instances) jump through another hoop.

Across all of these areas, regulatory agencies should be urged to develop model programs that instruct university personnel on how to comply with their requirements.

Finally, in the case of all research regulatory procedures, the true cost of implementing new regulations, as well as new and more stringent interpretations of existing regulations, must be addressed. Proposals for addressing this issue include removing the arbitrary cap on the administrative components of facilities and administrative costs (commonly referred to as “indirect costs”) or developing some other mechanism whereby universities can fairly recover from the government the growing cost of regulatory compliance. Whatever the solution, we must find a way to ensure that well-documented costs for compliance activities are appropriately reimbursed. However, irrespective of the cost or the reimbursement of those costs, we must ensure that the time and energy of the research community are not inordinately diverted from its most important task: the development of new knowledge for the benefit of all.

Recommended reading

Committee on Assessing the System for Protecting Human Research Subjects, Board on Health Sciences Policy, Institute of Medicine, Preserving Public Trust: Accreditation and Human Research Participant Protection Programs (Washington, D.C.: National Academy Press, 2001).

Final Federal Policy on Research Misconduct, 65 Federal Register 76260 (December 6, 2000).

Steven Goldberg, “The Statutory Framework for Basic Research,” Culture Clash (New York: New York University Press, 1994), 44–68.

Institute of Laboratory Animal Resources, National Research Council, Guide for the Care and Use of Laboratory Animals (Washington, D.C.: National Academy Press, 1996). (http://www.nap.edu/readingroom/books/labrats/)

Sheila Jasanoff, Science at the Bar (Cambridge, Mass.: Harvard University Press, 1995), 93–113.

National Commission for the Protection of Human Subjects of Biomedical and Behavioral Research, The Belmont Report: Ethical Principles and Guidelines for the Protection of Human Subjects of Research (Washington, D.C.: Department of Health, Education, and Welfare, April 18, 1979).

National Bioethics Advisory Commission, Ethical and Policy Issues in Research Involving Human Participants (2001).

Office of Research Integrity, PHS Policy on Instruction in the Responsible Conduct of Research (RCR) (Washington, D.C.: U.S. Department of Health & Human Services, December 1, 2000).

Letter of W. J. Tauzin, chair of the House Committee on Energy and Commerce, to Chris Pascal, director of the PHS Office of Research Integrity, February 5, 2001.


David L. Wynes is assistant vice president for research, Grainne Martin is senior associate counsel for research, and David J. Skorton ([email protected]) is vice president for research at the University of Iowa.

Improving U.S.-Russian Nuclear Cooperation

Anticipating that nuclear proliferation problems might erupt from the disintegration of the Soviet Union a decade ago, the United States created a security agenda for working jointly with Russia to reduce the threat posed by the legacy of the Soviet nuclear arsenal. These cooperative efforts have had considerable success. Yet today, the administrations of both President George W. Bush and Russian President Vladimir Putin are neglecting the importance of current nuclear security cooperation.

If these programs fall victim to that neglect or become a casualty of renewed U.S.-Russian tensions over the proposed deployment of a widespread U.S. ballistic missile defense system and the future of the Anti-Ballistic Missile (ABM) Treaty, then international security will be imperiled. There is no value in renewed animosity between the world’s top nuclear powers, especially if it helps push nuclear weapons materials and scientists to other nations or terrorist groups that desire to develop or expand their own weapons capabilities. Both nations need to take action, individually and jointly, to continue and in some cases expand the programs underway, as well as to develop new programs to address emerging problems. Vast amounts of nuclear, chemical, and biological weapons materials have yet to be secured or eliminated; export and border controls are grossly inadequate; and Russian weapons facilities remain dangerously oversized, and their scientists often lack sufficient alternative work. The need to aggressively address these threats is at least equal in importance to the need to counter the dangers posed by ballistic missile proliferation.

In bipartisan action in 1991, Congress laid the foundation for the cooperative security agenda by enacting what became known as the Nunn-Lugar program, named for its primary co-sponsors, Senators Sam Nunn (D-Ga.) and Richard Lugar (R-Ind.). This initiative has since developed into a broad set of programs that involve a number of U.S. agencies, primarily the Departments of Defense, Energy, and State. The government now provides these programs with approximately $900 million to $1 billion per year, and the results are tangible.

The first success came in 1992, when Ukraine, Belarus, and Kazakhstan agreed to return to Russia the nuclear weapons they had inherited from the Soviet breakup and to accede to the Nuclear Nonproliferation Treaty as nonnuclear weapon states. The same year, the United States helped Russia establish several science centers designed to provide alternative employment for scientists and technicians who had lost their jobs, and in some cases had become economically desperate, as weapons work in Russia was significantly reduced.

In 1993, the United States and Russia signed the Highly Enriched Uranium Purchase agreement, under which the United States would buy 500 metric tons of weapons-grade highly enriched uranium that would be “blended down” or mixed with natural uranium to eliminate its weapons capability and be used as commercial reactor fuel. The two nations also established the Material Protection, Control, and Accounting program, a major effort to improve the security of Russia’s fissile material, and they signed an accord to build a secure storage facility for fissile materials in Russia.

In 1994, U.S. and Russian laboratories began working directly with each other to improve the security of weapons-grade nuclear materials, and the two countries reached an agreement to help Russia halt weapons-grade plutonium production. Assistance to the Russian scientific community also expanded, with weapons scientists and technicians being invited to participate in the Initiatives for Proliferation Prevention program, which is focused on the commercialization of nonweapons technology projects.

In 1995, the first shipments of Russian highly enriched uranium began arriving in the United States.

In 1996, the last nuclear warheads from the former Soviet republics were returned to Russia. In the United States, Congress passed the Nunn-Lugar-Domenici legislation, which expanded the original cooperative initiative and sought to improve the U.S. domestic response to threats posed by weapons of mass destruction that could be used on American soil.

In 1997, the United States and Russia agreed to revise their original plutonium production reactor agreement to facilitate the end of plutonium production.

In 1998, the two nations created the Nuclear Cities Initiative, a program aimed at helping Russia shrink its massively oversized nuclear weapons complex and create alternative employment for unneeded weapons scientists and technicians.

In 1999, the Clinton administration unveiled the Expanded Threat Reduction Initiative, which requested expanded funding and extension of the life spans of many of the existing cooperative security programs. The United States and Russia joined to extend the Cooperative Threat Reduction agreement, which covers the operation of Department of Defense (DOD) activities such as strategic arms elimination and warhead security.

In 2000, the United States and Russia signed a plutonium disposition agreement providing for the elimination of 34 tons of excess weapons-grade plutonium by each country.

These and other efforts have produced significant, and quantifiable, results, which are all the more remarkable because they have been achieved under often difficult circumstances, as ministries and institutes that only a decade ago were enemies have been required to cooperate. In Russia, more than 5,550 nuclear warheads have been removed from deployment; more than 375 missile silos have been destroyed; and more than 1,100 ballistic missiles, cruise missiles, submarines, and strategic bombers have been eliminated. The transportation of nuclear weapons has been made more secure, through the provision of security upgrade kits for rail cars, secure blankets, and special secure containers. Storage of these weapons is being upgraded at 123 sites, through the employment of security fencing and sensor systems, and computers have been provided in an effort to foster the creation of improved warhead control and accounting systems.

With construction of the Mayak Fissile Material Storage Facility, the nuclear components from more than 12,500 dismantled nuclear weapons will be safely stored in coming years. Security upgrades also are underway for the roughly 600 metric tons of plutonium and highly enriched uranium that exist outside of weapons, primarily within Russia, and improvements have been completed at all facilities containing weapon-usable nuclear material outside of Russia. Through the Highly Enriched Uranium Purchase Agreement, 122 metric tons of material, recovered from the equivalent of approximately 4,884 dismantled nuclear warheads, has been eliminated. In addition, on the human side of the equation, almost 40,000 weapons scientists in Russia and other nations formed from the Soviet breakup have been given support to pursue peaceful research or commercial projects.
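The scale of the uranium purchase can be made concrete with a little arithmetic on the figures above; the sketch below is illustrative only and uses no numbers beyond those quoted in the text.

```python
# Arithmetic on the HEU Purchase Agreement figures cited above: implied HEU
# per warhead equivalent and progress toward the 500-metric-ton total.
# No numbers beyond those quoted in the text are used.

agreement_total_t = 500.0     # metric tons covered by the 1993 agreement
delivered_t = 122.0           # metric tons blended down and eliminated so far
warhead_equivalents = 4_884   # warhead equivalents cited for that material

kg_per_warhead = delivered_t * 1000.0 / warhead_equivalents
fraction_complete = delivered_t / agreement_total_t

print(f"Implied HEU per warhead equivalent: about {kg_per_warhead:.0f} kg")
print(f"Share of the 500-ton purchase completed: {fraction_complete:.0%}")
```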

Beyond yielding such statistical rewards, these cooperative programs also have created an important new thread in the fabric of U.S.-Russian relations, one that has proven to be quite important during times of tension. Indeed, the sheer magnitude of the cooperative effort and the constant interaction among U.S. and Russian officials, military officers, and scientists have created a relationship of trust not thought possible during the Cold War. These relationships are an intangible benefit that is hard to quantify in official reports, but they are a unique result of this work. Until now, no crisis in U.S.-Russian relations has significantly derailed the cooperative security agenda. Even the damaging rift between the countries that developed as a result of the bombing campaign over Kosovo only slowed or temporarily halted some low-level projects on the Russian side; it did not result in the elimination of any of them.

Problems persist

Despite such accomplishments, however, some of the programs face significant problems. Milestones have been missed. Promises have been made but not kept. The political atmosphere on both sides is less friendly now than when the programs began. And in some quarters of the Bush administration, questions are being raised about the enduring importance of this cooperation. For progress to continue, two critical problem areas need to be addressed: access by each nation to the other’s sensitive facilities, and Russia’s current cooperation with Iran.

Access and reciprocity. Since the beginning of the cooperative agenda, the United States has insisted on having greater access to Russian facilities, arguing that the United States needs to make sure that its funds are being spent appropriately. For example, DOD’s Cooperative Threat Reduction program requires regular audits and inspections by U.S. officials, and the Department of Energy’s (DOE’s) programs make use of less formal but still fairly stringent standards for inspection. In recent years, however, many clashes over access have occurred, and rigidity has replaced flexibility. Spurred by congressional requirements and bureaucratic frustration, the United States has hardened its demands for access. Russia has resisted, arguing that U.S. intrusion could compromise classified information and facilitate spying, and that Russian specialists already have less access to U.S. facilities than U.S. specialists do to Russia’s facilities.

The administrations of both President George W. Bush and Russian President Vladimir Putin are neglecting the importance of current nuclear security cooperation.

This tug-of-war has become a major bone of contention that has interfered with some cooperation and fed the political mistrust and resentment that still remain an undercurrent of U.S.-Russian relations. Clearly, some balance on this issue must be found. The United States rightly wants assurance that its funds are being used properly, and Russia has legitimate security concerns. But continuing the impasse will be destructive to the interests of both sides. Unfortunately, it is not clear that the issue is being adequately addressed. In many cases, individual programs are left free to define their own access requirements and pursue their own access methods and rules. The issue of access may need to be addressed at a higher political level and with more cohesiveness than has been exercised in the past.

Russia’s cooperation with Iran. The trigger for this disagreement was Russia’s decision in 1995 to help Iran complete a 1,000-megawatt light water reactor in the port city of Bushehr, and controversies between Russia and the United States over this arrangement have only grown sharper over the years. U.S. officials maintain that the process of building the plant is aiding Iran’s nuclear weapon ambitions. Russia denies this accusation and claims that its actions are consistent with the Nuclear Nonproliferation Treaty, which allows the sharing of civilian nuclear technologies among signatories. This fight has resulted in an informal stalemate under which Russia continues to work to rebuild the Iranian nuclear plant while agreeing to limit other nuclear cooperation. However, there have been problems with this uneasy truce, including charges by the United States that Russia is cooperating in other illicit nuclear exchanges and U.S. concerns about planned Russian transfers of sensitive technology and increased sales of conventional weapons. Resolving these issues in a way that satisfies both U.S. and Russian political and economic needs will be extremely difficult.

The new administration

When the Bush administration came to office, many observers expected that there would be significant support for nuclear security cooperation programs. During the election campaign, the president and his advisers made a number of positive statements on the subject, and pledged to increase spending on key programs. But the reality of the administration’s governance has not matched its campaign rhetoric. Indeed, in one of its first acts, the administration proposed significant cuts in several of the cooperative programs. Thus far, Congress, working with bipartisan support, is resisting many of the proposed reductions.

Some of the administration’s largest proposed cuts would hit the most important programs. For example, the program to ensure that Russia’s weapons-grade fissile material and some portion of its warheads are adequately protected would be cut by almost 20 percent, even though this effort is already behind schedule. Another set of programs hit by cuts includes those to eliminate equal amounts of the excess U.S. and Russian stockpiles of plutonium. These programs focus on two types of technologies: one for immobilizing the plutonium in a radioactive mixture and the other for mixing the plutonium with uranium to create a mixed-oxide fuel that can be used in commercial power reactors. The goal of both approaches is to create a radioactive barrier around the plutonium that makes it extremely difficult to retrieve for use in weapons. The proposed budget significantly decreases funding for disposal of Russian plutonium. And although the budget slightly increases overall funding for the disposition of U.S. plutonium, it raises questions about the administration’s willingness to support both types of technologies, because it drastically cuts support for activities based on immobilization. Yet at the same time, administration officials have raised questions about the cost of the mixed-oxide fuel option. As a result, the program now remains in limbo, and the administration apparently has not decided how to proceed.

Perhaps even more difficult to understand, the budget eliminates a $500,000 effort to provide Russia with incentives to publish a comprehensive inventory of its weapons-grade plutonium holdings. Without knowing how much plutonium Russia has, it is impossible to know how much excess must ultimately be eliminated. The United States has published its plutonium inventory, and it should be encouraging Russia to do the same.

The budget also decimates the already relatively small Nuclear Cities Initiative. Certainly, this program to help Russia shrink its massively oversized nuclear weapons complex and create jobs for unneeded weapons scientists and workers has suffered problems, in part because its mission is difficult and in part because its strategy has been flawed. But simply eliminating the program would leave an important national security objective inadequately funded. Such a step also would jeopardize European contributions to the downsizing process–contributions that only recently have begun to materialize. Even the U.S. General Accounting Office, which has criticized some aspects of the program, declared in a report released in spring 2001 that the program’s goals are in U.S. national security interests.

After proposing its budget cuts, the administration doubled back and launched a review of the cooperative security agenda. This was a prudent, if poorly sequenced, step: It is proper for a new administration to want to be sure that federal programs are meeting national security needs. In fact, many observers had urged the Clinton administration to perform a comprehensive review of U.S.-Russian nuclear security programs, but to no avail.

Unfortunately, the complete results of this review are not known publicly. No final report has been issued, and administration officials have stated that no final decisions have been made. Through a few briefings on a draft report, officials have revealed that, at least preliminarily, the review endorsed many of the current programs. This is welcome news. But it remains unclear how the scope and pace of many future activities may be affected by the review’s outcome.

The draft review does call for significant restructuring in at least two areas. One recommendation would virtually eliminate the Nuclear Cities Initiative, as called for in the administration’s proposed budget. Successful projects conducted through the initiative would be merged with other programs. Congress is opposing such a move, however, and the administration has offered no other proposals on how to facilitate the downsizing of the Russian nuclear weapons complex in the absence of this program.

Another recommendation calls for restructuring the plutonium disposition programs, citing, in part, the administration’s concerns about cost. The price tags of these programs have grown significantly. The Russian component is now estimated at more than $2 billion, and the U.S. component at approximately $6 billion–roughly a 50 percent increase over the initial estimates made in 1999 for the U.S. program alone. One way the administration is considering to reduce these spiraling costs is for the United States to design and build new reactors that can burn unadulterated plutonium and provide electricity. The implication is that this would help achieve national security goals and national energy objectives simultaneously. But if not done carefully, such R&D could violate U.S. nonproliferation policy. It also should be noted that a number of studies, by the National Academy of Sciences and by a joint U.S.-Russian team of experts, among others, have concluded that the immobilization and mixed-oxide fuel options are the most feasible and cost-effective methods for disposing of plutonium. It is not clear whether going back to study new options will serve the real security objective of the program, which is to eliminate plutonium as a proliferation threat as rapidly as possible.

Continued investment

Too much is at stake to allow the cooperative security programs to crumble in order to save a few hundred million dollars or even a few billion dollars, especially in the new environment in which billions of dollars will be spent to eliminate and thwart terrorist threats. Current spending on cooperative security is one-tenth of 1 percent of current defense spending in the United States. It is an affordable national security priority. What cannot be afforded is the destruction of programs and relationships that have taken years to nurture and that provide value to both sides. The U.S. approach should be to consolidate the successes, adopt new strategies for overcoming problems, and identify new solutions to enduring or new threats.

What is required is the creation of a policy for sustainable cooperation with Russia on nuclear security issues. Elements of such a policy include:

Engaging with Russia as a partner. The cooperative security work that occurs requires the involvement and acquiescence of both the United States and Russia. In recent years, Russian input into this process has been diminished, and problems have resulted from this disparity. On one level, there is the enduring dispute about how much of the cooperative security budget is spent in Russia versus in the United States. But there are other, perhaps more important, issues. There is the tendency of some U.S. officials to treat collaboration with Russia as a client-donor relationship, with Russia acting as a subcontractor to the United States rather than as a partner. This tendency has caused resentment and limited cooperation on the Russian side. Another issue is the Russian desire to modify the rationale for U.S.-Russian cooperation. Russia often bristles about being treated as a weapons proliferation threat, even though its own officials acknowledge their nation’s proliferation problems. Russia would prefer to cooperate with the United States in a more equal manner, as a scientific and security partner rather than as a potential proliferant.

Such a shift may not occur rapidly, but the goal has merit. Proliferation problems in Russia have been reduced during the past decade, and there is a long-term need to engage with all elements of Russian society during its continuing political transition. To achieve sustainable engagement in the weapons area, future cooperation will need to serve larger U.S. and Russian interests. One key step in this direction would be to integrate Russian experts into all phases of program design and implementation. Taking this step will require a considerable change of attitude in the United States, both in the executive branch and in Congress. It will also require a sea change of mentality in Russia. Russian officials must demonstrate that they are committed to nuclear security cooperation beyond the financial incentives for participation offered by the United States. Achieving real balance and partnership will be difficult, but it is possible with strong political leadership.

Raising the political profile and leadership. The significant expansion of the cooperative security agenda and the progress that has been made on it have been substantially facilitated by political relationships and leadership in the United States and Russia. In times when this political leadership has been lacking on one or both sides, progress has lagged and problems have festered. At present, political leadership on this agenda is lacking in both countries. This agenda needs to be carried out on multiple levels, and its technical implementation is essential. But for success to continue, there must be active political engagement at the White House, Cabinet, and sub-Cabinet political appointee levels in the U.S. government. Similar engagement must also occur in Russia. At a time when they are playing a weak hand on the future of the ABM Treaty, the Russians also have failed to push this agenda forward as a foundation for future cooperation, perhaps because it focuses primarily on shoring up areas of that nation’s weakness.

Identifying a strategic plan of action and appointing a leader. The Bush administration’s review of U.S.-Russian cooperative programs did not include a strategic review of how all the programs from multiple agencies can or should fit together from the policy perspective of the United States. Such a review is still needed, so that the president’s strategy for the implementation, harmonization, and leadership of these programs can be made clear in a public manner. In addition, there should be a joint U.S.-Russian strategic plan for how to achieve important and common objectives on an expedited basis. This would provide a roadmap of project prioritization and agreed-upon milestones for implementation. A precedent for this joint plan can be found in the joint technical program plans for improving nuclear material security that were developed in the early 1990s by U.S. and Russian nuclear laboratories.

There was a time when programs needed to be allowed to grow independently in order to facilitate progress, but the artificial separation between these programs now needs to be ended. In the United States, all of these efforts should be guided by a new Presidential Decision Directive that can bring order and facilitate progress. Congress desires a more cohesive explanation of how all the pieces fit together, and there are synergies among the programs that are being missed because of the separation. It is not necessary to consolidate all of the activities in one or two agencies. What is more important is that the work takes place as part of a cohesive and integrated security strategy with strong and enlightened high-level leadership in both countries.

Also, in the past, many programs have benefited from the involvement of outside experts in the review of programmatic successes, failures, and implementation strategies. The establishment of an outside advisory board for cooperative nuclear security would be very useful if it were structured to allow for interaction with individual programs and had the ability to report to the presidents of both nations.

Underlying such policy issues, there is a need for additional program funding, which would not only accelerate the progress of current programs but also enable new programs to be created. Some of the key examples of where accelerated or new initiatives could have a significant impact include:

Expanding the Materials Protection, Control, and Accounting program. This is the primary U.S. program to improve the security of Russia’s fissile material and to work with the Russian Navy to protect its nuclear fuel and nuclear warheads. Activities that could be implemented or speeded up include improving the long-term sustainability of the technical and logistical upgrades that are being made, accelerating the consolidation of fissile material to reduce the number of vulnerable storage facilities, and initiating performance testing of the upgrades to judge their effectiveness against a variety of threat scenarios.

Improving border and export controls. These programs render assistance to Russian customs and border patrol services, but they are fairly limited in scope. Additional funding could help Russia to improve its ability to detect nuclear materials at ports, airports, and border crossings, as well as to establish the necessary legal and regulatory framework for an effective nonproliferation export control system.

Accelerating the downsizing of the Russian nuclear complex and preventing proliferation via brain drain of its scientists. These programs now primarily fund basic science or projects that have some commercial potential. However, there are many other real-world problems that Russian weapons scientists could turn their attention to if sufficient funds and direction were provided. These include research on new energy technologies, development of environmental cleanup methods, and nonproliferation analysis and technology development.

Expediting fissile material disposition and elimination. Although programs that support the disposal of excess fissile materials in the United States and Russia have shown progress, there is room, and need, for improvement. The Highly Enriched Uranium Purchase agreement could be expanded to handle more than the current allotment of 500 metric tons. The plutonium disposition program, now in political limbo, could be put back on track so that implementation can proceed as scheduled. In addition, the United States and Russia should begin to determine how much more plutonium is excess and could be eliminated.

Ending plutonium production in Russia. Continuing plutonium production for both military and commercial purposes adds to the already significant burden of improving nuclear material security in Russia. Steps should be taken to end this production expeditiously. Russia has three remaining plutonium-producing reactors, which currently produce approximately 1.5 metric tons of weapons-grade plutonium per year. However, the reactors also provide heat and energy for surrounding towns, and in order to shut them down, other energy sources must be provided. In 2000, Congress prohibited the use of funds to build alternative fossil-fuel energy plants at these sites, the method preferred by both Russia and the United States for replacing the nuclear plants. The estimated cost of the new plants is on the order of $420 million. Congress should lift its prohibition and provide funding for building the replacement plants. Also, Congress should provide funds to enable the United States and Russia to continue their work on an inventory of Russia’s plutonium production. Finally, Congress should authorize and fund incentives to help end plutonium reprocessing in Russia. In 2000, program officials requested about $50 million for a set of projects to provide Russia with an incentive to end its continued separation of plutonium from spent fuel. But Congress approved only $23 million, and the Bush administration’s proposed budget eliminated all funding. These programs should be reconstituted.

There is no question that U.S.-Russian nuclear relations need to be adapted to the 21st century. The foundation for this transition has been laid by the endurance and successes of the cooperative security agenda. Today, each country knows much more about the operation of the other’s weapons facilities. Technical experts cooperate on topics that were once taboo. And the most secretive weapons scientists in both nations have become collaborators on efforts to protect international security. Both nations must now recognize that more progress is needed and that it can be built on this foundation of achievement–if, in fact, elimination of the last vestiges of Cold War nuclear competition and the development of effective cooperation in fighting future threats is what the United States and Russia truly seek.

From Genomics and Informatics to Medical Practice

Biomedical research is being fundamentally transformed by developments in genomics and informatics, and this transformation will lead inevitably to a revolution in medical practice. Neither academic research institutions nor society at large have adapted adequately to the new environment. If we are to effectively manage the transition to a new era of biomedical research and medical practice, academia, industry, and government will have to develop new types of partnership.

Why are genomics and informatics more important than other recent developments? The spectacular advances in cell and molecular biology and in biotechnology that have occurred in the past two decades have markedly improved the quality of medical research and practice, but they have essentially enabled us only to do better what we were already doing: to respond to problems when we find them. As our knowledge expands, for the first time genomics will provide the power to predict disease susceptibilities and drug sensitivities of individual patients. For motivated patients and forward-looking practitioners, such insights create unprecedented opportunities for lifestyle adaptations, preventive measures, and cost-saving improvements in medical practices.

To illustrate this point, let me tell you about a recent conversation I had with a friend who had successful surgery for colon cancer 10 years ago. My friend recently moved to a new city. He selected the head of gastroenterology at a nearby medical school as his new oncologist. His initial visit to this doctor was a great surprise. The doctor took a very complete history but didn’t do any laboratory tests or schedule any other examinations. The doctor simply asked my friend whether the cancerous tissue removed from his colon had been tested for mutations in DNA repair enzymes. It had, and no defects were identified. “If you had defects in your DNA repair enzymes,” said the new oncologist, “I’d have asked you to come in for a colonoscopy right away and every six months thereafter. Since you don’t have such defects, you don’t need another colonoscopy for three years.” Colonoscopies cost about $1,000 and between half a day and a day of down time. I calculate that DNA testing saved my friend $5,000 and 2.5 to 5 days of down time. Moreover, my friend’s children now know that when they reach age 50 they won’t need colonoscopies any more often than the rest of the population. I can’t conceive of a bigger change in medical practice than this.
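
The arithmetic behind those savings is worth spelling out. One way to reconstruct the figures (this is my reading of the anecdote, not a clinical guideline) is to compare roughly six semiannual colonoscopies over a three-year window with the single exam now recommended at the three-year mark:

$$ 6 - 1 = 5 \ \text{colonoscopies avoided}, $$
$$ 5 \times \$1{,}000 = \$5{,}000, \qquad 5 \times (0.5\ \text{to}\ 1\ \text{day}) = 2.5\ \text{to}\ 5\ \text{days of down time}. $$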

Advances in informatics will make it possible for every individual to have a single, transportable, and accessible cradle-to-grave medical record. Advanced information systems will allow investigators to use the medical records of individual patients for research, physicians to self-assess the quality of their own practices, and medical administrators to assess the quality of care provided by the health care personnel they supervise. And by granting public health authorities even limited access to the data collected, it will be possible for them to assess the health of the public in real time. These are not pie-in-the-sky predictions. All of these things are now technologically feasible. The sequencing of the human genome, coupled with extraordinarily powerful new methods in DNA diagnostics, such as gene chip technologies, allows us to identify relationships between physiological states and gene expression patterns. These tools also allow us to identify gene rearrangements, mutations, and polymorphisms at a rate previously thought impossible.

Information technology is advancing at a phenomenal pace. Given the enormous financial incentives for further advances, it is not a big stretch to predict that the technology required for storing and processing the data from tens of thousands of chip experiments and for storing and analyzing clinical and genomic data on millions of people will be available by 2005. Indeed, it may already be available.

Barriers to progress

What are the impediments to bringing all this to fruition? There are many, but I will focus on a few. The first is the lack of public understanding of genetics. I am surprised by how little my well-educated friends in other fields and professions know about genetics. The state of genetic knowledge among practicing physicians is also of concern. A 1995 study showed that 30 percent of physicians who ordered a commercially available genetic test for familial colon cancer–the same test my friend had–misinterpreted the test’s results. In another study, 50 neurologists, internists, geriatricians, geriatric psychiatrists, and family physicians managing patients with dementia were polled about the lifetime risk of Alzheimer’s disease in patients carrying the apolipoprotein E4 allele. Fewer than half of these physicians correctly estimated that risk at 25 to 30 percent, and only one-third of those who answered correctly were moderately sure of their response.

Life science researchers must alert their colleagues in other disciplines to the impact genomics will have on our understanding of all aspects of human life, from anthropology to zoology, and especially on what we know and think about the human condition. As C. P. Snow argued 42 years ago in The Two Cultures, science is culture. What Snow did not foresee is that genetics would become inextricably intertwined with the politics of everyday life, from genetically engineered crops to stem cell research. If we are to exploit the promise of genomics for the betterment of humankind, we must have a citizenry capable of understanding the rudiments of genetics. The research community can contribute to creating such a citizenry by ensuring that the colleges and universities at which its members teach provide courses on genomics that are accessible to nonscience majors.

A second problem is the widespread public concern about the privacy of medical information, especially genetic information. In response to this public anxiety, Congress tried to develop legislation to protect the public against adverse uses of this information by insurers and employers. But it was unable to assemble a majority behind any proposal that attempted to balance the competing interests of individual privacy against the compelling public benefits to be derived from the use of medical information to further biomedical, behavioral, epidemiological, and health services research. As a result, it fell to the Clinton administration to write health information privacy regulations. These regulations were announced with much fanfare in the closing days of that administration and implemented by the Bush administration in April 2001.

Comprising more than 1,600 pages in the Federal Register, they contained plenty that the various constituencies could take issue with. The health insurance industry and the hospitals complained loudly that they were costly and unworkable. More quietly, the medical schools warned that they could be potentially damaging to medical research and education. According to an analysis by David Korn and Jennifer Kulynych of the Association of American Medical Colleges (AAMC), these privacy regulations provide powerful disincentives for health care providers to cooperate in medical research, because they impose heavy new administrative, accounting, and legal burdens, including fines and criminal penalties; and because they are ambiguous in defining permissible and impermissible uses of protected health information. This is of great concern when viewed in the context of the opportunities for discoveries in medicine and for improvements in health care that could arise from large-scale comparisons of genomic data with clinical records.

The capacity to link genomic data on polymorphisms and mutations of specific genes with family histories and disease phenotypes has enabled medical scientists to identify the genes responsible for monogenic diseases such as cystic fibrosis, Duchenne’s muscular dystrophy, and familial hypercholesterolemia. Such analyses will be even more important in identifying genes that contribute to polygenic diseases such as adult onset diabetes, atherosclerosis, manic-depressive illness, various forms of cancer, and schizophrenia. The AAMC study revealed that the proposed regulations could slow this progress. Consider one example.

The regulations require that all individual identifiers be stripped from archived medical records and samples before they are made accessible to researchers. At first glance, that seems reasonable. But as one digs deeper, it becomes apparent that how one de-identifies these records is critical. De-identification must be simple, sensible, and geared to the motivations and capabilities of health researchers, not to those of advanced computer scientists who believe that the public will be best served by encrypting medical data so that even the CIA would have difficulty tracing them back to the individual to whom they relate.

The definition of identifiable medical information should be limited to information that directly identifies an individual. The AAMC describes this approach to de-identification as proportionality. It recommends that the burden of preparing de-identified medical information be proportional to the interests, needs, capabilities, and motivations of the health researchers who require access to it. AAMC says that the bar for de-identification has been set at too high a level in the new privacy regulations.

For example, these regulations require that “a person with appropriate knowledge of and experience with generally accepted statistical and scientific principles and methods for rendering information not individually identifiable” must certify that the risk is very small that information in a medical record could be used alone or in combination with other generally available information to link that record to an identifiable person. This certification must include documentation of the methods and the results of the analysis that justifies this determination.

Alternatively, the rules specify 18 elements that must be removed from each record. These include Zip codes and most chronological data. But removal of these data would render the resulting information useless for much epidemiological, environmental, occupational, and other types of population-based research. The regulations also require that device identifiers and serial numbers must be removed from medical records before they can be shared with researchers. This would make it difficult for researchers to use these records for postmarketing studies of the effectiveness of medical devices.
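
To make the practical difference concrete, the following minimal sketch (in Python) contrasts stripping only direct identifiers with stripping the broader enumerated set. The field names and the split between the two sets are illustrative assumptions for this example, not the regulation’s actual 18-element list:

```python
# Illustrative sketch only: hypothetical field names, not the regulatory definitions.

DIRECT_IDENTIFIERS = {"name", "ssn", "medical_record_number"}           # assumed "direct" identifiers
ADDITIONAL_ELEMENTS = {"zip_code", "admission_date", "device_serial"}   # fields the stricter approach would also strip

def deidentify(record: dict, proportional: bool = True) -> dict:
    """Return a copy of the record with identifiers removed.

    proportional=True strips only direct identifiers, keeping ZIP codes,
    dates, and device serial numbers that population-based and
    postmarketing research typically need; proportional=False mimics the
    stricter enumerated-element approach described above.
    """
    to_remove = DIRECT_IDENTIFIERS if proportional else DIRECT_IDENTIFIERS | ADDITIONAL_ELEMENTS
    return {key: value for key, value in record.items() if key not in to_remove}

if __name__ == "__main__":
    record = {
        "name": "Jane Doe",                 # obviously fabricated example data
        "ssn": "000-00-0000",
        "medical_record_number": "MRN-12345",
        "zip_code": "02115",
        "admission_date": "1999-06-01",
        "device_serial": "PM-4411",
        "diagnosis": "colon cancer",
        "dna_repair_mutation_test": "negative",
    }
    print(deidentify(record, proportional=True))   # retains ZIP, date, and serial for research use
    print(deidentify(record, proportional=False))  # strips them, as the enumerated approach would
```

Under the stricter setting, the very fields that epidemiological and device studies depend on disappear, which is the crux of the AAMC’s objection.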

The AAMC argues, and I agree, that sound public policy in this area should encourage to the greatest extent possible the use of de-identified medical information for all types of health research. The AAMC has urged the secretary of Health and Human Services to rethink the approach to de-identification and to create standards that more appropriately reflect the realities of health research, not the exaggerated fears of encryption experts.

Individual and societal rights

This is a classic confrontation between individual and societal rights. Since Hippocrates there has been widespread agreement that an individual’s medical history and problems should be held in confidence. At the same time, there is equally widespread agreement that societies have legitimate interests in ascertaining the health status of their citizens, the incidence of specific diseases, and the efficacy of treatments for these diseases. The new regulations give too much weight to individual rights. We need to go back to the drawing board to try to get this balance right. With some creativity, we can satisfy both sides.

So far, the science community has not been involved in the privacy issue. I believe that this is the time for university researchers to join with the AAMC and others to ensure that the privacy regulations are changed so that all members of our society can benefit from our investment in medical and health research. Such information is needed now more than ever.

Although improved privacy regulations are essential, they will not by themselves reassure everyone. To build that trust, the scientific community can provide leadership in three ways: First, in the genomic era many, perhaps most, individuals will have genetic tests. Therefore, we must educate our faculty, staff, students, and the public about the benefits and complexities of the new genetics. Second, we must train faculty, staff, and health professions students to obtain informed consent from patients for use of historical and phenotypic data in conjunction with blood and tissue samples for research. And third, we must implement existing technologies and develop better ones to ensure the accessibility and security of medical records.

Implicit throughout this discussion is the need for widespread implementation of electronic medical records, which are as important to researchers as they are to physicians. Electronic medical records will facilitate communication among all health professionals caring for a patient, permit public health officials to assess the health of the public in real time, expand opportunities for self-assessment by individual professionals, and provide better methods for ensuring the quality and safety of medical practice.

In addition, there are special reasons for medical scientists to take an interest in this matter. The most straightforward is that without electronic medical records the process of de-identification will be hopelessly complex, time-consuming, and costly. But even if de-identification of paper records could magically be made simple and cheap, paper records will still be inadequate for genomic research. Genomic research requires the capacity to link specific genes and gene polymorphisms that contribute to disease with people who have that disease. Large-scale studies of this type will be markedly facilitated by the capacity to electronically scan the medical records of tens of thousands of patients.

The Institute of Medicine has issued several reports on the electronic medical record. However, progress has been slow. The reasons for this are many, including the complexity of capturing in standardized formats the presentations and courses of human diseases, the high cost of development and implementation of such systems, and the difficulties inherent in inducing health care professionals to use them. Yet without electronic medical records it will be extremely difficult for teaching and research hospitals to make full use of contemporary methods to screen and identify associations between genes and diseases.

The promise of genomics gives our teaching and research hospitals a new incentive for implementing electronic medical records, and industry and government should recognize that they have incentives for helping them do so. Our teaching and research hospitals have the clinical investigators and the access to patients needed to link genes and diseases. Industry has the capacity for high-throughput screening and the information systems needed to efficiently process these data to identify mutations and polymorphisms. And government, acting on behalf of society at large, has an interest in fostering such collaborations between the not-for-profit and the for-profit sectors. However, the problems, as I see them, are several.

First, there is at present no widespread consensus that the issues are as I have stated them. Second, the teaching and research hospitals have not yet recognized that they will have great difficulties in creating and implementing the necessary information systems without major assistance from government and/or industry. Third, industry has not yet recognized the magnitude of the task ahead and has not determined that the profits to be earned in this area are more likely to come from drug discovery than they are from finding gene targets. Fourth, with respect to intellectual property ownership, in the area of genetics the not-for-profit and for-profit sectors are in head-to-head competition.

We need to find win-win avenues for cooperation between academia and industry, and the two sectors need to appeal jointly to government for assistance in catalyzing cooperative ventures with money and with appropriate legislation. The catalytic effect of the Bayh-Dole Act on the development of the biotechnology industry should alert us to the positive effect creative legislation can exert in this area. As I see it, academic medical centers have the patients, the clinical workforce to care for the patients, and the confidence of the public. Industry has already put into place many of the requisite technologies. The challenge before all of us is to see whether we can reach consensus on the specific problems that impede cooperation between industry and academia in the area of human genetic research and to find avenues through which the enlightened self-interests of both academia and industry can be united for the benefit of the public.

For the reasons outlined above, I believe that the use of electronic medical records is a key ingredient in speeding progress in all types of medical research in this country and of genomics in particular. I believe that an academic-industrial-government partnership to create and implement a national system of electronic medical records is a feasible and desirable goal. It is one that will facilitate cooperation between academia and industry, speed discovery of linkages between genes and diseases, and at the same time contribute to the improvement of health care delivery in the United States.

The human genome belongs to every human being. The public has provided the resources to characterize and sequence it, and it has entrusted us with the responsibility to use what we have learned about it for the benefit of humankind.

A New Approach to Managing Fisheries

Most commercial fisheries in the United States suffer from overfishing or inefficient harvesting or both. As a result, hundreds of millions of dollars in potential income is lost to the fishing industry, fishing communities, and the general economy. Excessive fishing effort has also resulted in higher rates of unintentional bycatch mortality of nontargeted fish, seabirds, and marine mammals, and in more ecological damage than necessary to benthic organisms from trawls, dredges, and other fishing gear.

These documented losses underscore the nation’s failure to manage its fisheries efficiently or sustainably. The problems have been addressed through a wide variety of regulatory controls over entry, effort, gear, fishing seasons and locations, size, and catch. Yet the Sustainable Fisheries Act of 1996 emphasized the continuing need to stop overfishing and to rebuild stocks. In the management councils of specific fisheries, there is sometimes bitter debate about the best way to achieve this turnaround.

Particularly contentious are management regimes based on the allocation of rights to portions of the total allowable catch (TAC) to eligible participants in a fishery: so-called rights-based fishing management systems. Best known among rights-based regimes are individual transferable quota (ITQ) systems, in which individual license holders in a fishery are assigned fractions of the TAC adopted by the fishery managers, and these quotas are transferable among license holders by sale or lease.

Opinion on the merits of rights-based management regimes is divided. Within a single fishery, some operators might strongly favor shifting to a rights-based regime and other operators strongly oppose such a move. Among academic experts, economists generally favor the adoption of such systems for their promise of greater efficiency and stronger conservation incentives, but other social scientists decry the potential disruption of fishing communities by market processes and the attrition of fishing jobs and livelihoods. These divisions are reflected in the political arena. The U.S. Senate, responding to constituent concerns in some fishing states, used the 1996 Sustainable Fisheries Act to impose a moratorium on the development of ITQ systems by any fisheries management council and on the approval of any ITQ system by the National Marine Fisheries Service (NMFS). A recent National Research Council committee report, Sharing the Fish, which examined these controversies, is no more than a carefully balanced exposition of pros and cons, though the committee did recommend that Congress rescind its moratorium. Despite support from some senators, that recommendation has not been adopted, and the moratorium has recently been extended.

Only four U.S. marine fisheries operate under such regimes: the Atlantic bluefin tuna purse seine fishery, the mid-Atlantic surf clam and ocean quahog fishery, the Alaskan halibut and sablefish fishery, and the South Atlantic wreckfish fishery. In all four, there are too few years of data from which to draw firm conclusions regarding the long-term consequences. However, in all but one there have been significant short-term benefits. Excess capacity has been reduced, fishing seasons have been extended, fleet utilization has improved, and fishermen’s incomes have risen in all but the small wreckfish fishery, in which effort and catch have declined. Quota holders have adjusted their operations in various ways to increase the value of the harvest, by providing fresh catch year round, for example, or by targeting larger, more valuable prey.

Some other fishing nations, notably Iceland and New Zealand, use rights-based regimes to manage nearly all their commercial fisheries. Still others, such as Canada and Australia, use such regimes in quite a few of their fisheries. A recent overview by the United Nations Food and Agriculture Organization finds that rights-based systems have generated higher incomes and financial viability, greater economic stability, improved product quality, reduced bycatch, and a compensation mechanism for operators leaving the fishery. The corresponding drawbacks are higher monitoring and enforcement costs, typically borne by industry; reduced employment and some greater degree of concentration as excess capacity is eliminated; and increased high-grading in some fisheries as operators seek to maximize their quota values.

Experience with rights-based management indicates that it also promotes conservative harvesting by assuring quota holders of a share of any increase in future harvests achieved through stock rebuilding. Such systems also promote efficiency by allowing quota holders flexibility in the timing and manner of harvesting their share to reduce costs or increase product value. Studies have also found that ITQs stimulate technological progress by increasing the returns to license holders from investments in research or improved fishing technology.

Partly because controversies have blocked adoption of rights-based systems in the United States and partly because there has never been an evaluation of actual experience in all ITQ systems worldwide using up-to-date data and an adequate, comparable assessment methodology, debate continues in a speculative but heated fashion about the possible positive and negative effects of adopting ITQ systems. This lack of definitive information makes it imperative to study carefully all available experience that sheds light on the likely consequences of adopting rights-based fishing regimes.

Fortunately, a rare naturally occurring experiment in the U.S. and Canadian Atlantic sea scallop fisheries provides such an opportunity. Fifteen years ago, Canada adopted a rights-based system in its offshore sea scallop fishery, whereas the United States continued to manage its scallop fishery with a mix of minimum harvest-size and maximum effort controls. A side-by-side comparison of the evolution of the commercial scallop fishery and of the scallop resource in the United States and Canada illuminates the consequences of these two very different approaches to fisheries management.

The Atlantic sea scallop fishery is especially suitable to such a comparison. The fishery has consistently been among the top 10 in the United States in the value of landings. After dispersing widely on ocean currents for about a month in the larval stage, juvenile scallops settle to the bottom. If they strike favorable bottom conditions, they remain relatively sedentary thereafter while growing rapidly. After they are first recruited into the fishery at about age three, scallops quadruple in size by age five, so harvesting scallops at the optimal age brings large rewards. Spawning potential also increases substantially over these years: Scallops four years old or older contribute approximately 85 percent to each year’s enormous fecundity, which can allow stocks to rebound fairly quickly when fishing pressure is reduced. A high percentage of the scallop harvest in both countries is caught in dredges towed along the bottom. The recreational fishery is negligible. Both Canada and the United States draw most of their harvest from George’s Bank, across which the International Court of Justice drew a boundary line in 1984, the Hague Line, separating the exclusive fishing grounds of the two countries.

The U.S. and Canadian scallop fisheries were compared by collecting biological and economic data on each for periods before and after Canada adopted rights-based fishing in 1986. The data underlying the figures and tables here were supplied by the NMFS, the New England Fisheries Science Center, the New England Fisheries Management Council, and the Canadian Department of Fisheries and Oceans. This quantitative information was enriched by interviews carried out in Nova Scotia and in New England during the summer of 2000 with fishing captains, boat owners, fisheries scientists and managers, and consultants and activists involved with the scallop fisheries in the two countries.

The road not taken

The U.S. Atlantic sea scallop fishery extends from the Gulf of Maine to the mid-Atlantic, and the NMFS manages all but the Gulf of Maine stocks as a single unit. From 1982 through 1993, about the only management tool in place was an average “meat count” restriction, which prescribed the maximum number of scallop “meats” in a pound of harvested and shucked scallops. Entry into the scallop fishery remained open.

This approach was inadequate to prevent either recruitment or growth overfishing. (Growth overfishing means harvesting the scallops too young and too small, sacrificing high rates of potential growth. Recruitment overfishing means harvesting them to such an extent that stocks are reduced well below maximum economic or biological yield because the reproductive potential is impaired.) Limited entry was introduced through a moratorium on the issuance of new licenses in March 1994, but more than 350 license holders remained. This many licenses were estimated at the time to exceed the capacity consistent with stock rebuilding by about 33 percent.

If the U.S. scallop fishery were a business, its management would surely be fired.

Because of excessive capacity, additional measures to control fishing effort were also adopted. The allowable days at sea were scheduled to drop from 200 in the initial year to 120 in 2000, which is barely enough to allow a full-time vessel to recover its fixed costs under normal operating conditions. A maximum crew size of seven was adopted–an important limitation, because shucking scallops at sea is very labor intensive. Minimum diameters were prescribed for the rings on scallop dredges to allow small scallops to escape, and minimum-size restrictions were retained. Together, these rules constituted a system of stringent effort controls.

In December 1994, another significant event for scallop fishermen occurred: Three areas of George’s Bank were closed to all fishing vessels capable of catching cod or other groundfish, a measure necessitated by the collapse of the groundfish stocks. Scallop dredges were included in this ban, cutting the fishery off from an estimated five million pounds of annual harvest and shifting fishing effort dramatically to the mid-Atlantic region and other open areas. (Two small areas in the mid-Atlantic region were subsequently closed to protect juvenile scallops.)

The U.S. scallop fishery was also strongly affected by provisions in the Sustainable Fisheries Act of 1996, which required fisheries managers to develop plans to eliminate overfishing and restore stocks to a level that would produce the maximum sustainable yield. Because current scallop stocks were estimated to be only one-third to one-fourth of that level, these provisions mandated a drastic reduction in fishing effort. The plan adopted in 1998 provided that allowable days at sea would fall from 120 to as few as 51 over three years, a level that would be economically disastrous for the fishery.

In response, the Fisheries Survival Fund (FSF), an industry group, formed to lobby for access to scallops in the closed areas, a relief measure that was opposed by some groundfish interests, lobstermen, and environmentalists. Industry-funded sample surveys found that stocks in the closed areas had increased 8- to 16-fold after four years of respite. On this evidence, direct lobbying of the federal government secured permission for limited harvesting of scallops in one of the closed areas of George’s Bank in 1999. Abundant yields of large scallops were found. In the following year, limited harvesting in all three closed areas of George’s Bank was permitted. This rebuilding of the stock, together with strong recruitment years, revived the fortunes of the industry and made it unnecessary to reduce allowed days at sea to fewer than 120 days per year. Today, all the effort controls on U.S. scallop fishermen remain, plus additional limitations on the number of days that they can fish in the closed areas as well as catch limits on each allowable trip.

Canada, which harvests a much smaller scallop area, introduced limited entry as far back as 1973, confirming 77 licenses. The only additional management tool was an average size restriction. During the next decade of competitive fishing with the U.S. fleet, stocks were depleted, incomes were reduced, and many Canadian owner-operators voluntarily joined together in fishing corporations. This resulted in considerable consolidation, so that by 1984 there were only a dozen companies fishing for scallops, most of them operating several boats and holding multiple licenses.

In 1984, after the adjudication of the Hague Line, the Canadian offshore scallop fishery began to develop an enterprise allocation (EA) system. In an EA system, portions of the TAC are awarded not to individual vessels but to operating companies, which can then harvest their quota largely as they think best. The government supported this effort, accepting responsibility for setting the TAC with industry advice but insisting that the license holders work out for themselves the initial quota allocation. After almost a year of hard bargaining, allocations were awarded to nine enterprises. In 1986, to support this system, the government also separated the inshore and offshore scallop fisheries, demarcating fishing boundaries between the two fleets.

The two nations adopted different management regimes for their similar scallop fisheries for several reasons. The Canadian fishery was much smaller and had already undergone considerable consolidation by the mid-1980s. There were fewer than 12 companies involved in the negotiations over the initial quota allocation. All of these enterprises were located in Nova Scotia, where the fishing community is relatively small and close-knit. By contrast, the U.S. fishery comprised more than 350 licensees and 200 active vessels operating from ports spread from Virginia to Maine. In fact, although it had been suggested as an appropriate option in the 1992 National ITQ Study, the ITQ option was rejected early in the development of Amendment 4 to the scallop management plan on the grounds that negotiating initial allocations would take too long. There were also fears that an ITQ system would lead to excessive concentration within the fishery. Atlantic Canada had already started moving in the direction of rights-based fishing in 1982, with an enterprise allocation system for groundfish. This approach was strongly opposed in all New England fisheries, where the tradition of open public access to fishing grounds is extremely strong. In New England, effort and size limitations were preferable to restrictions on who could fish.

The results

Interviews in Canada reveal that a strong consensus has emerged among quota holders, the workers’ union, and fisheries managers in favor of a conservative overall catch limit. In recent years, the annual TAC has been set in accordance with scientists’ recommendations in order to stabilize the harvest in the face of fluctuating recruitment. This understanding has been fostered by the industry-financed government research program, which closely samples the abundance of scallops in various year classes to present the industry with an array of estimates relating this year’s TAC to the consequent change in harvestable biomass. Faced with these choices, the Canadian industry has opted for conservative overall quotas, realizing that each quota holder will proportionately capture the benefits of conservation through higher catch limits in subsequent years.

As a result, the Canadian fishery has succeeded in rebuilding the stock from the very low levels that were reached during the period of competitive fishing in the early 1980s. It has also succeeded in smoothing out fluctuations in the biomass of larger scallops in the face of large annual variations in the stock of new three-year-old recruits.

In the United States, effort reductions needed to rebuild stocks have usually been opposed unless seen to be absolutely necessary. The effort controls adopted in 1994 were driven by the need to reduce fishing mortality by at least one-half to forestall drastic stock declines. Those embodied in Amendment 7 to the Fisheries Management Plan in 1998, which cut effort by 50 to 75 percent, responded to a requirement in the Sustainable Fisheries Act of 1996 to eliminate overfishing and to rebuild stocks to the level that would support the maximum sustainable yield. As a result of such resistance, resource abundance in the U.S. fishery has fluctuated more widely in response to varying recruitment, and a larger fraction of the overall resource consists of new three-year-old recruits because of heavy fishing exploitation of larger, older scallops.

Because of its success in sustaining larger scallop stocks, the Canadian fishery has maintained harvest levels with less fishing pressure. The exploitation rate for scallops aged 4 to 7 years, the age class targeted in the Canadian fishery, has fallen from about 40 percent at the time the EA system was adopted to 20 percent or less in recent years. The exploitation rate for 3-year-old scallops has fallen almost to zero. Industry participants state unanimously that it makes no economic sense to harvest juvenile scallops, because the rates of return on a one- or two-year conservation investment are so high. Not only do scallops double and redouble in size over that span, but the price per pound also rises for larger scallops. Therefore, the industry has supplemented the official average meat count restriction with a voluntary program limiting the number of very small scallops (meat count 50 or above) that can be included in the average. Although industry monitors check compliance, there is no incentive for license holders to violate it because they alone reap the returns from this conservation investment.
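
A rough illustration of that return, using the quadrupling in meat weight described earlier and an assumed price premium for larger scallops (the 25 percent figure is purely illustrative, and natural mortality and discounting are ignored):

$$ \frac{\text{value at age 5}}{\text{value at age 3}} \approx \frac{4w \times 1.25\,p}{w \times p} = 5, $$

where $w$ is the meat weight of a year class at age three and $p$ is the price per pound for small scallops. Even after allowing for losses and discounting, a gross gain of this size over two years explains why quota holders leave juveniles on the bottom.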

In the United States, the exploitation rates have been much higher. Exploitation rates for larger scallops rose throughout the period from 1985 to 1994, peaking above 80 percent in 1993. Only the respite of the closed areas gave the stock some opportunity to rebuild in subsequent years. Exploitation pressures have also been heavy on 3-year-old scallops despite the heavy economic losses this imposes. Exploitation rates have consistently exceeded 20 percent and rose beyond 50 percent when effort expanded substantially during the early 1990s in response to one or two strong year classes. Because there is no assurance in the competitive U.S. fishery that fishermen acting to conserve small scallops will be able to reap the subsequent rewards themselves, the fleet has not exempted these undersized scallops from the harvest.

Although there is no reliable data on fishermen’s incomes, there are still reasonably reliable indicators of their economic success. The first is capacity utilization. An equipped fishing vessel represents a large investment that is uneconomic when idle. Considerable excess capacity was already present in the U.S. fleet when license limitations were initiated in 1994, allowing the number of active vessels to expand and contract in response to stock fluctuations.

In Canada, there has been a steady and gradual reduction in the size of the fleet. When the EA system was introduced, license holders began replacing their old wooden boats with fewer, larger, more powerful vessels. The stability afforded by the EA system reduced license holders’ investment risk and enabled them to finance these investments readily. Overall, the number of active vessels in the Canadian fishery has already dropped from 67 to 28. The process continues. Two Canadian companies are investing in larger replacement vessels with onboard freezing plants in order to make longer trips and freeze the first-caught scallops immediately, thereby enhancing product quality.

Trends in the number of days spent annually at sea are similar to those in the number of active vessels. In the United States, effort has risen and fallen in response to recruitment and stock fluctuations. In Canada, there has been a steady reduction in the number of days spent at sea, reflecting the greater catching power of newer vessels, the greater abundance of scallops, and the increase in catch per tow. Consequently, the number of sea days per active vessel, a measure of capacity utilization, has consistently been higher by a considerable margin in Canada than in the United States. Because of the flexibility afforded to license holders and their ability to plan rationally for changes in capacity, the Canadian fishery has been able to use its fixed capital more effectively. In the United States, restrictions on allowable days at sea, now at 120 days per year, have impinged heavily on those operators who would have fished their vessels more intensively.

A second important indicator of profitability is the catch per day at sea. Operating costs for fuel, ice, food, and crew rise linearly with the number of days spent at sea. Therefore, the best indicator of a vessel’s operating margin is its catch per sea day. In Canada, catch per day at sea has risen almost fourfold since the EA system was adopted. Because overall scallop abundance is greater and the cooperative survey program has produced a more detailed knowledge of good scallop concentrations, little effort is wasted in harvesting the TAC. Moreover, fishing has targeted larger scallops, producing a larger and more valuable yield per tow. In the U.S. fishery, catch per sea day fell significantly over the same period because of excessive effort, lower abundance, greater reliance on immature scallops, and less detailed knowledge of resource conditions. As a result of these diverging trends, catch per sea day in 1998 favored the Canadian fleet by at least a sevenfold margin, although when the regimes diverged in 1986 the difference was only about 70 percent. The harvesting of large scallops in the U.S. closed area in 1999 helped only somewhat to reduce this difference. An index of revenue per sea day normalized to 1985 shows the same trend. It is clear that the Canadian fleet has prospered and that until the recent opening of the closed areas, the U.S. fleet has not.
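
In symbols, and treating the daily cost of fuel, ice, food, and crew as roughly constant for a given vessel (the linearity described above), a vessel’s operating margin per sea day can be sketched as

$$ \text{margin per sea day} \approx p \times \frac{C}{D} - c, $$

where $C/D$ is catch per sea day, $p$ is the ex-vessel price per pound, and $c$ is the daily operating cost. To the extent that prices and daily costs are broadly similar across the two fleets, the sevenfold Canadian advantage in catch per sea day translates directly into a far larger operating margin.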

Striking as these comparisons may be, the differences in technological innovation in the two fisheries are perhaps even more dramatic. The Canadian industry has clearly recognized the value of investments in research. License holders jointly and voluntarily finance the government’s research program by providing a fully equipped research vessel and crew to take sample surveys, enabling research scientists to take samples on a much finer sampling grid and resulting in a more detailed mapping of scallop concentrations by size and age. In addition, scallop vessels contribute data from their vessel logs, recording catch per tow and Global Positioning System information, to the research scientists, facilitating even better knowledge of scallop locations and abundance.

In the United States, the government-funded research program lacks the resources to sample the much larger U.S. scallop area in the same detail. However, the industry response has not been to finance government research, as in Canada, but to initiate a parallel sampling program, especially to monitor scallop abundance in the closed areas.

Recently, the Canadian industry has embarked on a new industry-financed program costing several million dollars to map the bottom of its scallop grounds using multibeam sonar. This technique can distinguish among bottom conditions, thereby pinpointing the gravelly patches where scallops are likely to be found. Experimental tows have confirmed that this mapping can enable vessels to harvest scallops with much less wasted effort. Industry informants predict that they will be able to harvest their quotas with an additional 50 percent reduction in effort. Not only will this reduction in dredging increase the fishery’s net rent considerably, it will also reduce bycatch of groundfish, gear conflicts with other fisheries, and damage to benthic organisms on George’s Bank. All three reductions are of great ecological benefit to other fisheries.

Equity and governance issues

Both the U.S. and Canadian fisheries have traditionally operated on the “lay” system, which divides the revenue from each trip among crew, captain, and owner according to pre-set percentages, after subtracting certain operating expenses. In Canada, for example, 60 percent of net revenues are divided between captain and crew and 40 percent goes to the boat. For this reason, all parties remaining in the fishery after its consolidation have shared in its increasing rents. The government raised license fees in January 1996 from a nominal sum to $547.50 per ton of quota, thereby recapturing some resource rents for the public sector as well.

Although survivors in the Canadian fishery have done well, there has been a loss of employment amounting to about 70 jobs per year over the past 13 years. In the early years, many found berths in the inshore scallop fishery, which was enjoying an unusual recruitment bloom. More recently, the expanding oil and gas industry in Nova Scotia and the service sector have absorbed these workers with little change in overall unemployment. The Canadian union representing many of the scallop workers supports the EA system over a return to competitive fishing, favoring steady remunerative jobs over a larger part-time or insecure workforce. The union has negotiated full staffing of crews, which number 17 workers in Canada (as compared with 7 in the United States), as well as preference for displaced crew members in filling onshore or replacement-crew jobs.

There is a pressing need for a thorough evaluation of the results of rights-based approaches.

One fear expressed by U.S. fishermen about the consequences of adopting a rights-based regime is that small fishermen will be forced out by larger concerns. Although exit from a rights-based fishery would be voluntary, the fear is that small fishermen would not be able to compete, perhaps because of economies of scale or financial constraints, and would have to sell out. Canada’s experience provides some evidence about the process of consolidation. Over a 14-year period, the number of quota holders has declined from nine to seven. Three medium-to-large quota holders sold out to Clearwater Fine Foods Ltd., which is now the largest licensee, holding slightly less than a third of the total quota. The other entrant, LaHave Seafoods, is the smallest licensee, having bought a part of the quota held by an exiting company. The remaining 65 percent of the quota is still in the original hands, including the shares held by two of the smallest original quota holders. There is little evidence in this record that the smaller players have been at a significant competitive disadvantage or that a rights-based regime results in monopolization of the fishery.

Another important issue is a regime’s effect on the process of governance and the success of co-management efforts. On this score, the Canadian record is clearly superior. The industry cooperatively supports government and its own research programs. Owners and operators speak respectfully about the scientists’ competence and have almost always accepted their recommendations in recent years. The industry also bears the costs of monitoring and enforcement of the EA regime and of its own voluntary restrictions on harvesting underaged scallops. Interviews reveal that fishermen feel that the system has freed them from disputes regarding allocations or effort restrictions and has enabled them to concentrate on maximizing the value of their quotas through efficiencies and enhanced product quality.

The contrast with the U.S. fishery is obvious. The industry created its own lobbying organization, the FSF, to contest the decisions of the New England Fisheries Management Council and the NMFS in maintaining area closures. The FSF has hired its own Washington lawyer and a lobbyist (a former congressman) to lobby Congress and the executive branch directly. It has also hired its own scientific consultants in order to contest the findings of government scientists, if necessary, and is conducting its own abundance sampling. Fishermen in the industry and their representatives are openly critical of government fisheries managers and scientists and of one another. All informants complain about the time-consuming debates and discussions about management changes. The larger fishermen complained repeatedly that smaller fishermen were motivated mainly by envy and were using the political process to try to hold others back. Adding further to the conflict, environmental groups that had won a place on the fisheries management council failed to stop the council’s decision to resume limited scallop fishing in the closed areas of George’s Bank and have now initiated a lawsuit to block the opening. The co-management regime in the U.S. scallop fishery is conflicted, costly, and ineffective.

Charting the future

In Canada, neither industry nor government nor unions desire to replace the EA system with any other. The industry expects that its investment in research will substantially raise efficiency and profitability in the coming years, even with a stable TAC. The industry’s investment in new freezer vessels will also enhance product quality and the value of the catch by enabling the operators to freeze first-caught scallops and market fresh the scallops caught on the last days of the voyage.

The prognosis for the U.S. fishery is less certain but more interesting. The natural experiments with closed areas have demonstrated how quickly scallop stocks can increase when fishing pressure is relaxed. They have also raised suspicions that the larger biomass of mature scallops in the closed areas may be responsible for the good recruitment classes of recent years. This would suggest that the fishery had been subject to recruitment overfishing as well as growth overfishing. Developments in the closed areas have created substantial support both in the FSF and in the NMFS for adopting a system of rotational harvesting, in which roughly 20 percent of the scallop grounds would be opened in rotation in each year. Rotational harvesting would largely eliminate growth overfishing by giving undersized scallops in closed areas a chance to mature. This would improve yields in the fishery but would not resolve the problem of excessive effort. Rotational harvesting would also raise new management challenges regarding enforcing the closures and adjusting them with insufficient data on fluctuating geographical scallop concentrations.

Adopting a rotational harvesting regime would also lead toward a catch quota system. Already, limits on the number of trips each vessel may take into the closed areas and catch limits per trip amount to implicit vessel quotas for harvests in the closed areas. These would be formalized in a rotational harvesting plan. It might then be only a matter of time before the advantages of flexible harvesting and transfer of quotas are realized. It seems quite possible that over the coming years, the U.S. scallop fishery will move toward and finally adopt a rights-based regime, putting itself in a position to realize some of the economic benefits that the Canadian industry has enjoyed for the past decade.

There has been little discussion in the United States of the Canadian experience, relevant though it is, or of the experience of other countries in using rights-based approaches to fisheries management. There is a pressing need for a thorough evaluation of the results of these approaches throughout the world, using adequate assessment methodologies and up-to-date data, in order to give U.S. fishermen and policymakers a more adequate basis for choice.

If the U.S. scallop fishery were a business, its management would surely be fired, because its revenues could readily be increased by at least 50 percent while its costs were being reduced by an equal percentage. No private sector manager could survive with this degree of inefficiency.

Experience has shown that moving from malfunctioning effort controls to a rights-based approach typically results in improved sustainability and prosperity for the fishery. Safeguards can be built into rights-based systems. For example, limits on quota accumulation can forestall excessive concentration. Vigorous monitoring and enforcement combined with severe penalties can deter cheating. Size limitations can be used if necessary to prevent excessive high grading. The concerns raised regarding the possible disadvantages of rights-based systems can be addressed in these ways rather than by an outright ban on the entire approach. Rather than requiring fisheries to adhere to management systems that have not worked well in the past, Congress should encourage fisheries that wish to do so to experiment with other promising approaches. Only the fruits of experience will resolve the uncertainties and allay the misgivings that now block progress.

Fall 2001 Update

UN forges program to combat illicit trade in small arms

Since the early 1990s, a global network of arms control groups, humanitarian aid agencies, United Nations (UN) bodies, and concerned governments has been working to adopt new international controls on the illicit trade in small arms and light weapons, as I discussed in my article, “Stemming the Lethal Trade in Small Arms and Light Weapons” (Issues, Fall 1995). These efforts culminated in July 2001 with a two-week conference at UN headquarters in New York City, at the end of which delegates endorsed a “Programme of Action to Prevent, Combat, and Eradicate the Illicit Trade in Small Arms and Light Weapons in All Its Aspects” (available at www.un.org/Depts/dda/CAB/smallarms/).

Although not legally binding, the Programme of Action is intended to prod national governments into imposing tough controls on the import and export of small arms, so as to prevent their diversion into black market channels. Governments are also enjoined to require the marking of all weapons produced within their jurisdiction, thus facilitating the identification and tracing of arms recovered from illicit owners, and to prosecute any of their citizens who are deemed responsible for the illegal import or export of firearms. At the global level, states are encouraged to share information with one another on the activities of black market dealers and to establish regional networks aimed at the eradication of illegal arms trafficking.

Adoption of the Programme of Action did not occur without rancor. Many states, especially those in Africa and Latin America, wanted the conference to adopt much tougher, binding measures. Some of these countries, joined by members of the European Union, also wanted to include a prohibition on the sale of small arms and light weapons to nonstate actors. Other states, including the United States and China, opposed broad injunctions of this sort, preferring instead to focus on the narrower issue of black market trafficking. In the end, delegates bowed to the wishes of Washington and Beijing on specific provisions in order to preserve the basic structure of the draft document.

Although not as sweeping as many would have liked, the Programme of Action represents a significant turning point in international efforts to curb the flow of guns and ammunition into areas of conflict and instability. For the first time, it was clearly stated that governments have an obligation “to put in place, where they do not exist, adequate laws, regulations, and administrative procedures to exercise effective control over the production of small arms and light weapons within their areas of jurisdiction and over the export, import, transit, and retransfer of such weapons,” and to take all necessary steps to apprehend and prosecute those of their citizens who choose to violate such measures.

The imposition of strict controls on the production and transfer of small arms and light weapons is considered essential by those in and outside of government who seek to reduce the level of global violence by restricting the flow of arms to insurgents, ethnic militias, brigands, warlords, and other armed formations. Because belligerents of these types are barred from entry into the licit arms market and so must rely on black market sources, it is argued, eradication of the illicit trade would impede their ability to conduct military operations and thus facilitate the efforts of peace negotiators and international peacekeepers.

Nobody truly believes that adoption of the Programme of Action will produce an immediate and dramatic impact on the global arms trade. Illicit dealers (and the government officials who sometimes assist them) have gained too much experience in circumventing export controls to be easily defeated by the new UN proposals. But the 2001 conference is likely to spur some governments that previously have been negligent in this area to tighten their oversight of arms trafficking and to prosecute transgressors with greater vigor.

The Programme of Action calls on member states to meet on a biennial basis to review the implementation of its provisions and to meet again in 2006 for a full-scale conference. This will give concerned governments and nongovernmental organizations (NGOs) time to mobilize international support for more aggressive, binding measures.

Michael T. Klare

The Skills Imperative: Talent and U.S. Competitiveness

Is there anything fundamentally “new” about the economy? With the benefit of hindsight, we know that predictions about the demise of the business cycle were premature. “New economy” booms can be busted. All companies, even the dot-coms, need a viable business plan and a bottom line to survive. Market demand is still the dominant driver of business performance; the “build it and they will come” supply model proved wildly overoptimistic. But the assets and tools that drive productivity and economic growth are new. The Council on Competitiveness’s latest report, U.S. Competitiveness 2001, links the surge in economic prosperity during the 1990s to three factors: technology, regional clustering, and workforce skills.

Information technology (IT) was a major factor in the economic boom of the 1990s. The widespread diffusion of IT through the economy, its integration into new business models, and more efficient IT production methods added a full percentage point to the nation’s productivity growth after 1995. Now the information technologies that powered U.S. productivity growth are being deployed globally. The sophistication of information infrastructure in other countries is advancing so rapidly that many countries are converging on the U.S. lead. With 221,000 new users across the globe expected to log on every day, the fastest rates of Internet growth are outside the United States.

The growth in regional clusters of economic and technological activity also propelled national prosperity. The interesting feature of the global economy is that, even as national borders appear to matter less, regions matter more. Strong and innovative clusters help to explain why some areas prospered more than others. Clusters facilitate quick access to specialized information, skills, and business support. That degree of specialization, along with the capacity for rapid deployment, confers real competitive advantages, particularly given Internet-powered global sourcing opportunities. The early data from the council’s Clusters of Innovation study indicate that regions with strong clusters have higher rates of innovation, wages, productivity growth, and new business formation.

Finally, to an unprecedented degree, intellectual capital drove economic prosperity. Machines were the chief capital asset in the Industrial Age, and workers, mostly low-skilled, were fungible. In the Information Age, precisely the opposite is true. The key competitive asset is human capital, and it cannot be separated from the workers who possess it.

The nation has made enormous strides in workforce skills over the past 40 years. As recently as 1960, over half of prime-age workers had not finished high school, and only one in 10 had a bachelor’s degree. Today, only 12 percent of the population has not finished high school, and over a quarter of the population has a bachelor’s degree or higher. This improvement in the nation’s pool of human capital enabled the transition from an industrial to an information economy.

Unfortunately, the gains in education and skills made over the past 40 years will not be sufficient to sustain U.S. prosperity over the long term. The requirements for increased skills are continuing to rise, outstripping the supply of skilled workers. The empirical evidence of a growing demand for skills shows up in two ways. First, the fastest-growing categories of jobs require even more education. Only 24 percent of new jobs can be filled by people with a basic high-school education, and high-school dropouts are eligible for only 12 percent of new jobs (see Figure 1). Second, the large and growing wage premium for workers with higher levels of education reflects unmet demand. In 1979, the average college graduate earned 38 percent more than a high school graduate. By 1998, that wage disparity had nearly doubled to 71 percent. Several trends are driving the push for higher skills: technological change, globalization, and demographics.


Technological change. Technology enables companies to eliminate repetitive low-skilled jobs. During the past century, the share of jobs held by managers and professionals rose from 10 percent of the workforce to 30 percent. Technical workers, salespeople, and administrative support staff increased from 7.5 percent to 29 percent. Technology has also forced an upskilling in job categories that previously required less education or skills. For example, among less-skilled blue-collar and service professions, the percentage of workers who were high-school dropouts fell by nearly 50 percent between 1973 and 1998, while the percentage of workers with some college or a B.A. tripled.

Globalization. Another reason for the decline in low-skilled jobs is globalization. Low-skilled U.S. workers now compete head-to-head with low-skilled and lower-wage workers in other countries. This is not a reversible trend. Our competitiveness rests, as Carnevale and Rose noted in The New Office Economy, on “value-added quality, variety, customization, convenience, customer service, and continuous innovation.” Ultimately, a rising standard of living hinges on the availability of highly skilled workers to create and deploy new and innovative products and services that capture market share, not on a price competition for standard goods and services that sends wages in a downward spiral.

Skills and education will be a dominant, if not decisive, factor in the United States’ ability to compete in the global economy.

Demographic changes. Like other industrial economies, the United States is on the threshold of enormous demographic changes. With the aging of the baby boomers, nearly 30 percent of the workforce will be at or over the retirement age by 2030. Given that the rate of growth in the size of the workforce affects economic output (more work hours yield more output), a slow-growth workforce could profoundly affect economic well-being. The obvious way to offset the impact on the gross domestic product of a slow-growth workforce is to increase the productivity of each individual worker. Department of Labor studies find that a 1 percent increase in worker skills has the same effect on output as a 1 percent increase in the number of hours worked. Hence, the ability to raise the skills and education of every worker is not just a matter of social equity. It is an economic requirement for future growth–and an urgent one, given the generation-long time lag needed to develop skills and educate young workers.
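The equivalence between skills and hours can be illustrated with a simple growth-accounting identity. The sketch below is a stylized illustration, not the Department of Labor’s actual model, and it assumes that output is proportional to effective labor input:

\[
Y = A \times H \quad\Longrightarrow\quad \frac{\Delta Y}{Y} \approx \frac{\Delta A}{A} + \frac{\Delta H}{H},
\]

where \(Y\) is output, \(H\) is total hours worked, and \(A\) is output per hour, which rises with worker skills. Under this identity, a 1 percent gain in skill-driven productivity (\(\Delta A/A\)) raises output by roughly as much as a 1 percent gain in hours (\(\Delta H/H\)), which is why raising skills can offset a slow-growing workforce.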

Skills and education will be a dominant, if not decisive, factor in the United States’ ability to compete in the global economy. As noted by council chairman and Merck Chief Executive Officer Raymond Gilmartin at the council’s recent National Innovation Summit in San Diego: “The search for talent has become a major priority for Council members. If companies cannot find the talent they need in American communities, they will seek it abroad.” Former North Carolina Governor James Hunt warned that “Our ability to engage in the world economy–and to support open trade initiatives–must be accompanied by a commitment to boost the skills of every worker. We must give every American the tools to prosper in the global economy.” Achieving that goal will require action on several fronts.

Target at-risk populations

The United States could not have enjoyed a decade-long period of prosperity without a talented workforce. But because of rising demand for higher skills and education, a substantial minority of Americans is in danger of being left behind. Although access to quality education and lifelong learning opportunities must be increased for everyone, attention should focus on the groups within our population that are seriously underprepared and underserved. These include educationally disadvantaged minority populations, welfare-to-work populations, and the prison population.

Low-income minority populations. It does not bode well for the country’s social or economic cohesion that the most educationally disadvantaged among us also represent the fastest-growing groups in the workforce. High-school dropout rates for Hispanic students are more than four times higher than for white students. Black students have a dropout rate nearly double that of white students. Low educational achievement is highly correlated with lower incomes. Rates of unemployment and poverty are 5 to 10 times higher for those without a high-school education.

Most jobs will require some form of postsecondary education, but the college-bound population is also far from representative of the population as a whole. A significantly smaller proportion of black and Hispanic students attend or graduate from college (see Figure 2). At least part of the problem is likely to be financial. Inflation-adjusted tuition at colleges and universities has more than doubled since 1992, but median family income has increased only 20 percent. The cost of attendance at four-year public universities represents 62 percent of annual household income for low-income families (versus 17 percent for middle-income households and 6 percent for high-income households). As a result, low-income students are highly sensitive to increases in college costs. One study shows that for every $1,000 increase in tuition at community colleges, enrollments decline by 6 percent.
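To give a sense of scale, the cited finding implies a roughly linear relationship between tuition and enrollment. The figures below are hypothetical and serve only to illustrate that relationship:

\[
\frac{\Delta E}{E} \approx -6\% \times \frac{\Delta T}{\$1{,}000},
\]

where \(E\) is community college enrollment and \(T\) is tuition. Under this rule of thumb, a hypothetical $500 tuition increase would be expected to reduce enrollment by about 3 percent, and a $2,000 increase by about 12 percent, assuming the relationship holds over that range.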


In the past, the federal government played a much larger role in offsetting the burden of college tuition for low-income families. But federal assistance based on need has declined significantly. Although total student aid increased in value, most of the growth was in the form of student loans, about half of which are unsubsidized. Need-based tuition assistance declined from over 85 percent of the total in 1984 to 58 percent in 1999. This shift in student aid policy has limited access to postsecondary opportunities for low-income students.

Welfare-to-work programs. Welfare reform in the mid-1990s succeeded in taking millions of Americans off the welfare rolls but not out of poverty. An Urban Institute study indicates that although welfare leavers generally earn about the same as low-income families, they are less likely to have jobs with health insurance or enough money for basic necessities. Only 23 percent of welfare leavers receive health insurance from their employers, and more than one-third sometimes run out of money for food and rent. It should not be surprising that of the 2.1 million adults who left welfare between 1995 and 1997, almost 30 percent had returned to the welfare rolls by 1997.

The challenge is not simply to move people off the welfare rolls but to increase their skills and education to enable them to get better-paying jobs that offer upward mobility. The emphasis of the current welfare system, which was overhauled in 1996, is work, not training or education. The Personal Responsibility and Work Opportunity Reconciliation Act stipulates that welfare recipients can apply only one year of education–and only vocational education–to satisfy the requirements for assistance. More often than not, according to Carnevale and Reich, caseworkers urge welfare recipients to seek jobs first and opt for training only if they cannot find employment. Indeed, many states require welfare recipients to conduct a job search for six weeks before they can request job training. Others make it difficult or impossible for welfare recipients to pursue full-time education or training.

There is mounting evidence from the field, however, that the outcomes for individuals who pursue education or training activities are far better than for those who simply find a job. For example, only 12 percent of the participants in a Los Angeles County welfare-to-work program pursued education and training, but this group was earning 16 percent more than the other participants after 3 years and 39 percent more after 5 years. Regulations that narrow or restrict the opportunities for educational advancement cannot be in the best interests of the people trying to make a more successful welfare-to-work transition or of the nation, which must boost the skills of its workforce.

Prison populations. The United States has one of the highest incarceration rates in the world (481 prisoners per 100,000 residents versus 125 in the United Kingdom and 40 in Japan). Almost two-thirds of all U.S. prison inmates are high-school dropouts. Indeed, the national high-school dropout rate would likely be much higher if it included institutionalized populations.

About 7 out of 10 prisoners are estimated to have only minimal literacy skills. That means that most of the 500,000 inmates released every year have limited employment prospects. Targeting this at-risk population with education and training programs has also proven very cost-effective. In 1999, analysts from the state of Washington surveyed studies dating back to the mid-1970s on what works and what doesn’t in reducing crime. They concluded that every dollar spent on basic adult education in prison led to a $1.71 reduction in crime-related expenses; every dollar spent on vocational education yielded a $3.23 reduction. In Maryland, a follow-up analysis of 1,000 former inmates found a 19 percent decline in repeat offenses among those who had taken education programs in prison. Although corrections spending has grown dramatically, educational funding for inmates has not. Only 7 to 10 percent of inmates with low literacy skills receive literacy education.
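Expressed as benefit-cost arithmetic, those ratios imply substantial net returns. The dollar amount below is hypothetical and is included only to show the calculation:

\[
\text{net savings} = (\text{benefit-cost ratio} - 1) \times \text{spending},
\]

so a hypothetical $1 million spent on basic adult education would be associated with about $1.71 million in reduced crime-related expenses, a net saving of roughly $710,000, while the same amount spent on vocational education would be associated with about $3.23 million in savings, a net of roughly $2.23 million.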

Expand workforce training opportunities

The skills gap is an integral part of the widening difference in income between those at the top of the economic ladder and those at the bottom, a gap that is wider in the United States than in any other industrial economy. Linked to the pay gap is a disparity in benefits. In 1998, more than 80 percent of workers in the top fifth of the wage distribution had health coverage, as compared with just 29 percent in the bottom fifth. Similarly, almost three-fourths of workers in the top fifth had pension benefits, as compared with fewer than 20 percent in the bottom fifth. Raising education and skills may not be the only strategy needed to reduce income inequality, but it is an essential first step toward higher living standards for all Americans.

As long as the S&E workforce is composed disproportionately of white males, its expansion prospects will remain limited.

Industry training programs reach only a small share of the workforce. Although companies spend tens of billions of dollars on training, their investment is skewed toward the upper end of the workforce. Only one-third of training dollars are targeted toward less-skilled workers. Two-thirds of corporate training funds are directed toward managers and executives or concentrated in occupations in which the workers already possess high levels of education or skills.

Options to expand opportunities and access to training include the following:

Expand the tax incentives for employer-provided tuition assistance. The current benefit is limited to undergraduate education and should be expanded to include a wider range of educational opportunities, including two-year vocational or academic tracks at community colleges as well as graduate studies. Nondiscrimination clauses in the credit could be strengthened to ensure that lower-skilled employees can also take advantage of the training.

Institute performance-based measurements, putting a premium on accountability. There are few, if any, standards for performance in job training programs, and the lack of standards impedes the portability of the training. Establishing stronger accreditation standards for public and private training centers and linking funding to performance will go far toward rewarding the best programs and eliminating those that squander limited human and financial resources.

Set practical goals to infuse information technology into the student’s learning process in K-12. Acquiring computer literacy is not a one-dimensional exercise, with students simply logging “seat time” in computer labs. Administrators and teachers need to incorporate technology into every discipline. Students who integrate computers and the Internet into their learning process are able to use the technology to develop the analytical skills and computer know-how that are prerequisites for most careers.

Increase the number of scientists and engineers

The U.S. Department of Labor projects that new jobs requiring science, engineering, and technical training will increase by 51 percent between 1998 and 2008, a rate roughly four times the average rate of job growth nationally. When net replacements from retirements are factored in, cumulative job openings for technically trained personnel will reach nearly 6 million.

Even as demand for science and engineering (S&E) talent grows, the number of S&E degrees at the undergraduate and graduate levels has remained flat or declined in every discipline outside the life sciences. Graduate S&E degrees did turn upward in the fall of 1999, but the increase was almost entirely due to the rise in enrollment by foreign students on temporary visas. For U.S. citizens, enrollment in S&E disciplines overall continued to decline.

This trend in the United States is not mirrored elsewhere. The fraction of all 24-year-olds with science or engineering degrees is now higher in many industrialized nations than in the United States. The United Kingdom, South Korea, Germany, Australia, Singapore, Japan, and Canada all produce a higher percentage of S&E graduates than the United States (see Figure 3). Although attracting the best and brightest from around the world will strengthen our own S&E base, the United States cannot rely on other nations to provide the human talent that will sustain our innovation economy. It must be able to increase the domestic pipeline.


The ability to increase the science and engineering workforce depends on several factors:

Increased diversity in the workforce. As long as the S&E workforce is composed disproportionately of white males, its expansion prospects will remain limited. Women and minorities, the fastest-growing segments of the workforce, are underrepresented in technical occupations. White males make up 42 percent of the workforce but 68 percent of the S&E workforce. By contrast, white women make up 35 percent of the workforce and 15 percent of the S&E workforce, and Hispanics and blacks make up about 20 percent of the workforce but only 3 percent of the S&E workforce (see Figure 4). Efforts to boost participation by these groups in the S&E workforce are the single greatest opportunity to expand the nation’s pool of technical talent.


Increased financial incentives for universities. Stanford economics professor Paul Romer maintains that many universities remain gatekeepers rather than gateways to an S&E career. He argues that budgetary constraints are a major factor. Educating S&E students is significantly more expensive than educating political scientists or language majors. Because universities have fixed investments in faculty and facilities across many disciplines, they try to maintain the relative size of departments and limit growth in the more expensive S&E programs. Unlike the education funding system in other countries, the U.S. system does not provide additional resources to universities based on the cost of the educational track. Romer proposes the establishment of a competitive grant program that would reward universities for expanding S&E degree programs or instituting innovative programs, such as mentoring, new curricula, or training for instructors that would raise retention rates for S&E majors.

Democracy requires a population that can understand the scientific and technological underpinnings of contentious political issues.

Empowered graduate students. At the graduate level, students often respond more to R&D funding than to market signals. A large part of student funding comes through university research grants that typically finance research assistantships. This may be an increasingly important part of graduate student support, since direct stipends from the government have steadily declined since 1980. Because students tend to gravitate toward fields where money is available, their specialization choices are sometimes dictated by the availability of research funding rather than their own interests or market needs. Romer points out that this leads to a paradoxical situation of a Ph.D. glut coinciding with a shortage of scientists and engineers in key disciplines. He proposes a new class of portable fellowships that would allow graduate students to choose a preferred specialty based on a realistic assessment of career options rather than the availability of funds for research.

Science and math education

Although K-12 education is a national priority, the science and math component merits special attention for several reasons. First, the demand for increased technical skills and independent problem solving in the workforce puts a premium on science and math education in the schools, and not just for those students pursuing S&E careers. Second, our democracy requires a population that can understand the scientific and technological underpinnings of contentious political issues: cloning, global warming, energy sufficiency, missile defense, and stem cell research, to name only a few. Finally, and perhaps most important, even our best students are underperforming in science and math when compared with the rest of the world.

Educational achievement overall varies widely among school districts, and some schools are clearly failing. But the deficiencies in science and math education appear to cut across all schools. The Third International Mathematics and Science Study (TIMSS) and its follow-up, TIMSS-R, indicate that U.S. students perform well below the international average in both science and math. Even more sobering, student achievement actually declines with years in the system. The relatively strong performance of U.S. 4th graders gradually erodes by 12th grade.

Since the TIMSS study was released in 1995, there has been considerable research devoted to understanding why our children are not world-class learners when it comes to science and math. That research points to needed reforms in some key areas.

Curriculum changes. U.S. science and math education has been characterized as “a mile wide and an inch deep.” U.S. curricula cover more topics every year than those of other countries, and cover them far less comprehensively. U.S. fourth and eighth graders cover an average of 30 to 35 math topics in a year, whereas those in Japan and Germany average 20 and 10, respectively. In science, the contrast is even more striking. The typical U.S. science textbook covers between 50 and 65 topics, versus 5 to 15 in Japan and 7 in Germany. Given roughly comparable instructional time, this diversity of topics limits the amount of time that can be allocated to any one topic. Critics contend that in science and math education, “there is no one at the helm; in truth, there is no identifiable helm.”

More rigorous graduation requirements. Irrespective of content, students cannot learn science and math if they are not taking science and math courses, and many school districts do not mandate a sufficient level of competence as part of the graduation requirements. The National Commission on Excellence in Education recommended a minimum of four years of English and three years of math, science, and social studies as the baseline requirement for graduation. Most school districts (85 percent) have instituted the English requirement, but only one-half of public school districts require three years of math and only one-quarter require three years of science. It is not difficult to imagine that the performance of high-school seniors, whose last course in math or science could well have been in the 10th grade, might be underwhelming.

Higher teacher pay. Teaching is said to be a labor of love, and the salary statistics confirm that the key motivation to become a teacher is probably not financial. Teachers earn substantially less than similarly credentialed professionals, and the gap in pay increases over time and with higher education. New teachers in their 20s earn an average of $8,000 less than other professionals with a B.A. By their 40s, the salary gap between teachers and other professionals with a master’s degree grows to more than $30,000 per year. Although most school districts have limited resources, the most innovative are reaching out to the private sector to form partnerships to boost the effective pay for teachers.

More professional development opportunities. The research shows that the use of effective classroom practices significantly boosts student achievement. For example, students whose teachers use hands-on learning tools, such as blocks or models, exceed grade-level achievement by 72 percent. Similarly, students whose teachers receive professional training in classroom management and higher-order thinking skills outperform their peers by 107 percent and 40 percent, respectively. Unfortunately, few of the practices that boost student achievement are widely used in the classroom. Research by the Educational Testing Service shows that only a small percentage of teachers in eighth-grade math use blocks or models. Higher-order thinking skills generally take a back seat to rote learning; teachers are more likely to assign routine problems than teach students to apply concepts to new problems. The lack of professional education in effective classroom practices is clearly a major obstacle. Fewer than half of teachers receive training in classroom management or higher-order thinking skills. Indeed, only half of all teachers receive more than two days of professional development in a year.

Seamless K-16 standards. There is no question that higher standards need to be imposed at the K-12 level, particularly in science and math. Colleges and universities spend over a billion dollars a year on remedial education, with the highest percentage of students in remedial math. Yet, schools of higher education rarely participate in the standards-setting process. K-12 and postsecondary education move in completely different orbits, with different sets of standards regarding what a student needs to know to graduate and what the student needs to succeed in college. The result is that we may be spending time and resources to develop standards for the K-12 level that bear little relation to what students actually need to learn to continue their education beyond high school. Only a few states have established mechanisms to address these coordination issues and misalignment problems.

People are America’s future–and its path to prosperity. The president’s vow that no child will be left behind must be realized and expanded to a commitment to leave no American underskilled or underprepared to thrive in a global economy.