Bolstering U.S. Supercomputing

The nation’s needs for supercomputers to strengthen defense and national security cannot be satisfied with current policies and spending levels.

In November 2004, IBM’s Blue Gene/L, developed for U.S. nuclear weapons research, was declared the fastest supercomputer on the planet. Supercomputing speed is measured in teraflops: trillions of calculations per second. On one computation, Blue Gene/L achieved 70.72 teraflops, nearly doubling the speed of Japan’s Earth Simulator, the previous record holder at 35.86 teraflops. Despite Blue Gene/L’s blazing speed, however, U.S. preeminence in supercomputing, which is imperative for national security and indispensable for scientific discovery, is in jeopardy.

The past decade’s policies and spending levels are inadequate to meet the growing U.S. demand for supercomputing in critical national areas such as intelligence analysis, oversight of nuclear stockpiles, and tracking climate change. There has been little long-term planning for supercomputing needs and inadequate coordination among relevant federal agencies. These trends have reduced opportunities to make the most of this technology. The federal government must provide stable long-term funding for supercomputer design and manufacture and also support for vendors of supercomputing hardware and software.

Supercomputers combine extremely fast hardware with software that can solve the most complex computational problems. These problems include simulating and modeling physical phenomena such as climate change and explosions, analyzing massive amounts of data from sources such as national security intelligence and genome sequencing, and designing intricate engineered products. Supercomputers lead not only in performance but also in cost: The price tag on the Earth Simulator has been estimated at $500 million.

Supercomputing has become a major contributor to the economic competitiveness of the U.S. automotive, aerospace, medical, and pharmaceutical industries. The discovery of new techniques and substances, as well as cost reduction through simulation rather than physical prototyping, underlies progress in a number of economically important areas. Many technologies initially developed for supercomputers have enriched the mainstream computer industry. For example, multithreading and vector processing are now used on personal computer chips. Application codes that required supercomputing performance when they were developed are now routinely used in industry. This trickle-down process is expected to continue and perhaps even intensify.

But progress in supercomputing has slowed in recent years, even though today’s computational problems require levels of scaling and speed that stress current supercomputers. Many scientific fields need performance improvements of up to seven orders of magnitude to achieve well-defined computational goals. For example, performance measured in petaflops (thousands of teraflops) is necessary to conduct timely simulations that, in the absence of real-world testing, will certify to the nation that the nuclear weapons stockpile is safe and reliable. Another example is climate modeling for increased understanding of climate change and to enable forecasting. A millionfold increase in performance would allow reliable prediction of regional and local effects of certain pollutants on the atmosphere.

The success of the killer micros

The disheartening state of supercomputing today is largely due to the swift rise of commodity-based supercomputing. That is clear from the TOP500, a regularly updated list of the 500 most powerful computer systems in the world, as measured by performance on the LINPACK dense linear algebra benchmark (an imperfect but widely used measure of performance on real-world computational problems). Most systems on the TOP500 list are now clusters, systems assembled from commercial off-the-shelf processors interconnected by off-the-shelf switches. Fifteen years ago, almost all TOP500 systems were custom supercomputers, built of custom processors and custom switches.

Cluster supercomputers are prime beneficiaries of Moore’s law, the observation that processing power doubles roughly every 18 months. They have benefited from the huge investments in commodity processors and rapid increases in processor performance. For many applications, cluster technology offers supercomputing performance at the cost/performance ratio of a personal computer. For applications with the characteristics of the LINPACK benchmark, the cost of a cluster can be an order of magnitude lower than the cost of a custom supercomputer with the same performance. However, many important supercomputing applications have characteristics that are very different from those of LINPACK; these applications run well on custom supercomputers but achieve poor performance on clusters.

The success of clusters has reduced the market for custom supercomputers so much that its viability is now heavily dependent on government support. At less than $1 billion annually, the market for high-end systems is a minuscule fraction of the total computer industry, and according to International Data Corporation, more than 80 percent of high-end system purchases in 2003 were made by the public sector. Historically, the government has ensured that supercomputers are available for its missions by funding supercomputing R&D and by forging long-term relationships with key providers. Although active government intervention has risks, it is necessary in situations like this, where the private market is nonexistent or too small to ensure a steady flow of critical products and technologies. This makes sense because supercomputers are public goods, an essential component of government missions ranging from basic research to national security.

Yet government support for the development and acquisition of such platforms has shrunk. And computer suppliers are reluctant to invest in custom supercomputing, because the market is so small, the financial returns are so uncertain, and the opportunity costs of moving skilled personnel away from products designed for the broader IT market are considerable. In addition, the supercomputing market has become unstable, with annual variations of more than 20 percent in sales. Consequently, companies that concentrate primarily on developing supercomputing technologies have a hard time staying in business. Currently, Cray, which almost went out of business in the late 1990s, is the only U.S. firm whose chief business is supercomputing hardware and software and the only U.S. firm that is building custom supercomputers. IBM and Hewlett-Packard produce commodity-based supercomputer systems as one product line among many. Most supercomputing applications software comes from the research community or from the applications developers themselves.

The limits of clusters

For increasingly important problems such as computations that are critical for nuclear stockpile stewardship, intelligence analysis, and climate modeling, an acceptable time to solution can be achieved only by custom supercomputers. Custom systems can sometimes reduce computation time by a factor of 10 or more, so that a computation that would take a cluster supercomputer a month is completed in a few days. Slower computation might cost less, but it also might not meet deadlines in intelligence analysis or allow research to progress fast enough.

This speed problem is getting worse. As semiconductor and packaging technology gets better, different components of a supercomputer improve at different rates. In particular, processor speed increases much faster than memory access time. Custom supercomputers overcome this problem with a processor architecture that can support a very large number of concurrent memory accesses to unrelated memory locations. Commodity processors support a modest number of concurrent memory accesses but reduce the effective memory access time by adding large and often multilevel cache memory systems. Applications that cannot take advantage of the cache will see their performance scale with memory speed rather than with processor speed. As the gap between processor and memory performance continues to grow, more applications that now make good use of a cache will be limited by memory performance. The problem affects all applications, but it affects scientific computing and supercomputing sooner because commercial applications usually can take better advantage of caches. A similar gap affects global communication: Although processors run faster, the physical dimensions of the largest supercomputers continue to increase, whereas the speed of light, which bounds the speed of interprocessor communication, does not increase.
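The memory gap is easy to observe even on an ordinary desktop machine. The following minimal sketch is illustrative only and is not drawn from the report; it assumes NumPy and an arbitrary array size of 20 million elements. It times a cache-friendly sequential sum against a gather through randomly permuted indices, which touches unrelated memory locations in the way many irregular scientific codes do.

    # Illustrative sketch: sequential (cache-friendly) access versus a random-access
    # gather. The array size and the use of NumPy are arbitrary choices for this example.
    import time
    import numpy as np

    n = 20_000_000
    data = np.ones(n)                  # about 160 MB of double-precision values
    idx = np.random.permutation(n)     # the same indices, in scrambled order

    t0 = time.perf_counter()
    data.sum()                         # streams through memory; caches and prefetching help
    t1 = time.perf_counter()
    data[idx].sum()                    # scattered accesses; each one risks a cache miss
    t2 = time.perf_counter()

    print(f"sequential sum:   {t1 - t0:.2f} s")
    print(f"random-order sum: {t2 - t1:.2f} s")

On typical commodity hardware the random-order pass runs several times slower even though the processor performs the same arithmetic; custom supercomputer architectures are designed to hide exactly this kind of latency by keeping many memory accesses in flight at once.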

Continued leadership in essential supercomputing technologies will require an industrial base of multiple domestic suppliers.

As transistors continue to shrink, hardware fails more frequently; this affects very large, tightly coupled systems such as supercomputers more than smaller or less-coupled systems. Also, the ability of microprocessor designers to translate the growing number of transistors on a chip into faster individual processors seems to have reached its limits; single-processor performance still improves as clock rates inch upward, but vendors now exploit the extra transistors mainly by putting more processor cores on each chip. As a result, the number of processors per system will need to increase rapidly in order to sustain past rates of supercomputer performance improvement. But current algorithms and applications do not scale easily to systems with hundreds of thousands of processors.

Although clusters have reduced the hardware cost of supercomputing, they have increased the programming effort needed to implement large parallel codes. Scientific codes and the platforms on which they run have become more complex, but the application development environments and tools used to program complex parallel scientific codes are generally less advanced and less robust than those used for general commercial computing. As a result, software productivity is low. Custom systems could support more efficient parallel programming models, but this potential is largely unrealized. No higher-level programming notation that adequately captures parallelism and locality (the two main algorithmic concerns of parallel programming) has emerged. The reasons include the very low investment in supercomputing software such as compilers for parallel systems, the desire to maintain compatibility with prevalent cluster architecture, and the fear of investing in software that runs only on architectures that may disappear in a few years. The software problem will worsen as higher levels of parallelism are required and as global interprocessor communication slows down relative to processor performance.

Thus, there is a clear need for scaling and software improvements in supercomputing. New architectures are needed to cope with the diverging improvement rates of various components such as processor speed versus memory speed. New languages, new tools, and new operating systems are needed to cope with the increased levels of parallelism and low software productivity. And continued improvements are needed in algorithms to handle larger problems; new models that improve performance, accuracy, or generality; and changing hardware characteristics.

It takes time to realize the benefits of research into these problems. It took more than a decade from the creation of the first commercial vector computer until vector programming was well supported by algorithms, languages, and compilers. Insufficient funding for the past several years has emptied the research pipeline. For example, the number of National Science Foundation (NSF) grants supporting research on parallel architectures has been cut in half over little more than 5 years; not coincidentally, the number of scientific publications on high-performance computing has been reduced by half as well. Although many of the top universities had large-scale prototype projects exploring high-performance architectures a decade ago, no such effort exists today in academia.

Making progress in supercomputing

U.S. needs for supercomputing to strengthen defense and national security cannot be satisfied with current policies and levels of spending. Because these needs are distinct from those of the broader information technology (IT) industry, it is up to the government to ensure that the requisite supercomputing platforms and technologies are produced. Government agencies that depend on supercomputing, together with Congress, should take primary responsibility for accelerating advances in supercomputing and ensuring that there are multiple strong domestic suppliers of both hardware and software.

The federal agencies that depend on supercomputing should be jointly responsible for the strength and continued evolution of the U.S. supercomputing infrastructure. Although the agencies that use supercomputers have different missions and requirements, they can benefit from the synergies of coordinated planning, acquisition strategies, and R&D support. An integrated long-range plan—which does not preclude individual agency activities and priorities—is essential to leverage shared efforts. Progress requires the identification of key technologies and their interdependences, roadblocks, and opportunities for coordinated investments. The government agencies responsible for supercomputing should underwrite a community effort to develop and maintain this roadmap. It should be assembled with wide participation from researchers, developers of both commodity and custom technologies, and users. It should be driven both top-down from application needs and bottom-up from technology barriers. It should include measurable milestones to guide the agencies and Congress in making R&D investment decisions.

If the federal government is to ensure domestic leadership in essential supercomputing technologies, a U.S. industrial base of multiple domestic suppliers that can build custom systems must be assured. Not all of these suppliers must be vertically integrated companies such as Cray that design everything from chips to compilers. The viability of these vendors depends on stable long-term government investments at adequate levels; both the absolute investment level and its predictability matter because there is no alternative support. Such stable support can be provided either via government funding of R&D expenses or via steady procurements or both. The model proposed by the British UKHEV initiative, whereby the government solicits and funds proposals for the procurement of three successive generations of a supercomputer family over 4 to 6 years, is one good way to reduce this instability.

The creation and long-term maintenance of the software that is key to supercomputing require the support of the federal agencies that are responsible for supercomputing R&D. That software includes operating systems, libraries, compilers, software development and data analysis tools, application codes, and databases. Larger and more coordinated investments could significantly improve the productivity of supercomputing platforms. The models for software support are likely to be varied—vertically integrated vendors that produce both hardware and software, horizontal vendors that produce software for many different hardware platforms, not-for-profit organizations, or software developed on an open-source model. No matter which model is used, however, stability and continuity are essential. The need for software to evolve and be maintained over decades requires a stable cadre of developers with intimate knowledge of the software.

Because the supercomputing research community is small, international collaborations are important, and barriers to international collaboration on supercomputer research should be minimized. Such collaboration should include access to domestic supercomputing systems for research purposes. Restrictions on supercomputer imports have not benefited the United States, nor are they likely to do so. Export restrictions on supercomputer systems built from widely available components that are not export-controlled do not make sense and might damage international collaboration. Loosening restrictions need not compromise national security as long as appropriate safeguards are in place.

Supercomputing is critical to advancing science. The U.S. government should ensure that researchers with the most demanding computational requirements have access to the most powerful supercomputing systems. NSF supercomputing centers and Department of Energy (DOE) science centers have been central in providing supercomputing support to scientists. However, these centers have undergone a broadening of their mission even though their budgets have remained flat, and they are under pressure to support an increasing number of users. They need stable funding, sufficient to support an adequate supercomputing infrastructure. Finally, science communities that use supercomputers should have a strong say in and a shared responsibility for providing adequate supercomputing infrastructure, with budgets for acquisition and maintenance of this infrastructure clearly separated from the budgets for IT research.

In fiscal year (FY) 2004, the aggregate U.S. investment in high-end computing was $158 million, according to an estimate in the 2004 report published by the High-End Computing Revitalization Task Force. (This task force was established in 2003 under the National Science and Technology Council to provide a roadmap for federal investments in high-end computing. The proposed roadmap has had little impact on federal investments so far.) This research spending included hardware, software, and systems for basic and applied research, advanced development, prototypes, and testing and evaluation. The task force further noted that federal support for high-end computing activities had decreased from 1996 to 2001. The report of our committee estimated that an investment of roughly $140 million annually is needed for supercomputing research alone, excluding the cost of research into applications using supercomputing, the cost of advanced development and testbeds, and the cost of prototyping activities (which would require additional funding). A healthy procurement process for top-performing supercomputers that would satisfy the computing needs of the major agencies using supercomputing was estimated at about $800 million per year. Additional investments would be needed for capacity supercomputers in a lower performance tier.

The High-End Computing Revitalization Act passed by Congress in November 2004 is a step in the right direction: It called for DOE to establish a supercomputing software research center and authorized $165 million for research. However, no money has been appropriated for the recommended supercomputing research. The High-Performance Computing Revitalization Act of 2005 was introduced to the House Committee on Science in January 2005. This bill amends the High-Performance Computing Act of 1991 and directs the president to implement a supercomputing R&D program. It further requires the Office of Science and Technology Policy to identify the goals and priorities for a federal supercomputing R&D program and to develop a roadmap for high-performance computing systems. However, the FY 2006 budget proposed by the Bush administration does not provide the necessary investment. Indeed, the budget calls for DOE’s Office of Science and its Advanced Simulation and Computing program to be reduced by 10 percent from the FY 2005 level.

Immediate action is needed to preserve the U.S. lead in supercomputing. The agencies and scientists that need supercomputers should act together and push not only for an adequate supercomputing infrastructure now but also for adequate plans and investments that will ensure that they have the tools they need in 5 or 10 years.


Susan L. Graham and Marc Snir cochaired the National Research Council (NRC) committee that produced the report Getting Up to Speed: The Future of Supercomputing. Graham is Pehong Chen Distinguished Professor of Electrical Engineering and Computer Science at the University of California, Berkeley, and former chief computer scientist of the multi-institutional National Partnership for Advanced Computational Infrastructure, which ended in October 2004. Snir is the Michael Faiman and Saburo Muroga Professor and head of the Department of Computer Science at the University of Illinois, Urbana-Champaign. Cynthia A. Patterson was the NRC committee’s study director.

Secrets of the Celtic Tiger: Act Two

Ireland’s brilliant catch-up strategy of the 1990s offers important lessons for countries that want to build a modern technology-based economy. But Ireland is not growing complacent. It knows that a decade of steady and strong economic growth, high employment, and success in recruiting foreign investment hardly guarantees future results. Ireland is now supporting R&D activities designed to help it prosper not simply for years but for generations. This new effort might be instructive for the United States and other technology leaders.

Ireland’s growth in the 1990s was remarkable. From 1990 to 2003, its gross domestic product (GDP) more than tripled, from €36 billion to €138 billion, as did its GDP per capita, which rose from €10,400 to €35,200. During the same period, merchandise exports grew 4.5 times, from €18 billion to €81 billion. Meanwhile, Ireland’s debt as a percentage of GDP fell from 96 percent to 33 percent; the total labor force rose nearly 50 percent, to 1.9 million; and the rate of unemployment dropped from 12.9 percent to 4.8 percent. And inflation barely changed, from 3.4 percent to 3.5 percent.

Perhaps even more impressive was Ireland’s growth relative to the United States and the European Union (EU). Between 1994 and 2003, average annual real economic growth in the EU was just over 2 percent, in the United States just over 3 percent, and in Ireland 8 percent. Whereas Ireland’s income levels once stood at 60 percent of the EU average, they now stand at 135 percent of the average.

This rapid progress is no doubt due in part to the fact that Ireland had farther to travel than many other countries to become a 21st-century economic competitor. When the 1990s began, tourism and agriculture still dominated Ireland’s economy. All the while, however, Ireland possessed a combination of strengths that, it seems clear in retrospect, had long been ready to blossom once a more knowledge-based economic system began to emerge.

Four sources of growth

First among these attributes is Ireland’s excellent educational system. Though it is not perfect, and passionate debates continue about how the system can best serve Ireland’s changed society, it is in some respects a model. Today, 48 percent of the Irish population has attained college-level education, compared with less than 40 percent in countries such as the United Kingdom, United States, Spain, Belgium, and France, and less than 25 percent in Germany.

Ireland’s success in the 1990s, in fact, would not have been possible if the country had not taken a crucial step 40 years ago, when it began a concerted effort to increase educational participation rates and introduce programs that would match the abilities of students to the needs of a global economy and advanced, even high-tech, enterprises. At the same time, the country started making its already demanding K-16 education system more rigorous, creating links between industry and education and formalizing and supporting workplace education.

What happened thereafter is no coincidence. In the mid-1960s, fewer than 20,000 students were attending college in Ireland. By 1999, the number had risen sixfold, to 112,000. In 1984-1985, only 40 percent of 18-year-olds in Ireland were engaged in full-time education. Ten years later, the figure was 64 percent. During the first 5 years of the 1990s, the total number of students engaged in college-level programs grew by 51 percent. By 1995, Ireland had more students with science-related qualifications, as a percentage of population, than any of the other 30 countries in the Organization for Economic Cooperation and Development (OECD). A long-term commitment to education provided the foundation for the boom that followed.

Second, Ireland was ready to prosper when the knowledge-based economy emerged because of a combination of benefits it enjoyed as one of 15 members (the total is now 25) of the EU and as a nation with a historically strong cultural and political connection to the United States, where 40 million people trace some part of their heritage to Ireland. The EU made massive investments in Ireland, as the single-market system followed its plan of shifting a portion of EU contributions from richer members to those in need of development, on the principle that growing markets would benefit all members. This investment transformed infrastructure, including roads, ports, and communications, and gave overseas investors reason to look to Ireland as a haven of opportunity. Meanwhile, as an English-speaking country with a unique bond to the United States, Ireland was already an enticing marketplace for U.S. enterprises. When the technology age of the 1990s arrived in the United States, great opportunities existed for it to arrive in Ireland as well, especially given the country’s other advantages.

Third, a consistent political and public commitment to investment has existed in Ireland for decades. The country’s investment agency, IDA Ireland, for example, was established in 1969 and has played an important role in recruiting U.S. corporations. Today, there is hardly a leading U.S. manufacturer of computer software or hardware, pharmaceuticals, electronics, or medical equipment, among other knowledge-based businesses, without thriving operations in Ireland. IDA Ireland has meanwhile been able to develop relationships with overseas companies across the globe and has established offices in the United States and several other countries to serve clients and attract further investment.

Fourth, Ireland was shrewd enough to capitalize on these strengths by lowering corporate taxes. By 2003, Ireland’s corporate tax rate was 12.5 percent, covering both manufacturing and services (with a rate of 25 percent applying to passive income, such as that from dividends). This change, along with sustained efforts to reduce payroll and other business taxes, gave U.S. manufacturers operating in Ireland a financially competitive platform from which to serve the EU single market of 470 million people. In 1990, about 11,000 companies were exporting from Ireland. By 2002, the number had risen to 70,000.

What next?

Ireland now accounts for roughly one-quarter of all U.S. foreign direct investment (FDI) in Europe, including almost one-third of all FDI in pharmaceuticals and health care. Nine of the world’s top 10 drug companies have plants in Ireland. One-third of all personal computers sold in Europe are manufactured in Ireland, and OECD data indicate that the country is the world’s biggest software exporter, ahead of the United States. These statistics become even more impressive when one realizes that Ireland’s population is just over four million, about the same as Brooklyn’s.

But Ireland knows that 10 to 15 years of growth, however important, must be seen only as a beginning. Its political and business leaders well remember the days of economic stagnation and relative poverty, and they are not ready to relax. In fact, across all government departments, there is an impressive commitment to policies, programs, and investments designed to make Ireland an enduring knowledge society. The investments include support for R&D in scientific and engineering areas that capitalize on the same conditions that benefited the country in the past decade: a strong educational system, aggressive economic strategies, partnerships with nations rich in knowledge-based businesses, and, above all, highly skilled talent.

Watching Ireland imitate what is best in the U.S. system might be a helpful reminder to U.S. policymakers to preserve and strengthen their own government efforts.

With this philosophy, the Irish government established Science Foundation Ireland (SFI) in 2000 as part of the National Development Plan 2000–2006 (NDP). This investment followed a year-long study by a group of business, education, and government leaders appointed by the prime minister and deputy prime minister. The group’s job was to examine how an infusion of government funding could best improve Ireland’s long-term competitiveness and growth. To their credit, the leadership saw the promise in what the group’s report described.

The NDP funding that is focused on research, technological development, and innovation programs will total approximately €1 billion by 2006, a considerable sum for a country of Ireland’s size. SFI’s portion totals €635 million, or approximately $820 million, and SFI’s investment does not have to stand alone. To the contrary, 18 months before SFI came into being, the government began building R&D infrastructure and it continues to do so. Through the Higher Education Authority, Ireland has already committed almost €600 million to creating new labs and research space. As a result, SFI has been able to be aggressive in helping higher-education institutions recruit researchers to build programs in these facilities.

SFI’s ultimate goal is to foster an R&D culture by investing in superb individual researchers and their teams. We want them to uncover ideas that attract grants and inspire patents. We want them to recruit and train academic scientists and engineers from home and abroad and to explore daring ideas as well as ideas that can lead to knowledge-based businesses, create jobs, and generate exports. Perhaps most important, we want them to help inspire Irish students to pursue careers in science and engineering.

Importing ideas

The model for SFI, not surprisingly, is the U.S. National Science Foundation (NSF). Ireland recognized the contribution that NSF has made to U.S. science, economic growth, and talent development, among other areas. Because Ireland cannot afford the comprehensive scope of NSF, the Irish government has to target research investments in areas with the most likely scientific and economic impact, and where the country already has concentrated skills and industrial interest, such as computers, electronics, pharmaceuticals, and medical equipment. SFI’s initial mission was therefore to support world-class research programs in science and engineering fields that underpin biotechnology and information and communications technology. The SFI mission recently expanded to include the newest areas of science and engineering. This expanded mission will help Ireland build the talent needed for the long term and respond even better to creativity.

We also interpret our mission broadly. Our selection process lets innovation and imagination earn grants, using international experts to judge proposals and the likelihood of success. To its credit, SFI’s international board has insisted that we not make selections based on an averaging of reviewer scores, but instead aim to invest in performance and excellence and take risks. We follow the NSF model by having technical staff make final grant decisions based on the outside review, rather than having reviewers’ scores dictate the result. This approach helps address all questions before final rankings and ensures the accountability of the technical staff, an essential feature of the NSF system.

We also share with teams whose proposals we deny a summary of the weaknesses that the experts found in the submissions. This feedback loop has generated successful proposals from rejected ones and, in so doing, created stronger research measured against an objective standard. It is worth noting that as common as such practices might be in the United States, they seem uncommon in European research programs.

In the biotechnology areas, we are interested in work in a range of fields, from DNA chips to drug delivery, from biosensors to bioremediation. At the same time, we have particular interest in research that draws on special capabilities in Ireland’s academic and industrial system. We currently give special emphasis to agri-food, cell cycle control, enabling technologies, medical biotechnology/biopharmaceuticals/therapeutics, microbiology, and neuro/developmental biology. But we are determined to stay open to the best ideas of the best researchers.

The same is true for our grants in information and communications technology (ICT). We take ICT to include broadband, wireless, and mobile transmission; parallel processing systems; engineering for reliability of data transfer; wearable sensors; computer modeling; distributed networking; computer-based training; nanoscale assembly; and human language understanding. Our specific focus is currently on the following areas:

  • Novel adaptive technologies for distributed networking of people, machines and sensors, and other devices.
  • Software engineering for improved reliability, security, and predictability of all software-based systems.
  • Machine learning, semantic web technologies, and image processing to extract information from massive data sets and to enable adaptive systems and significant applications for the future.
  • Nanotechnology breakthroughs in device design and information processing.

In both ICT and biotech, what drives us to specific areas is the sense that they will produce the greatest prospects for technological and economic development in the next few decades. But as the goals of researchers evolve, so will the proposals that get our attention. We also are open to, and actually encourage, proposals that recognize that the next major leaps could occur in areas where ICT and biotechnology overlap, in what is sometimes called digital genetics.

We support research aggressively. Our portfolio includes professorships that range up to €2.5 million over 5 years to help attract outstanding scientists and engineers from outside the country to Irish universities and institutes of technology, and principal investigator grants that are normally worth at least €250,000 per year over 3 or 4 years for researchers who are working in or will work in Ireland. We are now funding 450 projects through grants totaling €450 million. These projects include more than 1,190 individuals, research teams, centers, and visiting researchers from Australia, Belgium, Canada, England, Germany, Japan, Russia, Scotland, Slovakia, South Africa, Switzerland, and the United States.

This funding includes the Centers for Science, Engineering, and Technology (CSET) program, which connects researchers in academia and industry through grants worth as much as €20 million over 5 years, renewable for an additional term of up to 5 years. The idea is to fund centers that can exploit opportunities for discovery and innovation as no smaller research project can, link academic and industry researchers in promising ways, generate products of value in the marketplace, and contribute to the public’s interest in science and technology. These centers give Ireland another recruitment tool, again building on the relationships it has established around the world. Already, the CSETs have led to research partnerships in Ireland with Bell Labs, HP, Intel, Medtronic, and Procter & Gamble.

As such work suggests, SFI has special obligations as Ireland’s first extended national commitment to a public R&D enterprise. The foundation must prove itself a reliable partner with the other sectors crucial to this enterprise, notably related government enterprises, the education system, and the industrial and business sectors. We engage with these sectors constantly, including by having their representatives on review committees, and, as with the CSET program, offering grants that can both draw interest among researchers and help in industrial R&D recruitment. Our board also includes leaders from every sector, both within and outside Ireland, and has been pivotal in determining the emphasis and structure of our programs.

The foundation’s work is not occurring in a vacuum, either. Investments in the science and technology infrastructure are continuing, and the government last year established the position of national science adviser. This adviser will report to a cabinet-level committee dedicated to science. At the highest levels of Ireland’s government, there is a deep conviction that R&D is crucial to the country’s future.

Europe’s next step?

Ireland’s early results have not gone unnoticed by its neighbors. Intrigued by what Ireland and SFI have begun, the European Commission asked me to lead an expert group in evaluating a potential EU-wide research-funding scheme. The program would pit researchers in Europe against one another for certain EU grants, using competition to drive up the value and number of ideas, patents, and products. The report, Frontier Research: The European Challenge, was published in April 2005. It places considerable weight on the value of independent outside expert review teams and the importance of having technical staff make final grant decisions. As in SFI, this process will allow for appropriate follow-up on issues raised by technical experts and, in principle, encourage more risk-taking. Europe has never tried a pan-national competitive approach before, or widely employed this decision model, so it will be interesting to see whether the EU can apply the NSF approach across multiple countries with a common interest in challenging U.S. R&D dominance.

Obviously, though, Europe is not alone in chasing the United States, just as Ireland is not the only country using an NSF prototype to guide its research investments. Other nations also have seen how research can become the basis for a competitive knowledge-based system, innovation, and growth. Indeed, watching Ireland imitate what is best in the U.S. system might be a helpful reminder to U.S. policymakers to preserve and strengthen their own government efforts, which have contributed so much to U.S. economic success.

Readers of this magazine know that higher education in India and China is advancing at a stunning pace (“Asian Countries Strengthen Their Research,” Issues, Summer 2004). In 1999, U.S. universities awarded 220,000 bachelor’s degrees in science and engineering. China awarded 322,000, and India awarded 251,000. Just two decades ago, these countries awarded only a small fraction of that number. China’s college enrollment grew by two-thirds between 1995 and 2000. India’s increased by more than a third between 1996 and 2002, to 8.8 million.

In 2003, the United States lost its status as the world’s leading recipient of foreign direct investment. The new leader is China—a worrisome sign of how the market now judges future opportunity.

At the same time, U.S. struggles in education—the basis of any society’s innovation culture—continue. It is remarkable to consider that at present rates, only 18 out of every 100 U.S. ninth-graders will graduate in 10 years with either a bachelor’s or an associate’s degree. Can any country afford to continue squandering such talent?

In a January 2005 study of U.S. education and competitiveness, the Business–Higher Education Forum—an organization of top executives from U.S. businesses, colleges, universities, and foundations—observed, “[T]he United States is losing its edge in innovation and is watching the erosion of its capacity to create new scientific and technological breakthroughs. Increased global competition, lackluster performance in mathematics and science education, and a lack of national focus on renewing its science and technology infrastructure have created a new economic and technological vulnerability as serious as any military or terrorist threat.”

Ireland, I believe, appreciates the new severity of competition. Its recent experience with innovation and growth built on education has given it reason not to slip backward. It has begun to believe the potential truth of what Juan Enriquez of Harvard University wrote at the start of this decade, “The future belongs to small populations who build empires of the mind.”

Ireland is acting as if it knows that empires of the mind emerge from the commitment to science and engineering that R&D requires, that powerful R&D starts with talent, and that a successful education system allows talent to flourish. The United States showed countries such as Ireland, among many others, the power of these connections. Other large and growing countries are now applying this lesson. I hope that the United States does not forget it.

A Reality Check on Military Spending

For fiscal year (FY) 2005, military spending will be nearly $500 billion, which is greater in real terms than during any of the Reagan years and surpassed only by spending at the end of World War II in 1945 and 1946 and during the Korean War in 1952. The White House is asking for an FY 2006 Department of Defense (DOD) budget of $413.9 billion, which does not include funding for military operations in Iraq and Afghanistan.

The administration argues that increased military spending is a necessary part of the war on terrorism. But such logic assumes that the war on terrorism is primarily a military war to be fought by the Army, Navy, Air Force, and Marines. The reality is that large conventional military operations will be the exception rather than the rule in the war on terrorism. Instead, the military’s role will mainly involve special operations forces in discrete missions against specific targets, not conventional warfare aimed at overthrowing entire regimes. The rest of the war aimed at dismantling and degrading the al Qaeda terrorist network will require unprecedented international intelligence and law enforcement cooperation, not expensive new planes, helicopters, and warships.

Therefore, an increasingly large defense budget (DOD projects the budget to grow to more than $487 billion by FY 2009) is not necessary to fight the war on terrorism. Nor is a huge budget necessary to protect the United States from traditional nation-state military threats, because the United States has no major military rivals, is relatively secure from conventional military attack, and has a strong nuclear deterrent force. Major cuts in the defense budget would be possible if the United States substantially reduced the number of troops stationed abroad and embraced a “balancer-of-last-resort” strategy, in which it would intervene overseas only if its vital interests were threatened and in which countries in Europe, Asia, and elsewhere would take greater responsibility for their own regional security.

According to the International Institute for Strategic Studies (IISS), in 2003 (the last year for which there is comparative worldwide data) total U.S. defense expenditures were $404.9 billion. This amount exceeded the combined defense expenditures of the next 13 countries and was more than double the combined defense spending of the remaining 158 countries in the world (Figure 1). The countries closest to the United States in defense spending were Russia ($65.2 billion) and China ($55.9 billion). The next five countries—France, Japan, the United Kingdom, Germany, and Italy—were all U.S. allies. In fact, the United States outspent its North Atlantic Treaty Organization (NATO) allies by nearly 2 to 1 ($404.9 billion versus $221.1 billion) and had friendly relations with 12 of the 13 countries, which included another NATO ally, Turkey, as well as South Korea and Israel. Finally, the combined defense spending of the remaining “Axis of Evil” nations (North Korea and Iran) was about $8.5 billion, or 2 percent of U.S. defense expenditures.

Such lopsided U.S. defense spending needs to be put in perspective relative to the 21st-century threat environment. With the demise of the Soviet Union, the United States no longer faces a serious military challenger or global hegemonic threat. President Vladimir Putin has charted a course for Russia to move closer to the United States and the West, both politically and economically, so Russia is not the threat that the former Soviet Union was. Indeed, Russia now has observer status with NATO, a dramatic change given that the NATO alliance was created to contain the former Soviet Union. And in May 2002, Russia and the United States signed the Strategic Offensive Reductions Treaty (SORT) to reduce their strategic nuclear arsenals to between 1,700 and 2,200 warheads each by December 2012. According to IISS, “despite disagreement over the U.S.-led action in Iraq, the bilateral relationship between Washington and Moscow remains firm.”

Even if Russia were to change course and adopt a more hostile posture, it is not in a position to challenge the United States, either economically or militarily. In 2003, Russia’s gross domestic product (GDP) was a little more than one-tenth of U.S. GDP ($1.3 trillion versus $10.9 trillion). Although a larger share of Russia’s GDP was devoted to defense expenditures (4.9 percent versus 3.7 percent), in absolute terms the United States outspent Russia by more than 6 to 1. To equal the United States, Russia would have to devote more than 20 percent of its GDP to defense, which would exceed what the Soviet Union spent at the height of the Cold War during the 1980s.

Certainly, Chinese military developments bear watching. Although many see China as the next great threat, even if China modernizes and expands its strategic nuclear force (as many military experts predict it will), the United States will retain a credible nuclear deterrent with an overwhelming advantage in warheads, launchers, and variety of delivery vehicles. According to a Council on Foreign Relations task force chaired by former Secretary of Defense Harold Brown, “The People’s Republic of China is pursuing a deliberate and focused course of military modernization but . . . it is at least two decades behind the United States in terms of military technology and capability. Moreover, if the United States continues to dedicate significant resources to improving its military forces, as expected, the balance between the United States and China, both globally and in Asia, is likely to remain decisively in America’s favor beyond the next twenty years.”

Like Russia, China may not have the wherewithal to compete with and challenge the United States. In 2003, U.S. GDP was almost eight times China’s ($10.9 trillion versus $1.4 trillion). China devoted a fractionally larger share of its GDP to defense than the United States did (3.9 percent versus 3.7 percent), but in absolute terms U.S. defense expenditures were more than seven times China’s ($404.9 billion versus $55.9 billion). To equal the United States, China would have to devote one-quarter of its GDP to defense.

If the Russian and Chinese militaries are not serious threats to the United States, so-called rogue states, such as North Korea, Iran, Syria, and Cuba, are even less of a threat. Although these countries are unfriendly to the United States, none have any real military capability to threaten or challenge vital U.S. security interests. The GDP of these four countries was $590.3 billion in 2003, or less than 5.5 percent of U.S. GDP. Military spending is even more lopsided: $11.3 billion compared to $404.9 billion, or less than 3 percent of U.S. defense spending. Even if North Korea and Iran eventually acquire a long-range nuclear capability that could reach the United States, the U.S. strategic nuclear arsenal would continue to act as a powerful deterrent.

Downsizing the U.S. military

According to DOD, before Operation Iraqi Freedom the total number of U.S. active-duty military personnel was more than 1.4 million troops, of which 237,473 were deployed in foreign countries. With an all-volunteer force, maintaining those deployments requires at least twice as many additional troops to be stationed in the United States so that the overseas force can be rotated at specified intervals. Thus, one way to measure the cost to the United States of maintaining a global military presence is the expense of supporting more than 700,000 active-duty troops along with their associated force structure. But without a great-power enemy that might justify an extended forward deployment of military forces, the United States could dramatically reduce its overseas commitments, and U.S. security against traditional nation-state military threats could be achieved at significantly lower costs.

If the United States adopted a balancer-of-last-resort strategy, most overseas military commitments could be eliminated and the defense budget substantially reduced.

Instead of a Cold War-era extended defense perimeter and forward-deployed forces (intended to keep in check an expansionist Soviet Union), the very different 21st-century threat environment affords the United States the opportunity to adopt a balancer-of-last-resort strategy. Such a strategy would place greater emphasis on allowing countries to take responsibility for their own security and, if necessary, to build regional security arrangements, even in important areas such as Europe and East Asia. Instead of being a first responder to every crisis and conflict, the U.S. military would intervene only when truly vital security interests were at stake. This strategy would allow the United States to eliminate many permanent foreign bases and substantially reduce the large number of troops deployed at those bases.

Although it is counterintuitive, forward deployment does not significantly enhance the U.S. military’s ability to fight wars. The comparative advantage that the U.S. military possesses is airpower, which can be dispatched relatively quickly and at very long ranges. Indeed, during Operation Enduring Freedom in Afghanistan, the Air Force was able to fly missions from Whiteman Air Force Base in Missouri to Afghanistan and back. The ability to rapidly project power, if necessary, can also be made easier by pre-positioning supplies and equipment at strategic locations. Troops can be deployed faster if their equipment does not have to be deployed simultaneously.

If U.S. ground forces are needed to fight a major war, they could be deployed as necessary. It is worth noting that both Operation Enduring Freedom and Operation Iraqi Freedom were conducted without significant forces already deployed in either theater of operation. In the case of Operation Enduring Freedom, the military had neither troops nor bases adjacent to Afghanistan. Yet military operations commenced less than a month after the September 11 attacks. In the case of Operation Iraqi Freedom, even though the military had more than 6,000 troops (mostly Air Force) deployed in Saudi Arabia, the Saudi government denied the use of its bases to conduct military operations. Instead, the United States used Kuwait as the headquarters and the jumping-off point for military operations. Similarly, the Turkish government prevented the U.S. Army’s 4th Infantry Division from using bases in that country for military operations in northern Iraq, forcing some 30,000 troops to be transported via ship through the Suez Canal and Red Sea to the Persian Gulf, where they arrived too late to be part of the initial attack against Iraq. Despite these handicaps, U.S. forces swept away the Iraqi military in less than four weeks.

Consider also that in the case of South Korea, the 31,000 U.S. troops deployed there are insufficient to fight a war. Operation Iraqi Freedom, against a smaller and weaker military foe, required more than 100,000 ground troops to take Baghdad and topple Saddam Hussein (and more to occupy the country afterward). If the United States decided to engage in an offensive military operation against North Korea, the troops stationed in South Korea would have to be reinforced. This would take almost as much time as deploying the entire force from scratch. If North Korea (with a nearly one-million-man army) decided to invade South Korea, the defense of South Korea would rest primarily with that country’s 700,000-man military, not 31,000 U.S. troops. Nor does the U.S. military presence in South Korea alter the fact that North Korea is believed to have tens of thousands of artillery weapons that can hold the capital city of Seoul hostage. At best, U.S. forces are a tripwire for defending South Korea.

Not only does the post-Cold War threat environment give the United States the luxury of allowing countries to take responsibility for security in their own neighborhoods, but the economic strength of Europe and East Asia means that friendly countries in those regions can afford to pay for their own defense rather than relying on the United States to underwrite their security. In 2003, U.S. GDP was $10.9 trillion and total defense expenditures were 3.7 percent of that. In contrast, the combined GDP of the 15 European Union countries in 2003 was $10.5 trillion, but defense spending was less than 2 percent of GDP. Without a Soviet threat to Europe, the United States does not need to subsidize European defense spending. The European countries have the economic wherewithal to increase their military spending, if necessary.

Likewise, U.S. allies in East Asia are capable of defending themselves. North Korea, one of the world’s last bastions of central planning, is an economic basket case. North Korea’s GDP in 2003 was $22 billion, compared to $605 billion for South Korea. South Korea also outspends North Korea on defense by nearly 3 to 1: $14.6 billion versus $5.5 billion. Japan’s GDP was $4.34 trillion (more than 195 times larger than North Korea’s) and defense spending was $42.8 billion (almost eight times that of North Korea). South Korea and Japan certainly have the economic resources to adequately defend themselves against North Korea. They even have the capacity to act as military balancers to China (if China is perceived as a threat). In 2003, China had a GDP of $1.43 trillion and spent $22.4 billion on defense.

Consider, in very rough terms, what it would mean to adopt a balancer-of-last-resort strategy. Virtually all U.S. foreign military deployments could be eliminated (exceptions include Marine Corps personnel assigned to embassies), along with the roughly twice as many troops maintained at home to rotate into those deployments. The result would be to reduce the total active-duty force by about half, to 699,000, which would break down as follows:

Army: 189,000, a 61 percent reduction, which would result in a force strength of four active-duty divisions or their equivalent in a brigade force structure.

Navy: 266,600, a 31 percent reduction, which would result in an eight-carrier battle group force.

Marine Corps: 77,000, a 56 percent reduction, which would result in one active Marine Expeditionary Force and one Marine Expeditionary Brigade.

Air Force: 168,000, a 54 percent reduction, which would result in 11 active-duty tactical fighter wings and 93 heavy bombers.

Admittedly, this is a macro approach that assumes that the current active-duty force mix is appropriate. Interestingly enough, this top-down approach yields a force structure not markedly different from what the most recent DOD bottom-up review determined would be needed to fight a single major regional war: four to five Army divisions, four to five aircraft carriers, four to five Marine expeditionary brigades, and 10 Air Force tactical fighter wings and 100 heavy bombers. Therefore, it is a reasonable method for assessing how U.S. forces and force structure could be reduced by adopting a balancer-of-last-resort strategy. And as shown in Figure 2, the size of the defense budget correlates rather strongly with the number of U.S. troops deployed overseas.

According to DOD, the FY 2005 personnel budget for active-duty forces is $88.3 billion out of a total of $104.8 billion for military personnel. A 50 percent reduction in active-duty forces would translate into a FY 2005 active-duty military personnel budget of $44.1 billion and a total military personnel spending budget of $60.6 billion.

If U.S. active-duty forces were substantially reduced, it follows that the associated force structure could be similarly reduced, resulting in reduced operations and maintenance (O&M) costs. Using the same percentage reductions applied to active-duty forces, the O&M budget for the active Army force could be reduced from $26.1 billion to $12.8 billion, the active Navy force could be reduced from $29.8 billion to $20.6 billion, the active Marine Corps force could be reduced from $3.6 billion to $1.6 billion, and the active Air Force could be reduced from $28.5 billion to $13.1 billion. The total savings would be $39.9 billion, and the total spent on O&M would fall from $140.6 billion to $100.7 billion.
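For readers who want to trace the arithmetic, the following minimal sketch simply re-adds the O&M figures quoted above (in billions of FY 2005 dollars); it is an illustrative check, not an independent estimate of the reductions themselves.

    # Illustrative check of the O&M figures quoted in the text (billions of FY 2005 dollars).
    current = {"Army": 26.1, "Navy": 29.8, "Marine Corps": 3.6, "Air Force": 28.5}
    reduced = {"Army": 12.8, "Navy": 20.6, "Marine Corps": 1.6, "Air Force": 13.1}

    savings = sum(current[s] - reduced[s] for s in current)
    print(f"Active-force O&M savings: ${savings:.1f} billion")      # about $39.9 billion
    print(f"Total O&M after cuts: ${140.6 - savings:.1f} billion")  # about $100.7 billion

The per-service savings sum to the $39.9 billion cited above, and subtracting that amount from the $140.6 billion O&M total yields the $100.7 billion figure.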

The combined savings in military personnel and O&M costs would total $84 billion, or about 21 percent of the total defense budget. Because military personnel and O&M are the two largest portions of the defense budget—26 percent and 35 percent, respectively—significant reductions in defense spending can be achieved only if these costs are reduced. And the only way to reduce these costs is to downsize active-duty military forces.
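The arithmetic behind these personnel and O&M figures can be reproduced directly from the dollar amounts quoted above. The sketch below is only an illustrative check using the article's numbers; the roughly $400 billion baseline used for the percentage comparison is an assumed round figure, not one taken from the text.

```python
# Illustrative check of the personnel and O&M arithmetic quoted above.
# All figures are in billions of FY 2005 dollars and are taken from the text,
# except the total defense budget baseline, which is an assumed round figure.

active_personnel = 88.3      # active-duty portion of the military personnel budget
total_personnel = 104.8      # total military personnel budget
cut_fraction = 0.50          # proposed 50 percent reduction in active-duty forces

personnel_savings = active_personnel * cut_fraction               # 44.15
reduced_active_personnel = active_personnel - personnel_savings   # ~44.1, as reported
reduced_total_personnel = total_personnel - personnel_savings     # ~60.6, as reported

# O&M budgets before and after the proposed reductions, by service (from the text).
om_before = {"Army": 26.1, "Navy": 29.8, "Marine Corps": 3.6, "Air Force": 28.5}
om_after = {"Army": 12.8, "Navy": 20.6, "Marine Corps": 1.6, "Air Force": 13.1}

om_savings = sum(om_before.values()) - sum(om_after.values())     # ~39.9
reduced_total_om = 140.6 - om_savings                              # ~100.7

combined_savings = personnel_savings + om_savings                  # ~84
assumed_defense_budget = 400.0   # rough FY 2005 baseline (assumption)

print(f"Reduced active-duty personnel budget: ${reduced_active_personnel:.1f}B")
print(f"Reduced total personnel budget:       ${reduced_total_personnel:.1f}B")
print(f"O&M savings:                          ${om_savings:.1f}B")
print(f"Reduced O&M budget:                   ${reduced_total_om:.1f}B")
print(f"Combined savings:                     ${combined_savings:.0f}B "
      f"(~{100 * combined_savings / assumed_defense_budget:.0f}% of the budget)")
```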

Unneeded weapon systems

Further savings could be realized by eliminating unneeded weapon systems, which would reduce the procurement budget ($74.9 billion) and the research, development, test, and evaluation (RDT&E) budget ($68.9 billion). The Pentagon has already canceled two major weapon systems: the Army’s Crusader artillery piece and Comanche attack helicopter, with program savings of $9 billion and more than $30 billion, respectively. But this is simply a good start. Other weapon systems that could be canceled include the F-22 Raptor, F/A-18 E/F Super Hornet, V-22 Osprey, and Virginia-class attack submarine.

The Air Force’s F/A-22 Raptor was originally designed for air superiority against Soviet tactical fighters that were never built. It is intended to replace the best fighter in the world today, the F-15 Eagle, which no current or prospective adversary can seriously challenge for air superiority. The Navy’s F/A-18E/F Super Hornet is another unneeded tactical fighter. The Marine Corps V-22 Osprey’s tilt-rotor technology is still unproven and is inherently more dangerous than helicopters that can perform the same missions at a fraction of the cost. The Navy’s Virginia-class submarine was designed to counter a Soviet nuclear submarine threat that no longer exists.

Canceling the F-22, F/A-18E/F, V-22, and Virginia-class attack submarines would save a total of $12.2 billion in procurement and RDT&E costs in the FY 2005 budget and a total of $170 billion in future program costs. Combined with the military personnel and O&M savings previously discussed, a revised FY 2005 defense budget would be $305.8 billion, a 21 percent reduction (Figure 3). Of course, it is not realistic to say that the defense budget could be reduced immediately. But the budget could be reduced in increments to this proposed level during a five-year period.

Tools against terror

The defense budget can be reduced because the nation-state threat environment is markedly different than it was during the Cold War, and because a larger military is not necessary to combat the terrorist threat. It is important to remember that a large military with a forward-deployed global presence was not an effective defense against 19 hijackers. In addition, the shorthand phrase “war on terrorism” is misleading. First, as the National Commission on Terrorist Attacks upon the United States (also known as the 9/11 Commission) points out: “The enemy is not just ‘terrorism,’ some generic evil. This vagueness blurs the strategy. The catastrophic threat at this moment in history is more specific. It is the threat posed by Islamist terrorism—especially the al Qaeda network, its affiliates, and its ideology.” Second, the term “war” implies the use of military force as the primary policy instrument for waging the terrorism fight. But traditional military operations should be the exception in the conflict with al Qaeda, which is not an army with uniforms that operates in a specific geographic region but a loosely connected and decentralized network with cells and operatives in 60 countries. President Bush is right: “We’ll have to hunt them down one at a time.”

Although the president is also correct in being skeptical about treating terrorism "as a crime, a problem to be solved mainly with law enforcement and indictments," the reality is that the arduous work of dismantling and degrading the network will fall largely to unprecedented international intelligence and law enforcement cooperation. The military aspects of the war on terrorism will largely be the work of special forces in discrete operations against specific targets rather than large-scale military operations. Instead of spending hundreds of billions of dollars to maintain the current size of the armed forces and buy unneeded weapons, the United States should invest in better intelligence gathering, unmanned aerial vehicles (UAVs), special operations forces, and language skills.

Intelligence gathering. Better intelligence gathering about the threat is critical to fighting the war on terrorism. Although the budgets for the 15 agencies with intelligence gathering and analysis responsibilities are veiled in secrecy, the best estimate is that the total spent on intelligence is about $40 billion. As with the defense budget, the question is not necessarily whether to spend more money on intelligence gathering and analysis but how best to allocate spending and resources. About 85 percent of the estimated $40 billion spent on intelligence activities goes to DOD and only about 10 percent to the Central Intelligence Agency, with the remainder spread among the other intelligence agencies. If the war on terrorism is not primarily a military war, perhaps the intelligence budget could be reallocated between DOD and other intelligence agencies, with less emphasis on nation-state military threats and more on terrorist threats.

Questions about the right amount of intelligence spending and its allocation aside, the war on terrorism requires:

Less emphasis on spy satellites as a primary means of intelligence gathering. That does not mean abandoning the use of satellite imagery. Rather, it means recognizing that spy satellite images might have been an excellent way to monitor stationary targets such as missile silos or easily recognizable military equipment, such as tanks and aircraft, but might not be as capable in locating and tracking individual terrorists.

Recognizing the problems involved with electronic eavesdropping. According to Loren Thompson of the Lexington Institute, "The enemy has learned how to hide a lot of its transmissions from electronic eavesdropping satellites." The difficulty of finding and successfully monitoring the right conversations is compounded by the challenge of sifting through the sheer volume of terrorist chatter to determine which bits of information are useful.

Greater emphasis on human intelligence gathering. Spies on the ground are needed to supplement and sometimes confirm or refute what satellite images, electronic eavesdropping, interrogations of captured al Qaeda operatives, hard drives on confiscated computers, and other sources are indicating about the terrorist threat. Analysis and interpretation need to be backed up with as much inside information as possible. This is perhaps the most critical missing piece in the intelligence puzzle in terms of anticipating future terrorist attacks. Ideally, the United States needs “moles” inside al Qaeda, but it will be a difficult task (and likely take many years) to place someone inside al Qaeda who is a believable radical Islamic extremist and will be trusted with the kind of information U.S. intelligence needs. The task is made even more difficult because of the distributed and cellular structure of al Qaeda and the fact that the radical Islamic ideology that fuels the terrorist threat to the United States has expanded beyond the al Qaeda structure into the larger Muslim world.

Language skills. Directly related to intelligence gathering is having a cadre of experts to teach and analyze the relevant languages of the Muslim world: Arabic, Uzbek, Pashtu, Urdu, Farsi (Persian), Dari (the Afghan dialect of Farsi), and Malay, to name a few. But according to a Government Accountability Office (GAO) report, in FY 2001 only half of the Army’s 84 positions for Arabic translators and interpreters were filled, and there were 27 unfilled positions (out of a total of 40) for Farsi. Undersecretary of Defense for Personnel and Readiness David Chu admits that DOD is having a “very difficult time . . . training and keeping on active duty sufficient numbers of linguists.” As of March 2004, FBI Director Robert Mueller reported that the bureau had only 24 Arabic-speaking agents (out of more than 12,000 special agents). According to a congressionally mandated report, the State Department had only five linguists fluent enough to speak on Arab television (out of 9,000 Foreign Service and 6,500 civil service employees).

According to the GAO, the Pentagon estimates that it currently spends up to $250 million per year to meet its foreign language needs. The GAO did not indicate whether the $250 million (about six-hundredths of 1 percent of the FY 2005 defense budget) was adequate. Whether or not DOD and other government agencies are spending enough, this much is certain: Language skills for the war on terrorism are in short supply. According to a 9/11 Commission staff report, "FBI shortages of linguists have resulted in thousands of hours of audiotapes and pages of written material not being reviewed or translated in a timely manner." Increasing that supply will not be easily or quickly done, especially since two of the four most difficult languages for Americans to learn are Arabic and Farsi.

UAVs. The potential utility of UAVs for the war on terrorism has been demonstrated in Afghanistan and Yemen. In February 2002, a Predator UAV (armed with Hellfire missiles and operated by the CIA) in the Tora Bora region of eastern Afghanistan attacked a convoy and killed several people, including a suspected al Qaeda leader. In November 2002, a Predator UAV in Yemen destroyed a car containing six al Qaeda suspects, including Abu Ali al-Harithi, one of the suspected planners of the attack on the USS Cole in October 2000.

If parts of the war on terrorism are to be fought in places such as Yemen, Sudan, Somalia, and Pakistan, and especially if it is not possible for U.S. ground troops to operate in those countries, UAVs could be key assets for finding and targeting al Qaeda operatives because of their ability to cover large swaths of land for extended periods of time in search of targets. A Predator UAV has a combat radius of 400 nautical miles and can carry a maximum payload of 450 pounds for more than 24 hours. The Congressional Budget Office states that UAVs can provide their users with “sustained, nearly instantaneous video and radar images of an area without putting human lives at risk.”

Armed UAVs offer a cost-effective alternative to ground troops or the use of piloted aircraft to perform missions against identified terrorist targets. One can only wonder what might have happened if the spy Predator that, in the fall of 2000, took pictures of a tall man in white robes surrounded by a group of people (believed by many intelligence analysts to be Osama bin Laden) had instead been an armed Predator capable of immediately striking the target.

In addition to their utility, UAVs are particularly attractive because of their relatively low cost, especially when compared to that of piloted aircraft. Developmental costs for UAVs are about the same as those for a similar piloted aircraft, but procurement costs are substantially less. In FY 2003, the government purchased 25 Predator UAVs for $139.2 million ($5.6 million each), and in FY 2004, it purchased 16 Predators for $210.1 million ($13.1 million each). The FY 2005 budget includes the purchase of nine Predators for $146.6 million ($16.3 million each). (The increases in per-unit cost reflect the arming of more Predators with Hellfire missiles.) Although UAVs are unlikely to completely replace piloted aircraft, a Predator UAV costs a fraction of a tactical fighter aircraft such as an F-15 or F-22, with unit costs of $55 million and $257 million, respectively. O&M costs for UAVs are also expected to be less than for piloted aircraft.
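The per-unit figures in parentheses follow directly from the totals; a minimal check, using only the procurement numbers quoted above:

```python
# Per-unit Predator costs implied by the procurement figures quoted above
# (totals in millions of dollars, from the text).

purchases = {
    "FY 2003": {"quantity": 25, "total_millions": 139.2},
    "FY 2004": {"quantity": 16, "total_millions": 210.1},
    "FY 2005": {"quantity": 9,  "total_millions": 146.6},
}

for year, buy in purchases.items():
    unit_cost = buy["total_millions"] / buy["quantity"]
    print(f"{year}: {buy['quantity']} Predators for ${buy['total_millions']}M "
          f"-> ${unit_cost:.1f}M each")

# For comparison, the unit costs cited in the text for piloted fighters:
print("F-15: $55M per aircraft; F-22: $257M per aircraft")
```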

Thus, the $10 billion in planned spending on UAVs in the next decade (compared to $3 billion in the 1990s) is a smart investment in the war on terrorism. Even doubling the budget to $2 billion a year on average over the next 10 years would make sense and would represent less than 1 percent of an annual defense budget based on a balancer-of-last-resort strategy. The bottom line is that UAVs are a very low-cost weapon that could yield an extremely high payoff in the war on terrorism.

Special operations forces. The al Qaeda terrorist network is a diffuse target with individuals and cells operating in 60 or more countries. U.S. special operations forces are ideally suited for counteracting this threat. Indeed, according to the Special Operations Forces posture statement, counterterrorism is the number one mission of these forces, and they are “specifically organized, trained, and equipped to conduct covert, clandestine, or discreet CT [counterterrorism] missions in hostile, denied, or politically sensitive environments,” including “intelligence operations, attacks against terrorist networks and infrastructures, hostage rescue, [and] recovery of sensitive material from terrorist organizations.”

Secretary of Defense Donald Rumsfeld has been a strong advocate of using special operations forces against terrorist targets. In August 2002, he issued a classified memo to the U.S. Special Operations Command (SOCOM) to capture or kill Osama bin Laden and other al Qaeda leadership. Rumsfeld has also proposed sending these forces into Somalia and Lebanon’s Bekaa Valley, because these lawless areas are thought to be places where terrorists can hide and be safe from U.S. intervention.

As with UAVs, special operations forces are relatively inexpensive. The FY 2005 budget request for SOCOM is $6.5 billion, or only about 1.6 percent of the total defense budget. As with UAVs, the budget for special operations forces could be significantly increased without adversely affecting the overall defense budget. Given the importance and unique capabilities of these forces relative to those of the regular military in the war on terrorism, it would make sense to increase funding for special forces, perhaps doubling SOCOM's budget.

Ever-increasing defense spending is being justified as necessary to fight the war on terrorism. But the war on terrorism is not primarily a conventional military war to be fought with tanks, planes, and ships, and the military threat posed by nation states to the United States does not warrant maintaining a large, forward-deployed military presence around the world. Indeed, there is a relationship between U.S. troops deployed abroad and acts of terrorism against the United States, as the Bush administration has acknowledged. According to former Deputy Defense Secretary Paul Wolfowitz, U.S. forces stationed in Saudi Arabia after the 1991 Gulf War were “part of the containment policy [of Iraq] that has been Osama bin Laden’s principal recruiting device, even more than the other grievances he cites.”

Therefore, a better approach to national security policy would be for the United States to adopt a less interventionist policy abroad and pull back from the Cold War-era extended security perimeter (with its attendant military commitments overseas). Rather than being the balancer of power in disparate regions around the world, the United States should allow countries in those regions to establish their own balance-of-power arrangements. A balancer-of-last-resort strategy would help the United States distinguish between crises and conflicts vital to its interests and those that do not threaten U.S. national security.

The Global Water Crisis

People living in the United States or any industrialized nation take safe drinking water for granted. But in much of the developing world, access to clean water is not guaranteed. According to the World Health Organization, more than 1.2 billion people lack access to clean water, and more than 5 million people die every year from contaminated water or water-related diseases.

The world’s nations, through the United Nations (UN), have recognized the critical importance of improving access to clean water and ensuring adequate sanitation and have pledged to cut the proportion of people without such access by half by 2015 as part of the UN Millennium Development Goals. However, even if these goals are reached, tens of millions of people will probably perish from tainted water and water-borne diseases by 2020.

Although ensuring clean water for all is a daunting task, the good news is that the technological know-how exists to treat and clean water and convey it safely. The international aid community and many at-risk nations are already working on a range of efforts to improve access to water and sanitation.

It is clear, however, that more aid will be needed, although the estimates of how much vary widely. There is also considerable debate about the proper mix of larger, more costly projects and smaller, more community-scale projects. Still, it seems that bringing basic water services to the world’s poorest people could be done at a reasonable price—probably far less than consumers in developed countries now spend on bottled water.

The global water crisis is a serious threat, and not only to those who suffer, get sick, and die from tainted water or water-borne disease. There is also a growing realization that the water crisis undercuts economic growth in developing nations, can worsen conflicts over resources, and can even affect global security by worsening conditions in states that are close to failure.

Mounting death toll

According to a Pacific Institute analysis, between 34 and 76 million people could perish because of contaminated water or water-related diseases by 2020, even if the UN Millennium Development Goals are met.

The spending gap

Despite the toll of the global water crisis, industrial nations spend little on overseas development efforts such as water and sanitation projects. Only 5 of 22 nations have met the modest UN goal of spending 0.7 percent of gross national income on overseas development assistance. And only a fraction of all international assistance is spent on water and sanitation projects. From 1999 to 2001, an average of only $3 billion annually was provided for water supply and sanitation projects.

Water from a bottle

Although tap water in most of the developed world is clean and safe, millions of consumers drink bottled water for taste, convenience, or because of worries about water quality. Comprehensive data on bottled water consumption in the developing world are scarce. However, some water experts are worried that increased sales of bottled water to the developing world will reduce pressure on governments to provide basic access to non-bottled water. Others are concerned that the world’s poorest people will have to spend a significant amount of their already low incomes to purchase water.

Too dear a price?

Consumers spend nearly $100 billion annually on bottled water, according to Pacific Institute estimates. Indeed, consumers often pay several hundred to a thousand times as much for bottled water as they do for reliable, high-quality tap water, which costs $0.50 per cubic meter in California. This disparity is often worse in developing nations, where clean water is far out of reach for the poorest people.
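For a rough sense of that markup, the sketch below compares the cited $0.50-per-cubic-meter tap-water price with an assumed bottled-water retail price of $0.50 per liter; the bottled price is an illustrative assumption, not a figure from the text.

```python
# Rough comparison of bottled-water and tap-water costs.
# The tap-water price is the California figure cited above;
# the bottled-water retail price is an illustrative assumption.

tap_cost_per_m3 = 0.50                       # dollars per cubic meter (cited)
tap_cost_per_liter = tap_cost_per_m3 / 1000  # 1 cubic meter = 1,000 liters

bottled_cost_per_liter = 0.50                # dollars per liter (assumed)

markup = bottled_cost_per_liter / tap_cost_per_liter
print(f"Tap water:     ${tap_cost_per_liter:.4f} per liter")
print(f"Bottled water: ${bottled_cost_per_liter:.2f} per liter (assumed)")
print(f"Markup: roughly {markup:,.0f} times the cost of tap water")
```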

Harnessing Nanotechnology to Improve Global Equity

Developing countries usually find themselves on the sidelines watching the excitement of technological innovation. The wealthy industrialized nations typically dominate the development, production, and use of new technologies. But many developing countries are poised to rewrite the script in nanotechnology. They see the potential for nanotechnology to meet several needs of particular value to the developing world and seek a leading role for themselves in the development, use, and marketing of these technologies. As the next major technology wave, nanotechnology will be revolutionary in a social and economic as well as a scientific and technological sense.

Developing countries are already aware that nanotechnology can be applied to many of their pressing problems, and they realize that the industrialized countries will not place these applications at the top of their to-do list. The only way to be certain that their needs are addressed is for less industrialized nations themselves to take the lead in developing those applications. In fact, many of these countries have already begun to do so. The wealthy nations should see this activity as a potential catalyst for the type of innovative research and economic development sorely needed in these countries. Strategic help from the developed world could have a powerful impact on the success of this effort. Planning this assistance should begin with an understanding of developing-country technology needs and knowledge of the impressive R&D efforts that are already under way.

To provide strategic focus to nanotechnology efforts, we recently carried out a study using a modified version of the Delphi method and worked with a panel of 63 international experts, 60 percent of whom were from developing countries, to identify and rank the 10 applications of nanotechnology most likely to benefit the less industrialized nations in the next 10 years. The panelists were asked to consider the impact, burden, appropriateness, feasibility, knowledge gaps, and indirect benefits of each application proposed. Our results, shown in Table 1, reveal a high degree of consensus on the top four applications: Every panelist cited at least one of the top four applications in their personal top-four ranking, and the majority cited at least three (a simple sketch of this kind of overlap tally appears after Table 1).

Table 1.
Top 10 Applications of Nanotechnology for Developing Countries

1. Energy storage, production, and conversion
2. Agricultural productivity enhancement
3. Water treatment and remediation
4. Disease diagnosis and screening
5. Drug delivery systems
6. Food processing and storage
7. Air pollution and remediation
8. Construction
9. Health monitoring
10. Vector and pest detection and control

Source: F. Salamanca-Buentello et al., "Nanotechnology and the Developing World," PLoS Medicine 2 (2005): e97.
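As referenced above, the consensus claim is essentially an overlap count between each panelist's personal top four and the overall top four. The sketch below illustrates the tally with invented rankings for three hypothetical panelists; only the names of the top-four applications come from Table 1.

```python
# Hypothetical illustration of the consensus tally described above: for each
# panelist, count how many of the overall top-four applications appear in that
# panelist's personal top-four ranking. Panelist data here are invented.

overall_top_four = {
    "Energy storage, production, and conversion",
    "Agricultural productivity enhancement",
    "Water treatment and remediation",
    "Disease diagnosis and screening",
}

panelist_top_fours = [  # invented personal rankings for three hypothetical panelists
    {"Energy storage, production, and conversion", "Water treatment and remediation",
     "Drug delivery systems", "Food processing and storage"},
    {"Agricultural productivity enhancement", "Disease diagnosis and screening",
     "Water treatment and remediation", "Energy storage, production, and conversion"},
    {"Disease diagnosis and screening", "Construction",
     "Health monitoring", "Energy storage, production, and conversion"},
]

for i, personal in enumerate(panelist_top_fours, start=1):
    overlap = len(personal & overall_top_four)
    print(f"Panelist {i}: {overlap} of the overall top four in personal top four")
```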

To further assess the impact of nanotechnology on sustainable development, we asked ourselves how well these nanotechnology opportunities matched up with the eight United Nations (UN) Millennium Development Goals, which aim to promote human development and encourage social and economic sustainability. We found that nanotechnology can make a significant contribution to five of the eight goals: eradicating extreme poverty and hunger; ensuring environmental sustainability; reducing child mortality; improving maternal health; and combating AIDS, malaria, and other diseases. A detailed look at how nanotechnology could be beneficial in the three most commonly mentioned areas is illustrative.

Energy storage, production, and conversion. The growing world population needs cheap noncontaminating sources of energy. Nanotechnology has the potential to provide cleaner, more affordable, more efficient, and more reliable ways to harness renewable resources. The rational use of nanotechnology can help developing countries to move toward energy self-sufficiency, while simultaneously reducing dependence on nonrenewable, contaminating energy sources such as fossil fuels. Because there is plenty of sunlight in most developing countries, solar energy is an obvious source to consider. Solar cells convert light into electric energy, but current materials and technology for these cells are expensive and inefficient in making this conversion. Nanostructured materials such as quantum dots and carbon nanotubes are being used for a new generation of more efficient and inexpensive solar cells. Efficient solar-derived energy could be used to power the electrolysis of water to produce hydrogen, a potential source of clean energy. Nanomaterials also have the potential to increase by several orders of magnitude the efficiency of the electrolytic reactions.

One of the limiting factors to the harnessing of hydrogen is the need for adequate storage and transportation systems. Because hydrogen is the smallest element, it can escape from tanks and pipes more easily than can conventional fuels. Very strong materials are needed to keep hydrogen at very low temperature and high pressure. Novel nanomaterials can do the job. Carbon nanotubes have the capacity to store up to 70 percent of hydrogen by weight, an amount 20 times larger than that in currently used compounds. Additionally, carbon nanotubes are 100 times stronger than steel at one-sixth the weight, so theoretically, a 100-pound container made of nanotubes could store at least as much hydrogen as could a 600-pound steel container, and its walls would be 100 times as strong.
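Taking the material figures in the paragraph above at face value (100 times the strength of steel at one-sixth the weight), the container comparison is a simple back-of-the-envelope calculation; the sketch below restates it and is not an engineering analysis.

```python
# Back-of-the-envelope restatement of the container comparison above, taking the
# cited material properties at face value (not an engineering calculation).

strength_vs_steel = 100.0    # nanotube material: "100 times stronger than steel"
weight_vs_steel = 1.0 / 6.0  # "at one-sixth the weight" for the same volume

steel_container_lb = 600.0     # steel container weight from the text
nanotube_container_lb = 100.0  # nanotube container weight from the text

# Wall material volume is proportional to weight divided by relative density.
steel_wall_volume = steel_container_lb / 1.0
nanotube_wall_volume = nanotube_container_lb / weight_vs_steel

volume_ratio = nanotube_wall_volume / steel_wall_volume   # 1.0: same wall volume
strength_ratio = volume_ratio * strength_vs_steel         # 100x stronger walls

print(f"Nanotube wall volume relative to steel: {volume_ratio:.0f}x")
print(f"Nanotube wall strength relative to steel: {strength_ratio:.0f}x")
```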

Agricultural productivity enhancement. Nanotechnology can help develop a range of inexpensive applications to increase soil fertility and crop production and thus to help eliminate malnutrition, a contributor to more than half the deaths of children under five in developing countries. We currently use natural and synthetic zeolites, which have a porous structure, in domestic and commercial water purification, softening, and other applications. Using nanotechnology, it is possible to design zeolite nanoparticles with pores of different sizes. These can be used for more efficient, slow, and thorough release of fertilizers; or they can be used for more efficient livestock feeding and delivery of drugs. Similarly, nanocapsules can release their contents, such as herbicides, slowly and in a controlled manner, increasing the efficacy of the substances delivered.

Water treatment and remediation. One-sixth of the world’s population lacks access to safe water supplies; one-third of the population of rural areas in Africa, Asia, and Latin America has no clean water; and 2 million children die each year from water-related diseases, such as cholera, typhoid, and schistosomiasis. Nanotechnology can provide inexpensive, portable, and easily cleaned systems that purify, detoxify, and desalinate water more efficiently than do conventional bacterial and viral filters. Nanofilter systems consist of “intelligent” membranes that can be designed to filter out bacteria, viruses, and the great majority of water contaminants. Nanoporous zeolites, attapulgite clays (which can bind large numbers of bacteria and toxins), and nanoporous polymers (which can bind 100,000 times more organic contaminants than can activated carbon) can all be used for water purification.

Nanomagnets, also known as “magnetic nanoparticles” and “magnetic nanospheres,” when coated with different compounds that have a selective affinity for diverse contaminating substances, can be used to remove pollutants from water. For example, nanomagnets coated with chitosan, a readily available substance derived from the exoskeleton of crabs and shrimps that is currently used in cosmetics and medications, can be used to remove oil and other organic pollutants from aqueous environments. Brazilian researchers have developed superparamagnetic nanoparticles that, coated with polymers, can be spread over a wide area in dustlike form; these modified nanomagnets would readily bind to the pollutant and could then be recovered with a magnetic pump. Because of the size of the nanoparticles and their high affinity for the contaminating agents, almost 100 percent of the pollutant would be removed. Finally, the magnetic nanoparticles and the polluting agents would be separated, allowing for the reuse of the magnetic nanoparticles and for the recycling of the pollutants. Also, magnetite nanoparticles combined with citric acid, which binds metallic ions with high affinity, can be used to remove heavy metals from soil and water.

Developing-country activities

Understanding how selected developing countries are harnessing nanotechnology can provide lessons for other countries and for each other. These lessons can be used to provide heads of state and science and technology ministers in less industrialized countries with specific guidance and good practices for implementing innovation policies that direct the strengths of the public and private sectors toward the development and use of nanotechnology to address local sustainable development needs. The actions of developing countries themselves will ultimately determine whether nanotechnology will be successfully harnessed in the developing world.

We found little extant useful information on nanotechnology research in developing countries, so we conducted our own survey. This preliminary study used information we could collect on the Internet, from e-mail exchanges with experts, and from other publicly available documents. We were able to categorize countries based on the degree of government support for nanotechnology, on the presence or absence of a formal government funding program, on the level of industry involvement, and on the amount of research being done in academic institutions and research groups. Our results revealed a surprising amount of nanotechnology R&D activity (Table 2). Our plan now is to conduct individual case studies of developing countries to obtain a greater depth of understanding. Below is some detailed information we have acquired in the preliminary study.

China. China has a strong Nanoscience and Nanotechnology National Plan, a National Steering Committee for Nanoscience and Nanotechnology, and a National Nanoscience Coordination Committee. Eleven institutes of the Chinese Academy of Sciences are involved in major nanotechnology research projects funded partly by the Knowledge Innovation Program. The Chinese Ministry of Science and Technology actively supports several nanoscience and nanotechnology initiatives. The Nanometer Technology Center in Beijing is part of China's plan to establish a national nanotechnology infrastructure and research center; it involves recruiting scientists, protecting intellectual property rights, and building international cooperation in nanotechnology. China's first nanometer technology industrial base is located in the Tianjin economic and development area. Haier, one of the country's largest home appliance producers, has incorporated a series of nanotechnology-derived materials and features into refrigerators, televisions, and computers. Industry and academic researchers have worked together to produce nanocoatings for textiles that render silk, woolen, and cotton clothing water- and oilproof, prevent clothing from shrinking, and protect silk from discoloration. Nanotech Port of Shenzhen is the largest manufacturer of single-walled and multi-walled carbon nanotubes in Asia. Shenzheng Chengying High-Tech produces nanostructured composite anti-ultraviolet powder, nanostructured composite photocatalyst powder, and high-purity nanostructured titanium dioxide. The last two nanomaterials are being used to catalyze the destruction of contaminants using sunlight.

Table 2.
Selected Developing Countries and Their Nanotechnology Activity

Front Runners (China, South Korea, India): national government funding program; nanotechnology patents; commercial products on the market or in development.

Middle Ground (Thailand, Philippines, South Africa, Brazil, Chile): development of a national government funding program; some form of existing government support (e.g., research grants); limited industry involvement; numerous research institutions.

Up and Comers (Argentina, Mexico): organized government funding not yet established; industry not yet involved; research groups funded through various science and technology institutions.

India. Indian nanotechnology efforts cover a wide spectrum of areas, including microelectromechanical systems (MEMS), nanostructure synthesis and characterization, DNA chips, quantum computing electronics, carbon nanotubes, nanoparticles, nanocomposites, and biomedical applications of nanotechnology. The Indian government catalyzed, through the Department of Science and Technology, the National Nanotechnology Program, which is funded with $10 million over 3 years. India has also created a Nanomaterials Science and Technology Initiative and a National Program on Smart Materials; the latter will receive $15 million over 5 years. This program, which is focused on materials that respond quickly to environmental stimuli, is jointly sponsored by five government agencies and involves 10 research centers. The Ministry of Defence is developing projects on nanostructured magnetic materials, thin films, magnetic sensors, nanomaterials, and semiconductor materials. India has also formed a joint nanotechnology initiative with the European Union (EU). Several academic institutions are pursuing nanotechnology R&D, among them the Institute of Smart Materials Structures and Systems of the Indian Institute of Science; the Indian Institute of Technology; the Shanmugha Arts, Science, Technology, and Research Academy; the Saha Institute of Nuclear Physics; and the Universities of Delhi, Pune, and Hyderabad. The Council for Scientific and Industrial Research, India’s premier R&D body, holds numerous nanotechnology-related patents, including novel drug delivery systems, production of nanosized chemicals, and high-temperature synthesis of nanosized titanium carbide. In the industrial sector, Nano Biotech Ltd. is doing research in nanotechnology for multiple diagnostic and therapeutic uses. Dabur Research Foundation is involved in developing nanoparticle delivery systems for anticancer drugs. Similarly, Panacea Biotec has made advances in novel controlled-release systems, including nanoparticle drug delivery for eye diseases, mucoadhesive nanoparticles, and transdermal drug delivery systems. CranesSci MEMS Lab, a privately funded research laboratory located at the Department of Mechanical Engineering of the Indian Institute of Science, is the first privately funded MEMS institution in India; it carries out product-driven research and creates intellectual property rights in MEMS and related fields with an emphasis on social obligations and education.

Brazil. The government of Brazil considers nanotechnology a strategic area. The Brazilian national nanotechnology initiative started in 2001, bringing together existing high-level nanotechnology research groups at a number of academic institutions and national research centers. Four research networks have been created with initial funds provided by the Ministry of Science and Technology through the National Council for Scientific and Technological Development. Two virtual institutes operating in the area of nanoscience and nanotechnology have also been created through the national program. The total budget for nanoscience and nanotechnology for 2004 was about $7 million; the predicted budget for 2004-2007 is around $25 million. About 400 scientists are working on nanotechnology in Brazil. Activities include a focus on nanobiotechnology, novel nanomaterials, nanotechnology for optoelectronics, biosensors, tissue bioengineering, biodegradable nanoparticles for drug delivery, and magnetic nanocrystals.

South Africa. South African research in nanotechnology currently focuses on applications for social development and industrial growth, including synthesis of nanoparticles, development of better and cheaper solar cells, highly active nanophase catalysts and electrocatalysts, nanomembrane technology for water purification and catalysis, fuel-cell development, synthesis of quantum dots, and nanocomposites development. The South African Nanotechnology Initiative (SANi), founded in 2003, aims to build a critical mass of universities, science councils, and industrial companies that will focus on those areas of nanotechnology in which South Africa can have an advantage. To this end, SANi has an initial budget of about $1.3 million. Total spending on nanotechnology in South Africa is about $3 million. SANi is also interested in promoting public awareness of nanotechnology and assessing the impact of nanotechnology on the South African population. There are currently 11 universities, 5 research organizations (including the Water Research Commission), and 10 private companies actively participating in this initiative. The areas of interest of the private sector in South Africa appear to be chemicals and fuels, energy and telecommunications, water, mining, paints, and paper manufacturing.

Mexico. Mexico has 13 centers and universities involved in nanotechnology research. In 2003, the National Council of Science and Technology spent $12.5 million on 62 projects in 19 institutions. There is strong interest in nanoparticle research for optics, microelectronics, catalysis, hard coatings, and medical electronics. Several groups have focused on fullerenes (in particular, carbon nanotubes), nanowires, molecular sieves for ultrahard coatings, catalysis, nanocomposites, and nanoelectronics. Novel polymer nanocomposites are being developed for high-performance materials, controlled drug release, nanoscaffolds for regenerative medicine applications, and novel dental materials. Last year, Mexican researchers, along with the Mexican federal government and private investors, unveiled a project for the creation of the $18 million National Laboratory of Nanotechnology, which will be under the aegis of the National Institute of Astrophysics, Optics, and Electronics. The initiative was funded by the National Council of Science and Technology, several state governments, and Motorola.

Biotechnology lessons

Lessons from successful health-biotechnology innovation in developing nations, by analogy, might also offer some guidance for nanotechnology innovation. We recently completed a 3-year empirical case study of the health-biotechnology innovation systems in seven developing countries: Cuba, Brazil, South Africa, Egypt, India, China, and South Korea.

The study identified many key factors involved in each of the success stories, such as the focus on the use of biotechnology to meet local health needs. For instance, South Africa has prioritized research on HIV/AIDS, its largest health problem, and developments are under way for a vaccine against the strain most prevalent in the country; Egypt is responding to its insulin shortage by focusing its R&D efforts on the drug; Cuba developed the world’s only meningitis B vaccine as a response to a national outbreak; and India has reduced the cost of its recombinant hepatitis B vaccine to well below that in the developed world. Publications on health research in each of these countries follow the same trend of focusing on local health needs.

Political will is another important factor for establishing a successful health-biotechnology sector, because long-term government support was integral in all seven case studies. In efforts to promote health care biotechnology, governments have developed specific policies, provided funding and recognition for the importance of research, responded to the challenges of brain drain, and provided biotechnology enterprises with incentives to overcome problematic economic conditions. Close linkages and active communication are important as well. Whereas in some countries, such as Cuba, strong collaboration and linkages yielded successful health-biotechnology innovation, the lack of these factors in China, Brazil, and Egypt has resulted in less impressive innovation. Defining niche areas, such as vaccines, emerged as another key factor in establishing a successful health-biotechnology sector. Some of the countries have also used their competitive advantages, such as India's strong past focus on generic drug development.

Our study identified private-sector development as essential for the translation of health-biotechnology knowledge into products and services. South Korea significantly surpassed all other countries in this respect, with policies in place to assist technology transfer and allow university professors to create private firms. China has also promoted enterprise formation, converting existing research institutions into companies. To further explore the role of the private sector, we are currently examining how the domestic health-biotechnology sector in developing countries contributes to addressing local health needs and what policies or practices could make that contribution more effective.

The 2004 report of the UN Commission on Private Sector and Development, Unleashing Entrepreneurship: Making Business Work for the Poor, emphasized the important economic role of the domestic private sector in developing countries. The commission, chaired by Paul Martin and Ernesto Zedillo, highlighted how managerial, organizational, and technological innovation in the private sector, particularly the small and mid-sized enterprise segment, can improve the lives of the poor by empowering citizens and contributing to economic growth. The work of the commission also emphasized the lack of knowledge about best practices and the need for more sustained research and analysis of what works and what does not when attempting to harness the capabilities of the private sector in support of development.

The catalytic challenge

Although the ultimate success of harnessing nanotechnology to improve global equity rests with developing countries themselves, there are significant actions that the global community can take in partnership with developing countries to foster the use of nanotechnology for development. These include:

Addressing global challenges. We have proposed an initiative called Addressing Global Challenges Using Nanotechnology, which can catalyze the use of nanotechnology to address critical sustainable development problems. In the spirit of the concept of Grand Challenges, we are issuing a call to arms for investigators to confront one or more bottlenecks in an imagined path to solving a significant development problem (or preferably several) by seeking very specific scientific or technological breakthroughs that would overcome this obstacle. A scientific board, similar to the one created for the Foundation for the U.S. National Institutes of Health/Bill and Melinda Gates Foundation's Grand Challenges in Global Health, with strong representation of developing countries, would need to be established to provide guidance and oversee the program. The top 10 nanotechnology applications identified above can be used as a roadmap to define the grand challenges.

Helping to secure funding. Two sources of funding, private and public, would finance our initiative. In February 2004, Canadian Prime Minister Paul Martin proposed that 5 percent of Canada's R&D investment be used to address developing world challenges. If all industrialized nations adopted this target, part of these funds could be directed toward addressing global challenges using nanotechnology. In addition, developed-country governments should provide incentives for their companies to direct a portion of their R&D toward the development of nanotechnology in less industrialized nations.

Forming effective North-South collaborations. There are already promising examples of North-South partnerships. For instance, the EU has allocated 285 million Euros through its 6th Framework Programme (FP6) for scientific and technological cooperation with third-partner countries, including Argentina, Chile, China, India, and South Africa. A priority research area under FP6 is nanotechnology and nanoscience. Another example is the U.S. funding of nanotechnology research in Vietnam, as well as the U.S.-Vietnam Joint Committee for Science & Technology Cooperation. IndiaNano, a platform created jointly by the Indian-American community in Silicon Valley and Indian experts involved in nanotechnology R&D, aims to establish partnerships between Indian academic, corporate, government, and private institutions in order to support nanotechnology R&D in India and to coordinate the academic, government, and corporate sectors with entrepreneurs, early-stage companies, investors, joint ventures, service providers, startup ventures, and strategic alliances.

Facilitating knowledge repatriation by diasporas. We have recently begun a diaspora study to understand in depth how emigrants can more systematically contribute to innovation and development in their countries of origin. A diaspora is formally defined as a community of individuals from a specific developing country who left home to attend school or find a better job and now work in industrialized nations in academia, research, or industry. This movement of highly educated men and women is often described as a “brain drain” and is usually seen as having devastating effects in the developing world. Rather than deem this migration, which is extremely difficult to reverse, an unmitigated disaster, some developing countries have sought ways to tap these emigrants’ scientific, technological, networking, management, and investment capabilities. India actively encourages its “nonresident Indians” diaspora to make such contributions to development back home, and these people have made a valuable contribution to the Indian information technology and communications sector. We foresee a significant role for diasporas in the development of nanotechnology in less industrialized nations.

Emphasizing global governance. We propose the formation of an international network on the assessment of emerging technologies for development. This network should include groups that will explore the potential risks and benefits of nanotechnology, incorporating developed- and developing-world perspectives, and examine the effects of a potential "nanodivide." The aim of the network would be to facilitate a more informed policy debate and advocate for the interests of those in developing countries. Addressing the legitimate concerns associated with nanotechnology can foster public support and allow the technology platform to progress in a socially responsible manner. Among the issues to be discussed: Who will control the means of production, and who will assess the risks and benefits? What will be the effects of military and corporate control over nanotechnology? How will the incorporation of artificial materials into human systems affect health, security, and privacy? How long will nanomaterials remain in the environment? How readily do nanomaterials bind to environmental contaminants? Will these particles move up through the food chain, and what will be their effect on humans? There are also potential risk management issues specific to developing countries: displacement of traditional markets, the imposition of foreign values, the fear that technological advances will be extraneous to development needs, and the lack of resources to establish, monitor, and enforce safety regulations. Addressing these challenges will require active participation on the part of developing countries. In developing these networks, the InterAcademy Council of the world's science academies could play a role in convening groups of the world's experts who can provide informed guidance on these issues.

The inequity between the industrialized and developing worlds is arguably the greatest ethical challenge facing us today. By some measures, the gap is even growing. For example, life expectancies in most industrialized nations are 80 years and rising, whereas in many developing nations, especially in sub-Saharan Africa where HIV/AIDS is rampant, life expectancies are 40 years and falling.

Although science and technology are not a magic bullet and cannot address problems such as geography, poor governance, and unfair trade practices, they have an essential role in confronting these challenges, as explained in the 2005 report of the UN Millennium Project Task Force on Science, Technology and Innovation (http://www.unmillenniumproject.org/documents/Science-complete.pdf). Some will argue that the focus on cutting-edge developments in nanotechnology is misplaced when developing countries have yet to acquire more mature technologies and are still struggling to meet basic needs such as food and water availability. This is a short-sighted view. All available strategies, from the simplest to the most complex, should be pursued simultaneously. Some will deal with the near term, others the long-term future. What was cutting-edge yesterday is low-tech today, and today’s high-tech breakthrough will be tomorrow’s mass-produced commodity.

Each new wave of science and technology innovation has the potential to expand or reduce the inequities between industrialized and developing countries in health, food, water, energy, and other development parameters. Information and communication technology produced a digital divide, but this gap is now closing; genomics and biotechnology spawned the genomics divide, and we will see if it contracts. Will nanotechnology produce the nanodivide? Resources might be directed primarily to nanosunscreens, nanotrousers, and space elevators to benefit the 600 million people in rich countries, but that path is not predetermined. Nanotechnology could soon be applied to address the critical health, food, water, and energy needs of the 5 billion people in the developing world.

Archives – Spring 2005

Albert Einstein Memorial Statue, Copyright 1978 by Robert Berks

Albert Einstein

One hundred years ago, Albert Einstein received his doctorate in physics from the University of Zurich and quickly made an indelible mark on the field. He published three papers that year. The first explained the photoelectric effect by suggesting that light be thought of as discrete packets, or quanta, of energy. (In 1921 he was awarded the Nobel Prize for this work.) The second presented his special theory of relativity and contained the famous equation E=mc2, which provides the foundation for the development of atomic energy. The third virtually demonstrated the reality of atoms by showing that Brownian motion—the irregular movement of particles suspended in a liquid or a gas—is a consequence of molecular motion. At the age of 26 he had established himself as a scientist for the ages.

He went on to a brilliant career that included at least one other epochal paper, his 1916 discussion of gravitational fields in "The Foundation of the General Theory of Relativity." When Hitler and the Nazis came to power in Germany in 1933, Einstein emigrated to the United States, where he joined the newly formed Institute for Advanced Study in Princeton. He died in 1955.

Einstein first visited the National Academy of Sciences in 1921, when he spoke at the annual meeting, and he was named a foreign associate in 1922. He was elected to full membership in 1942, after becoming a U.S. citizen. In 1979, to commemorate the centennial of Einstein’s birth, the National Academy of Sciences erected on its grounds the four-ton bronze statue of Einstein that is pictured above.

Copyright reconsidered

Digitization is reshaping industries that are based on copyright, such as music, movies, books, and journals. Although the ramifications of digitization are widespread, the main focus of current interest is in industries such as movies and music that have large audiences.

Not all of the effects of digitization are universally admired. Peer-to-peer downloading of music files has reduced CD sales, and the recording industry, in an action reeking of desperation, has brought thousands of lawsuits against heavy users of file-sharing networks. The U.S. Supreme Court is about to rule on the industry’s attempt to shut down some file-sharing services. Movie producers, worrying that they too will suffer a slide in sales, have made protection against unauthorized copying a key consideration in choosing the next-generation DVD technology. Congress is considering several pieces of legislation attempting to “solve” these problems.

Into this environment strides William W. Fisher III, a copyright expert at Harvard Law School. His book boldly proposes a complete overhaul of the current copyright system. He does not flinch from suggesting concrete changes that he believes will lead not only to a superior copyright regime but also to a healthier society, by increasing what he refers to as “semiotic democracy,” an idea that will be explained below. Fisher’s views are shared by many, though not all, legal theorists, and their arguments are taken seriously in policy debates. Much of this discussion has not caught the attention of other academics or the public at large, so this book performs a valuable service in providing specific proposals for others to examine.

The first part of the book discusses the current copyright system and its many applications in the sound recording and movie industries. Fisher analyzes copyright before 1990 and then explores the difficulties brought about by the growth in digitization in the 1990s. He concludes that the current situation of unbridled pirating and industry lawsuits against file-sharers is untenable.

The entire book is well written, but I particularly recommend the first three chapters, which include a primer on the complex nature of copyright in these industries. A lawyer by training, Fisher nevertheless explains many economic concepts used in contemporary analyses of copyright.

The main thrust of the book, presented in chapters 4 to 6, compares three alternative scenarios that he believes would improve the current copyright regime: strengthening copyright protection, adding regulations to that stronger copyright regime, and, his favorite alternative, a solution based on government financing.

This crucial portion of the book is considerably weaker than the first chapters. There are numerous minor problems with his economic analysis, but it is his big-picture explication that is most questionable.

In chapter 4, he examines the possibility of increasing copyright protection along the lines of protection for physical property. Unlike owners of physical property, copyright owners are often forced (usually by compulsory licenses) to make their products available at fixed prices (for example, cable retransmission of television broadcasts and jukebox playing of records) or free (such as radio broadcasts of records). Owners of traditional property have far fewer restrictions. Furthermore, criminal sanctions with government enforcement are typical for property theft, whereas copyright sanctions are often civil, and the costs of detection and prosecution are usually paid by the copyright owners.

Fisher analyzes the pros and cons of protecting copyright in the way that physical property is protected and concludes that this strengthening of copyright would be "superior to our current state of affairs"—unless manufacturers of electronics goods were required to "recognize and enforce usage restrictions embedded in digital recordings by the owners of the copyrights therein." In other words, government-mandated technologies (such as the "broadcast flag" intended to limit copying of high-definition television programs) are so harmful, in his opinion, that they swamp any positive impacts of stronger copyright.

As a libertarian, I sympathize with Fisher's opposition to government mandates, although such thinking hasn't won the day in numerous instances such as seatbelts or aspirin-bottle covers. Fisher's concern, however, is based on something quite different. He fears that these mandates would weaken the "end-to-end" principle of the Internet (which states that most of the heavy work should be done at the edge of the network, not within the network), a topic that, like semiotic democracy, receives only a few paragraphs and is thus difficult to evaluate. Although Fisher admits that requiring anticopying technology in DVD recorders doesn't seem to endanger the Internet in the near term, he suggests that in the future it would cause dire harm. He describes this harm in only the vaguest sense, although in a key paragraph he suggests that it will reduce "variety, experimentation, and freedom." This echoes his description of semiotic democracy, which he declares leads to a "more variegated … stimulating … collaborative and playful" world. The goal of facilitating collaboration and the creative reuse of published materials turns out to be the driving force in his analysis.

In chapter 5, Fisher sketches the outlines of a music industry with stronger copyright but additional government regulation intended to prevent copyright owners from exercising too much control. This is where the values underlying his argument finally start to become clear. For example, he suggests that new forms of price discrimination by copyright owners (charging different consumers different prices for the same product) be regulated because they are “noxious.” Although price discrimination (for example, higher markups on hardcover books than paperbacks and first-run movie prices higher than those for later exhibitions) is common, digital technology might allow copyright owners to price the item far more precisely than they have in the past. For example, a copyright owner could require a consumer to pay each time a recording is played. Economic analysis often suggests that price discrimination would enhance efficiency, particularly if carried out with sufficient precision.

Fisher attempts to explain why price discrimination by copyright owners would be harmful in spite of these economic arguments. He first suggests that consumers might be less inclined, in their role as potential creators, to give their creations freely to society, apparently because price discrimination would be reflective of a selfish society. Price discrimination, therefore, is bad because it will lead to fewer (free) creative works. Yet elsewhere in the book he expresses concern that successful creators already make too much money, wastefully luring too many individuals into these endeavors. Because price discrimination will make creators richer, Fisher fears it will exacerbate this wasteful overabundance of creators, which is his second strike against price discrimination.

Any seeming inconsistency is resolved once one understands that Fisher envisions two distinct types of creative works: paid and unpaid. Stronger copyright, he believes, will generate too many people wanting to become professional (paid) creators but too few wishing to be amateur (unpaid) creators.

There is also a strong preference in this view for derivative works based on changes to earlier works as opposed to original works. A Jim Carrey movie based on the Dr. Seuss book How the Grinch Stole Christmas is a traditional derivative work, but Fisher appears far more concerned with less commercial derivative works. He provides several examples of derivative works that he believes exemplify semiotic democracy. These include “mashes” of records, where, for example, the music of one CD is mixed with the vocals of another (the most famous being the “Grey Album” by DJ Danger Mouse, which mixed the Beatles’ “White Album” with the “Black Album” of rapper Jay-Z). They also include rearrangements of movie scenes (try Googling “phantom edit” to discover 8,000 web pages discussing the “improved” version of Star Wars: The Phantom Menace, which pales in comparison to the 97,000 pages discussing the “Grey Album”). It appears that semiotic democracy requires broad but thin creativity that can be accomplished in a short time in anyone’s bedroom or study.

Fisher’s final reason for rejecting price discrimination echoes this theme. He fears that those who wish to use or modify someone else’s work will be charged higher prices than are ordinary consumers, and this might deter the creation of derivative works. Again, his concern is about noncommercial use, since the makers of the Grinch movie expected to pay the Seuss estate for the rights.

The more one reads Fisher’s reasons for preferring one policy over another, the more apparent it becomes that the driving force is not a careful weighing of the various economic costs and benefits he presents so meticulously, but an attempt to promote semiotic democracy.

Let bureaucrats decide

Which brings us to Fisher’s preferred solution. He suggests that we scrap our current system and instead use federal funds to pay creators for their works, which will then be made freely available on the Internet. Taxpayers who do not download movies or music will in effect be subsidizing those who do. The distribution of revenues to creators would be based on surveys of usage, with longer and more popular works receiving greater payments than shorter or less popular works. In this egalitarian world, the tastes of those who care very much about music are to be given no more weight than those of people who just use music to drown out background noise.

Some parts of this idea are based on the current workings of ASCAP and BMI, the performing rights societies representing composers. A court in New York determines how much total compensation the composers as a group receive from television and radio broadcasters, and the societies distribute the money based on formulas they create, which tend to use length of song as a factor.
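To make the mechanics of such a scheme concrete, the sketch below shows one way a usage-weighted, pro-rata payout could be computed. The weighting by plays and length, the dollar figures, and the example works are assumptions chosen to mirror the factors mentioned above; they are not Fisher’s formula or the one ASCAP and BMI actually use.

```python
# Illustrative only: a pro-rata payout weighted by estimated plays and length,
# echoing the "usage survey" and "length of song" factors described above.

def distribute_fund(total_fund, works):
    """Split total_fund among works in proportion to plays * duration_minutes."""
    weights = {title: plays * minutes for title, (plays, minutes) in works.items()}
    total_weight = sum(weights.values())
    return {title: total_fund * weight / total_weight for title, weight in weights.items()}

if __name__ == "__main__":
    # Hypothetical survey data: (estimated plays, length in minutes).
    works = {
        "popular 4-minute single": (1_000_000, 4),
        "niche 60-minute film": (50_000, 60),
    }
    for title, payment in distribute_fund(100_000_000, works).items():
        print(f"{title}: ${payment:,.0f}")
```

On these made-up numbers the single receives about $57 million and the film about $43 million, which illustrates the point above: every recorded use counts the same, whether the listener is a devoted fan or merely drowning out background noise.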

The most difficult economic problem with such a system is determining the total amount of money the government should spend on artistic creations. Fisher largely sidesteps how this is to be accomplished. As someone who has testified at performing rights hearings, I can assure you that the process can be quite capricious. But that does not really matter to Fisher.

Remarkably, he states that a “socially optimal output” should be rejected as a goal, because it would “draw an excessively large number of workers into the industry.” This is something of a non sequitur, since “socially optimal” means, well, socially optimal.

After discarding the goal of market efficiency, Fisher suggests replacing it with the goal of a “flourishing entertainment industry.” How would the government office charged with managing intellectual property make this assessment? He states: “If … the industry seemed starved, the office would enrich … if it seemed flush, the office would constrict … the flow of money.” These murky rules are likely to lead to battles between taxpayers and copyright owners that will make our current copyright problems look quaint in comparison.

It is possible that the idea of semiotic democracy might, if fully explained, prove so appealing that readers would agree with his conclusions, but Fisher provides too little detail for readers to judge either the practicality or the desirability of such a world. Is it something worth striving for, or is it merely one in a long list of romanticized Nirvanas, which historically have led to some of the greatest fiascos in humankind’s history? I strongly suspect it is the latter.

Fisher has performed a useful service in describing current copyright problems and laying out some alternative solutions. This alone is a significant achievement. What he has failed to do is provide a basis on which to choose from among those alternatives.

Bioweapons

“The age of bioterrorism is now,” the Washington Post said in January 2005. Many politicians, policymakers, and scientists agree, and so billions of dollars are being spent on biodefense R&D. A dramatic increase in classified threat-assessment research is imminent, including the exploration of potential new bioweapons agents and technologies. Senate Majority Leader Bill Frist says that bioterrorism is “the greatest existential threat we have in the world today” and calls for a biodefense R&D effort that “even dwarfs the Manhattan Project.” Many others agree.

Milton Leitenberg does not agree. He believes that the threat of catastrophic bioterrorism has been grossly and irresponsibly exaggerated. He says that faulty threat assessments are legion, that the bioweapons problem is larger and more complex than the focus on bioterrorism suggests, and that current U.S. policies are making the problem worse.

Leitenberg, a senior research scholar at the University of Maryland’s Center for International and Security Studies, is a bioweapons and arms control expert with nearly 40 years of experience. In his provocative new book, The Problem of Biological Weapons, he presents what he considers a more reasoned, comprehensive, and evidence-based assessment of the bioweapons threat. Focusing on events of the past 15 years, Leitenberg examines national bioweapons programs, bioterrorism, international efforts to bring bioweapons under control, and current U.S. policies and activities. He concludes that the primary threat today is not bioterrorism but rather the proliferation of bioweapons programs among states.

Widespread U.S. concern about bioterrorism first emerged in 1995 after the Aum Shinrikyo cult released sarin (a chemical nerve agent) in the Tokyo subway system and was then found to have made earlier, unsuccessful attempts to use bioweapons. Nonetheless, a dramatic increase in biodefense funding didn’t occur until after the 9/11 terrorist attacks demonstrated that some terrorists were willing to kill unlimited numbers of people. The anthrax attacks that followed shortly thereafter sealed the case for the biodefense R&D boom.

Leitenberg agrees that terrorists are interested in bioweapons, but he concludes from a detailed review of the evidence that there is little or no trend indicating that terrorist capabilities are improving. Whereas others see Aum Shinrikyo as demonstrating the ease with which nonstate actors could develop and use bioweapons, Leitenberg sees the opposite: a group incapable of mounting a successful bioweapons attack despite significant interest and major effort. He says that al Qaeda remains far behind Aum in its capabilities. Finally, he notes that the U.S. government maintains that someone closely connected to the U.S. biodefense program probably perpetrated the 2001 anthrax attacks. As for damage assessments, Leitenberg says that those used in recent planning exercises are unreasonable and reflect only the most extreme and least likely consequences of worst-case scenarios. His conclusion is clearly stated: “A terrorist use of a BW agent is best characterized as an event of extremely low probability, which might—depending on the agent, its quality and its means of dispersion—produce high mortality.”

State-sponsored bioweapons

Leitenberg is much more concerned about the proliferation of state-sponsored bioweapons programs. He calls this “the most serious threat in the long term,” because it would undermine the norm against bioweapons, lead to their widespread dissemination, and increase the danger that nations and terrorists will use them.

The reasons for Leitenberg’s concern are threefold. The countries participating in the Biological and Toxin Weapons Convention (BWC) have failed to seriously confront the bioweapons problem, first by failing to enforce compliance with the convention in the face of strong suspicions and even clear evidence of violations, and second by failing to develop better preventive measures, including a strong BWC verification protocol. He says that the United States is largely responsible for the latter failure, acting primarily to protect sensitive and sometimes questionable bioweapons-related R&D activities.

Although these failures have weakened the BWC and made it easier for nations to pursue bioweapons programs, Leitenberg argues that a third factor—the current expansion of U.S. biodefense R&D—may actually motivate such pursuit. In the final two chapters of his book, he discusses the difficulty of distinguishing offensive from defensive intent in national bioweapons-related programs. He notes that the BWC prohibits the development of bioweapons and weapons agents but does not mention research. Yet it is widely recognized that research on dangerous pathogens can be used not only for beneficial, defensive purposes but also to cause harm. Further, the boundary between research and development has always been unclear and is only becoming more so. Leitenberg disputes U.S. claims about the types of research activities that are permitted under the BWC and argues that it is the aggregate level of activity, rather than the details of individual projects, that gives cause for concern. Indeed, he shows that it is nearly impossible to determine the intent of an individual research project without knowledge of the overall program within which it is embedded. Finally, he emphasizes that the problem of determining intent is most difficult when research is classified, as is increasingly the case for growing U.S. threat-assessment activities. Leitenberg concludes that the current U.S. approach to biodefense and nonproliferation increases international suspicion and could lead to a dangerous cycle of escalating and secretive R&D activity. At the very least, the United States is “facilitat[ing] the ability of other nations to justify secret programs following the example already provided by the United States.”

Leitenberg’s critics charge that he fails to recognize that past experiences and current trends in bioterrorist activity aren’t useful indicators of future events. However, these critics routinely cite Aum Shinrikyo and base their own threat assessments and attack scenarios, including large urban aerosol releases and genetically engineered pathogens, on their knowledge of old national offensive programs and on trends in science and technology. Clearly, for both Leitenberg and his critics, consideration of past experiences and trends is an important part of threat assessment.

A Washington Post article in December 2004 cited broad agreement among bioweapons experts that significant “technical obstacles [exist] that would confound even skilled scientists” who attempted to develop bioweapons. This would seem to support Leitenberg’s arguments. However, these experts also asserted that the life sciences revolution makes it all but inevitable that terrorists will eventually be able to launch mass-casualty biological attacks. Leitenberg admits that advances in science and technology may increase terrorist capabilities in the future but says little more on this topic than to point out that experts have been saying this for nearly 20 years and that states are much more able and likely to exploit these advances. Although these points are technically correct, the brevity of Leitenberg’s response indicates a failure to consider more thoroughly the factors that may in fact increase the likelihood of bioterrorist attacks.

The fundamental problem facing analysts and decisionmakers is the great uncertainty surrounding the bioweapons problem. How and when might terrorists achieve breakthrough capabilities? How rapidly are their capabilities likely to advance? Most public threat assessments resolve this uncertainty by assuming a 100 percent likelihood of attack over some unspecified length of time. They ignore factors that affect likelihood and focus on the worst consequences of worst-case scenarios. They infer capabilities from intentions and vice versa, and tend to assume that today’s terrorists will think and act like yesterday’s governments. Leitenberg focuses on assessing likelihood based on trends in terrorist capabilities, but these are hard to discern, given a meager data set. Meanwhile, he largely ignores vulnerabilities and consequences. Leitenberg is probably correct that the likelihood of a significant bioterrorist attack is low and will be so for some time. However, it is not zero, and it may be greater than he thinks.

Forging policy

What are we to do? Responsible security decisions require thorough consideration of the likelihood and the potential consequences of a security threat. They also require thorough consideration of the trade-offs associated with various possible responses. The great value of Leitenberg’s approach is its insistence on examining critical aspects of the bioweapons threat that are largely being ignored today.

We do need rigorous, realistic, and comprehensive threat assessments that consider all relevant factors and their interactions and associated uncertainties. These factors include vulnerabilities; the properties of threat agents; the current state of technology; the capabilities, motivations, and intentions of attackers (based on current knowledge and identified trends); and the likely effects of various policy choices. Threat assessments like these will help decisionmakers identify and sustain balanced and prioritized strategies for reducing bioterrorist and broader bioweapons threats. In a few cases, the nation might decide that a particular biological attack could have such extreme consequences that it must be prepared for even if its likelihood is low. Such decisions must still be informed by solid and broadly acceptable threat assessments if preparedness efforts are to be sustainable, and policymakers will still have to consider trade-offs in order to identify best strategies. The current crisis management approach to decisionmaking instead exhibits rapidly changing priorities, underemphasizes nonproliferation and other prevention efforts, tends to compete with other public health and scientific research needs, and generally ignores potentially serious negative consequences.

Leitenberg’s book also demonstrates that the new problem of bioterrorism doesn’t eliminate the old problem of state bioweapons programs and reminds us of all the trade-offs that must therefore be considered. He quite rightly raises many concerns about current U.S. policies, to which he could have added the apparent U.S. interest (shared with some other nations) in developing incapacitating biochemical weapons. It should be obvious that governments make national security decisions based in part on their perceptions and uncertainties about the activities of others. However, most U.S. policymakers are not giving serious consideration to the negative consequences of current policies and actions.

As Leitenberg discusses, one of the areas where such consideration is most critical is bioweapons-related R&D. It is essential that U.S. activities not make proliferation among states or terrorist breakthroughs more likely. In practical terms, this means first developing strong internal and external oversight and review mechanisms for bioweapons-related R&D projects and programs. Self-regulation by scientists and funding agencies, although critical, is not enough to provide broad reassurance and accountability. It is neither a fair nor an effective substitute for government responsibility. Second, effective action must be taken toward the difficult goal of implementing universal biosafety and biosecurity standards. Third, transparency must be the dominant principle guiding biodefense R&D. Transparency is needed both to advance biomedicine and to reassure concerned parties at home and abroad. However, transparency cannot be universal, or we will risk revealing critical information that could aid bioweaponeers. Thus, reasonable, effective, and well-defined mechanisms for restricting access to critical information must be developed, consistently implemented, and made publicly known.

All of these arguments apply most especially to threat-assessment and -characterization research programs, which run significant risks of stimulating and justifying the threats they are supposed to defend against and even of creating new problems that must then be solved. Thus, the United States must establish mechanisms for identifying the most critical gaps in knowledge and for strictly limiting threat-assessment research to filling those gaps. Threat-assessment activities must not undermine international efforts to control bioweapons.

Finally, no defensive wall will be high enough by itself to protect the United States from the horrors of bioweapons. All U.S. actions must reinforce the international norm against bioweapons. Although it makes sense to prepare for the possibility of a bioweapons attack, it is irresponsible to loudly assert the certain inevitability of catastrophic bioterrorism. Such talk will undermine the nation’s ability to minimize panic and ensure public trust and cooperation in the event of an attack. It also promotes malevolent interest in and public tolerance of bioweapons. Likewise, it is irresponsible to engage in activities that are likely to spur proliferation, whether knowingly or thoughtlessly. There must be no pursuit of biological or biochemical weapons of any kind, for any reason. Today more than ever, nations need to exercise responsible and collective restraint. Governments must also take greater international action in support of nonproliferation and other means of prevention. Such action, forged around common support for the norm, is essential for its maintenance and thus for reducing the bioterrorist threat. In all of these areas, the United States should be setting an example and leading the way. The scientific community, occupied today with concerns about dual use and scientific responsibility, can do much to help.

Tilting at warheads

During the past decade, arms control has fallen on hard times. The decline began during the Clinton administration, when the Arms Control and Disarmament Agency was abolished and its functions absorbed into the Department of State, followed in 1999 by the U.S. Senate’s refusal to ratify the Comprehensive Test Ban Treaty (CTBT). Further retrenchment has occurred under President George W. Bush. With the sole exception of the 2002 Treaty of Moscow, which calls for bilateral cuts in U.S. and Russian strategic nuclear forces but is devoid of verification measures, the Bush administration has declined to negotiate any new agreements and has repudiated or rejected several existing ones, including the Anti-Ballistic Missile Treaty, the Ottawa Landmines Treaty, and a draft verification protocol for the Biological Weapons Convention. President Bush has also ruled out joining the CTBT, and the administration’s pursuit of new nuclear warhead designs could generate pressures to break the 13-year-old U.S. moratorium on nuclear testing.

Given this bleak record, the very title of The Future of Arms Control must be viewed as optimistic. The authors, defense analysts Michael Levi and Michael O’Hanlon of the Brookings Institution, argue that although arms control as practiced during the Cold War is dead, the concept should be revived in a new framework that is adapted to the changed security environment. Levi and O’Hanlon perform a useful service by reaffirming the importance of arms control, but many of the specific policies they recommend are either politically impractical or internally inconsistent.

Ever since the breakup of the former Soviet Union in 1991, the United States has been the world’s only military superpower, with no near-term rival to its supremacy. The main threats to U.S. security arise from the spread of materials and technologies for nuclear and biological weapons, which in the hands of hostile states or terrorists could inflict massive civilian casualties and create a counterweight to Washington’s overwhelming conventional military strength. Levi and O’Hanlon argue that arms control can help to prevent the spread of nuclear and biological capabilities by providing early warning of a country’s intent to acquire such weapons and by creating legal and political “predicates” for multilateral action to contain, manage, and reverse proliferation and to deter other states from going down that path.

According to the authors, the U.S.-Russian strategic relationship has lost its salience in the post–Cold War world. Although each country continues to possess upwards of 5,000 deployed strategic nuclear warheads (with thousands more stored warheads and “tactical” weapons), Levi and O’Hanlon contend that these stockpiles are “no longer so dangerous” in the current political environment and that going below 1,000 deployed warheads each in the U.S. and Russian arsenals “holds little appeal for the foreseeable future.” They also argue that deep cuts, down to a few hundred nuclear warheads per side, would be undesirable, creating dilemmas about how to deal with China’s nuclear stockpile and distracting policy-makers from more urgent proliferation concerns.

The authors’ tendency to downplay U.S.-Russian arms control ignores the dangers still associated with the two countries’ strategic nuclear arsenals 15 years after the end of the Cold War. Although Washington and Moscow are no longer political adversaries, the condition of mutual assured destruction continues to play itself out in their opposing force structures and alert postures. Each country still has thousands of nuclear weapons ready for launch on short notice against targets on the other’s territory. Most Americans remain unaware of the bizarre disconnect between the transformed U.S.-Russian political relationship and the persistence of Cold War nuclear force structures and war plans. Defusing this dangerous situation in an irreversible and verifiable manner remains an important task for negotiated arms control.

The command-and-control system for Russia’s nuclear forces is also aging and under increasing strain, increasing the risk of malfunction and accidental nuclear war. As Levi and O’Hanlon themselves note, there is “always a danger under present circumstances” of such a failure. Even so, the authors contend that “de-alerting” U.S. and Russian nuclear weapons is not a high priority. They also suggest that if the United States unilaterally reduces the alert status of its nuclear forces in the expectation that the Russians will reciprocate, Washington should retain the capability to “re-alert” some of its weapons rapidly. Here, as elsewhere in the book, Levi and O’Hanlon qualify their policy recommendations to the point that they lose clarity and focus.

The authors further suggest— wisely, in this case—that Washington should reduce the role of nuclear weapons in U.S. doctrine by renouncing the development and testing of new types of nuclear warheads. In particular, they oppose the Bush administration’s proposed Robust Nuclear Earth Penetrator (“bunker-buster”) bomb, which would be used to target deeply buried facilities. Yet after asserting that there is no military or strategic rationale for new types of nuclear weapons, Levi and O’Hanlon undermine this position by claiming that the U.S. policy of “strategic ambiguity”— the implicit threat to respond to an enemy’s use of chemical or biological weapons (CBWs) with nuclear weapons—is “sound,” although it would be less than credible in many circumstances. The problem with this argument is that it provides a compelling rationale to develop new nuclear warheads specialized for destroying bunkers containing stocks of CBW agents. Moreover, to the extent that the policy of strategic ambiguity applies to non-nuclear weapon states that possess chemical or biological arms, it contradicts the pledge by the United States not to use nuclear weapons against countries that forego them—a key element of the nuclear nonproliferation regime. Again, the authors’ desire to appear moderate by splitting the difference between liberal and conservative positions leads them to a flawed policy recommendation.

Levi and O’Hanlon argue that arms control in the post–Cold War era should rely less on negotiated agreements and more on unilateral restraint and voluntary parallel actions by like-minded countries. For example, they contend that a formal treaty to limit the deployment of ballistic missile defenses is not needed “as long as the American system is sized and scaled to respond to a North Korean or Iranian (rather than Chinese) offensive threat.” Their reasoning is that because limited U.S. missile defenses would have little capability to block a Chinese or Russian retaliatory strike, they would not provoke these countries to build up their nuclear forces. Nevertheless, the authors’ assumption of future U.S. restraint in expanding its defensive systems may well prove incorrect.

The Future of Arms Control takes an equally complacent attitude toward the weaponization of space. At present, the U.S. military benefits greatly from the use of space for communications, reconnaissance, and targeting, and would have much to lose if these assets were put at risk. Even so, Levi and O’Hanlon argue that the concept of space as a weapons-free zone is difficult to justify because satellites are increasingly used in support of conventional military operations. They contend that efforts to control antisatellite (ASAT) weapons are “impractical and undesirable” because of the inherent ASAT capabilities of many missile defense systems and the eventual need to counter efforts by other countries to use satellites to target U.S. military assets. Instead of pursuing formal agreements to prevent the weaponization of space, the authors recommend that the United States take modest unilateral actions, such as declaring that it has no ASAT weapons, in order to maintain the status quo as long as possible. Given the destabilizing potential of an arms race in space, this go-slow approach seems inadequate to the threat.

Controlling dangerous technologies

Levi and O’Hanlon contend that the “central organizing principle” for arms control in the post–Cold War era should be to pursue multilateral efforts to prevent the most dangerous weapons technologies— nuclear and biological—from falling into the hands of militant regimes and terrorist organizations such as al Qaeda. To this end, the authors propose an overhaul of Article IV of the 1968 nuclear Non-Proliferation Treaty (NPT), which affirms that non-nuclear weapon states have an “inalienable right” to develop nuclear technologies for peaceful purposes. Ostensibly civilian facilities for producing highly enriched uranium (HEU) and separating plutonium from spent nuclear fuel can be redirected to produce bomb-grade fissile materials, enabling countries to “break out” of the NPT. Accordingly, the authors argue that Article IV should be formally reinterpreted to make it harder for countries to go down this path.

Levi and O’Hanlon recommend the suspension of HEU production worldwide and an indefinite moratorium on the construction of new uranium enrichment facilities by individual states. Any future enrichment plants to support civilian nuclear power would be owned by multilateral consortia, whose members would be limited to countries with a good record of NPT compliance. The authors also propose a total ban on plutonium reprocessing. To facilitate the early detection of illicit activities, all states currently under International Atomic Energy Agency (IAEA) safeguards would be required to adopt the Additional Protocol, which permits intrusive inspections of dual-use nuclear facilities.

Although these proposals are desirable in principle, Levi and O’Hanlon fail to lay out a politically feasible roadmap for achieving them. Reinterpreting Article IV of the NPT would provoke strong political resistance from countries including Brazil, Japan, and Iran, which are developing uranium enrichment facilities for their nuclear power industries. Halting uranium enrichment and plutonium separation on a national basis would be acceptable only if all states—those that possess nuclear weapons and those that do not—agreed to submit to the same set of rules, which is unlikely. Moreover, even if the uranium enrichment facilities on Iranian territory were controlled by a multinational enterprise, the host country could decide to expropriate the plant and expel the foreign owners. Finally, identifying countries with a good record of compliance with the NPT is not necessarily straightforward. Brazil, for example, has recently attempted to limit the degree of access provided to IAEA inspectors at its uranium enrichment facility.

Levi and O’Hanlon are correct in praising the Pentagon’s Cooperative Threat Reduction program, which has worked since 1991 to secure and eliminate nuclear, chemical, and biological weapons technologies, materials, and know-how in the former Soviet Union, putting obstacles in the path of would-be proliferators. Far less compelling, however, is the authors’ proposal to address the security concerns that often promote the spread of nuclear weapons. As an alternative to taking major steps toward nuclear disarmament, they contend that the United States, Britain, and France should offer security guarantees to all states that agree to forego nuclear weapons and meet several other conditions, such as a democratic form of government, civilian control of the military, and a nonaggressive foreign policy. Yet unless Washington addresses the legitimate security concerns of autocratic regimes such as North Korea and Iran, these countries will have no incentive to abandon their nuclear ambitions. Indeed, Levi and O’Hanlon note that the new U.S. policy of preventive war, first implemented in Iraq, could “provoke some adversaries to seek the very weapons the United States seeks to deny them.”

With respect to India and Pakistan, which have already built and tested nuclear weapons, Levi and O’Hanlon argue that the United States should encourage responsible stewardship of these arsenals while capping quantitative and qualitative improvements. Yet the authors’ suggestion that the United States should help India and Pakistan to secure their nuclear stockpiles with electronic locks called permissive action links could backfire by suggesting that Washington tacitly endorses proliferation. More prudently, the authors note that persuading India and Pakistan to join the Comprehensive Test Ban Treaty would bring them into the global nonproliferation fold, but they acknowledge that achieving this goal will be difficult as long as Washington remains outside the treaty.

Levi and O’Hanlon also downplay the nuclear powers’ failure to hold up their side of the NPT bargain: taking the treaty’s disarmament obligations as seriously as its nonproliferation obligations. At the 2000 NPT Review Conference, the member states, including the nuclear powers, adopted by consensus a list of 13 steps to demonstrate tangible progress toward nuclear disarmament. Although three of these steps have since been overtaken by events, the rest are still valid. Yet the Bush administration has backed away from the earlier U.S. commitment. Without tangible progress on disarmament by the nuclear weapon states, it is highly unlikely that the United States can win the support of non-nuclear weapon states to impose tighter restrictions on access to dual-use nuclear technologies and to organize an effective united front to address the NPT noncompliance of Iran and North Korea.

The other area of weapons proliferation to which Levi and O’Hanlon assign high priority is the spread of advanced biological warfare (BW) capabilities, including the ability to genetically engineer pathogens to make them more deadly or effective. Because of the dual-use nature of biotechnology and the fact that BW production facilities can be small and easily hidden, the authors contend that “treaty-based control regimes relying heavily on international inspection are not particularly promising” and that the Bush administration was “substantially right” in rejecting a draft verification protocol for the Biological Weapons Convention. Instead, they favor harmonized international guidelines for the safety and security of research with dangerous pathogens, to be implemented through domestic legislation, although they fail to explain how the guidelines would be negotiated and monitored.

The Future of Arms Control embraces a centrist political compromise that calls for multilateral action to halt the proliferation of nuclear and biological weapons while generally dismissing the need for formal treaties and rejecting the goal of nuclear disarmament as utopian and undesirable. On balance, Levi and O’Hanlon have taken a useful first step in challenging neoconservative anti-arms control orthodoxy, but they too often pull their punches when a knockout blow is warranted.

Commercializing the university

Academic Capitalism and the New Economy: Markets, State, and Higher Education, by Sheila Slaughter and Gary Rhoades. Baltimore, Md.: Johns Hopkins University Press, 2004, 424 pp.

Robert Zemsky

Sixty years ago, in Science, the Endless Frontier, Vannevar Bush defined the context that would guide the federal government’s massive investment in research capacity. After arguing that “industry is generally inhibited by preconceived goals, by its own clearly defined standards, and by the constant pressure of commercial necessity,” Bush proclaimed that the nation’s leading universities were the right home for large-scale scientific research projects. “It is chiefly in these institutions that scientists may work in an atmosphere which is relatively free from the adverse pressure of convention, prejudice, or commercial necessity,” Bush wrote. “At their best they provide the scientific worker with a strong sense of solidarity and security, as well as a substantial degree of personal intellectual freedom. All of these factors are of great importance in the development of new knowledge, since much of new knowledge is certain to arouse opposition because of its tendency to challenge current beliefs or practice.”

Today, however, Bush’s description of the U.S. university as a free academy untainted by commercial necessity is largely an anachronism. The hallmarks of change are simply cataloged: the 1980 passage of the Bayh-Dole legislation that allowed (some would say, mandated) universities to own, develop, and profit from the federally funded research performed by their faculty; the rising dominance of a handful of commercial publishers who have extracted substantial profits from their near monopoly of scholarly publications in the sciences; the increasing amount of federal funding going to an ever-smaller set of research universities; and the insistence that the dissemination of research results wait until the intellectual property rights of the university and the principal investigator are fully secured. It is a world in which principal investigators are regularly reminded not to say too much in public or even in private among colleagues lest they violate the nondisclosure agreements that increasingly govern the dissemination of scientific results.

What remains, of course, is the federal government’s direct support of universities and their research interests and capacities— and that too is an important part of the story.

Bush and his intentions notwithstanding, what the federal government has created—through the National Institutes of Health (NIH) and the National Science Foundation (NSF) along with the often larger research contracts proffered by the Departments of Defense (DOD) and Energy—is a massive market for research in which the growth of real revenue has become the strategic imperative of every major research university. The result is that the 60 or so largest research universities that win most of these awards have become market enterprises, organized and rewarded as though they were commercial endeavors.

The impact that this rise of market forces has had on the U.S. university is examined by Sheila Slaughter and Gary Rhoades in Academic Capitalism and the New Economy. Their work is an important though too often meandering exploration of how changing values combined with escalating opportunities to recast how and why universities and their faculties engage in scientific research. The villain of the piece—though Slaughter and Rhoades would probably argue that their tale has no villains—is the neoliberal state that “focuses not on social welfare for the citizenry as a whole but on enabling individuals as economic actors. To that end, neoliberal states move resources from social welfare to production functions.” The result is what the authors call “regime change.” Pushed aside are the academic values and the organizational rules that the sociologist Robert Merton gave voice to in the 1950s and that Bush championed in Science, the Endless Frontier: a way of thinking that stressed knowledge as a public good, that supported university autonomy, and that precluded scientists from having a direct stake in research outcomes. What has emerged instead is an “academic capitalist knowledge regime” that restricts the flow of knowledge precisely because scientific results bring economic advantage to the universities and the faculty responsible for their production. “The academic capitalism knowledge regime,” they write, “values knowledge privatization and profit taking in which institutions, inventor faculty, and corporations have claims that come before those of the public.”

Academic Capitalism and the New Economy is at its best when documenting the changing attitudes and rules governing the use of intellectual property. Chapters on patent policies, disagreements over copyrights, and the new limits being placed on student roles in the production of scientific knowledge are important contributions in their own right. The authors have an enviable knack for assembling and then navigating the often tangled legal disputes that now accompany attempts to decide who owns what. There is also an important discussion, based on interviews with a wide variety of faculty, documenting how the shift to the academic capitalist knowledge regime is becoming an integral part of the everyday discourse of working scientists.

The book unfortunately is marred by some distracting gaps. For example, the authors are apparently unaware of Charles Goldman and William Massy’s 2001 book, The PhD Factory: Training and Employment of Science and Engineering Doctorates in the United States, which includes an analysis that supports their argument. That work documents how the surplus of Ph.D.s in the sciences can best be explained by the actions of federally funded principal investigators who care more about their expanding publication lists than about the future career opportunities of the graduate students working in their labs. The PhD Factory suggests that money and patent rights are not the only currency in the market for research. Although the patent policies and licensing income that flowed after the passage of Bayh-Dole are important to the sponsoring university’s bottom line as well as the investigator’s financial well-being, it turns out that the volume of publication is just as important. In the actual workings of the market for sponsored research, a team’s publication record becomes an important index of future productivity and hence access to the financial rewards associated with Bayh-Dole.

Slaughter and Rhoades’s discussion of academic entrepreneurship is similarly incomplete. It would have benefited from a better understanding of the process by which scholars win more time and freedom to pursue their research while the university administrators who pick up the tasks that the faculty have abandoned (like advising) build entrepreneurial enterprises within their universities.

At the same time, Academic Capitalism and the New Economy largely misses the very real differences that NIH, NSF, and DOD funds introduce into the mix. NIH and DOD, for example, add indirect costs to a total grant, whereas NSF makes indirect costs a part of the competitive package. The result is that NSF grants do not allow the same level of cost recovery. NSF is also likely to look askance at senior investigators including their own term-time salaries in their grants, further reducing the importance to the institution of NSF funds.

Finally, the authors were apparently unaware of the campaigns that the Association of American Universities and the Association of Research Libraries mounted to explain to universities and their faculty the realities introduced by changing patent and copyright rules and opportunities. These campaigns were important examples of the kind of regime shift that is the focus of their book.

The editors at Johns Hopkins University Press also failed Slaughter and Rhoades. Readers interested in the book as a whole should begin with the summary chapter (chapter 12), which provides the best overview of their theory of academic capitalism and the new economy. The book’s first two chapters spend too much time regurgitating the authors’ previous work. The writing also contains off-putting vocabulary, including commodification, marketizing, and profitizing, all of which are used to diminish things the authors do not like.

Had the authors had some of the grace and humor that embellish David Kirp’s new book, Shakespeare, Einstein, and the Bottom Line: The Marketing of Higher Education, the resulting volume would have been more readable and, one suspects, more enduring. On the other hand, Kirp could have benefited from Slaughter and Rhoades’s understanding of processes and contexts. The bottom line is that the full story of how the rise of markets has recast universities as market enterprises has yet to be told.


Robert Zemsky is professor and chair of the Learning Alliance for Higher Education at the University of Pennsylvania.

Cartoon – Spring 2005

“Don’t give me any of this healthy body mass index nonsense. I say you’re scrawny.”


Forum – Spring 2005

Securing nuclear material

I agree with Matthew Bunn that the scope and pace of the world’s efforts to prevent terrorists from acquiring nuclear weapons or weapons-usable materials do not match the urgency of the threat (“Preventing a Nuclear 9/11,” Issues, Winter 2005). He correctly points out that if terrorists were to acquire nuclear materials, we must assume that they would eventually be able to produce a crude but deadly nuclear device and deliver it to a target.

Bunn correctly focuses on vulnerable nuclear facilities that house weapons-usable plutonium or highly enriched uranium (HEU). He lists Russia, Pakistan, and HEU-fueled research reactors around the world as serious concerns. My own list of greatest current nuclear threats, also based on the likelihood that terrorists could acquire weapons-usable materials, is topped by Pakistan, followed in order of decreasing priority by North Korea, HEU-fueled reactors, Russia, Kazakhstan, and Iran.

Bunn points out that U.S. cooperative threat reduction programs have done much to reduce the nuclear danger, but much more is needed. I agree that the United States must push a security-first agenda. However, threat reduction must be tailored to specific threats. For example, Pakistan and North Korea pose very different threats than Russia or Kazakhstan and hence will require very different solutions.

Bunn prescribes some useful steps in dealing with the remaining problems in the Russian nuclear complex. However, I have not found that Russians view the nuclear terrorist threat as “farfetched.” Instead, my Russian colleagues believe that their nuclear facilities are not vulnerable to theft or diversion of nuclear materials. Russian officials rely primarily on physical security (which was enhanced after 9/11 and again after Beslan), instead of a rigorous modern nuclear safeguards system that includes control and accounting of nuclear materials, along with physical protection. The single most important step to improve Russian nuclear materials security is for the Russians to own this problem: “loose” nuclear materials threaten Russia as much as they do the United States. Russia must implement its own modern materials protection, control, and accounting system. Our ability to help is stymied primarily by Russia’s belief that it doesn’t have a problem.

Finally, Bunn states that nuclear terrorism could be reliably prevented if the world’s stockpiles of materials and weapons could be effectively secured. I believe that securing the world’s huge stockpile of nuclear materials is so difficult that we cannot “reliably” prevent nuclear terrorism. The basic problem in the world today is that there are roughly 1,900,000 kilograms of HEU and almost as much plutonium (although 1,400,000 kilograms of plutonium are in spent nuclear fuel, which provides self-protection for some time). The uncertainty in these estimates and in the exact whereabouts of these materials is much greater than the tens of kilograms required for a crude nuclear device. Hence, the job of preventing these materials from getting into the wrong hands is daunting, to say the least. Nevertheless, as Bunn points out, we must act now and do the best we can, and this will take presidential leadership and international cooperation to be effective.

SIEGFRIED S. HECKER

Senior Fellow

Los Alamos National Laboratory

Los Alamos, New Mexico


Matthew Bunn is correct when he notes the urgency of the nuclear threat. However, his article’s focus on the challenges facing our nonproliferation programs emphasized failure where credit is due.

The National Nuclear Security Administration (NNSA) is working with its international partners to reduce the nuclear proliferation threat by removing dangerous materials, securing vulnerable facilities, and eliminating at-risk materials whenever and wherever possible. Challenges associated with developing new technologies and negotiating with sovereign countries can sometimes complicate our efforts. Despite these challenges, NNSA has made tremendous progress in meeting, even accelerating, its goals.

One program that has completed extensive security upgrades in Russia is NNSA’s Material Protection, Control, and Accounting (MPC&A) program. MPC&A began its efforts in 1994 by first securing the most vulnerable sites in Russia, which tended to be the smaller sites. The larger sites that remain to be secured are fewer in number but contain significant amounts of nuclear material. These remaining sites can be secured with roughly the same amount of time and effort as previously completed sites containing much less material. As a result, NNSA will secure much more material per year as the remaining sites are addressed. By the end of 2005, more than 70 percent of the sites will be complete, and we will continue to work toward the goal of securing the targeted 600 metric tons of nuclear material by 2008. It is important to note that none of this work could be completed without the assistance of our Russian colleagues.

Last year, the Bush administration established the Global Threat Reduction Initiative (GTRI) to consolidate and accelerate the Department of Energy’s existing nuclear materials removal efforts. GTRI works to expeditiously secure and remove high-risk nuclear and radiological materials that pose a threat to the United States and the international community. Under GTRI programs, 105 kilograms of fresh highly enriched uranium has been repatriated to Russia and placed in more secure facilities. More than 6,000 spent fuel assemblies have been returned from research reactors around the world to the United States for final disposition. And between 2002 and 2004, more than 5,000 radioactive sources in the United States were recovered and securely stored.

This administration has been steadfast in its commitment to preventing the spread of weapons of mass destruction and will remain so in the next four years. Each country, however, is responsible for the security of its own nuclear material. We must continue to hold nations responsible for actions that increase the risk of nuclear proliferation. Bunn’s efforts to shine a spotlight on these critical national security challenges should be applauded, but NNSA’s successful endeavors in reducing these same threats should also be recognized.

PAUL M. LONGSWORTH

Deputy Administrator for Defense Nuclear Nonproliferation

U.S. National Nuclear Security Administration

Washington, D.C.


I read with great interest Matthew Bunn’s article. That this issue featured so prominently in the recent presidential race brought public attention to a matter heretofore left to specialists. It is good that the public is concerned, and Bunn has written a cogent piece summarizing the issue fairly and highlighting the challenges we face in improving nuclear security. We fully embrace the goal of ensuring that terrorists cannot gain access to these catastrophically dangerous materials.

The article makes clear that this is a world-wide problem that includes actual nuclear weapons, military stockpiles of fissile materials, and highly enriched uranium in research reactors scattered around the globe. There is no fundamental disagreement on what action needs to be taken, but the pace and priorities for action can be debated.

The nongovernmental organization and academic communities are performing a vital service in raising public consciousness about this danger; without such an appreciation the needed resources will not be easily forthcoming. The U.S. government has not been inactive, however, as Bunn acknowledges in his piece. Since the collapse of the USSR, more than $9 billion has been invested in the broad range of Nunn-Lugar programs most directly involved in securing nuclear stockpiles and materials, including the Department of Energy’s Material Protection, Control, and Accounting assistance program. Physical upgrades have improved security greatly at the most vulnerable facilities, and we have improved security at over 70 percent of the facilities with fissile material. We have also provided major support for the safe storage of tens of tons of formerly military plutonium in Russia; the shutdown of plutonium production reactors; the program to dispose of such excess weapons plutonium by burning it in nuclear power plants; and programs to “redirect” scientists who had worked on weapons of mass destruction. By providing these scientists with a port in the storm of economic collapse, we have enabled them to serve as catalysts to spur research in public health, commercial science, and even direct antiterrorist applications.

In launching the Energy Department’s Global Threat Reduction Initiative last fall, the administration specifically recognized the danger presented by vulnerable research reactors, and within the G8 Global Partnership, concerted efforts led by Under Secretary of State John Bolton have lent special international urgency to applying resources around the world to address the issue. Certainly no one inside or outside government would deny that more remains to be done, and the faster the better. We are working hard to that end.

Finally, I would like to address the “nuclear culture” issue that Bunn rightly points to as crucial. It will do us very little good—in fact it buys us a false sense of security— to spend large sums of money installing systems and teaching procedures that are not accepted or will not be maintained or implemented as designed. This is probably the hardest problem to solve, but there are ways to get at it, and we have seen encouraging signs in the form of generational change.

Overall, I found Bunn’s article a reasoned and informative contribution to the debate on the issue of nuclear security.

EDWARD H. VAZQUEZ

Director

Office of Proliferation Threat Reduction

U.S. Department of State

Washington, D.C.


Conflicts of interest

In “Managing the Triple Helix in the Life Sciences” (Issues, Winter 2005), Eric G. Campbell, Greg Koski, Darren E. Zinner, and David Blumenthal provide a thoughtful and scholarly analysis of the benefits and risks of academic/industry relationships and offer their recommendations for managing them. All of their proposals involve public disclosure of the financial ties between academics and companies, a policy shared by several major professional organizations. No doubt such transparency would be an improvement on the current largely recondite nature of such relationships, but does disclosure truly solve the conflict of interest problem? Many academics approve of disclosure largely because it allows business as usual. I don’t see it as a satisfactory solution.

Take the opinion of an academic who writes a strongly positive appraisal of a drug made by a company for which he or she serves as a speaker or consultant. How are we, nonexperts, to interpret the assessment? It might be identical to one by a nonconflicted expert: rigorously objective. It might be biased in favor of the company, although the author, who cannot be expected to reach into his subconscious, is unaware that he has slanted the analysis. Or, least likely, out of an effort to enhance his status with the company, the author might consciously tilt his opinion. We just don’t know which explanation is closer to the truth. Thus, disclosure leaves the receiver of information in a difficult position, trying to interpret the motives of the conflicted author. In fact, people given such disclosure information often underestimate the severity of the conflicts.

James Surowiecki, who writes the economics page for the New Yorker magazine, says, “It has become a truism on Wall Street that conflicts of interest are unavoidable. In fact, most of them only seem so, because avoiding them makes it harder to get rich. That’s why full disclosure is so popular: it requires no substantive change.”

To avoid harm to patients and to protect the validity of our medical information, we must therefore go beyond disclosure. We should discourage faculty from participating in any industry-paid relationship except research collaborations that promise to benefit patient care. Physicians who eschew participation in company-sponsored speakers’ bureaus, continuing medical education (CME), and marketing efforts should be the first to be called on to evaluate drugs and devices, testify to the Food and Drug Administration (FDA), lead professional organizations, and edit medical journals. I would not exclude conflicted faculty from engaging in many of these activities, but I would require an additional layer of supervision. For example, I would require clinical practice guideline committees and FDA panels to have a predominant representation of nonconflicted experts and committees that review research proposals to have substantial oversight by a diverse and independent group of outsiders. Just as blinded peer review in medical journals tends to protect against bias in publications, mechanisms to have nonconflicted experts sample CME lectures by company-sponsored speakers would hold all speakers to a high standard.

Disclosure papers over an unresolved problem. If the profession refuses to give up its extensive financial connections to industry, patients must be protected with methods that override the hazards of these conflicts.

JEROME P. KASSIRER

Distinguished Professor

Tufts University School of Medicine

Boston, Massachusetts

Editor-in-Chief Emeritus, New England Journal of Medicine


“Managing the Triple Helix in the Life Sciences” makes a reasonable case for practical government guidelines and uniform institutional policies for reporting and tracking relationships that create real or perceived conflicts of interest in research. Nonetheless, it is difficult to be optimistic that those who could make the changes recommended by the authors will do so without more compelling reasons and/or a clear mandate, most likely from government.

Research institutions feel that they are already overburdened with costly requirements that affect the use of human and animal subjects, laboratory safety, cooperative agreements, workplace conditions, and so on. The additional staffing and resources needed to maintain well-trained, effective institutional review boards for human subjects research alone can run into the millions of dollars at a research-intensive university. To suggest adding a similar administrative structure for the review of disclosure forms, however helpful this might be, is sure to meet stiff resistance from budget-conscious research administrators. It is therefore reasonable to predict that the institutions that the authors recommend take the lead in instituting change will be the least likely to do so without compelling reasons for action.

Academic/industry and government/industry relationships (AIRs and GIRs) have increased in number, size, and complexity. Studies have identified correlations between industry funding and findings favorable to industry. Even so, there is no agreement on whether the documented cases of questionable relationships or the apparent biases that result from them are significant enough to justify the adoption of a more formal and expensive review process. Researchers commonly assume that they can work in situations that create conflicts without compromising their personal or scientific integrity. Presumably, most research institutions make the same assumption. It will therefore take more than a limited, albeit growing, number of case studies or suspicious correlations to move research institutions to act.

Finally, given the sometimes conflicting agendas of the different players in academic research, it is difficult to imagine any agreement on “uniform policies related to the disclosure of AIRs” that would both “give institutions significant discretion regarding the review and oversight of AIRs at the local level” and, at the same time, have any teeth. Any significant discretion at the local level already does and will lead to different decisions about appropriate relationships and management strategies, which could invite the sort of institutional shopping that the authors are trying to avoid.

Even if the proposed solution is not likely to be adopted by research institutions, the underlying problem that the authors identify cannot be ignored. At a time when federal funding for research is experiencing at best only modest growth and more frequently stagnation and cutbacks, the commitments some academic institutions are making to dramatically increase the size of their research programs (to double in some cases) should be cause for concern. AIRs may only be straining the system now, but in the future they could lead to serious breaks, such as the erosion of public confidence in research or the implementation of uncompromising policies.

Based on past experience, it is reasonable to assume that if researchers and research institutions do not take seriously proposals for some sort of meaningful self-regulation and restrain the unchecked growth of AIRs, government will, as it has recently done with the National Institutes of Health. This prospect may at last provide the compelling evidence needed to prompt research institutions to take “Managing the Triple Helix” as seriously as they should.

NICHOLAS H. STENECK

Professor of History

University of Michigan

Ann Arbor, Michigan


Eric G. Campbell et al. raise important conflict of interest concerns regarding university/industry and government/industry relationships (AIRs and GIRs). At the core of their argument is the belief that AIRs and GIRs have an appropriate place in the facilitation of life science innovation but that standardization of disclosure and management processes, especially for AIRs, is needed. Failure to act, they suggest, risks compromising the public trust placed in life science researchers that has served the enterprise so well for decades.

Campbell et al. should be complimented for their willingness to confront this problem and are uniquely appropriate spokespersons for such issues, given their stream of large-scale research on the AIR practices of life science researchers and their institutions. Institutional policies and procedures for the disclosure and management of potential conflicts of interest are varied. This creates ample room for possible violations to go undetected, human subjects to be put at potential risk, and ultimately end-consumers to be ill informed about the benefits and limitations of particular life science innovations. Thus, when the authors suggest that the academic community should set higher standards than those currently expected by the federal government (for example, requiring disclosure by all faculty and institutional administrators, not just principal investigators, and reducing exclusions to the $10,000 disclosure threshold) and establish a uniform policy on publication delay time, they home in on actions that could do much to protect against the risks of conflicts of interest.

Yet in two key ways they might have gone further. First, they weakened their own argument by suggesting that the wide variations among institutional types, structures, and decisionmaking processes necessitate “flexibility to decide which relationships require oversight and how to design, implement, and evaluate institutional oversight plans and activities.” This could offer convenient cover for institutions to continue their pattern of selective oversight. Second, there was no mention of what should be done if a conflict is discovered. Should the faculty member receive an admonition “not to do it again” or a formal disciplinary action; be prevented from receiving future funding of some kind; and/or be terminated, depending on the severity of the violation? Academic tradition would suggest that the wisest course would be to consider conflict of interest violations as being on the same level as, say, plagiarism or doctoring data in publication. Achieving this, however, will require universities, and likely the federal labs, to decide whether they wish to employ researchers or entrepreneurs and whether both can realistically be employed. Furthermore, it will require federal and state governments to recognize the degree to which they exacerbate conflict of interest problems for universities through certain policy actions and budget decisions.

JOSHUA B. POWERS

Coordinator, Higher Education Leadership Program

Indiana State University

Terre Haute, Indiana


Although Eric G. Campbell et al. agree with most observers that academe and government need new tools for managing conflicts of interest in biomedical research, their preferred solution—greater disclosure—won’t get the job done.

For example, the authors claim that industry-funded scientists are more productive in publishing than their non-industry-funded peers. During the past year, there have been two high-profile cases [one involving selective serotonin reuptake inhibitors (SSRIs) and childhood suicidality and the other involving COX-2 inhibitors and heart disease] in which flawed industry-funded academic studies led to poor public health outcomes. In February 2004, the Center for Science in the Public Interest reviewed the entire academic literature involving SSRIs and children and found that more than 90 percent of published clinical trials, virtually all of them industry-funded, supported their use. Meanwhile, virtually all the clinical trials submitted to the Food and Drug Administration (FDA) to win extended patent life for the drugs, most of which were never published, showed that the drugs had no positive effects.

The scientists in the first group may have been more productive in publishing than their peers, but as David Blumenthal has pointed out elsewhere, thinly disguised marketing studies published in second-tier academic journals are hardly a good measure of the benefits of academic/industry research ties. Moreover, the funding sources and many of the financial ties of the studies’ authors were fully disclosed in the academic journals when they appeared. The results led not to increased skepticism but to increased sales.

The same can be said about patenting and associated commercialization activities at our academic medical centers and universities, another benefit touted by the authors. Just as businessmen have long known that not everything patentable is worth commercializing, the nation’s health care system every day confronts the fact that not everything that is commercialized contributes to public health. Juxtapose for a moment the uproar over heart ailments caused by COX-2 inhibitors with the efforts by the University of Rochester, which patented the mechanism of action, to cash in on those drugs’ massive cash flow. Clearly some of the commercialization activity at the nation’s universities has crossed the line that used to separate the institution’s larger public health mission from private gain.

As the authors point out, industry ties with academic researchers continue to grow. The result is that our health care system today suffers from a paucity of objective information. Industry-funded researchers and clinicians conduct most significant clinical trials, write many clinical practice guidelines, deliver most continuing medical education, and sit on (and in some cases dominate) government advisory bodies at the FDA and Centers for Medicare and Medicaid Services.

That some of these ties remain hidden is abominable. But although greater transparency is mandatory, it is no longer the only or even best disinfectant. In the wake of congressional investigations into the massive hidden corporate financial ties of some of its scientists, the National Institutes of Health recently imposed new rules prohibiting such ties with private firms. Events have outpaced the disclosure prescription. Stronger medicine is called for.

MERRILL GOOZNER

Director, Integrity in Science Project

Center for Science in the Public Interest

Washington, D.C.


Genetically modified crops

In their excellent “Agricultural Biotechnology: Overregulated and Underappreciated” (Issues, Winter 2005), Henry I. Miller and Gregory Conko lay out a compelling argument in support of ag biotech. I agree with their principal conclusions. However, I must set the record straight on one item in their piece that seems to have become an enduring myth about the early days of the science. They state that “some corporations … lobbied for excessive and discriminatory government regulation … they knew that the time and expense engendered by overregulation would also act as a barrier to market entry by smaller companies.” Monsanto, DuPont, and Ciba-Geigy (now Syngenta) were listed in the article as the short-sighted companies that brought long-lasting restrictive regulations. In reality, only Monsanto argued for regulation; the other companies were not then significant players in the field.

I was CEO of Monsanto in 1983 when our scientists for the first time put a foreign gene into a plant, which started the commercial path to the science. My job for the dozen years before first commercialization was to ensure funding for this far-future science. Wall Street hated something that might pay out in the mid-1990s—if ever. Even within Monsanto, there were quarrels about R&D resources being siphoned away from more traditional research, especially toward research that might never succeed. Besides, even if it did, it would face the avalanche of opposition sure to come from “tinkering with Mother Nature.” Consider: We were only a few years away from the Asilomar conference, which had debated whether this “bioengineering” should be left stillborn. Rachel Carson had warned of science run amok. Superfund had been enacted to clean up hazardous waste from science-based companies. A little later, an ambitious researcher in California had grown genetically modified strawberries on a lab rooftop, causing a furor by violating the rules at that time, which forbade outdoor testing. Pressure was mounting, as one opponent put it, “to test the science until it proved risk-free, since the scientists obviously couldn’t self-police.” “How long should we test?” I asked the opponent. “Oh, for about 20 years” was the response.

I had been invited to participate in a debate in support of ag biotech against Senator Al Gore at the National Academy of Sciences. I don’t think we won against his TV camera. As we proceeded in the research, the new Biotechnology Industry Organization (composed primarily of the small companies doing research) was lobbying for no regulation. Its champion was Vice President Dan Quayle, who headed the Competitiveness Council in the first Bush administration. I visited Quayle on several occasions and finally persuaded him that the public would not accept this new science without regulation and that we needed the confidence that the public had in the regulatory bodies. I argued that each agency should practice its traditional oversight in its field: the Food and Drug Administration, Environmental Protection Agency, and U.S. Department of Agriculture—without establishing a new agency just for biotech, a move that was gaining traction in the opposing communities. I argued that the real test should be the end product and its risk/benefit, not the method of getting there. Quayle is one of the unsung heroes of the ag biotech saga. He carried the day on the “no new regulatory body” argument. Be assured that at no time did I or my associates working on these policies give a moment’s thought to shutting out smaller companies with a thicket of regulation. We wanted only “the right to operate with the public’s acceptance.”

The U.S. public now accepts the products of agricultural biotechnology in large part because they have confidence in the institutions that approved them. Regrettably, in Europe in the late 1990s another course was taken by those making decisions at the time: confrontation, not collaboration. The price is still being paid. The new leadership of Monsanto, with some 90 percent worldwide market share in these biotech crops, is making Herculean efforts to work within the culture, laws, and regulations of the European Union, much as we did in the United States in the 1980s and early 1990s. They will eventually succeed, because public confidence is a necessary ingredient of new technologies—something, to their credit, that this current management recognizes.

RICHARD J. MAHONEY

Chairman/CEO Monsanto Company (retired)

Executive in Residence

The Weidenbaum Center on the Economy, Government, and Public Policy

Washington University in St. Louis

St. Louis, Missouri


Henry I. Miller and Gregory Conko are valiant champions of reason against the forces of unreason. As someone who has seen the growing influence of the anti-science lobby in Britain and Europe, I find it disturbing to discover similar attitudes reflected in regulatory policy in the United States. It seems that for the world to benefit from new transgenic staple crops that could reduce hunger and poverty, we will have to look mainly to China, and in due course India, rather than Europe or America. By the end of 2003, more than 141 varieties of transgenic crops, mainly rice, had been developed in China, 65 of which were already undergoing field trials. However, overregulation in Europe casts a shadow even in China, because rules on labeling and traceability present a formidable hurdle to the export of transgenic crops or even of any crops that contain the slightest so-called “contamination” by genetically modified products.

Why has this technology not been treated according to its merits? The influence of green activists goes further than opposition to transgenic crops. It can be traced to a form of environmentalism that is more like religion than science. It is part of a back-to-nature cult with manifestations that include the fashion for organic farming and alternative medicine. The misnamed “organic” movement (all farming is of course organic) is based on the elementary fallacy that natural chemicals are good and synthetic ones bad, when any number of natural chemicals are poisonous (arsenic, ricin, and aflatoxin for starters) and any number of synthetic ones are beneficial (such as antibacterial drugs like sulphonamides or isoniazid, which kill the tuberculosis bacillus). The movement is essentially based on myth and mysticism.

Similarly, homeopathy is growing in popularity and official recognition, although it is based on the nonsense that “like cures like” and that a substance diluted to a strength of 1 in 10 to the power of 30 or more (a 1 followed by 30 or more zeros) can still have any effect except as a placebo. Many of those who believe in alternative medicine also argue that remedies that have been used for centuries must be good, as if medical practice is some kind of antique furniture whose value increases with age. It is belief in magic rather than science.
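A rough back-of-the-envelope check makes the dilution point concrete. Assuming, generously, that the undiluted preparation contains on the order of one mole of the active substance (about 6 x 10^23 molecules), a dilution of 1 in 10^30 leaves an expected number of molecules in the remedy of

$$
6.022\times 10^{23} \times 10^{-30} \approx 6\times 10^{-7} \ll 1,
$$

that is, far less than a single molecule, leaving nothing to act on the patient except expectation.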

However, antiscience views are most passionately aroused in the debate about genetic modification. Campaigners even tear up crops in field trials that are specifically designed to discover if those crops cause harm to biodiversity. Like the burning of witches, such crops are eliminated before anyone can find out if they actually cause harm. In many parts of Europe, the green movement has become a crusade. That makes it dangerous. Whether the rejection of the evidence-based approach takes the form of religious fundamentalism (Islamic, Jewish, or Christian) or ecofundamentalism, the threat is not just to scientific progress but to a civilized and tolerant society.

LORD DICK TAVERNE

London, England

Lord Dick Taverne is a former member of Parliament and the founder of Sense About Science.


The overregulation of agricultural biotechnology, so well described by Henry I. Miller and Gregory Conko, carries a particularly heavy price for farmers in developing countries. In South Africa, the only nation in Africa to have permitted the planting of any genetically modified (GM) crops so far, small cotton farmers have seen their incomes increase by $50 per hectare per season as a result, and one group of academics has projected that if cotton farmers in the rest of Africa were also permitted to plant GM cotton, their combined incomes might increase by roughly $600 million per year. If India had not delayed the approval of GM cotton by two years, farmers in that country might have gained an additional $40 million. India has not yet approved any GM food or feed crops. One biotech company recently gave up trying to get regulatory approval for a GM mustard variety in India, after spending nearly 10 years and between $3 million and $4 million in regulatory limbo. Robert Evenson at Yale University has recently estimated the total loss of potential farm income due to delayed regulatory approval of GM crops throughout the developing world, up through 2003, at roughly $6 billion.

The case made by Miller and Conko for a less stifling regulatory environment may soon grow even stronger, particularly for the poorest developing countries. Several biotech companies have recently been able to transfer genes conferring significant drought tolerance into a number of agricultural crop plants, including soybeans, rice, and maize, with exciting results in early greenhouse and field trials. Something of exceptional value to the poor will be provided if these drought-tolerant genes can also be engineered into crops grown by farmers in semiarid Asia and Africa. The drought of 2001-2002 in southern Africa put 15 million people at risk of starvation. If overregulation keeps new innovations such as drought-tolerant crops out of the hands of poor African farmers in the years ahead, the costs might have to be measured in lives as well as dollars.

Miller and Conko might have added to their argument a list of the international agencies that have recently acknowledged an apparent absence of new risks to human health or the environment, at least from the various GM crop varieties that have been commercialized to date. Even in Europe, the epicenter of skepticism about genetic modification, the Research Directorate General of the European Union (EU) in 2001 released a summary of 81 separate scientific studies conducted over a 15-year period (all financed by the EU rather than private industry) finding no scientific evidence of added harm to humans or to the environment from any approved GM crops or foods. In December 2002, the French Academies of Sciences and Medicine drew a similar conclusion, as did the French Food Safety Agency. In May 2003, the Royal Society in London and the British Medical Association joined this consensus, followed by the Union of German Academies of Science and Humanities. Then in May 2004, the Food and Agriculture Organization (FAO) of the United Nations issued a 106-page report summarizing the evidence—drawn largely from a 2003 report of the International Council for Science (ICSU)—that the environmental effects of the GM crops approved so far have been similar to those of conventional agricultural crops. As for food safety, the FAO concluded in 2004 that, “to date, no verifiable untoward toxic or nutritionally deleterious effects resulting from the consumption of foods derived from genetically modified foods have been discovered anywhere in the world.”

ROBERT PAARLBERG

Professor of Political Science

Wellesley College

Wellesley, Massachusetts


Henry I. Miller and Gregory Conko make a convincing case that a different paradigm for regulating agricultural biotechnology is desperately needed. Recombinant DNA technology allows plant breeders and biologists to identify and transfer single genes that encode specific traits, rather than relying on trial-and-error methods of conventional biotechnology. Thus, it is much more precise, better understood, and more predictable than conventional genetic modification.

Agricultural biotechnology should have been a boon for the green revolution, but gene-spliced crops still represent a small fraction of total world supply. Why has recombinant DNA technology not borne fruit? Unscientific fears, fanned by activists and short-sighted government policies, have led to a regulatory framework that singles out genetically modified crops for greater scrutiny and even prohibition. Guided by the “precautionary principle,” whose purpose is “to impose early preventive measures to ward off even those risks for which we have little or no basis on which to predict the future probability of harm,” European governments, in particular, have chosen to err on the side of caution when it comes to agricultural biotechnology.

The trouble with the precautionary principle is that it ignores the risks that would be reduced by a new technology and focuses only on the potential risks it might pose, creating an almost insurmountable bias against trying anything new. In the case of agricultural biotechnology, the implications can be heartbreaking. The authors describe how Harvest Plus, a charitable organization dedicated to providing nutrient-rich crops to hungry people, feels it must eschew gene-spliced crops because of regulatory barriers and uncertainties.

To begin to realize the potential that agricultural biotechnology holds, we must change the incentives that guide policymakers in Washington, the European Union, and elsewhere. Regulatory gatekeepers face well-documented incentives to err in the direction of disapproving or delaying approval of new products. If a gatekeeper approves a product that later turns out to have adverse effects, she faces the risk of being dragged before Congress and pilloried by the press. On the other hand, since the potential benefits of a new product or technology are not widely known, the risks of disapproval (or delays in approval) are largely invisible, so the consequences of delay are far less severe for the regulator.

Policymakers regulating agricultural biotechnology face pressure from well-organized activists to constrain the new technology. Large biotech companies do not speak out aggressively against unscientific policies, either because they don’t dare offend the regulators on whom their livelihood depends, or because regulations give them a competitive advantage. There is no constituency for sound science, and the general public, particularly in developing nations, which stands to gain so much from innovations in agricultural biotechnology, is unaware of their potential.

Miller and Conko encourage scientists, academics, the media, companies, and policymakers to help correct these biases by raising awareness of the potential benefits that molecular biotechnology promises, speaking out against irrational fears and unscientific arguments and championing sound scientific approaches to overseeing agricultural applications. We should heed Miller and Conko’s prescription for rehabilitating agricultural biotechnology if it is to fulfill its promise.

SUSAN E. DUDLEY

Director, Regulatory Studies Program

Mercatus Center at George Mason University

Fairfax, Virginia


Henry I. Miller and Gregory Conko’s assertion that high regulatory approval costs have limited the number and variety of transgenic crops on the market to four commodity crops and essentially two traits is supported by data presented at a November 2004 workshop sponsored by the U.S. Department of Agriculture’s (USDA’s) ARS/CSREES/APHIS, the National Center for Food and Agricultural Policy (NCFAP), and Langston University. (Workshop proceedings will be available on the USDA-CSREES Web site in June 2005.)

For readers unfamiliar with the long-established system for developing new crop varieties, the public sector assumes responsibility for funding research on small-market (i.e., not profitable) crops. Plant breeders at land-grant universities and government research institutes use the funds to genetically improve crops, and then they donate the germplasm or license it to private firms for commercialization. At the November workshop, public-sector scientists and small private firms described scores of small-acreage transgenic crops that they had developed but could not release to farmers because of the $5 million to $10 million price tag for regulatory approval, not to mention additional millions to meet regulatory requirements for postcommercialization monitoring of transgenic crops. (Based on recommendations from the November workshop, the USDA agencies are now developing methodologies to help public-sector researchers move transgenic crops through the regulatory approval process.)

Miller and Conko also attribute ag biotech’s disappointing performance to “resistance from the public and activists,” perhaps because the authors accept the media’s extrapolation from activist resistance to public resistance. During the 15 years that I have made presentations on ag biotech to diverse audiences that truly represent the general public, I have encountered little resistance (even in Europe!). Instead, I continually find open-minded people, eager for factual information, full of common sense, and perfectly capable of assimilating facts and making informed decisions. In addition, objective measures of public sentiment such as product sales, surveys, and ballot initiatives consistently reveal a public that is not resistant to transgenic crops.

The activist community’s contribution to the paltry number and variety of transgenic crops on the market is indisputable, however. Their remarkable success stems not only from effectively lobbying for a regulatory process that is so costly only large companies developing commodity crops can afford it, but also from causing the premature demise of transgenic crops that had made it through the approval process. Because food companies are fearful of activist demonstrations that are so imaginative and visually compelling that the media cannot resist making them front-page news, some companies have told their suppliers that they will not buy approved transgenic crops, such as herbicide-tolerant sugar beets and pest-resistant potatoes. According to NCFAP, if U.S. farmers had grown these two transgenic crops, they would have increased total yields while decreasing pesticide and herbicide applications by 2.4 million pounds per year.

I find this paradox fascinating: Through their words and deeds, activists have created their own worst nightmare. In the words that a biologist would use, they have established an environment that selects for large corporations with deep pockets and large-scale farmers who grow huge acreages of a few commodity crops, and selects against small companies, small farms that promote sustainability through small acreages of diverse crops, and crops that maintain yields with fewer chemical inputs.

ADRIANNE MASSEY

A. Massey and Associates

Chapel Hill, North Carolina


Science and math education

Rodger W. Bybee and Elizabeth Stage have done an excellent job in highlighting some important results from the Program for International Student Assessment (PISA) and the Trends in International Mathematics and Science Study (TIMSS), both administered in 2003 (“No Country Left Behind,” Issues, Winter 2005). Although I do not disagree with the policy implications that the authors draw based on their interpretation of the assessment results, I offer the following amplifications.

The authors stress the importance of understanding these tests. I completely agree. In this connection, two additional points may be helpful: First is the changing participation of countries in these assessments, particularly in TIMSS. For example, at the 8th-grade level, 14 developed countries, mostly Organization for Economic Cooperation and Development members that participated in TIMSS-1995, did not participate in TIMSS-2003. Seven of these countries outperformed U.S. students in mathematics in 1995, and six were not statistically different from the United States. All 14 of these countries participated in PISA, and 13 outperformed U.S. students in mathematics and in problem-solving in PISA-2003, including five that had outperformed U.S. students on the TIMSS-1995 mathematics assessments: Canada, Ireland, Switzerland, France, and the Czech Republic. Thus, one needs to take care in comparing the international standing of U.S. students in 1995 and 2003.

Second, and more important, both PISA and TIMSS are snapshots in time, given to samples of students once every three or four years. Each student is allowed a limited period of 60 to 90 minutes to respond to some 25 to 30 items. Because students take different forms of the test, it is possible to gather information much more broadly on students’ mathematics and science knowledge. In TIMSS-2003, for instance, the whole sample of 4th graders provided responses to 313 items and 8th graders to 383 items.
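As a rough illustration of how rotating test forms broadens coverage (the figures here are illustrative, not the actual TIMSS booklet design): if each 4th-grade form carries roughly 26 of the 313 items, then on the order of a dozen rotated forms suffice to cover the whole item pool across the sample:

$$
\frac{313 \ \text{items in the pool}}{\approx 26 \ \text{items per student}} \approx 12 \ \text{rotated test forms}.
$$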

Important goals of science and mathematics education cannot be assessed through these types of large-scale assessments. They cannot tell us whether students are motivated and can do the kind of sustained work necessary to succeed in science or mathematics; generate alternative solutions to problems and revise their approaches based on an iterative process of trial and revision; or search for and evaluate information needed to solve a problem or research a question. These are important competencies for students who may want to prepare for scientific and technical careers and for general scientific and mathematical literacy. To obtain the requisite information, we need well-done classroom-based assessments prepared and evaluated by teachers to add to the results of large-scale assessments. But our teachers receive little if any education, either in their preparation or professional development, on student testing and assessment. Hence, I would add to Bybee and Stage’s comments on well-qualified teachers a strong recommendation for a required two-year induction period for all new teachers, including a strand on how to evaluate students’ work.

Lastly, I strongly support the authors’ comments on assessing the tests. And we might be well served to look at the testing practices of other countries, many of which stress individual student effort and reward, particularly at the secondary level.

SENTA A. RAIZEN

WestEd National Center for Improving Science Education

Arlington, Virginia


Democratizing science

Although I agree with David H. Guston (“Forget Politicizing Science. Let’s Democratize Science!” Issues, Fall 2004) that it is important to have greater interaction between scientists and lay citizens, I believe that the proposal to involve “lay citizens more fully in review of funding applications” is not sensible.

Unfortunately, most Americans have little knowledge about science. A sizeable fraction rejects the theory of evolution. When members of Congress, most of whom are reasonably representative of lay citizens, have intervened regarding specific scientific research proposals, the results have nearly always been damaging to the cause of science.

When the National Science Foundation (NSF) was first approved by Congress in line with the proposal of Vannevar Bush, it was vetoed by President Truman because it was not democratic enough. This led to a revised version in which the president selected the members of the National Science Board. In spite of this, NSF has worked successfully with little “democratic” intervention. My experience has been that, on the whole, the NSF peer review system has worked very well in advancing fundamental science.

In the case of applied research, the major problem is that most of the financing comes from the profit-seeking private sector, as noted by Guston, or the Department of Defense. In this case, I agree that alternative funding sources are important, although this may not be easy.

LINCOLN WOLFENSTEIN

University Professor of Physics

Carnegie Mellon University

Pittsburgh, Pennsylvania

Law and the Public’s Health

Public health law is experiencing a renaissance. Once fashionable during the Industrial and Progressive eras, the ideals of population health began to wither in the late 20th century. In their place came a sharpened focus on personal and economic freedom. Political attention shifted from population health to individual health and from public health programs to private medicine. Signs of revitalization of the field of public health law can be seen in diverse national and global contexts. The Centers for Disease Control and Prevention (CDC) created a center of excellence in public health law—the Center for Law & the Public’s Health (www.publichealthlaw.net)—and other nations have followed suit. In the aftermath of September 11 and the anthrax attacks, the CDC requested the drafting of the Model State Emergency Health Powers Act, now adopted in whole or in part by 37 states. A consortium of state and federal partners then drafted the “Turning Point” Model Public Health Act, which outlines a modern mission, core functions, and essential services for public health agencies. At the global level, the World Health Organization (WHO) is revising the International Health Regulations and preparing a WHO Model Public Health Act to Advance the Millennium Development Goals.

Why are these diverse international, governmental, and nongovernmental organizations paying such close attention to public health law? The reason is that law is essential to achieving the goals of population health. Law creates public health agencies, designates their mission and core functions, appropriates their funds, grants their power, and limits their actions to protect a sphere of freedom. Public health statutes establish boards of health, authorize the collection of information, and enable monitoring and regulation of dangerous activities. The most important social debates about public health take place in legal forums—legislatures, courts, and administrative agencies—and in the law’s language of rights, duties, and justice. It is no exaggeration to say that the field of public health is grounded in statutes and regulations found at every level of government.

Law can be empowering, providing innovative solutions to the most implacable health problems. Of the 10 great public health achievements of the 20th century, most were realized, at least in part, through law reform or litigation: vaccinations, safer workplaces, safer and healthier foods, motor vehicle safety, control of infectious diseases, tobacco control, and fluoridation of drinking water. Only three (family planning, healthier mothers and babies, and reduced deaths from coronary heart disease and stroke) did not involve law reform.

Law, therefore, can be a powerful agent for change in society, and policymakers need to be familiar with all the legal tools at their disposal. However, the law also places limits on what public policy can do, and policymakers must be prepared to wrestle with difficult legal, social, and ethical concerns that will arise in conjunction with potential public health initiatives.

What is public health law?

I have defined public health law as the study of the legal powers and duties of the state to promote the conditions for people to be healthy and the limitations on the power of the state to constrain the autonomy, privacy, liberty, or proprietary or other legally protected interests of individuals for the protection or promotion of community health. To understand the role of law in public health, it is useful to begin with a description of the legal basis for government authority (the police power) and the limits on that authority.

The “police power” is the most famous expression of the natural authority of government to regulate for the public good. Although police power evokes images of an organized civil force for maintaining public order, the word “police” has its linguistic roots in the close association between government and civilization: politia (the state), polis (city), and politeia (citizenship). The word had a secondary usage as well: cleansing or keeping clean. This use resonates with early 20th-century public health connotations of hygiene and sanitation. I define police power as the inherent authority of the state to enact laws and promulgate regulations to protect, preserve, and promote the health, safety, morals, and general welfare of the people. To achieve these communal benefits, the state retains the power to restrict, within federal and state constitutional limits, private interests—personal interests in autonomy, privacy, association, and liberty, as well as economic interests in freedom to contract and uses of property.

The police powers include all laws and regulations directly or indirectly intended to reduce morbidity and mortality in the population. These powers have enabled states and localities to promote and preserve the public’s health in areas ranging from injury and disease prevention to sanitation, waste disposal, and water and air pollution. States exercise police powers for the common good: to ensure that communities live in safety and security; in conditions conducive to good health; with moral standards; and, generally speaking, without unreasonable interference in human well-being.

Government, to advance the common good, is empowered to enact legislation, regulate, and adjudicate in ways that necessarily limit private interests. For example, the police power affords states the authority to keep society free from noxious exercises of private rights, such as the dumping of toxic waste.

The police powers authorize government to exercise compulsory powers for the common good, but the state must act in conformity with constitutional and statutory constraints. Whenever government exercises coercive powers, it interferes with personal rights to liberty, bodily integrity, privacy, property, or other legally protected interests. The exercise of police power therefore often presents hard trade-offs between promoting the common good and protecting individual rights. Consider the following actions that government can take to protect the public from infectious disease as a classic illustration of the conflicts between public health and civil liberties.

Surveillance and privacy. Public health agencies collect, use, and disclose a considerable amount of personal health information. The law requires health care institutions and professionals to report specified information to health officials. Public health agencies can also monitor health records to provide early warnings of disease outbreaks. Surveillance is critically important to disease control, but it also interferes with the right of privacy. Notably, the federal privacy rules issued under the Health Insurance Portability and Accountability Act (HIPAA) have broad exemptions for public health data. The U.S. Supreme Court has upheld the state’s power to require reporting, but it does insist that public health agencies have adequate safeguards to protect individual privacy.

Vaccination, treatment, and bodily integrity. Public health agencies have the power to compel vaccination, medical examinations, and treatment, including directly observed therapy. These medical interventions are critically important in preventing or controlling the spread of infectious diseases, but they also can interfere with patients’ rights such as bodily integrity and religious freedom. The courts have upheld state therapeutic powers but with certain safeguards. Medical interventions must be necessary for the public’s health and therapeutically appropriate for the patient.

Quarantine and liberty. Public health officials have long had the power to order isolation or quarantine to protect against the spread of infectious disease. For example, these measures were used sporadically in the United States, and extensively in Canada and Asia, during the 2003 severe acute respiratory syndrome (SARS) outbreaks. Although courts authorize the deprivation of liberty for the public good, health officials must provide patients with procedural due process. Thus, before using these measures (or soon after in cases of emergency), individuals must have the right to a hearing with a legal representative. Isolation and quarantine are well-established measures for tuberculosis control, but they would be quite controversial in a large-scale public health emergency such as an influenza pandemic or a bioterrorism event.

Commercial regulation and economic rights. Public health officials have a wide range of powers to control businesses and the professions, including licensing, inspections, and nuisance abatements. These measures are necessary to ensure that health care activities are conducted safely. The courts have long upheld these forms of economic regulation, even though they interfere with the rights to engage in a profession, enter into a contract, or conduct a business. Still, government must show that it has good grounds for economic regulation. For example, in many cases, public health officials might have to obtain a search warrant before conducting an inspection.

Social justice

Social justice is viewed as so central to the mission of public health that it has been described as the field’s core value. The idea of “justice” is complex and multifaceted, but it remains at the heart of public health’s mission. Justice is fair, equitable, and appropriate treatment in light of what is due or owed to individuals and groups. Justice does not require universally equal treatment, but it does require that similarly situated people be treated equally. Justice, in other words, requires that equals are treated the same and nonequals are treated differently.

Justice, which is the fair and proper administration of laws, has three important attributes of special relevance to public health. Perhaps the most important is nondiscrimination: treating people equitably on the basis of their individual characteristics rather than their membership in a socially salient group defined by race, ethnicity, sex, religion, or disability. This principle cautions against public health judgments based on prejudice, irrational fear, or stereotype, such as singling out people living with HIV/AIDS for adverse treatment.

A second important aspect is natural justice: affording individuals procedural fairness when imposing a burden or withholding a benefit. The use of legal proceedings according to established rules and principles for the protection and enforcement of individual rights lies at the heart of due process. The elements of due process include notice, trial rights including the right to an attorney, and a fair hearing. Natural justice requires public health officials to afford individuals procedural safeguards in conjunction with the exercise of compulsory powers such as isolation or quarantine.

The final aspect is distributive justice: fair disbursement of common advantages and the sharing of common burdens. This form of justice requires that officials act to limit the extent to which the burden of disease falls unfairly on the least advantaged and to ensure that the burdens of interventions themselves are distributed equitably. Coercive public health powers, therefore, should not be targeted against vulnerable groups such as injection drug users, prostitutes, or gays without good cause based on careful risk assessments.

Distributive justice also requires the fair distribution of public health benefits such as vaccines and medical treatment. This principle might apply, for example, to the fair allocation of vaccines or antiviral medications during a major influenza outbreak. Public health actions, moreover, must be seen to be fair. For example, many people were upset at the government’s decision to aggressively screen and treat congressional staffers but not low-income postal workers in Washington, D.C., during the anthrax outbreak.

Public health tools

If government has the power to ensure the conditions for people to be healthy, what tools are at its disposal? There are at least seven models for legal intervention designed to prevent injury and disease, encourage healthful behaviors, and generally promote the public’s health.

Taxing and spending. The power to tax and spend is ubiquitous in national constitutions, providing government with an important regulatory technique. The power to spend supports the public health infrastructure, a well-trained workforce, electronic information and communications systems, rapid disease surveillance, laboratory capacity, and response capability. The state can also set health-related conditions for the receipt of public funds. For example, government can grant funds for highway construction or other public works projects on the condition that the recipients meet designated safety requirements.

The power to tax provides inducements to engage in beneficial behavior and disincentives to engage in high-risk activities. Tax relief can be offered for health-promoting activities such as medical services, child care, and charitable contributions. At the same time, tax burdens can be placed on the sale of hazardous products such as cigarettes, alcoholic beverages, and firearms.

Despite their undoubted effectiveness, the spending and taxing powers are not entirely benign. Taxing and spending can be seen as coercive, because the government wields significant economic power. They can also be viewed as inequitable if rich people benefit while the poor are disadvantaged. Some taxing policies, such as tax preferences for energy companies or tobacco farmers, serve the rich, the politically connected, or those with special interests. Other taxes penalize the poor because they are highly regressive. For example, almost all public health advocates support cigarette taxes, but the people who shoulder the principal financial burden are disproportionately indigent and are often in minority groups.

Altering the informational environment. The public is bombarded with information that influences life choices, and this undoubtedly affects health and behavior. The government has several tools at its disposal to alter the informational environment, encouraging people to make more healthful choices about diet, exercise, cigarette smoking, and other behaviors.

First, government uses communication campaigns as a major public health strategy. Health education campaigns, like other forms of advertising, are persuasive communications; instead of promoting a product or a political philosophy, public health promotes safer, more healthful behaviors. Prominent campaigns include safe driving, safe sex, and nutritious diets.

Second, government can require businesses to label their products to include instructions for safe use, disclosure of contents or ingredients, and health warnings. For example, government requires businesses to explain the dosage and adverse effects of pharmaceuticals, reveal the nutritional and fat content of foods, and warn consumers of the health risks of smoking and drinking alcoholic beverages.

Finally, government can limit harmful or misleading information in private advertising. The state can ban or regulate advertising of potentially harmful products such as cigarettes, firearms, and even high-fat foods. Advertisements can be deceptive or misleading by, for example, associating dangerous activities such as smoking with sexual, adventurous, or active images. Advertisements can also exacerbate health disparities by, for example, targeting product messages to vulnerable populations such as children, women, or minorities.

To many public health advocates, there is nothing inherently wrong with or controversial in ensuring that consumers receive full and truthful information. Yet not everyone believes that public funds should be expended or the veneer of government legitimacy used to prescribe particular social orthodoxies regarding personal choices related to sexual activity, abortion, smoking, high-fat diet, or sedentary lifestyle. Labeling requirements seem unobjectionable, but businesses strongly protest compelled disclosure of certain kinds of information. For example, should businesses be required to label foods as genetically modified (GM)? GM foods have not been shown to be dangerous to humans, but the public demands a “right to know.” Advertising regulations restrict commercial speech, thus implicating businesses’ right to freedom of expression. The U.S. Supreme Court, for example, has strongly supported the right to convey truthful commercial information. Courts in most liberal democracies, however, do not afford protection to corporate speech. There is, after all, a distinction between political and social speech (which deserve rigorous legal protection) and commercial speech. The former is necessary for a vibrant democracy, whereas the latter is purchased and seeks primarily to sell products for a profit.

Altering the built environment. The design of the built or physical environment can hold great potential for addressing the major health threats facing the global community. Public health has a long history of designing the built environment to reduce injury (workplace safety, traffic calming, and fire codes), infectious diseases (sanitation, zoning, and housing codes), and environmentally associated harms (lead paint and toxic emissions).

Many developed countries are now facing an epidemiological transition from infectious to chronic diseases such as cardiovascular disease, cancer, diabetes, asthma, and depression. The challenge is to shift to communities that are designed to facilitate physical and mental well-being. Although research is limited, we know that environments can be designed to facilitate health-affirming behavior by, for example, providing space for physical activities such as walking, biking, and playing; providing easy access to sources of fresh fruits and vegetables; limiting the places where people can purchase or consume cigarettes and alcoholic beverages; reducing violence associated with domestic abuse, street crime, and firearm use; and creating opportunities for social interactions that build social capital.

Popular columnist Virginia Postrel offers a stinging assessment of public health efforts to alter the built environment: “The anti-sprawl campaign is about telling [people] how they should live and work, about sacrificing individuals’ values to the values of their politically powerful betters. It is coercive, moralistic, nostalgic, [and lacks honesty].” However, the evidence demonstrates that organized societies have a remarkable capacity to plan, shape the future, and help populations increase health and well-being. The empirical evidence does not make it inevitable that the state will, or always should, prefer health-enhancing policies. Nevertheless, government does have an obligation to carefully consider the population’s health in its land use policies.

Altering the socioeconomic environment. A strong and consistent finding of epidemiological research is that socioeconomic status (SES) is correlated with morbidity, mortality, and functioning. SES is a complex phenomenon based on income, education, and occupation. The relationship between SES and health is often referred to as a “gradient” because of the graded and continuous nature of the association. It is not just the very poor who are at a disadvantage; health differences are observed well into the middle ranges of SES. These empirical findings have persisted across time and cultures and hold true today.

Despite the strength of evidence, critics express strong objections to policies directed at reducing socioeconomic disparities. They dispute the causal relationship between low SES and poor health outcomes and argue that income redistribution is not within the legitimate sphere of public health.

Although SES disparities are political questions, the evidence should guide elected officials. Admittedly, the explanatory variables for the relationship between SES and health are not entirely understood. However, waiting for researchers to definitively find the causal pathways would be difficult and time-consuming, given the multiple confounding factors. This would indefinitely delay policies that could powerfully affect people’s health and longevity. What we do know is that the gradient probably involves multiple pathways, each of which can be addressed through social policy. People of low SES experience material disadvantage (in access to food, shelter, and health care); toxic physical environments (poor conditions at home, work, and community); psychosocial stressors (financial or occupational insecurity and lack of control); and social contexts that influence risk behaviors (smoking, physical inactivity, high-fat diet, and excessive alcohol consumption). Society can work to try to alleviate each of these determinants of morbidity and premature mortality.

Direct regulation of persons, professionals, and businesses. In a well-regulated society, public health authorities set clear, enforceable rules to protect the health and safety of workers, consumers, and the population at large. Regulation of individual behavior (such as the use of seatbelts and motorcycle helmets) reduces injuries and deaths. Licenses and permits enable government to monitor and control the standards and practices of professionals and institutions (such as doctors, hospitals, and nursing homes). Finally, inspection and regulation of businesses help to ensure humane conditions of work, reductions in toxic emissions, and safer consumer products.

Despite its undoubted value, public health regulation of commercial activity is highly contested terrain. The U.S. economic philosophy favors open competition and the undeterred entrepreneur. Libertarians view commercial regulation as detrimental to economic growth and social progress. Commercial regulation, they argue, should redress market failures, such as monopolistic and other anticompetitive practices, rather than restrain free trade. On the other hand, public health advocates are opposed to unfettered private enterprise and suspicious of free-market solutions to complex social problems. They point out that unbridled commercialism can produce unsafe work environments, noxious byproducts, and public nuisances. Regulation is needed to curb the excesses of unrestrained capitalism to ensure reasonably safe and healthful business practices.

Indirect regulation through the tort system. Attorneys general, public health authorities, and private citizens possess a powerful means of indirect regulation through the tort system. Civil litigation can redress many different kinds of public health harms: environmental damage such as air pollution or groundwater contamination; exposure to toxic substances such as pesticides, radiation, or chemicals; hazardous products such as tobacco or firearms; and defective consumer products. For example, in 1998, tobacco companies negotiated a master settlement agreement with the states that required compensation in perpetuity, with payments totaling $206 billion through the year 2025.

The goals of tort law, although imperfectly achieved, are frequently consistent with public health objectives. The tort system aims to hold individuals and businesses accountable for their dangerous activities, compensate people who are harmed, deter unreasonably hazardous conduct, and encourage innovation in product design. Civil litigation, therefore, can provide potent incentives for people and manufacturers to engage in safer, more socially conscious behavior.

Although tort law can be an effective method of advancing the public’s health, like any form of regulation it is not an unmitigated good. First, the tort system imposes economic costs and personal burdens on individuals and businesses. Tort costs are absorbed by the enterprise, which often passes the costs on to employees and consumers. Second, tort costs may be so high that businesses do not enter the market, leave the market, or curtail R&D. Society might not be any poorer if tort costs drove out socially unproductive enterprises such as cigarette makers, but it would not be beneficial to destroy the vaccine industry. Third, the tort system can be unfair, distributing windfalls to isolated plaintiffs and their attorneys while failing to compensate the majority of injured people in the population. Studies of the medical malpractice system, for example, demonstrate that large awards often are given to undeserving plaintiffs, whereas most patients who suffer from medical error are never compensated.

Deregulation: Law as a barrier to health. Sometimes laws are harmful to the public’s health and stand as an obstacle to effective action. In such cases, the best remedy is deregulation. Politicians might urge superficially popular policies that have unintended health consequences. Consider laws that penalize needle-exchange programs or pharmacy sales of syringes and needles. Restricting access to sterile drug injection equipment can fuel the transmission of HIV and other blood-borne infections. Similarly, closing bathhouses where gay sex is practiced, in an effort to slow the spread of AIDS, can drive the activity underground, making it more difficult to reach gay men with condoms and safe-sex literature. Finally, laws that criminalize sex unless the person discloses his or her HIV status make common sexual behavior unlawful. These laws provide a disincentive for seeking testing and medical treatment, ultimately harming the public’s health.

Deregulation can be controversial because it often involves a direct conflict between public health and other social values such as crime prevention or morality. Drug laws, the closure of bathhouses, and HIV-specific criminal penalties represent society’s disapproval of specific behaviors. Deregulation can thus be perceived by many as a symbol of weakness. Despite the political dimensions, public officials should give greater attention to the health effects of public policies.

No simple answers

Even this brief examination of public health law demonstrates the power of law to promote the health of populations, ranging from vaccinations, tobacco control, and clean water, to safety standards for consumer products, workplaces, and roads. Much of this regulation has deep historical precedent and strong public support. However, many areas at the cutting edge of public health law are deeply controversial. Public health officials, for example, have a significant interest in genomics to achieve public goods. However, this may involve the collection of intimate information, implicating privacy concerns. Public health genomics also may increase health disparities if the rich have greater access to genetic technologies.

Much of the controversy rests on the question of who is responsible for personal behavior. Who is accountable for harms to the population: obese people or fast food chains, criminals or firearm manufacturers, smokers or the tobacco industry? Some believe that individuals have free choice and should take personal responsibility for their own behavior and that of their children. Under this school of thought, government should not be encouraging people, let alone forcing them, to change their behavior. Those in the public health community, however, believe that behavior is not solely a matter of free choice but is affected by the informational, built, and socioeconomic environments in which people live. Such advocates would use law to help ensure the conditions for population health.

Finally, much social and political controversy arises from the use of compulsory powers. This was particularly evident in the aftermath of September 11 and the anthrax attacks. Should government have the power, for example, to engage in active surveillance, compel treatment, and impose quarantines? Or, should individual rights to privacy, bodily integrity, and liberty prevail? These are the enduring questions surrounding public health law, and they pose fundamental problems that are central to our democracy.

From the Hill – Spring 2005

Bush budget would cut most R&D programs

On February 7, President Bush released his proposed budget for FY 2006. Against a backdrop of record-breaking federal budget deficits, a continuing and costly war in Iraq, an expansion of Medicare to pay for prescription drugs, and expensive proposals to introduce private accounts for Social Security in the future, the federal investment in R&D would barely grow in FY 2006, with cuts in R&D programs outnumbering increases. In order to restrain the budget deficit, the president proposes to hold nondefense discretionary spending flat for the third year in a row. Indeed, after factoring in increases for international aid and homeland security, domestic nonsecurity spending overall would fall in FY 2006 by 1 percent. Defense spending would increase modestly compared to previous years, but the true picture is uncertain because the budget excludes funding for the Iraq war. Federal R&D investment mirrors these overall trends, with flat funding for defense R&D and increases for homeland security and space exploration R&D offset by cuts in most other R&D programs.

The past few years have seen record-breaking totals for federal R&D because of enormous increases for defense weapons development, the creation of new homeland security R&D programs, and the now-completed campaign to double the National Institutes of Health (NIH) budget. The federal R&D investment hit an all-time high this year because of defense and homeland security increases, but in completing FY 2005 appropriations last December, Congress went along with the president’s proposals to freeze most domestic discretionary spending at FY 2004 levels. As a result, the nondefense, non-homeland security R&D portfolio stagnates this year, with modest increases in some areas offset by cuts in others. The proposed FY 2006 budget would continue this austerity and extend it to defense R&D. Consequently, growth in the federal R&D portfolio would fail to keep pace with inflation for the first time in a decade, and most R&D programs would suffer cuts in real terms.

Proposed federal R&D spending in FY 2006 would total $132.3 billion, an increase of 0.1 percent or $142 million above this year and far short of the 2 percent increase needed to keep pace with expected inflation. A $507 million increase for space exploration R&D in the National Aeronautics and Space Administration (NASA) budget would far exceed the overall $142 million increase, leaving all other R&D programs, including defense, with less money next year. The nondefense R&D investment would increase by 0.3 percent to $57 billion. When development spending is factored out, total federal support of research (basic and applied) would fall 1.4 percent to $55.2 billion.
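To make the real-terms comparison concrete, here is a minimal sketch of the arithmetic, using only the totals cited above and the 2 percent inflation projection; the calculation is an illustration, not part of the AAAS analysis.

```python
# Real-terms change in total federal R&D, FY 2005 to FY 2006, from the figures above.
fy2005_total = 132_193   # FY 2005 estimate, millions of dollars
fy2006_total = 132_335   # FY 2006 proposed budget, millions of dollars
inflation = 0.020        # projected FY 2005 to FY 2006 inflation rate

nominal_change = fy2006_total / fy2005_total - 1
real_change = (1 + nominal_change) / (1 + inflation) - 1
print(f"Nominal change: {nominal_change:+.1%}")  # roughly +0.1%
print(f"Real change:    {real_change:+.1%}")     # roughly -1.9%
```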

The NIH budget, after doubling in the five years between 1998 and 2003, would see an increase of 0.5 percent in FY 2006 to $28.7 billion. NIH projects a decline in the number of research project grants for the second year in a row and a decline in the proposal success rate for the fifth year in a row.

The National Science Foundation (NSF), after a budget cut in 2005, would see a modest increase of 2.8 percent to $4.2 billion for its R&D portfolio, but most of the increase would go to R&D facilities. As a result, the average NSF research grant would shrink for the second year in a row. NSF’s education funding would fall steeply.

The Department of Energy’s (DOE’s) Office of Science would see its R&D funding fall 4.5 percent to $3.2 billion. Environmental R&D would decline across the board, including cuts to the USGS (down 4.8 percent to $515 million), the National Oceanic and Atmospheric Administration (down 11.2 percent to $565 million), and the Environmental Protection Agency (EPA) (down 0.7 percent to $568 million).

There would be tough budgetary choices even in agencies with increasing budgets. At NASA, a 4.6 percent boost in R&D funding to $11.5 billion would still require steep cuts in aeronautics and earth sciences research and the cancellation of a Hubble servicing mission to pay for NASA’s ambitious space exploration plans and resumed construction of the International Space Station. Although DOE’s energy R&D portfolio would climb 8.4 percent to $1.2 billion because of increased investments in hydrogen, nuclear energy, fuel cells, and coal, DOE would eliminate R&D on gas and oil technologies and sharply reduce funding for other areas. R&D at the National Institute of Standards and Technology (NIST) laboratories would climb 12.7 percent to $357 million, but the budget proposes to eliminate NIST’s Advanced Technology Program and halve the budget of the Hollings Manufacturing Extension Partnership.

For the first time in a decade, defense R&D would be subject to fiscal restraints. Defense R&D would fall slightly by $16 million to $75.4 billion, after multibillion dollar increases for each of the past five years. Department of Defense (DOD) weapons development would see a modest increase overall, but there would be a $1 billion cut in missile defense. DOD science and technology programs would plummet 21 percent to $10.7 billion. DOE’s weapons-related R&D would fall by 2.6 percent, including cuts to inertial confinement fusion and advanced computing research.

R&D in the FY 2006 Budget by Agency (budget authority in millions of dollars)

                                     FY 2004    FY 2005    FY 2006    Change FY 05-06
                                      Actual   Estimate     Budget     Amount  Percent
Total R&D (Conduct and Facilities)
Defense (military)                    65,948     70,929     71,009         80     0.1%
  S&T (6.1-6.3 + medical)             12,377     13,578     10,691     -2,886   -21.3%
  All Other DOD R&D                   53,572     57,351     60,318      2,967     5.2%
Health and Human Services             28,521     29,084     29,139         55     0.2%
  Nat’l Institutes of Health          27,248     27,784     27,925        141     0.5%
NASA                                  10,803     10,990     11,497        507     4.6%
Energy                                 8,763      8,614      8,452       -161    -1.9%
  Atomic Energy Defense R&D            4,198      4,138      4,031       -107    -2.6%
  Office of Science                    3,279      3,334      3,184       -150    -4.5%
  Energy R&D                           1,285      1,141      1,238         96     8.4%
Nat’l Science Foundation               4,123      4,057      4,170        113     2.8%
Agriculture                            2,222      2,403      2,051       -352   -14.6%
Commerce                               1,139      1,134      1,013       -121   -10.6%
  NOAA                                   640        636        565        -71   -11.2%
  NIST                                   457        461        416        -45    -9.7%
Interior                                 627        615        581        -34    -5.5%
  U.S. Geological Survey                 553        541        515        -26    -4.8%
Transportation                           665        744        807         63     8.5%
Environ. Protection Agency               662        572        568         -4    -0.7%
Veterans Affairs                         866        784        786          2     0.3%
Education                                299        297        261        -36   -12.1%
Homeland Security                      1,028      1,243      1,287         44     3.6%
All Other                                724        727        713        -14    -1.9%
Total R&D                            126,389    132,193    132,335        142     0.1%
Defense R&D                           70,344     75,395     75,379        -16     0.0%
Nondefense R&D                        56,046     56,798     56,957        158     0.3%
Nondefense R&D excluding NIH          28,798     29,014     29,032         18     0.1%
Basic Research                        26,635     26,841     26,533       -308    -1.1%
Applied Research                      28,871     29,178     28,699       -479    -1.6%
Total Research                        55,506     56,019     55,232       -787    -1.4%
Development                           66,650     71,353     72,503      1,150     1.6%
R&D Facilities and Equipment           4,233      4,821      4,601       -220    -4.6%

Source: AAAS, based on OMB data for R&D for FY 2006, agency budget justifications, and information from agency budget offices. Note: The projected inflation rate between FY 2005 and FY 2006 is 2.0 percent. REVISED March 9, 2005

Federal homeland security-related R&D would total $4.4 billion in FY 2006, a gain of $208 million or 4.9 percent, which represents a leveling off of the federal investment after dramatic recent increases. The majority of the multiagency portfolio would remain outside the Department of Homeland Security (DHS), with the largest part of funding going to NIH for its biodefense research portfolio. NIH’s portfolio, mostly in the National Institute of Allergy and Infectious Diseases, would total $1.8 billion in FY 2006, up 0.4 percent but with room for an 8 percent increase for biodefense research because of a drop in laboratory construction funding. After annual increases greater than 20 percent in the first few years of its existence, growth in the DHS R&D portfolio would level off with a FY 2006 request of $1.3 billion, up $44 million or 3.6 percent.

The Department of Agriculture, enjoying a record R&D portfolio in 2005, would see its R&D funding decline by 14.6 percent to $2.1 billion. Most of the decline is due to the proposed elimination of R&D earmarks.

The EPA overall budget would fall a steep 5.7 percent to $7.6 billion in FY 2006. EPA R&D would fare better, with a 0.7 percent cut to $568 million. Homeland security-related R&D would be the big winner in the R&D portfolio, with large increases for decontamination research and drinking water security. The proposed elimination of R&D earmarks would allow for modest increases in core EPA R&D programs in areas such as global change, particulate matter, drinking water, and water quality.

Department of Transportation R&D funding would rise 8.5 percent to $807 million. There would be a big boost in highway R&D, due in part to a perennial proposal to shift some resources away from state highway grants to highway research; similar proposals have been rejected by Congress in past years. R&D in the Federal Aviation Administration would decline 11.4 percent to $233 million, mirroring similar cuts in aeronautics research at NASA and in aviation security R&D at DHS.

Funding for all three multiagency R&D initiatives would decline in FY 2006. After a nearly $100 million increase this year, funding for the National Nanotechnology Initiative would fall 2.5 percent to $1.1 billion, well short of amounts authorized in the Nanotechnology R&D Act signed into law in December 2003. Funding for the Networking and Information Technology R&D initiative would decline 6.8 percent to $2.1 billion. The Climate Change Science Program would see its funding fall 1.4 percent to $1.9 billion, primarily because of steep cuts in NASA’s contributions in space-based observations of the environment.

Congress will tackle the FY 2006 appropriations process in a newly reorganized committee structure. The House and the Senate recently approved separate restructurings of their appropriations committees. Instead of 13 subcommittees in each chamber writing 13 appropriations bills, the House will now use 10 subcommittees. The Senate chose 12 subcommittees with jurisdictions similar to but not identical to those of the House. The result could be an appropriations process more protracted and confusing than normal. The federal R&D portfolio would be divided among all 10 House appropriations bills, and 10 of the 12 Senate bills. As before, four appropriations bills would fund 95 percent of all federal R&D, and the major R&D funding agencies of DOD, NIH, NASA, and DOE would continue to be funded in separate bills. NASA and NSF would move together from the eliminated Veterans’ Administration (VA), Housing and Urban Development (HUD), and Independent Agencies bill to a Commerce, Justice, and Science bill in the Senate (Science, Commerce, and Justice in the House) to join the Commerce R&D portfolio, whereas EPA would move from VA-HUD to the Interior bill to join the Department of the Interior.

Hubble repair mission in doubt

Congress will have to decide this year whether it should buck President Bush and provide funding for a mission to repair the Hubble Space Telescope. The president’s fiscal year (FY) 2006 budget proposal includes no money for a rescue mission.

Shortly after the January 2004 unveiling of President Bush’s space exploration initiative, Sean O’Keefe, then administrator of the National Aeronautics and Space Administration (NASA), cancelled a planned space shuttle mission to save the telescope. After much public outcry, O’Keefe reversed himself and endorsed a robotic servicing mission. Now, however, the administration no longer supports that option.

At a February 2 hearing, the House Science Committee examined whether Hubble should be serviced, and if so, the best option for doing so. Although witnesses and committee members agreed on the successes and importance of Hubble, no clear consensus emerged about its future.

Louis Lanzerotti, who chaired a National Research Council (NRC) committee in 2004 that backed a manned mission to repair the telescope, testified that the human risk of a shuttle mission is equivalent to that of the planned shuttle missions to the International Space Station. However, Paul Cooper of MDA Space Missions, the private company initially charged with developing a servicing robot, disagreed, arguing that the NRC had overstated the risks of a robotic mission and that such a mission could be accomplished in time to save Hubble. Joseph Taylor of Princeton University testified that he was worried that the estimated cost of more than $1 billion for a shuttle mission would drain funds from other space science activities.

Finally, questions arose about the effect that Hubble’s decommissioning would have on the research community, including new students. Hubble is expected to be decommissioned in 2007, but the next available space telescope is not expected to be launched until 2010.

Expanded tsunami warning system considered

In response to the December 26 Indian Ocean tsunami disaster as well as concerns that the Pacific Northwest could face such a disaster, the Bush administration and members of Congress have unveiled proposals to expand the U.S. Tsunami Warning System (TWS) in the Pacific Ocean and extend it to the Atlantic Ocean and Caribbean.

The administration’s plan, announced on January 14, would deploy 32 new Deep-ocean Assessment and Reporting of Tsunamis (DART) buoys, which can measure the size of a passing wave, by 2007. Under the proposal, the National Oceanic and Atmospheric Administration (NOAA) would install 25 buoys in the Pacific and 7 buoys in the Atlantic and Caribbean at a cost of $24 million in the next two years. NOAA would also install 38 new sea-level monitoring/tide gauge stations.

In the Senate, Sen. Daniel Inouye (D-Hawaii) has proposed a bill that would authorize $35 million for expanding TWS between FY 2006 and FY 2012. A similar bill proposed by Sen. Joseph Lieberman (D-Conn.) calls for $30 million for FY 2005 and $7.5 million for FY 2006 to FY 2014. Both bills would expand TWS in the Pacific as well as extend it to the Indian and Atlantic Oceans and the Caribbean.

The current TWS, operated by the National Weather Service (NWS), consists of Pacific Warning Centers in Alaska and Hawaii. The centers operate in conjunction with NOAA seismic stations, the U.S. Geological Survey (USGS), universities, and a partnership of 26 countries in an effort to monitor earthquakes that might cause a tsunami. The current system includes six DART buoys.

At a January 26 House Science Committee hearing, USGS director Charles Groat explained that most tsunamis occur off the Pacific rim, making Hawaii, Alaska, California, Oregon, and Washington particularly vulnerable. The Caribbean, too, faces a notable risk, whereas the likelihood of a tsunami in the Atlantic is relatively small. Groat warned that there is a 10 to 14 percent chance in the next 50 years of an earthquake in the Cascadia subduction zone that could generate a massive tsunami off the Oregon coast.

Jay Wilson, Earthquake and Tsunami Programs Coordinator for Oregon Emergency Management, called for further study of seismic activity in the Cascadia subduction zone. Although he supports more technology, he pointed out that in a tsunami generated close to the U.S. coast, buoys would not help Oregonians near the earthquake. In Wilson’s opinion, the most cost-effective strategy would be to develop what he calls a “culture of awareness.” He praised tsunami education and preparation programs such as the NWS’s TsunamiReady but called for greater funding.

Also at the hearing, Brig. Gen. David L. Johnson, director of the NWS, lauded the DART system as a means of preventing false alarms and saving money that would otherwise be spent on evacuation procedures, noting that buoys had saved Hawaii an estimated $68 million in 2003, when the buoys indicated that an earthquake detected by seismograph had not generated a tsunami.

However, it was pointed out at the hearing that three of the six existing DART buoys are currently out of service and waiting for repairs. John Orcutt, deputy director of research at the Scripps Institution of Oceanography, said he was “extremely concerned about the ability and the willingness of the United States to maintain such a system.” Because of the high operating costs of the buoys and the relative infrequency of North American tsunamis, long-term funding commitments are essential, he said.

Orcutt stressed the importance of seismic measurements managed by the Global Seismic Network (GSN). The GSN is funded by the National Science Foundation (NSF) and run by USGS and Scripps. Orcutt complained that the administration’s plan “does not recognize NSF’s role and does not include an augmentation of the NSF budget for GSN growth and modernization.” Under the president’s plan, the GSN would receive $8.1 million in emergency supplemental funding and $5.4 million in the FY 2006 budget to improve seismic monitoring and communications. Orcutt proposed permanently doubling annual funds for the “deteriorating” GSN, which now receives $5 million a year. He also endorsed tsunami hazard mapping performed by NOAA and USGS.

Visa delays drop for students, scientists

A Government Accountability Office (GAO) report released on February 18 found that delays in granting visas to foreign students and scholars have been significantly reduced. The GAO report was released a week after the State Department announced an extension of visa limits to four years for students, two years for temporary workers, and one year for business visitors.

Because of concerns about the transfer of sensitive technology abroad, the United States in recent years has been screening foreign students and scholars under the Visas Mantis program. Since 9/11, however, many students and scientists have found it difficult to enter the United States because of long delays in the program. The GAO report found that the average Visas Mantis processing time decreased from 67 days a year ago to 15 days now. The improvement, the report said, was accomplished through a coordinated effort by the State Department, the Department of Homeland Security, and the FBI. These agencies worked to expand the program’s staff, increase guidance to consular officers, develop an electronic tracking system, and ensure priority interviews for students and scholars.


“From the Hill” is prepared by the Center for Science, Technology, and Congress at the American Association for the Advancement of Science (www.aaas.org/spp) in Washington, D.C., and is based on articles from the center’s bulletin Science & Technology in Congress.

Peaking Oil Production: Sooner Rather Than Later?

World demand for oil continues to increase, but Earth’s endowment of oil is finite. Accordingly, geologists know that at some future date, conventional oil supply will no longer be capable of satisfying world demand; conventional oil production will have peaked and begun to decline. No one knows with certainty when peaking will occur, but a number of competent forecasters think it could be soon, which could result in unprecedented worldwide economic problems. Policymakers should be preparing now to ease the passage through this inevitable transition.

The peaking of world oil production has been a matter of speculation from the very beginning of the modern oil era in the mid-1800s. In the early days, little was known about petroleum geology, so predictions of peaking were no more than uninformed guesses. Over time, geological understanding improved dramatically, and guessing gave way to more informed projections. Nevertheless, significant uncertainty still exists.

Oil production optimists like to point out that for more than 100 years people have been claiming that world peaking would occur in the next 10 to 20 years. Of course, the older predictions were all wrong. But that does not mean that recent peaking forecasts are incorrect. Nevertheless, the history of incorrect predictions and the “boy who cried wolf” syndrome seem to have anesthetized analysts and decisionmakers worldwide to the fact that oil production will indeed someday peak. The boy was eventually eaten by the wolf.

Given this long history of failed forecasts, what convinces us that our foresight will be better? In brief, the quality of the evidence has improved immeasurably.

First, extensive drilling for oil and gas has provided a massive worldwide database, and current geological knowledge is much more extensive than in years past. Quite simply, we know much more. Second, various seismic and other exploration technologies have advanced dramatically in recent decades, greatly improving our ability to discover new oil reservoirs. Nevertheless, the oil reserves discovered per exploratory well have been declining worldwide for more than a decade. We are finding less and less oil in spite of vigorous efforts, suggesting that nature may not have much more to provide. Third, many credible analysts have recently become much more pessimistic about the possibility of finding the huge new reserves needed to meet growing world demand. Fourth, even the optimistic forecasts suggest that world oil peaking will occur in less than 20 years (see Table 1). Finally, we are motivated by the knowledge that the peaking of world oil production in the current energy and economic environment could create enormous disruption on a scale much greater than that experienced during the 1973 oil embargo or the 1979 Iranian oil cutoff.

Table 1.

Projections of the Peaking of World Oil Production

Projected Date     Source of Projection     Background
2006–2007          Bakhitari, A. M. S.      Iranian oil executive
                   Simmons, M. R.           Investment banker
After 2007         Skrebowski, C.           Petroleum journal editor
Before 2009        Deffeyes, K. S.          Oil company geologist (ret.)
Before 2010        Goodstein, D.            Vice Provost, Cal Tech
Around 2010        Campbell, C. J.          Oil company geologist (ret.)
After 2010         World Energy Council     Nongovernmental org.
                   Laherrere, J.            Oil company geologist (ret.)
2016               EIA nominal case         DOE analysis/information
After 2020         CERA                     Energy consultants
2025 or later      Shell                    Major oil company
No visible peak    Lynch, M. C.             Energy economist

Calculating oil production

To project future world oil production, we need to estimate the combined output of oil reservoirs already in production, those found but not yet in production, and the yet-to-be discovered reservoirs. This is an extremely complex summation problem because of the uncertainties associated with the yet-to-be discovered and therefore unknown reservoirs. In practice, estimators use various approximations to predict future production. The remarkable complexity of the problem can easily lead to significant over- or underestimates.

Oil was formed by geological processes millions of years ago and is typically found in underground reservoirs of dramatically different sizes, at varying depths, and with widely varying characteristics. The largest oil reservoirs are called “super giants,” many of which were discovered in the Middle East. Because of their size and other characteristics, super giant reservoirs are generally the easiest to find, the most economical to develop, and the longest-lived. The last super giant oil reservoirs discovered worldwide were found in 1967 and 1968. Since then, smaller reservoirs of varying sizes have been discovered in what are called “oil-prone” locations.

The logic of world peaking follows from the well-established fact that the output of individual oil reservoirs rises after discovery, reaches a peak, and declines thereafter. Oil reservoirs have lifetimes typically measured in decades, and peak production often occurs roughly a decade or so after discovery. It is important to recognize that peaking does not mean running out. Peaking is a reservoir’s maximum oil production rate, which typically occurs after roughly half of the recoverable oil in a reservoir has been produced. The reservoir will continue to produce oil for decades at a declining rate. In many ways, what is likely to happen on a world scale is similar to what happens to individual reservoirs, because world production is the sum total of production from all of the world’s reservoirs.
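To illustrate this summation logic, the sketch below adds together a handful of hypothetical reservoir production profiles. The bell-shaped curve, the discovery years, and the peak rates are illustrative assumptions rather than data from this article, but the aggregate behavior they produce, a rise to a single peak followed by a long decline, is exactly the point being made: if discoveries taper off, the sum of many individually peaking profiles must itself peak.

```python
# A minimal sketch (not the article's model): world output as the sum of individual
# reservoir profiles, each of which rises, peaks, and then declines.
# The curve shape and every parameter below are illustrative assumptions.
import math

def reservoir_output(year, discovery_year, peak_rate, spread=10.0):
    """Hypothetical profile: peaks roughly a decade after discovery, then declines."""
    t = year - (discovery_year + spread)   # years from this reservoir's peak
    return peak_rate * math.exp(-0.5 * (t / spread) ** 2)

# Illustrative reservoirs: (discovery year, peak rate in arbitrary units)
reservoirs = [(1950, 9.0), (1967, 12.0), (1980, 5.0), (1995, 3.0), (2005, 2.0)]

world = {y: sum(reservoir_output(y, d, p) for d, p in reservoirs)
         for y in range(1950, 2051)}
peak_year = max(world, key=world.get)
print(f"Aggregate output peaks around {peak_year} at {world[peak_year]:.1f} units;")
print(f"in {peak_year + 15} it is still {world[peak_year + 15]:.1f} units: peaking is not running out.")
```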

Oil is very difficult to find. It is usually found thousands of feet below the surface, and oil reservoirs normally do not have an obvious surface signature. Advanced technology has greatly improved the discovery process and reduced exploration failures. Nevertheless, oil exploration is still inexact and expensive. Once oil has been discovered via an exploratory well, full-scale production requires many more wells across the reservoir to provide multiple paths that facilitate the flow of oil to the surface. This multitude of wells also helps to estimate the size of the discovery’s “reserves”: the total amount of recoverable oil in a reservoir.

The concept of reserves is generally not well understood. Reserves are an estimate of the amount of oil in a reservoir that can be extracted at an assumed cost. Thus, a higher oil price outlook often means that more oil will be produced and the reserves will increase. But geology places an upper limit on price-dependent reserves’ growth. In well-managed oil fields, although estimates of reserves will rise with increases in price, the maximum increase is usually only 10 to 20 percent no matter how high the price.

Reserves estimates are revised periodically as a reservoir is developed and new information provides a basis for refinement. Indeed, reserves estimation is a matter of using inherently limited information to gauge how much extractable oil resides in complex rock formations that typically exist one to three miles below the surface. It is a bit like a blindfolded person trying to judge what an elephant looks like from touching it in just a few places.

Specialists who estimate reserves use an array of technical methodologies and a great deal of judgment. Thus, different estimators might calculate different reserves from the same data. Sometimes politics or self-interest influence reserves estimates. For example, an oil reservoir owner might state a higher estimate in order to attract outside investment or to influence other producers.

Reserves and production should not be confused. Reserves estimates are but one factor in estimating future oil production from a given reservoir. Other factors include production history, understanding of local geology, available technology, and oil prices. A large oil field can have large estimated reserves, but if the field is past its maximum production, the remaining reserves will be produced at a declining rate. This concept is important because satisfying increasing oil demand not only requires continuing to produce oil from older reservoirs with their declining production, but also finding new reservoirs capable of producing sufficient quantities of oil to compensate for shrinking production from older fields and to meet the steadily growing demand for more oil.

The U.S. Department of Energy’s (DOE’s) Energy Information Administration expects world oil demand to grow by 50 percent by 2025. If large quantities of new oil are not discovered and brought into production somewhere in the world, then world oil production will no longer satisfy demand. When world oil production peaks, there will still be large reserves remaining. Peaking means that the rate of world oil production cannot increase. It also means that production will thereafter decrease with time.

Oil is classified as conventional and unconventional. Conventional oil is typically the highest-quality, lightest oil, which flows from underground reservoirs with comparative ease. Unconventional oils are heavy and often tarlike. They are not readily recovered because production typically requires a great deal of capital investment and supplemental energy in various forms. For that reason, most current world oil production is conventional oil.

In the past, higher prices led to increased estimates of conventional oil reserves worldwide. However, this price/reserves relationship has its limits, because oil is found in discrete packages (reservoirs) as compared to the varying concentrations that are characteristic of many minerals. Thus, at some price, world reserves of recoverable conventional oil will reach a maximum because of geological fundamentals. Beyond that point, world reserves of conventional oil will not increase in response to price increases.

Because oil prices have been relatively high, oil companies have conducted extensive exploration in recent years, but their results have been disappointing. If recent trends hold, there is little reason to expect that exploration success will dramatically improve in the future. This situation is evident in Figure 1, which shows the difference between annual world oil reserves additions and annual consumption. The image is one of a world moving from a long period in which reserves additions were much greater than consumption to an era in which annual additions are falling increasingly short of annual consumption. This is but one of a number of trends that suggest the world is fast approaching the inevitable peaking of conventional world oil production.

[Figure 1: Annual world oil reserves additions versus annual consumption. Source: Kjell Aleklett and Colin Campbell, Association for the Study of Peak Oil and Gas, www.peakoil.net.]

The U.S. experience

The United States was endowed with huge reserves of petroleum, which underpinned U.S. economic growth in the early and mid-20th century. However, U.S. oil resources, like those in the world, are finite, and growing U.S. demand resulted in the peaking of U.S. oil production in the lower 48 states in 1970. With relatively minor exceptions, U.S. oil production in the lower 48 states has been in continuing decline ever since. Because U.S. demand for petroleum products continued to increase, the United States became an oil importer. At the time of the 1973 Arab oil embargo, the United States was importing about a third of its oil. Currently, the United States depends on foreign sources for almost 60 percent of its needs, and future U.S. oil imports are projected to rise to 70 percent of demand by 2025.

By examining what happened in the lower 48 states, we can gain some insight into the effects of higher oil prices and improved technology on world oil production. The lower 48 states are a useful surrogate for the world because the region was one of the richest, most geologically varied, and most productive areas up until 1970, when production peaked and started to decline. In constant dollars, oil prices increased by roughly a factor of three in 1973–1974 and by another factor of two in 1979–1980. The modest production upticks in the mid-1980s and early 1990s are probably responses to the 1973 and 1979 oil price spikes, both of which spurred a major increase in U.S. exploration and production investments. The delays in production response are inherent to the implementation of large-scale oil field investments. That the upticks were only moderate was due to the absence of attractive exploration and production opportunities, a consequence of geological realities.

Beyond oil price increases, the 1980s and 1990s were a golden age of oil field technology development, including practical three-dimensional seismic analysis, economic horizontal drilling, and dramatically improved geological understanding. Nevertheless, lower 48 production still trended downward, showing no pronounced response to either price or technology. In light of this experience, there is good reason to expect that an analogous situation will exist worldwide after world oil production peaks: Higher prices and improved technology are unlikely to yield dramatically higher conventional oil production.

Wildcards

There are a number of factors that could conceivably affect the peaking of world oil production. Factors that might ease the problem of world oil peaking include:

  • The pessimists are wrong again, and peaking does not occur for many decades.
  • Middle East oil reserves are much higher than publicly stated.
  • A number of new super giant oil fields are found and brought into production well before oil peaking might otherwise occur.
  • High world oil prices over a sustained period (a decade or more) induce a higher level of structural conservation and energy efficiency.
  • The United States and other nations decide to institute significantly more stringent fuel efficiency standards well before world oil peaking.
  • World economic and population growth slows, and future demand is much less than anticipated.
  • China and India decide to institute aggressive vehicle efficiency standards and other energy efficiency requirements, significantly reducing the rate of growth of their oil requirements.
  • Oil prices stay at a high enough level on a sustained basis that industry begins construction of substitute fuels plants well before oil peaking.
  • Huge new reserves of natural gas are discovered, a portion of which is converted to liquid fuels.
  • Some kind of scientific breakthrough comes into commercial use, mitigating oil demand well before oil production peaks.

On the other hand, factors that might exacerbate the problem include:

  • World oil production peaking is occurring now or will happen very soon.
  • Middle East reserves are much less than stated.
  • Terrorism stays at current levels or increases and concentrates on damaging oil production, transportation, refining, and distribution.
  • Political instability in major oil-producing countries results in unexpected, sustained, world-scale oil shortages.
  • Market signals and terrorism delay clear indications of peaking, delaying the initiation of mitigation actions.
  • Large-scale, sustained, Middle East political instability hinders oil production.
  • Consumers demand even larger, less fuel-efficient cars and SUVs.

It is possible that peaking may not occur for a decade or more, but it is also possible that peaking may occur in the very near future. Public and private policymakers are thus faced with a daunting risk-management problem. On the one hand, mitigation initiated soon would be premature if peaking is decades away. On the other hand, if peaking is imminent, failure to initiate mitigation soon will have very significant economic and social costs for the United States and the world. The two risks are asymmetric. Prematurely initiated mitigation would result in a relatively modest misallocation of resources. Failure to initiate timely mitigation before peaking will result in severe economic and social consequences.

To repeat, peaking does not mean that the world has run out of oil. Rather, it is the point at which worldwide conventional oil production will no longer be able to meet demand. At peaking, unless adequate substitute fuels and transportation energy-efficiency policies have been implemented well in advance, the price of oil will increase dramatically with severe adverse national and international economic consequences.

The world has never faced a problem like this. Without massive mitigation more than a decade before the fact, the problem will be pervasive and long-lasting. Previous energy transitions (wood to coal and coal to oil) were gradual and evolutionary; oil peaking will be abrupt and discontinuous.

Mitigation options

Oil peaking is a liquid fuels problem, not an “energy crisis” in the sense that the term has often been used. Motor vehicles, aircraft, trains, and ships simply have no ready alternative to liquid fuels. Non-hydrocarbon-based energy sources, such as solar, wind, geothermal, and nuclear power, produce electricity, not liquid fuels, so their widespread use in transportation is at best decades away. Accordingly, mitigation of declining world conventional oil production must be narrowly focused, at least in the near term.

Our research identified a number of currently viable mitigation options, including increased vehicle fuel efficiency, enhanced recovery of conventional oil, and substitute liquid fuels from heavy oil/oil sands, coal, and remote natural gas. All would have to be initiated on a crash basis at least a decade prior to peaking, if severe economic damage is to be avoided worldwide. Such a massive, expensive program before obvious market signals are evident would require extensive government intervention and support.

Government-mandated improved fuel efficiency in transportation proved very important after the 1973 oil embargo and will be a critical element in the long-term reduction of liquid fuel consumption. The United States has a fleet of more than 200 million automobiles, vans, pickup trucks, and SUVs. Replacement of even half of these with higher-efficiency models will require 15 years or more at a cost of more than $2 trillion, so upgrading will be inherently time-consuming and expensive. Similar conclusions generally apply worldwide. Improved fuel efficiency, particularly for light-duty vehicles such as pickup trucks and vans, holds great promise for longer-term reduction of gasoline and diesel fuel consumption.
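As a rough plausibility check on those figures (the average per-vehicle cost below is an assumed round number for illustration, not a figure from the article), the $2 trillion estimate is consistent with replacing half of a 200-million-vehicle fleet at roughly $20,000 per vehicle:

```python
# Back-of-the-envelope check of the fleet-replacement cost cited above.
# The average vehicle price is an assumption chosen for illustration.
fleet_size = 200_000_000   # U.S. light-duty fleet, from the text ("more than 200 million")
share_replaced = 0.5       # "even half of these"
avg_price = 20_000         # assumed average cost per replacement vehicle, dollars

total_cost = fleet_size * share_replaced * avg_price
print(f"Estimated replacement cost: ${total_cost / 1e12:.1f} trillion")  # about $2.0 trillion
```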

Enhanced oil recovery can help moderate conventional oil production declines from oil fields that are past their peak. The most promising recovery process is miscible flooding with carbon dioxide, a technology in which the United States is the leader.

Unconventional oil will play an increasing role in meeting world demand for petroleum products as conventional oil production wanes. The most attractive of the unconventional sources are heavy oil/oil sands, liquid fuel from natural gas, and liquid fuel from coal. All are commercially available. Heavy oil/oil sands are very viscous, tarlike oils, the largest deposits of which exist in Canada and Venezuela. Production costs are much higher than for conventional oil because significant quantities of process energy are required to extract and refine these lower-quality oils. Worldwide, large reservoirs of natural gas exist far from ready markets. One method of bringing that “stranded gas” to market is via Fischer-Tropsch (F-T) synthesis to create clean liquid fuels. This technology has been improving rapidly and is now being commercially implemented on a large scale to bring remote natural gas to market. It remains to be seen how the competing options for using this gas will balance out. The United States and other countries have substantial resources of coal, which can be converted to liquid fuels with commercial and near-commercial technologies. The front-running process for producing liquid fuels from coal involves coal gasification, followed by synthesis of liquid fuels using the F-T process. The resultant fuels are extremely clean and require no refining.

Two other mitigation options are of potential interest but are not currently commercially viable: oil shale and biomass. The United States has vast resources of oil shale, and the government briefly funded an effort to develop this resource after the oil shocks of the 1970s. The resource cannot be ignored and justifies renewed R&D. Ethanol from biomass is currently used in transportation, not because it is commercially competitive but because it is mandated and highly subsidized. Biodiesel fuel is a subject of considerable current interest, but it too is not yet commercially viable. Again, a major R&D effort might change the biomass outlook.

Government push needed

In slowly evolving markets, industry normally responds to changes in timely ways. In the case of the peaking of world oil production, the scale of mitigation will be unlike anything yet experienced in energy markets. Even crash programs to implement all available commercial and near-commercial mitigation options will require industrial efforts beyond anything that has happened in more than 50 years. Furthermore, effective mitigation will require massive, expensive efforts a decade or more before market signals become obvious. Those circumstances demand decisive action by governments worldwide, but the burden of mitigation will fall on industry. Because of the need for urgent action, governments will have to provide support, incentives, and facilitation in ways that are only dimly understood today. Among the options for action are the following:

First, governments must seriously consider requiring much higher fuel economy for all transportation vehicles, especially light-duty vehicles. Because the time required for retooling, production, and market penetration is so long and the expenses extremely high, it is difficult to imagine meaningful change without strong government mandates.

Second, massive industrial mobilization for the production of substitute fuels and expanded enhanced oil recovery will involve huge financial commitments and large risks. Industry is unlikely to move on the scale and schedule required without government mandates and protections. Policies such as minimum price guarantees, loan guarantees, and tax credits are among the possible options.

Third, although it is hoped that environmental protection is not compromised in the massive mitigation efforts required, it will be necessary for governments to expedite permitting and regulatory reviews. Countries that do not minimize barriers are likely to be at a comparative disadvantage relative to countries that facilitate approval of new facilities. If decisions are delayed until peaking occurs, substantial oil shortages and rapidly increasing oil prices will result in the kinds of economic distress that occurred after the 1973 oil embargo, or worse. Under those conditions, people will be much more concerned about their own well-being and much more likely to demand rapid mitigation, which could well lead to the erosion of some environmental protections. Thus, timely initiation of mitigation policies will also help protect the environment.

A history of repeated and erroneous predictions of oil peaking may have given false assurance and convinced many policymakers that such predictions can now be safely discounted. However, recent peaking forecasts rest on a much more robust geological foundation, and we cannot ignore the fact that worldwide additions to reserves have been falling short of consumption for roughly two decades. The risks of delay are dire. Prudent risk management calls for early action.

Interview: Sarah S. Brown

Sarah S. Brown is director of the National Campaign to Prevent Teen Pregnancy, a nonprofit, nonpartisan initiative she helped create in 1996 to improve the well-being of children, youth, and families by reducing teen pregnancy. As she explains in the following interview, the campaign played a critical role in a remarkably successful effort that reduced by one-third the number of pregnancies and births among teenage girls. This experience can serve as a model and an inspiration for other public health programs.

A specialist in women’s and adolescent health, Brown has worked in the public health sector for more than 30 years. Before cofounding the campaign, she served as senior study director at the Institute of Medicine, where she directed a range of maternal and child health projects, including The Best Intentions: Unintended Pregnancy and the Well-Being of Children and Families, a widely cited study of the probable causes, effects, and possible remedies of unintended pregnancy. Brown has served on the boards of the Alan Guttmacher Institute and the District of Columbia’s Mayor’s Advisory Board on Teenage Pregnancies and Out-of-Wedlock Births. She is the recipient of numerous awards, including the Irvin M. Cushner Lectureship Award from the Association of Reproductive Health Professionals, the Institute of Medicine’s Cecil Award for Excellence in Research, and the Martha May Elliot Award of the American Public Health Association.

Why was the National Campaign to Prevent Teen Pregnancy organized, and what made you think this progress was possible?

The National Campaign to Prevent Teen Pregnancy was organized in 1996 by a diverse group of individuals who had concluded that the problem of teen pregnancy was not receiving the intense national focus that it deserved; that too few Americans understood the central role that teen pregnancy plays in child poverty, out-of-wedlock childbearing, and welfare dependence; and that there was merit in raising the profile of this problem and in pushing hard for solutions. At its first meeting, the National Campaign’s board defined the organization’s mission: to improve the well-being of children, youth, and families by reducing teen pregnancy. The board also set a numerical goal for the nation and the National Campaign: to reduce the rate of teen pregnancy by one-third between 1996 and 2005.

Most observers considered this goal—to put it charitably—overly ambitious. Because rates of teen pregnancy spiked upward during the mid- and late 1980s, there were many who felt that this nation’s high rates of teen pregnancy were inevitable and intractable. Our reading was a bit different. Taking a longer view, we saw that rates of teen pregnancy and birth had been declining slowly but steadily (more or less) for over two decades, with the exception of this late-1980s blip. Consequently, we believed that teen pregnancy rates could again start heading in the right direction, provided that the issue received the national attention it deserved.

What strategy has the National Campaign used?

The National Campaign’s strategy is based on a straightforward concept: Reducing teen pregnancy can be accomplished only by fewer teens being sexually active and/or by better use of effective contraception among those who are. Both behaviors contributed to earlier declines in teen pregnancy, and more of both are needed going forward to sustain the decline. All of the National Campaign’s efforts center on affecting these two behaviors by communicating directly with teens themselves or by influencing the intermediaries that research has shown to shape the sexual behavior of teens.

The organization’s strategy works on two main fronts: building a more coordinated and effective grassroots movement in states and communities and influencing social norms and popular culture. In all of our work, we engage a range of sectors—teens, parents, state and community leaders, entertainment media executives, educators, faith leaders, policy-makers, the press, other national nonprofit groups, and more. In addition, we approach teen pregnancy in a nonideological big-tent way and work hard to reduce the conflicts that often impede action on this tough problem. All activities are based on high-quality research, an emphasis on using partnerships as a way to increase the reach and power of our efforts, and a commitment to evaluating our work. And, as a small group trying to influence a vast country, we rely heavily on technology—the Internet, in particular—to reach millions of teens and their parents.

What does this mean in practice?

The first strategy—building a grassroots movement—involves working with people in states and communities. The National Campaign provides research and data that they can use in their programs or coalitions, and we also offer direct technical assistance through site visits, regional conferences, and access to our Resource Bureau, which includes contacts and experts in all 50 states. Our Web site (www.teenpregnancy.org), which currently averages more than a million visitors each year, is another source of extensive information and support to practitioners in states and communities. For example, data on teen pregnancies and births are provided on the Web site for every county in the United States; many of our manuals about what to do at the local level to reduce teen pregnancy can be downloaded free of charge; and extensive bibliographical material is also posted.

The second strategy focuses on influencing cultural values and messages. Simply put, it’s fine to work with states and communities to make their efforts more research-based, tolerant of differing views, and tailored to local realities. But doing so is a futile exercise if the entire culture, especially popular teen culture, is sending the message that getting pregnant at a young age is no big deal, that having sex “early and often” with multiple partners is just fine, that contraception is not all that important, that sex has little meaning and few consequences, that postponing sex is a hollow idea, and that parents can’t do anything about their children’s sexual attitudes and behavior. The fact is that we have to work at both levels —state/local as well as popular culture—and the National Campaign is a pioneer in doing so.

Where have you encountered resistance?

Because teen pregnancy is closely connected to aspects of our lives that we hold most dear—our understanding of family and children; the meaning of love, marriage, and commitment; the role of self-expression and self-fulfillment; and, for many people, their religious beliefs—we fully expected that there would be disagreements over how best to reduce teen pregnancy. However, I think we have been surprised at the depth of disagreements and how these disagreements can often stymie action.

It seems to us that one of the most strident arguments at present—fighting over which strategy is better, sexual abstinence or contraceptive use—is a recipe for stalemate. This ideological struggle is obscuring an important cause of teen pregnancy: namely, that many teens are insufficiently motivated to adopt either approach. Note that in current American life and culture, both sexual abstinence and careful use of contraception require a lot of self-discipline, determination, and sometimes even money (in the case of the better methods of contraception). And while the adults argue over which means of pregnancy prevention is better, too many teens are becoming pregnant because they neither abstain nor use contraception. Research shows that both strategies, perhaps in equal portions, have driven the teen pregnancy rates down in recent years. There is no need to choose between the two.

Having said that, it is also important to note that Americans prefer by large margins that teens choose abstinence over sex with contraception. That is, they prefer that teens wait until they are adults to become sexually active and, in particular, to start families. But that does not mean that they think teens should be denied information about sex, love, and reproduction or the health care needed to avoid pregnancy and sexually transmitted diseases (STDs), including HIV/AIDS.

We have also learned that in a diverse country, it is essential to have many helpful approaches to preventing teen pregnancy. It is unrealistic to think that individuals or groups will always be able to put aside their deeply held beliefs on this issue and agree on one single way to reduce teen pregnancy. Often the best strategy is unity of goal but tolerance for a diversity of means.

What role has research played in your work?

The Campaign’s work is supported and shaped by a deep respect for data and research. Our feeling has always been that in a field so often buffeted by ideological skirmishing, having a sound grounding in science would help to stabilize the organization and make it more effective. Since the National Campaign to Prevent Teen Pregnancy was established in 1996, we have tried hard to build a solid reputation for conducting careful analyses and preparing top-notch research products that are widely used and relied on by the research community, professionals in teen pregnancy prevention and related fields, parents, government officials and policymakers, and the press. Our research activities are overseen by a distinguished scientific advisory group chaired by Brent Miller, the vice president for research at Utah State University.

We have learned that in a diverse country, it is essential to have many helpful approaches to preventing teen pregnancy.

We are constantly asking what implications new research and information have for our own work and for efforts nationwide to reduce teen pregnancy. As such, the Campaign can be seen in part as an intense effort to move relevant research into practice and to disseminate high-quality information to individuals working on this issue. For example, since the Campaign was established in 1996, we have commissioned over 20 research reports on topics such as evaluating abstinence programs, the role of men and boys in pregnancy prevention, programs that may help teen mothers avoid additional births while still in adolescence, and parental influence.

In addition, we emphasize how high-quality evaluation can improve the design and impact of prevention programs. The flagship and perhaps most widely requested product of the National Campaign is a review we commissioned of all the peer-reviewed published literature on the effectiveness of various community-based programs to reduce teen pregnancy. Emerging Answers: Research Findings on Programs to Reduce Teen Pregnancy, written by Douglas Kirby, senior research scientist at ETR Associates, has been widely disseminated—about 200,000 copies have been distributed or downloaded—to program leaders, foundation and government officials, and others who want to know what works. The enormous demand for the document signals to us that there was a need not only for more research-based information in this field, but also for information presented in simple and straightforward language.

Is the National Campaign’s strategy still evolving?

Yes. Although we are delighted that the country has made such stunning progress in reducing high rates of teen pregnancy (the rate has declined by about 30 percent over the past decade and more), the United States still has the highest rates of teen pregnancy, birth, and abortion in the fully industrialized world. Moreover, it may be that the “easy gets” have already been won and that sustaining the momentum going forward will be particularly difficult. Consequently, we are trying to focus particular attention on areas where rates of teen pregnancy remain stubbornly high. For example, although pregnancy and birth rates among Hispanic teen girls have declined in the past decade and more, it is still the case that 51 percent of Latina girls get pregnant at least once before age 20, as compared to the national average of 35 percent. The Hispanic community is currently the largest minority population in the United States, now making up 13 percent of the overall population and 16 percent of the teen population. But their representation is even larger in the issues central to our work: 23 percent of teen pregnancies and 30 percent of teen births now occur among Hispanic teens. Over the next 20 years, the Latino teen population will increase by 60 percent, whereas the overall teen population will grow by only 8 percent. By the year 2020, one in five teens will be Latino. All of these data suggest that Hispanic communities merit more intense support and attention in the national effort to reduce teen pregnancy. As a nation, we have not done a particularly good job of crafting messages and interventions specifically for the Hispanic community. As is true with any community, careful attention to cultural and religious differences is critical.

What seems to be working?

The teen pregnancy and birth rates in the United States have declined by about one-third since the early 1990s. Most investigators credit a decrease in sexual activity, together with an increase in contraceptive use among teens, as the reasons why these rates have improved. We usually offer several reasons why teens may be changing their behavior: concern about STDs in general and AIDS in particular; evidence that teens are taking a slightly more cautious approach to casual sex; the availability of long-lasting hormonal contraception such as Depo-Provera; possible effects from welfare reform; and increased attention to the sexual behavior of young people generally (in families, in the media, and elsewhere). All of these factors may help explain what has motivated teens to reduce their risk of pregnancy. But we are the first to say that there is often a lot of uncertainty about what precisely accounts for changes in fertility rates and trends, of which changes in teen pregnancy rates are only one recent example.

If the question is what seems to be working programmatically, we certainly know much more now than we did even five or six years ago. Thanks to some important investments in program evaluation, we now know more about the relative effectiveness of certain classes of teen pregnancy prevention programs. Interestingly, it appears that several approaches can help, all the way from classroom-based sex and HIV education programs, to intense youth development interventions, to programs that involve young people in community service. Many of these can be effective in helping teens delay first intercourse, increase the use of contraception, and—our goal—decrease teen pregnancy. Having an array of effective approaches is a heartening development, given that until quite recently, little was known about what programs might be most useful in preventing teen pregnancy. This growing pool of effective programs is particularly good news for communities searching for programmatic answers to still-high rates of teen pregnancy.

Of course, there are limits to programmatic solutions to a problem as complex as teen pregnancy. Most programs have not been well evaluated. Consequently, we know less than we would like to about their efficacy. It may very well be that any number of creative programs are effective in helping adolescents avoid risky sexual behavior but simply have not been evaluated at all. Of those programs that have been carefully examined, many have not shown positive results. More fundamentally, because teen pregnancy is rooted partly in popular culture and social values, it is unreasonable to expect that programs alone can change forces of this size and power. Making true and lasting progress in preventing teen pregnancy will likely require a combination of community programs and broader efforts to influence values and popular culture.

What about abstinence-only approaches?

The jury is still out on abstinence-only programs. Because very little rigorous evaluation of this approach has been completed—and because those few studies that have been completed do not reflect the great diversity of abstinence-only programs currently offered—no definitive conclusion can be drawn. That might change in the future, because a rigorous federally funded study of some of these programs is under way.

Because, as noted earlier, there is a strong preference in America that school-age teens not be sexually active, programs that strongly support a delay in sexual activity are likely to remain popular in many communities. Still, it is also important to recognize the value of convincing sexually active teens to use contraception consistently and carefully. After all, the only teens who are getting pregnant or contracting an STD are those who are having sex but not using contraception consistently and carefully. For example, what is gained if we succeed in encouraging teens to delay first sex only to find that, once sexual intercourse begins (as it usually does in later adolescence), rates of unintended pregnancy and STDs are high because young people do not know enough about contraception and protection?

Did you evaluate the importance of supervised activities such as after-school and community center programs?

Because the reasons behind teen pregnancy vary, so do the types of programs designed to combat the problem. Although the most important antecedents of teen pregnancy and childbearing relate directly to sexual attitudes, beliefs, and skills, many factors closely associated with teen pregnancy actually have little to do directly with sex (such as growing up in a poor community, having little attachment to one’s parents, or failing at school). In fact, as noted earlier, certain community service programs, which might not focus on sexual issues at all, have very strong evidence that they reduce actual teen pregnancy rates while the youth are participating in the programs.

Certain community service programs, which might not focus on sexual issues at all, have very strong evidence that they reduce actual teen pregnancy rates.

Here is what research tells us: Large amounts of unsupervised time are associated with risky sexual behavior among teens. Adult supervision, which many of these programs provide, is strongly linked to reduced sexual risk-taking. After-school programs may reduce risky sexual behavior by simply involving teens in activities that provide alternatives to sex. Such programs may also put young people in close touch with caring adults who are able to provide much-needed guidance and support. Finally, teens who believe that they have future opportunities have incentives to postpone sexual involvement, use contraception more consistently, and avoid unwanted pregnancies or births.

How do you assess your organization’s results?

The most important result is that teen pregnancy rates are now coming down. Between 1990 and 2000, the teen pregnancy rate declined 28 percent; the teen birth rate declined by a third between 1991 and 2003; and declines occurred in all 50 states and in all of the major racial and ethnic groups. This, in turn, has contributed to a leveling off of the proportion of all children being born outside marriage. And both have contributed importantly to the decline in the child poverty rate—and especially in the black child poverty rate—since 1993. These trends suggest that progress is possible and that the National Campaign’s efforts and those of others are paying off. Of course, the precise contribution of the National Campaign’s effort is unknowable, but we do take some credit! Why not?

What changes do you see in national attitudes toward teen pregnancy?

A strong national consensus has developed among adults and teens alike that middle- and high-school kids, in particular, should be given a clear message that abstinence from sexual intercourse is the right thing to do because of the numerous important consequences of sexual activity, and because sexual intercourse should be associated with meaning and serious commitment. The view is not that abstinence should be presented to young people as one of several equally attractive options but as the strongly preferred one. As one father said, “Abstinence is options one through five.” Fully 9 in 10 adults and teens surveyed agreed that “it is important for teens to be given a strong message from society that they should abstain from sex until they are at least out of high school.” Some go on to urge abstinence until marriage specifically.

But it is also true that even when given strong advice to remain abstinent, some young people will not do so. After all, about 6 in 10 high-school kids report that they have had sex at least once by the time they graduate from high school. Some of these young people can perhaps be encouraged to stop having sex, but experience suggests that many will continue to be sexually active. For these young people—and, after all, only those kids who are sexually active are at risk of pregnancy and STDs—the clear national consensus is that young people should be provided with information about contraception and have access to appropriate medical care.

Are there other public health problems where your approach might be successful?

On reflection, several organizational aspects of the National Campaign have seemed especially important to our success and progress, and we commend them to others working on complex problems, particularly at the national level:

  • Set a clear goal or two that can be measured in order to assess progress. It has been a wonderful discipline to have an actual target and to follow progress.
  • Develop a board that is well connected, diverse in background and sector, and willing from day one to raise money. The first attribute has been especially important to us; there is nothing like having powerful people on your team. It can make all the difference. Corollary for those who are focused on legislative action: Line up bipartisan champions in the House and Senate.
  • Start with the science. The world is littered with advocacy groups that are seen as just that: advocacy groups. We have found it more effective to be seen as a research-based group tackling a tough social problem in a sensible and bipartisan manner. Without such grounding, one gets caught up in the hurly-burly of it all.
  • We chose not to develop state affiliates but rather to work with a variety of state-level groups. That has saved us endless amounts of time and aggravation. We suspect that the outcomes would have been the same.

What’s next for the National Campaign?

As the National Campaign celebrates its 10th anniversary in 2005, and building on the great progress the country has made over the past decade plus, our new challenge to the nation for the next decade will be to reduce teen pregnancy by another one-third. We will keep pushing hard, because even with recent declines, 35 percent of girls become pregnant at least once before turning 20, and U.S. taxpayers shoulder at least $7 billion each year in direct costs and lost tax revenues associated with teen pregnancy and childbearing. Should our new goals be reached, the United States would probably then sink from having the highest rate of teen pregnancy among industrialized nations to number two or three. Wouldn’t it be great to not be number one for a change?

Healthy Populations Nurture Healthy People

The fall 2004 decision by NBC to introduce a new TV program dramatizing the fight against emerging disease threats might be taken as a sign of rising glamour for the public health professions. Modeled on the successful format of ratings winners such as ER and CSI, the series Medical Investigation features a cast of young, dashing, and sometimes overbearing personalities devoted to saving lives and unraveling mysteries, all while spouting a steady stream of technical jargon as they combat threats ranging from “Blue Man” syndrome to Legionnaires’ disease.

If the series departs in some measure from the primary-care domain of ER (and predecessors ranging from St. Elsewhere to M.A.S.H.), its emphasis on acute crisis intervention will come as little surprise to public health professionals long accustomed to the fact that the general public rarely associates their work with life-saving drama, much less pulse-pounding heroics. In real life, a cardiac surgeon may have many opportunities to experience the gratitude of patients after a successful triple-bypass surgery. However, it would be an unusual event indeed if patients sought out and thanked the elected official whose leadership on antismoking laws, the city planner whose design of pedestrian-friendly neighborhoods, or the health plan administrator whose enforcement of quality standards for treating high blood pressure or cholesterol levels might have obviated the need for acute cardiac care in the first place—and at far less expense and discomfort to the patient.

Our difficulty in conceptualizing the achievements of avoided risks, as compared to the defeat of realized illnesses, is reflected in the pattern of expenditures devoted to health care in the United States. For example, it is estimated that roughly 95 percent of national health expenditures are devoted to direct care services and related research, leaving only 5 percent for population health activities. Although the magnitude of the direct care expenditures explains some of the remarkable successes of biomedical research and advances in clinical care, public health officials are quick to point out that roughly 70 percent of avoidable mortality in the United States results from behavioral, social, and environmental factors that are potentially modifiable through preventive health measures.

Moreover, one can argue that the results of this disjunction are readily apparent in the poor standing of the United States relative to other industrialized nations in many leading health indicators. Recent rankings of countries that participate in the Organisation for Economic Co-operation and Development (OECD) place the United States below the OECD mean in life expectancy, 28th among a group of 39 industrialized nations in infant mortality, and highest among a group of 30 industrialized nations in cancer incidence. Rates of other chronic diseases are also on the rise in the United States, and the country lags behind most other industrialized nations in making health care available equitably to all. Small wonder then that the Institute of Medicine’s 2003 report The Future of the Public’s Health concluded that “the nation’s heavy investment in the personal health care system is a limited future strategy for promoting health.”

The need to allocate resources more effectively to preventive as well as curative health care efforts is only one element of a broader need to reconceptualize public health strategies as a whole. A further issue is the fact that when many people think of the public health profession, they are most likely to conjure up images solely of vaccination programs or of their (frequently understaffed) county and local public health departments. Both of these institutions are indeed part of a venerable lineage dating back to the major 19th-century sanitation and public health campaigns and their successors, which have provided health benefits across many generations. However, to build a maximally effective public health strategy for the future, we must move toward broader population perspectives on health.

What is population health?

At its core, the notion of population health is that sustained health improvement for individuals can often be accomplished only through efforts aimed at groups—that is, environmental, educational, organizational, social, or policy interventions that produce population-wide effects. Further, relatively small changes for individuals can often yield dramatic changes in disease incidence across the entire population.

The idea of population health recognizes that disease risk consists largely of a continuum across populations rather than a simple dichotomy between high-risk and low-risk individuals. There is simply no clear division between being at risk or not at risk of disease with regard to factors such as cholesterol levels, blood pressure, diet and physical activity, exposure to toxic substances, stress, and a wide range of other social and environmental conditions. Moreover, most commonly only a relatively small percentage of the population will fall at the extremes of either high or low risk. For this reason, the path toward achieving the greatest benefits within a population will often involve attempts to lower the distribution of risk for the population as a whole rather than simply targeting individuals or high-risk groups. Well-known examples of this approach include the fluoridation of water, seatbelt laws, the elimination of leaded gasoline, the regulation of residential hot water boilers to avoid scalding injuries, air quality standards, laws that concern the isolation and quarantine of infectious individuals, and laws that protect the public from unsafe foods and drugs or set standards for diet in public schools.
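
How much leverage a population-wide shift can have, compared with treating only those at highest risk, is easiest to see with a little arithmetic. The sketch below is purely illustrative: it assumes a normally distributed risk factor and a made-up dose-response curve, then compares an aggressive intervention confined to the top decile of risk against a modest shift applied to everyone.

    # Illustrative only: hypothetical risk factor and dose-response curve, not real data.
    import numpy as np

    rng = np.random.default_rng(0)
    population = 1_000_000

    # Assume a risk factor (say, systolic blood pressure) distributed normally.
    risk_factor = rng.normal(loc=130, scale=15, size=population)

    def expected_cases(x):
        """Assumed curve: annual disease risk rises smoothly with the factor."""
        return (0.002 * np.exp((x - 130) / 20)).sum()

    baseline = expected_cases(risk_factor)

    # Strategy A: treat only the highest-risk 10 percent, cutting their factor by 20 units.
    targeted = risk_factor.copy()
    targeted[risk_factor >= np.percentile(risk_factor, 90)] -= 20
    cases_targeted = expected_cases(targeted)

    # Strategy B: shift the whole population down by a modest 5 units.
    cases_shifted = expected_cases(risk_factor - 5)

    print(f"baseline cases:           {baseline:,.0f}")
    print(f"after targeting top 10%:  {cases_targeted:,.0f}")
    print(f"after population-wide -5: {cases_shifted:,.0f}")

With these assumed numbers, the small shift for everyone prevents more cases than the much larger change confined to the high-risk group, which is the essence of the population argument.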

A major component of such an understanding is that an individual’s risk must be framed within the context of the larger community. Rather than simply asking why a particular individual has suffered illness or premature death, we must ask why a population includes its distinctive constellation of higher and lower risks in different areas. By understanding this composite structure, one can design an array of disease prevention and health promotion strategies that can improve the long-term health outcomes of the overall population rather than reacting only to the acute care needs of individuals after they have become ill. Improvements in children’s health, in particular, can reduce the need for costly treatments later in life for illnesses that can be prevented by health-promotion interventions. Similarly, prevention programs can be targeted to address risk factors in specific populations; for example, lifestyle-change programs can aim to prevent diabetes in African-American and Hispanic communities, where rates of this disease are far higher than in the general population.

This vision guides the Healthy People 2010 initiative that is led by the U.S. Department of Health and Human Services. Like its predecessor Healthy People 2000, the initiative encompasses an overall strategy for preventing disease and improving health outcomes in a range of target areas across the nation. Similar efforts have been initiated by the World Health Organization and a number of governments.

A broader vision

Achieving substantial progress in population health requires far more than engaging only the health-related sectors of government and society. Creating healthier societies requires understanding the “social determinants of health”: the aspects of national, community, and family life that support or undermine health. Improvements in the social factors that influence health require the involvement of all sectors: education, transportation, labor conditions, agriculture, environmental protection, and of course, public health. Strategies must involve not only horizontal coordination across government sectors but also vertical coordination from federal to state and local levels, and engagement with nongovernmental sectors such as business, the media, academia, and all segments of the health care delivery system.

A population approach will continue to be needed for adapting to the country’s future health care needs. For example, the graying of the U.S. population—with the over-65 population projected to reach 13.3 percent in 2010 and 18.5 percent by 2025—will bring with it an increased need for effective strategies and services to promote healthy aging. We will continue to need robust systems of surveillance and analysis at national and local levels, not to mention commitments that in times of budgetary crisis, preventive and population health measures will not be the first to fall to cuts in discretionary spending simply because they are more anonymous than cuts in direct care services.

Creating healthier societies requires understanding the “social determinants of health”: the aspects of national, community, and family life that support or undermine health.

Taking a population perspective does not mean adopting an either/or dichotomy between population and individual health needs or abandoning acute care for sick individuals in favor of preventive care for populations. Indeed, a final element of population health improvement that merits discussion is the application of population perspectives to the primary care system, as exemplified in the quality movement. First brought to wide attention by the Institute of Medicine’s report To Err Is Human, the quality movement extends to efforts to develop and implement system-wide improvements that would eliminate or ameliorate below-standard care practices in numerous areas. A recent study by the RAND Corporation’s Elizabeth McGlynn and colleagues found that across 12 U.S. metropolitan areas, only about 55 percent of individuals received the recommended standard of acute, chronic, and preventive care. In its 2004 annual report, the National Committee for Quality Assurance concluded that substandard care resulted in between 42,000 and 79,000 premature deaths, as well as nearly $2 billion in extra hospital costs, each year.

A comprehensive vision of population health thus will encompass not only the social conditions that affect the health of all people but also the systems that deliver individual medical care. We need to see improvements across all aspects of population health. The articles that follow reflect the variety of fronts on which progress is possible, from the intricacies of the human genome to the structure of the legal system.

Preventing Childhood Obesity

After improving dramatically during the past century, the health of children and youth in the United States now faces a dangerous setback: an epidemic of obesity. It is occurring in boys and girls in every state, in younger children and adolescents, across all socioeconomic strata, and among all ethnic groups. Traditionally, most people have considered weight to be a personal statistic, of concern only to themselves or, on occasion, to their physicians. Both science and statistics, however, argue that this view must change. As researchers learn ever more about the health risks of obesity, the rise in the prevalence of obesity in children—and in adults as well—is increasingly becoming a major concern to society at large and hence a public health problem demanding national attention.

Since the 1970s, when the epidemic began to take hold, the prevalence of obesity has nearly tripled for children aged 6 to 11 years (from 4 percent to 15.3 percent), and it has more than doubled for youth aged 12 to 19 years (from 6.1 percent to 15.5 percent) and for children aged 2 to 5 years (from 5 percent to 10.4 percent). Although no demographic group is untouched, some subgroups have been affected more than others. Children in certain minority ethnic populations (including African Americans, American Indians, and Hispanics), children in low-income families, and children in the country’s southern region tend to have higher rates of obesity than the rest of the population.

Today, more than 9 million children over age 6 are considered obese, which means that they face serious immediate and long-term health risks. As they grow older, they are at increased risk of a number of diseases, including type 2 diabetes, cardiovascular disease, hypertension, osteoarthritis, and cancer. By being obese in a society that stigmatizes this condition, they also may develop severe psychosocial burdens, such as shame, self-blame, and low self-esteem, that may impair academic and social functioning and carry into adulthood.

Pared to its core, the solution is simple: Preventing obesity will require ensuring that children maintain a proper energy balance. This means that each child will consume enough of the right kinds of food and beverages and get enough physical activity to maintain a healthy weight while supporting normal growth and development and protecting overall health. Although this “energy intake = energy expenditure” equation may appear fairly simple, in reality it is extraordinarily complex. At work are a multitude of factors—genetic, biological, psychological, sociocultural, and environmental—acting independently and in concert.

Thus, combating the epidemic will be challenging. But there is precedent for success in other public health endeavors of comparable complexity and scope. Major gains have been made, for example, in reducing tobacco use, including preventing youth from smoking, and in improving automobile safety, including promoting the use of car seats and seatbelts to protect young passengers. Some lessons can be drawn from these efforts, past and current, and many new ideas and approaches will be needed to meet conditions specific to the task at hand. One overarching principle is clear: Preventing childhood obesity on a national scale will require a comprehensive approach that is based soundly on science and involves government, industry, communities, schools, and families.

Such an approach is detailed in Preventing Childhood Obesity: Health in the Balance, issued by the Institute of Medicine in September 2004. The report examines the various factors that promote childhood obesity, identifies promising methods for prevention, describes continuing research needs, and assigns responsibilities for action across a broad sweep of society. Its recommendations, when implemented together, will help keep the vast majority of the nation’s children physically active and healthy. Some highlights of the report are offered in the following sections.

Strengthening political muscle

As many other public health programs have demonstrated, catalyzing national action to prevent childhood obesity will require the full commitment of government at all levels. The federal government should take the lead by declaring this a top public health priority and dedicating sufficient funding and resources to support policies and programs that are commensurate to the scale of the problem. The government also should ensure that prevention efforts are coordinated across all departments and agencies, as well as with state and local governments and various segments of the private sector.

Toward this end, the president should request the Department of Health and Human Services (DHHS) to convene a high-level task force (including the secretaries or senior officials of all departments and agencies whose work relates in any way to childhood obesity) to be responsible for establishing priorities and promoting effective collaborations. In order to foster full and free communication, the task force should meet regularly with local and state officials; representatives from nongovernmental organizations, including civic groups, youth groups, advocacy groups, and foundations; and representatives from industry.

In addition to providing broad leadership, the federal government should take a variety of specific steps. For example, funding should be increased for surveillance and monitoring systems that gather information needed for tracking the spread of childhood obesity and for designing, conducting, and evaluating prevention programs. In particular, the National Health and Nutrition Examination Survey, which for years has been used to monitor the population through home interviews and health examinations, should be strengthened, with more attention being paid to collecting and analyzing data that will inform prevention efforts. Special efforts should be made through this and other surveillance systems to better identify and monitor the populations most at risk of childhood obesity, as well as the social, behavioral, and environmental factors contributing to that elevated risk.

Among other steps, the government should increase support for public and private programs that educate children, youth, and their families about the importance of good nutrition and regular physical activity. Similarly, federal nutrition assistance programs, including the Department of Agriculture’s (USDA’s) Food Stamp Program and the Special Supplemental Nutrition Program for Women, Infants, and Children, should be expanded to include obesity prevention as an explicit goal. Congress should request independent assessments of these assistance programs to ensure that each provides adequate access to healthful dietary choices for the populations served.

In addition, pilot studies should be expanded within these programs to identify new ways to promote a healthful diet and regular physical activity behaviors. Ideas include using special vouchers or coupons for purchasing fruits, vegetables, and whole-grain baked goods; sponsoring discount promotions; and making it possible to use electronic benefit transfer cards at farmers’ markets or community-supported agricultural markets. Test programs that prove successful should be scaled up as quickly as possible.

Congress also should call for an independent assessment of federal agricultural policies, including subsidies and commodity programs that may affect the types and quantities of foods available to children through food assistance programs. For example, concern has been expressed about whether the increasing amounts of caloric sweeteners (primarily derived from sugarcane, beets, and corn) that people are consuming are contributing to the obesity epidemic, and whether subsidies for these crops are promoting the production of inexpensive caloric sweeteners. These possible relationships warrant further investigation. If problems are confirmed in this or other cases, then the government should revise its policies and programs to promote a U.S. food system that supports energy balance at a healthy weight.

Preventing childhood obesity will require a comprehensive science-based approach that involves government, industry, communities, schools, and families.

For their part, state and local governments should join in making the prevention of childhood obesity a priority by providing the leadership—and resources—needed to launch and evaluate a slate of programs and activities that promote physical activity and healthful eating in communities, neighborhoods, and schools. One important step, for example, will be for governments to strengthen their public health agencies. As the front line of the public health system, these agencies are ideally positioned to assess the childhood obesity epidemic; to identify local conditions that are fueling it; and then to develop, implement, and evaluate prevention programs. In order to perform most effectively, however, many agencies will need restructuring to make them better able to work collaboratively with diverse community partners. Such partners can include schools, child-care centers, nutrition services, civic and ethnic organizations, faith-based groups, businesses, and community planning boards.

Harnessing the market

Children, youth, and their families are surrounded by a commercial environment that strongly influences their purchasing and consumption behaviors as well as the choices they make in how to spend their leisure time. Thus, a variety of industries (including the food, beverage, restaurant, entertainment, leisure, and recreation industries) must share responsibility for preventing childhood obesity. Government can help strengthen industry efforts by providing technical assistance, research expertise, and, as necessary, targeted support and regulatory guidance.

As a general goal, industries should develop and promote products, opportunities, and information that will encourage healthful eating behaviors and regular physical activity. In order to improve the “expenditure” side of the energy balance equation, the leisure, entertainment, and recreation industries should step up efforts to promote active leisure-time pursuits and to develop new products and markets. Such efforts can help to reverse the recent trend that has seen people spending more time in passive sedentary pursuits and less in active leisure activities. Some companies already are setting the pace, apparently convinced that fostering physical activity will help to create significant markets for their products. For example, Nike, a manufacturer of athletic apparel, provides funding to build or refurbish sports courts and other public athletic facilities nationwide and supports physical education classes in elementary schools, among other projects. More projects of this kind are needed.

In order to improve the “intake” side of the equation, the food and beverage industries should put more effort into developing products that have low energy densities and are appealing to consumers. Foods with low energy densities, such as fruits and vegetables, promote satiety and reduce total caloric intake, but they sometimes meet resistance in the marketplace, especially among people who have become used to foods of higher energy densities. Manufacturers, perhaps motivated by some form of government incentive, should continue to push for healthful new products that are more appealing to a range of people. They also should speed up modifying existing products—for example, by replacing fat with protein, fruit or vegetable puree, fiber, or even air—to reduce energy density but maintain palatability without substantially reducing product size. As another line of attack, manufacturers should develop new forms of product packaging that would help consumers choose smaller, standard serving sizes without reducing product profitability.

Full-service and fast-food restaurants have important roles to play as well, given that people are consuming an increasing share of their meals and snacks outside of the home. Among a range of steps they should take, restaurants should continue to expand their healthier meal options by offering more fruits, vegetables, low-fat milk, and calorie-free beverages, and they should mount information campaigns to provide consumers at the point of purchase with easily understandable nutrition information about all of their products. The industry also should explore price incentives that encourage consumers to order smaller meal portions.

Industry also should make better use of nutrition labeling, which has been mandatory since 1990, to provide parents and youth with clear and useful information that will enable them to compare products and make informed food choices. Here, government can help. The Food and Drug Administration (FDA) should modify the nutrition facts panels—the familiar information charts printed on food products—to more prominently display the calorie content of a standardized serving size and the “percent daily value” (the percent of nutrients contained in a single serving, based on a 2,000-calorie-per-day diet) of key nutrients. But in many instances, people consume all at once quantities that are much larger than a standardized serving size. This is often the case for vending-machine items, single-serving snack foods, and ready-to-eat foods purchased at convenience stores. Such consumers are left on their own to calculate the nutritional content of their purchases. To help them out, the FDA should mandate that manufacturers prominently add the total calorie content to the nutrition facts panels on products typically consumed at one eating occasion.
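
The serving-size arithmetic at issue is simple enough to sketch. The product figures below are hypothetical, and the daily values are the familiar reference amounts for a 2,000-calorie diet; the point is that per-serving numbers understate what a consumer takes in when a multi-serving package is eaten at one sitting.

    # Illustrative label arithmetic; the product numbers are hypothetical.
    DAILY_VALUES = {"fat_g": 65, "sodium_mg": 2400, "fiber_g": 25}  # 2,000-calorie reference diet

    def percent_daily_value(amount, nutrient):
        """Percent of the reference daily value supplied by one serving."""
        return 100 * amount / DAILY_VALUES[nutrient]

    # A single bag of snack food labeled as 2.5 servings.
    calories_per_serving = 150
    servings_per_container = 2.5
    fat_g_per_serving = 7

    total_calories = calories_per_serving * servings_per_container
    print(f"Calories if the whole bag is eaten at once: {total_calories:.0f}")   # 375, not 150
    print(f"Fat per serving: {percent_daily_value(fat_g_per_serving, 'fat_g'):.0f}% of the daily value")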

Of course, any consideration of industry’s impact on the choices that families and children make about eating and engaging in physical activities cannot overlook the role of advertising. Together, these industries are the second-largest advertising group in the U.S. economy, after the automotive industry, and young people are a major target. Current evidence suggests that the quantity and nature of advertisements to which children are exposed daily, reinforced through multiple media channels, appear to contribute to choices that can adversely affect their energy balance. Thus, industry has an important responsibility and opportunity to help foster healthier choices.

As a catalyst, DHHS should convene a national conference, bringing together representatives from industry, public health organizations, and consumer advocacy groups, to develop guidelines for the advertising and marketing of foods, beverages, and sedentary entertainment directed at children and youth. The guidelines would cover advertising content, promotion, and placement. They should pay particular attention to protecting children under the age of 8, as they are especially susceptible to the persuasive intent of advertising. Industry would then be responsible, on a voluntary basis, for implementing the guidelines. However, the Federal Trade Commission should be given the authority and resources to monitor compliance and to propose more stringent regulations if industry fails in its actions.

Building healthy communities

Many factors in the community setting affect the overall health and fitness of children and youth. Writ large, a community can be a town, city, or other type of geographic entity where people share common institutions and, usually, a local government. In turn, each of these communities contains many interdependent smaller networks of residential communities, faith-based communities, work communities, and social communities. Thus, there is a host of leverage points at which communities can help foster social norms that promote attitudes and behaviors that will help their young members maintain a healthy weight.

In one approach, community groups—or, ideally, community coalitions—should expand current programs and establish new ones that widen children’s opportunities to be physically active and maintain a balanced diet. Many youth organizations, such as Boys and Girls Clubs, Girl Scouts, Boy Scouts, and 4H, already have a number of programs under way that illustrate the gains possible. In one Girl Scout program, for example, girls who participated with their troops in nutrition classes, which included tasting sessions and sending foods home, were found to consume more fruits and vegetables on a regular basis. Youth groups also can help get more kids involved in physical activity by pursuing innovative approaches that reach beyond traditional competitive sports. These sports are not of interest to everyone, so it will be important for communities to expand their range of offerings to include noncompetitive team and individual sports as well as other types of physical activities, such as dance and martial arts. To ensure equal access to physical activity programs, communities should help families overcome potential obstacles by providing transportation, paying fees, or providing special equipment.

The nation cannot wait to design a “perfect” prevention program. Wide-ranging intervention programs are needed now, based on the best evidence available.

Communities also should take a hard look at their built environments and expand the opportunities for children to be physically active outside, especially in their neighborhoods. Creating places to walk, bike, and play will require not only providing adequate space but also reducing risks from traffic or crime. Local governments, private developers, and community groups should work collaboratively to develop more parks, playgrounds, recreational facilities, sidewalks, and bike paths. It will be especially important for communities to ensure that children and youth have safe walking and bicycling routes between their homes and schools and that they are encouraged to use them. Making such improvements often will require local governments to revise their development plans, zoning and subdivision ordinances, and other planning practices, and to prioritize the projects in their capital improvement programs.

Similarly, communities should expand efforts to provide their residents with access to healthful foods within walking distance, particularly in low-income and underserved neighborhoods. Some promising approaches include offering government financial incentives, such as grants, loans, and tax benefits, to stimulate the development of neighborhood grocery stores; developing community and school gardens; establishing farmers’ markets; and supporting farm-to-school and farm-to-cafeteria programs.

It is within local communities, of course, where most health care is provided, and health care professionals have an influential role to play in preventing childhood obesity. As advisors to children and their parents, they have the access and influence to make key suggestions and recommendations on dietary intake and physical activity throughout children’s lives. They also have the authority to elevate concern about childhood obesity and advocate preventive efforts. By conducting workshops at schools, testifying before legislative bodies, working in local organizations, or speaking out in any number of other ways, health care professionals can press for changes within their communities that support and facilitate healthful eating and physical activity.

In their everyday practices, health care professionals (pediatricians, family physicians, nurses, and other clinicians) should routinely measure the height and weight of their patients and track their body mass indices (BMIs). They then should carefully communicate the results to the children themselves, in an age-appropriate manner, and to their parents or other caregivers; provide information that the families need to make informed decisions about nutrition and physical activity; and explain the risks associated with childhood overweight and obesity.
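
The index itself is a simple ratio, which is what makes routine tracking feasible; interpreting a child’s value, however, requires the CDC’s age- and sex-specific growth charts, which are not reproduced in the illustrative sketch below. The visit data are hypothetical.

    # Minimal sketch of the BMI calculation a clinician would track over time.
    # Classifying a child's BMI requires CDC age- and sex-specific percentile
    # tables, which are not included here; this only computes the index.

    def bmi(weight_kg: float, height_m: float) -> float:
        """Body mass index: weight in kilograms divided by height in meters squared."""
        return weight_kg / height_m ** 2

    # A hypothetical child seen at two annual visits.
    visits = [
        {"age": 9, "weight_kg": 31.0, "height_m": 1.33},
        {"age": 10, "weight_kg": 38.5, "height_m": 1.38},
    ]

    for visit in visits:
        print(f"age {visit['age']}: BMI = {bmi(visit['weight_kg'], visit['height_m']):.1f}")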

In order to make sure that health care professionals are well prepared to provide quality services, medical and nursing schools should incorporate training with regard to nutrition, physical activity, and counseling on obesity prevention into their curricula. Training should happen at all levels, from preclinical science through the clinical training years and into postgraduate training programs and continuing medical education programs for practicing clinicians. Health care professional organizations also should make obesity prevention a high priority. Actions they should take to back up their commitment include creating and disseminating evidence-based guidance and other materials on obesity prevention and establishing programs to encourage their members to be role models for proper nutrition and physical activity. In addition, accrediting organizations should add obesity prevention skills, such as tracking BMIs and providing needed counseling, to the measures they routinely assess.

Health insurers and group health plans can make valuable contributions as well. Indeed, the high economic costs of obesity provide them with major incentives to encourage healthful lifestyles. Creative options may include providing member families with incentives to participate in regular physical activity, perhaps by offering discounted fees for joining health clubs or participating in other exercise programs. It will be particularly important for insurers and health plans to consider incentives that are useful to high-risk populations, who often live in areas where easy access to recreational facilities is lacking or costs are prohibitive.

Lessons for schools

Given that schools are one of the primary locations for reaching children and youth, it is critically important that the total school environment—cafeteria, playground, classrooms, and gymnasium—be structured to promote healthful eating and physical activity. Needs abound.

Schools, school districts, and state educational agencies should ensure that all meals served in schools comply with the DHHS and USDA’s Dietary Guidelines for Americans, which recommend that no more than 30 percent of an individual’s calories come from fat and less than 10 percent from saturated fat. Further, USDA should conduct pilot studies to evaluate the costs and benefits of providing full funding for breakfasts, lunches, and snacks in schools with a large percentage of children at high risk of obesity.

Increasingly, students are getting more of their foods and beverages outside of traditional meal programs. Many of these “competitive” foods, which are sold in cafeterias, vending machines, school stores, and fundraisers, or provided as snacks in classrooms and after-school programs, are high in calories and low in nutritional value. Current federal standards for such items are minimal. USDA, with independent scientific advice, should establish nutritional standards for all food and beverage items served or sold in schools. In turn, state education agencies and local school boards should adopt these standards or develop stricter standards for their schools. Enforcing such schoolwide standards will not only promote student health but also help establish a broader social norm for healthful eating behaviors.

Schools also need to reinvigorate their commitments to providing students with opportunities to be physically active. Many schools have cut physical education classes or shrunk recess times, often as a result of budget cuts or pressures to increase academic offerings. Students are paying the price. Schools should ensure that all children and youth participate in at least 30 minutes of moderate to vigorous physical activity during the school day. This goal is equally important for young children in child development centers and other preschool and child-care settings. Congress, state legislatures and education agencies, local governments, school boards, and parents should hold schools responsible for providing students with recommended amounts of physical activity. Concurrently, they should ensure that schools have the resources needed to do the job properly.

Among the actions that schools can take to get students more active, they should provide physical education classes of 30 minutes to an hour on a daily basis, and they should examine ways to incorporate into these classes innovative activities that will appeal to the broad range of student interests. Elementary schools, middle schools, and child development centers should provide equal amounts of recess. Schools also should offer a broad array of after-class activity programs, such as intramural and interscholastic sports, clubs, and lessons that will interest all students. In addition, schools should be encouraged to extend the school day as a means of providing expanded instructional and extracurricular physical activity programs.

Schools offer many other opportunities as well to help students avoid developing weight problems. They should ensure that nutrition, physical activity, and wellness concepts are taught throughout the curriculum from kindergarten through high school, and they should incorporate into health classes evidence-based programs that teach behavioral skills that students can use to make better choices about foods and physical activity. Federal and state departments of education, along with education and health professional organizations, can support this effort. These organizations should develop, implement, and evaluate pilot programs that use innovative approaches for teaching about wellness, nutrition, physical activity, and making choices that promote wellness, as well as for recruiting and training teachers to meet expanding needs.

Health clinics and other school-based health services also can play a prominent role in prevention efforts. In particular, they should measure yearly each student’s weight, height, and gender- and age-specific BMI percentile and make this information available to parents and to the student (when age-appropriate). It will be important that such data be collected and reported validly and appropriately, with the utmost attention to privacy concerns. The Centers for Disease Control and Prevention can help in this regard by developing guidelines that schools can follow in gathering information and communicating the results.

Family matters

Parents, defined broadly to include primary caregivers, have a profound influence on their children by fostering values and attitudes, by rewarding or reinforcing specific behaviors, and by serving as role models. The family is a logical target for interventions designed to prevent childhood obesity. This focus is made even more important by changes in society in recent decades that are adding pressures on parents and children that can adversely affect choices about food and physical activity. For example, with the frequent need for both parents to work long hours, it has become more difficult for many parents to play with or monitor their children and to prepare home-cooked meals for them.

Along with challenges, however, come opportunities and responsibilities. In order to promote healthful food choices, parents should make available in the home foods such as fruits and vegetables that are nutritious and have low energy densities and should limit purchases of items characterized by high calorie content and low nutritional value. Parents also should assist and educate their children in making good decisions regarding types of foods and beverages to consume, how often, and in what portion size. Similarly, parents should encourage their children to play outdoors and to participate in other forms of regular physical activity. By the same token, they should discourage their children from participating excessively in sedentary pursuits by, for example, limiting television viewing and other recreational screen time, such as playing video games, to less than two hours per day.

Among other actions, parents should consider the weight of their children to be a critically important indicator of health. Just as vaccination schedules require parental intervention during childhood, parents should be discussing the prevention of obesity with their health care providers to make sure that their child is on a healthy growth track. In practice, parents should have a trained health professional regularly (at least once a year) measure their child’s height and weight in order to track his or her BMI percentile. School health programs may be of critical help here, because many families lack insurance for preventive health services and cannot afford regular health screening. Underlying all of these efforts, parents should try their best to serve as positive role models by practicing what they are preaching.

Moving ahead

The epidemic of childhood obesity, long overlooked, now looms as a major threat to the nation’s health. Many stakeholders, public and private, are starting to take action to help slow and ultimately reverse its course. Preventing Childhood Obesity reviews progress and outlines a way to move forward in what must be viewed as a collective responsibility requiring an energetic and sustained effort. Some of the steps can be implemented immediately and will cost little. Others will cost more and will take longer to implement and to show the benefits of the investment. Some actions will prove useful, either quickly or over the longer term, whereas others are likely to prove unsuccessful.

But the nation cannot wait to design a “perfect” prevention program in which every intervention has been scientifically tested ahead of time to guarantee success. Wide-ranging intervention programs are needed now, based on the best available evidence. At the same time, research must continue to refine efforts. Briefly, research is needed to evaluate the effectiveness, including the cost-effectiveness, of prevention programs; to better understand the fundamental factors involved in changing personal dietary behaviors, physical activity levels, and sedentary behaviors; and to explore the range of population-level factors that drive changes in the health of communities and other large groups of people.

Thus, the path ahead will involve surveillance, trial, measurement, error, success, alteration, and dissemination of the knowledge and practices that prove successful. The key is to move ahead, starting immediately, on every front. As institutions, organizations, and individuals across the nation begin to make changes, social norms are also likely to change, so that obesity in children and youth will be acknowledged as an important and preventable health outcome, and healthful eating and regular physical activity will be the accepted and encouraged standard. Given that at stake is the health of today’s children and youth, as well as the health of future generations, the nation must proceed with all due urgency and vigor.

A Second Look at Nuclear Power

For more than three decades, energy policies in the United States and much of the Western world have been held in the ideological grip of a flawed concept: the notion that we can achieve sustainable energy by relying solely on conservation and renewable resources, such as wind, the sun, the tides, and organic materials like wood and crop waste. Born in the wake of the 1973 oil embargo and arising out of renewed commitments to environmental quality, this idea has an almost religious appeal. An unintended result is that the world has become ever more reliant on fossil fuels and therefore less able to respond to global warming.

Although the vision of a renewable energy future has obvious appeal, it simply hasn’t worked. Yes, energy efficiency has improved. We can now produce incremental gains in gross national product with much less energy than in the past, and electricity growth rates have been cut by more than two-thirds. But renewable energy sources have not come close to displacing fossil fuels as our primary source of energy. The failure is significant, eroding a fundamental premise on which modern energy planning is based. The long-term goal has been consistent: a supply adequate to meet global human needs while moving away from fossil fuels, ensuring environmental sustainability (especially reducing greenhouse gas emissions), and achieving energy security. Instead, we are moving unwittingly toward a fossil fuel future, exactly what we’ve been trying to avoid.

Renewable energy has been sold on the premise that it has significant energy potential that could be tapped inexpensively. Yet after 30 years of effort, even with significant social, political, and financial incentives, the energy contribution from renewable sources has not budged. In 2002, renewable sources supplied about 6 percent of U.S. total energy consumption, unchanged from the 6 percent they provided in 1970. And the bulk of that 6 percent is supplied by sources that are far from new: hydropower and wood waste.

From 1988 to 1998, U.S. wind, solar, geothermal, and hydropower grew at 27 percent per year, and the contribution to U.S. energy supply from nonhydro, nonbiomass renewable sources grew nearly 100-fold from 1980 to 1995. Even so, wind, solar, and geothermal energy accounted for only about 0.5 percent of the energy consumed in 2002. The contribution from fossil fuels did drop from 93 percent in 1970 to 85 percent in 2002, but it did so only because nuclear power made a substantial new contribution, supplying 8 percent of the 2002 energy consumption. Globally, the situation is similar. In 2000, nearly 90 percent of global energy came from fossil fuels.

Current forecasts project little improvement. In its Annual Energy Outlook 2004, the U.S. Department of Energy (DOE) expects coal, oil, and natural gas to provide 89 percent of all new U.S. energy through the year 2025. In fact, the fossil fuel share of total energy is expected to increase, from 85 percent in 2002 to 87 percent in 2025. The International Energy Agency’s (IEA’s) World Energy Outlook for 2002 paints a similar picture: Coal, oil, and natural gas are expected to provide more than 90 percent of all new energy from 2000 through 2030.

This failure to perform cannot be blamed on inadequate support. Since 1978, DOE has invested more than $10 billion in renewable technologies, supplemented with generous tax incentives and state subsidies. Added support has come from the private sector. Oil behemoths such as Exxon, Shell, Mobil, ARCO, and Amoco, as well as non-oil energy companies such as General Electric, General Motors, Owens-Illinois, Texas Instruments, and Grumman, have all tried to enter the renewable energy market.

But renewable energy production has been constrained by physical limitations that have resulted in consistently high costs, because the energy that renewable energy technologies collect is both diffuse and intermittent. New York City, for example, uses 10 times more energy than its land area collects in sunshine. Resources such as sunlight and wind require large elaborate systems of collection, conversion, transport, and distribution to make them available as electricity. Substituting wind power for the Indian Point nuclear complex that now serves New York City would require somewhere between 125 and 385 square miles of wind farms, depending on the quality of the wind site and under the dubious assumption that a suitable site is available in the region. Even that huge field would not be sufficient, because wind turbines operate only when the wind blows, making backup supplies from other sources necessary. In California, for example, 73 percent of wind output is generated during six months of the year. Overall, California wind fields produce only about 23 percent of their energy capacity, because they are idle so much of the time.
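
To see roughly where such land-area estimates come from, consider a back-of-the-envelope sketch. The plant rating, capacity factor, and wind-farm power density used below are illustrative assumptions, not figures taken from this article:

\[ \bar{P}_{\text{nuclear}} \approx 2{,}000\ \text{MW} \times 0.90 \approx 1{,}800\ \text{MW average output} \]
\[ A \approx \frac{1{,}800\ \text{MW}}{2.5\ \text{W/m}^2\ \text{(assumed average wind output per unit of land)}} \approx 7.2 \times 10^{8}\ \text{m}^2 \approx 280\ \text{square miles} \]

Varying the assumed power density over a plausible range of roughly 2 to 5 watts per square meter yields estimates broadly consistent with the 125-to-385-square-mile spread cited above.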

Because the market has failed, efforts are now being made to force a shift to renewable energy through legislated mandates coupled with direct subsidies. The European Union has set an aggressive target of 22 percent of electricity from renewable sources by 2020. Many countries, including Denmark and the United Kingdom, have enacted targets into law. A dozen U.S. states have followed suit, legislating goals for renewable supplies, with penalties if they are not achieved.

It is doubtful that these mandates will be fully successful. Unless the penalties are very high, it is often cheaper to pay the penalty than the high price of renewable energy. But even if they succeed, the energy future would not change dramatically. The IEA forecasts that, even with such mandates, more than 60 percent of all new energy will still come from fossil fuels during the 30-year forecast period, and such fuels will still supply roughly 80 percent of all energy in the final year. And this projection applies only to the developed countries, where renewable energy mandates have been popularized. Globally, 87 percent of incremental new energy will still come from fossil fuels during the period, and coal consumption is expected to increase by 42 percent.

The grim conclusion is unavoidable. Both in the United States and around the globe, our hope that renewable energy will displace fossil fuels has left us with a de facto fossil fuel energy policy.

A fossil fuel future

There are many reasons to be concerned about continuing dependence on fossil fuels, but the most pressing one is global warming. If there is urgency at all in addressing global warming, energy policies must shift to non-carbon-emitting resources more quickly. The environmentally favored energy source today is natural gas, because it is less polluting than coal and releases about half the carbon dioxide per unit of energy. However, when the total atmospheric carbon load continues to increase every year, it is little comfort to know that new energy sources emit only half as much carbon as the ones they replace. Moreover, in the rush to embrace natural gas, we have largely ignored environmental issues associated with exploration, drilling, recovery, and transportation.

There are growing concerns as well about supply vulnerability if we become more dependent on a single fuel source. Currently, the United States imports about 20 percent of its natural gas, mostly from Canada, and so far this amount seems manageable. As reserves in North America begin to dwindle, however, the United States will need to draw more heavily on distant sources. Russia, a problematic partner, has large reserves of natural gas (transportable as liquefied natural gas) and is one likely source. Dependence on natural gas also makes the United States more vulnerable to price spikes. Indeed, economic warning signs are already going up. As Alan Greenspan pointed out to Congress in 2004, the contract price for gas went from $2.55 per million Btu in July 2000 to $6.31 in July 2003, and there has been little relief since.

We didn’t plan it this way. Thirty years ago, no one intended that fossil fuels should dominate the energy supply as the new century advanced. Indeed, a major goal of energy policy planning was to avoid just such an outcome. This predicament was the unintended consequence of failing to see that conservation and renewable energy alone would not be enough.

Rethinking nuclear power

The one resource that might have made a difference is nuclear power. Despite the controversy it provokes, U.S. nuclear power quietly increased its contribution during the 1980s and 1990s, as plants ordered in the early 1970s were added to the grid. Twenty countries now depend on nuclear energy for more than 20 percent of their electricity, and nine countries count on it for more than 40 percent. Nuclear power remains the only mature and readily expandable source of energy that emits no carbon (or any other pollutant associated with fossil fuels). But because we cling to the belief that renewable sources will be sufficient, nuclear power’s contribution is predicted to remain static in the decades ahead. Should we not rethink the role that nuclear power might play?

The problems of nuclear power are well known. Many Americans remain concerned about questions of safety and the disposal of nuclear waste, as well as nuclear proliferation and economic viability. Given the urgency of finding alternatives to fossil fuel, however, it is worth reconsidering what nuclear power can actually offer. We need to be more candid as well about the extent to which ideological considerations have influenced our perception of nuclear power’s problems.

The real advantage of nuclear energy is its potency. One pound of uranium contains the energy equivalent of roughly one million pounds of coal. Such potency means that nuclear power’s energy potential is vast, clearly sustainable as a long-term resource. It also means that nuclear’s environmental impact is inherently low. With so much energy coming from such a small volume of material, producing nuclear fuel requires much less exploration, mining, transportation, and collection, with all their attendant environmental problems, than do fossil fuels. For example, a 1,000-megawatt nuclear plant requires one refueling per year, whereas a similarly sized coal plant requires 80 rail cars of coal per day. And because the process of releasing nuclear energy occurs entirely inside the small fuel pellets that make up a reactor core, airborne releases from nuclear power plants are insignificant. This difference gives uranium a significant advantage over fuels, especially coal, that burn and emit airborne effluents. From 1973 through 1996, nuclear power displaced enough coal to reduce sulfur dioxide emissions by 5.3 million tons, nitrogen oxide emissions by 2.5 million tons, and CO2 emissions by 147 million tons.
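
The rail-car comparison can be checked with round numbers. The thermal efficiency, coal energy density, and car capacity assumed below are illustrative, not figures from this article:

\[ \frac{1{,}000\ \text{MW}_e}{0.33} \approx 3{,}000\ \text{MW}_{th}, \qquad 3{,}000\ \tfrac{\text{MJ}}{\text{s}} \times 86{,}400\ \tfrac{\text{s}}{\text{day}} \approx 2.6 \times 10^{8}\ \tfrac{\text{MJ}}{\text{day}} \]
\[ \frac{2.6 \times 10^{8}\ \text{MJ}}{24\ \text{MJ/kg of coal}} \approx 1.1 \times 10^{7}\ \text{kg} \approx 11{,}000\ \text{metric tons}, \qquad \frac{11{,}000\ \text{tons}}{\approx 120\ \text{tons per car}} \approx 90\ \text{cars per day} \]

That is the same order as the 80-car figure above; the exact count depends on the coal grade and car capacity assumed.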

The environmental and human health advantages of nuclear power over coal—even including accidents and nuclear waste—are actually well known. In his 1990 analysis The Nuclear Energy Option, University of Pittsburgh physics professor Bernard Cohen lists no fewer than 23 studies comparing coal with nuclear power. These include studies by the American Medical Association, the U.S. Environmental Protection Agency (EPA), the Stanford Research Institute, the Norwegian Ministry of Oil and Energy, and the National Academy of Sciences. All of these studies came to the same conclusion: that coal was far more hazardous, both to the environment and to human health, than nuclear power. According to a 2004 report prepared for the Clean Air Task Force, as many as 26,000 U.S. deaths a year can be attributed to particulate emissions from coal-burning power plants. In terms of health effects, that’s roughly equivalent to one Chernobyl accident every two or three years. The report, which was intended to assess the relative effectiveness of policy approaches to reducing the harmful effects of coal combustion, estimated that even after federal action, coal-related deaths in 2010 would still range from 7,800 to 17,000, depending on the policy alternative adopted.

The overwhelming conclusion is that nuclear power is better than coal for both the environment and human health. That conclusion not only runs counter to the consistently shrill rhetoric from antinuclear activists, it says something far more telling: With their blind opposition to nuclear power and advocacy of policies that permit coal consumption to increase while nuclear power remains dormant, environmental groups have worked against their own stated objectives. No new nuclear plants are being built in the United States, but 94 new coal plants are in the planning stages. The story is the same across the globe. The Wall Street Journal recently reported a surge in coal consumption, particularly in China and India, as these developing giants feel the strains of rising oil demand. Using coal is the path of least resistance, given the current political resistance to nuclear power from the environmental community.

One astounding example of this is recent German energy policy. More than 50 percent of German electricity derives from coal burning; 12 nuclear power plants produce another 30 percent. Because its Green Party has become politically powerful, Germany has turned to the aggressive pursuit of wind and other renewable sources—not to reduce coal burning and coal pollution, but to shut down German nuclear power. Replacing 30 percent of German electricity supply with renewable energy is improbable in itself; but why target nuclear power when coal burning is by far the largest source of environmental contamination from electricity production?

Ideological blinders

Many analysts have attempted to explain the visceral hostility toward nuclear power, and the most common explanation is that people link nuclear power with nuclear weapons. Others say it is simply irrational fear. Although fear of unfamiliar technology is understandable, it hardly explains the organized opposition from those who are well educated and technologically literate and who have given the movement its legitimacy. There is, however, a different question one might ask: To what extent have such fears been exploited and encouraged by nuclear opponents for reasons that are more ideological than scientific? Two surveys taken in the early 1980s speak volumes on this question.

In 1982, a random survey of scientists listed in American Men and Women of Science sought to describe with some objectivity the attitudes of scientists toward nuclear power. The survey was conducted roughly a year and a half after the accident at Three Mile Island, a time when virtually every environmental organization, claiming to act on the best science, had lined up in opposition. At the time the survey was taken, a poll had reported that almost one in four Americans believed that a majority of scientists who are energy experts opposed further development of nuclear energy. For years the media had hammered home the message that there were deep divisions within the scientific community about nuclear power, a message that reinforced the legitimacy of the antinuclear movement. But the results of the scientist survey showed overwhelming support for nuclear power. Nearly 90 percent of the scientists surveyed believed nuclear power should proceed, with 53 percent saying it should proceed rapidly. So why would nearly the entire environmental community be on one side of the nuclear question while, overwhelmingly, scientists were on the other?

Six months later, another survey of attitudes toward nuclear power development focused on “opinion leaders.” Seven different groups were surveyed, each of which was assumed to play a key role in shaping opinions on nuclear power. Those surveyed included directors of major national organizations such as the Natural Resources Defense Council, Friends of the Earth, the Sierra Club, and Critical Mass, as well as leaders of important regional antinuclear groups.

Those surveyed were asked to rate the relative importance of 13 different areas of concern about nuclear power, including plant safety, risks to workers, high-level and low-level waste disposal, transportation, decommissioning, and proliferation. Every group except the nuclear opponents reported distinctions among the concerns, rating some quite important and others of little import. Opponents of nuclear power, on the other hand, considered virtually every item to be of critical importance. “Clearly the anti’s make few distinctions in their assessments of nuclear power’s dangers,” the researchers noted, “which raises the possibility that their views on these problems may be less the cause of their opposition to the development of nuclear energy than its consequence.” In other words, although the debate over nuclear power had been waged primarily on a technical front with arguments focused exclusively on technical issues, it seems likely that for many antinuclear activists their ideological position came first and the technical arguments were adopted to fit it.

These surveys have not been updated, so it is possible that attitudes may have shifted somewhat over the years. Even so, the rather remarkable alignment at the height of the controversy—virtually the entire environmental lobby on one side while virtually the entire group of scientists was on the other—strongly points to an ideological polarization that existed at the time and likely continues today. The link here is to a line of thought going clear back to Rousseau, with its evolutions through 19th-century romantics, 20th-century existentialists, and other individual thinkers, most prominently Nietzsche. The consistent theme has been hostility toward the “mechanical and soulless” world of science and the technologies that flow from it. During the 1960s, it resonated with writers such as Jacques Ellul and Herbert Marcuse, who saw our technological society as dehumanizing. Others such as Paul Ehrlich and Barry Commoner equated technological growth with a pending environmental crisis. Environmentalism itself changed, from a pre-1960s preservationist posture to a post-1960s attack on Enlightenment visions of progress, identified especially with technology.

This deeply felt philosophical position could help explain the harsh rhetoric. It is “modern technology with its ruthlessness toward nature,” as University of California, Los Angeles, historian Lynn White characterized it in a 1967 essay. The prominent psychologist Abraham Maslow attacked science as a “dead end” that had become a “threat and a danger to mankind.” E. F. Schumacher complained in his influential 1973 critique of modern society, Small is Beautiful, that humans are “dominated by technology,” and called technology a “force that is out of control … [It] tends to develop its own laws and principles, and these are very different from human nature.” The troubling consequence of these declarations has been a tendency to trivialize the enormous benefits in public health, material prosperity, and lengthened lifespan that science and technology have made possible. As a result, these ideologies have too often become barriers to developing and using the technologies humans really need.

A particularly revealing aspect of this has been the singular intensity with which environmentalists have opposed nuclear power, knowing full well it would mean a wider use of coal with its known environmental and human health disadvantages. Why would nuclear power receive such intense scrutiny when coal, too, supports industrial growth? A partial explanation for the difference in treatment is that coal combustion is a comfortingly familiar technology, whereas nuclear power symbolizes as nothing else the new world of technological advancement.

But nuclear power touches an even deeper ideological chord: mistrust of modern institutions. Nuclear power depends on functioning public institutions to ensure plant safety and to protect the public from radiation hazards. The political left, where environmental lobbies are most comfortable, does not trust these institutions. More basically, it mistrusts the values of modern Western society that these institutions embody, particularly capitalist economics and the reliance on science and technology.

This philosophical predisposition against technology explains, at least to some extent, why virtually the entire environmental lobby would have opposed nuclear power when the overwhelming proportion of scientists was on the other side of the issue. Many people today remain skeptical about nuclear power, even though recent polls show that as many as 73 percent of college graduates favor nuclear power, as do 65 percent of the general population. Much of the skepticism about nuclear power has been influenced by a relatively small activist environmental lobby that is motivated as much by ideology as by concerns with the technology itself. These ideological differences make it difficult, if not impossible, to find a common ground and work collaboratively to use technologies such as nuclear power to their full advantage. Rather than seeing nuclear power as a beneficial technology with problems we could solve together, they view it as anathema and oppose it without regard to its benefits. As one example, the legal system of reviews intended to protect the public became for them a vehicle for blocking nuclear power. As a result, by the 1980s the process had become so cumbersome that it took more than 15 years for most nuclear projects to be completed. That economic burden was too much to handle, so no new U.S. nuclear plants have been ordered since the 1970s.

Making regulation work

Reforms currently being enacted in the United States could make the regulatory system more effective. They include consolidation of required hearings, preapproval and “banking” of project sites, and preapproval and certification of standardized designs. Some advanced designs have now been certified and are expected to reduce construction costs significantly and to make plants safer to operate. A consortium of manufacturers and potential owners has been formed to test the workability of this revised regulatory process.

International evidence suggests that these changes will help. New plants continue to be built in countries such as South Korea, Finland, India, Brazil, China, and Russia, where nuclear power has not been stifled by overregulation. In 1996, Japan completed a plant that took only four and a half years to build and came in under budget. Some newer designs have been targeted for completion in three years.

Nevertheless, the politicization of nuclear power continues to compromise efforts to solve the biggest issue of nagging public concern: the disposal of nuclear wastes. The sustainability of nuclear power depends on an adequate approach to nuclear waste, one that serves the public purpose and is workable. The difficulty in the current approach is demonstrated by the fact that efforts to locate a suitable U.S. waste repository have been underway since the aborted attempt in Lyons, Kansas, in the early 1970s. After more than $9 billion in expenditures, it is still uncertain whether the current site at Yucca Mountain, Nevada, will ever be approved. Unless there is a recalibration of both the nature of the risk and the appropriate regulatory response, disposing of nuclear waste will remain a political quagmire.

The key question is what margin of safety is appropriate for nuclear waste, and the best way to answer that question is to think of nuclear waste in a broader context. In the current regulatory system, nuclear waste is treated quite differently than are nonradioactive hazardous wastes that pose similar long-term hazards. As the 1995 National Research Council report Technical Bases for Yucca Mountain Standards noted, “some nonradioactive substances are more persistent and can pose a greater hazard than many radionuclides.” Yet 60 million tons of nonradioactive hazardous wastes are generated annually, from chlorinated hydrocarbons such as PCBs, to petroleum products used in refining, to solvents and cleaning agents, to arsenic and beryllium, and finally to heavy metals such as lead, cadmium, mercury, and nickel. Even the most toxic of these wastes are permanently stored every year without the expense, litigation, or public concerns that have so constrained progress on nuclear waste. The public policy implications are significant. As participants in a 1998 workshop cosponsored by Johns Hopkins University and the Environmental Law Institute observed, the differences in approach between these two waste forms have left us with what amounts to “two cultures,” with separate and distinct regulatory regimes that have never been harmonized. One obvious difference is that under current regulations radioactive waste storage must consider scenarios for thousands of years, whereas the typical timeframe for nonradioactive hazardous wastes is 30 to 70 years. Although the EPA imposes a 10,000-year storage requirement in the limited situation of hazardous waste disposal in injection wells, even there supporting studies and processing of the petition can typically be completed in two years, and permits are regularly granted without fanfare. As workshop participants observed, these differences not only corrupt public decisionmaking, they “create tensions between regulators that lead to public resentment and mistrust of risk managers.”

Much of this is a consequence of the public perception that radioactive wastes are more dangerous, a perception heightened by the ideological controversy over nuclear power. If one is considering the short term, this is largely correct. Especially during the first 100 years, when 90 percent of the toxicity decays away, radioactive wastes require special treatment. But after 500 or 600 years, these wastes, especially if reprocessed, pose hazards that are comparable to those of many nonradioactive hazardous wastes. Ensuring safety for 500 years is a serious challenge, but it poses very different regulatory and safety issues than does safe storage for tens of thousands of years. Providing safety for longer periods should remain a priority, but it makes little sense to impose radically different regimes for two forms of waste if the long-term health risks are substantially the same. Changing this situation will be difficult, given established public concerns and regulatory processes for nuclear waste. The National Council on Radiation Protection and Measurements has stepped in and suggested a technical approach for consistently classifying the long-term risks of chemical and nuclear wastes, but the critical stumbling block is applying such a standard and removing the inconsistency in regulatory regimes. A credible evaluation by an organization such as the National Research Council that focuses on this dichotomy and makes recommendations for harmonizing the two regulatory approaches might create conditions in which a genuine policy dialogue could begin.

A second key is to reconsider the reprocessing of spent fuel, in which plutonium and uranium are chemically separated from the spent fuel so that they can be reused, as is done in France. Sustaining nuclear power for the long term will eventually require reprocessing to fully exploit the energy potential of uranium, because it opens the way to tapping the roughly 99 percent of uranium that is uranium-238 and is virtually useless in a once-through fuel cycle. Reprocessing also makes the disposal problem more manageable, because it reduces the long-term health risks and the volume of waste while lowering the heat loading on a repository during the early years.

Reprocessing generates legitimate concerns about the proliferation of nuclear weapons. Increased inventories of separated plutonium raise the risk that it might be diverted to nuclear weapons, a concern exacerbated by recent threats of terrorism. But even here some have argued that maintaining control over and ultimately consuming these fissionable materials offer a better approach to nonproliferation than burying spent fuel, which would create what are, in effect, plutonium mines for future generations. As Michael May and Tom Isaacs argue in their recent article, “Stronger Measures Needed to Prevent Proliferation” (Issues, Spring 2004), “a fuel cycle that minimizes the accumulation of weapons-usable material will be increasingly viewed as necessary for security.” What is needed is the opportunity to fully explore and develop proliferation-resistant fuel cycles as well as institutional controls such as international fuel leasing. Under a leasing scheme, “fuel cycle” countries that handle the entire fuel cycle would be subject to rigid international safeguards. Other “reactor” countries would be allowed to have nuclear power plants, but they would be “loaned” fuel to operate their reactors and be required to return the spent fuel to the fuel cycle countries, where it would be reprocessed. Such a scheme would greatly limit both the means and opportunities for reactor states to process and divert weapons-suitable materials.

The preferable near-term approach is to permit more latitude for aboveground dry storage. Not only would it allow time for cooling to ease the design of existing repositories, it would also permit serious reconsideration of reprocessing options. We could also evaluate more advanced technologies that involve the recovery of longer-lived materials and their destruction by irradiation in specially designed nuclear plants or accelerators, virtually eliminating the long-term risks. Here too, the greatest barrier is the entrenched ideological opposition to nuclear power. Its rhetoric has led to a false sense of urgency, which makes it politically difficult to consider policy alternatives that might delay permanent underground disposal. Until a repository is approved and operating, the waste issue will remain an impediment that nuclear opponents gladly exploit. For this reason alone, even with a move to make greater use of aboveground storage, efforts to locate and approve a suitable repository should continue simply to demonstrate its feasibility.

Reframing these important questions could be greatly assisted by the environmental community itself. A growing number of enlightened environmental leaders are beginning to appreciate the role that nuclear power might play in achieving environmental sustainability. Seeing beyond the rigid ideologies that have constrained us for decades, they could be of inestimable importance in helping to reshape the public dialogue. An example is James Lovelock, the biophysicist and public health physician who proposed in his Gaia hypothesis that Earth is a self-regulating organism. In a recent appeal to his fellow Greens, he wrote: “We cannot continue drawing energy from fossil fuels, and there is no chance that renewables, wind, tide, and water power can provide enough energy and in time.” Voicing his concerns about greenhouse gases, he concluded, “we have no time to experiment with visionary energy sources: civilization is in imminent danger and has to use nuclear—the one safe, available energy source—now or suffer the pain soon to be inflicted on our outraged planet.” Patrick Moore, a founder of Greenpeace, subsequently followed suit, stating that “nuclear power is the only nongreenhouse-gas-emitting power source that can effectively replace fossil fuels and satisfy global demand.”

Moving beyond ideology

Modern environmentalism has too often co-opted an idea that we all embrace—environmental quality—and used it to obscure an ideological agenda. One consequence is the way in which we define “sustainability.” Everywhere that sustainability is used to guide energy planners, it is limited by definition to “renewable” resources, which are the only sources considered to be adequate to meet future needs and to be environmentally benign. Not only has the first premise been shown to be wrong, the second assumption is questionable as well. It is now increasingly obvious that resources should not be given an environmental pass simply because they are renewable. Large hydro, for example, has come into disfavor because dams flood large areas of land, often eliminating communities or scenic beauty, and destroy fish habitat. Similarly, geothermal sites are often located in wilderness areas that environmentalists do not want to disturb.

Even the current environmental favorite, wind, is being challenged because of bird kills, aesthetics, and land use. Last year, several prominent environmental organizations issued a joint appeal to the U.S. Department of the Interior and the U.S. Fish and Wildlife Service complaining that uncontrolled wind expansion throughout the Appalachian Mountain ridges endangered hundreds of migratory bird species, running the risk that the area would “become a gigantic deathtrap for migratory songbirds and raptors.”

Renewability per se should not be the issue; sufficiency for the foreseeable future with minimal environmental impact should be. Renewable sources are certainly one part of the answer, but nuclear power is another. Nuclear power is the one energy resource currently capable of displacing fossil fuels on a large scale as well as promoting other environmental goals: minimizing pressure on land use and the accompanying environmental problems of resource recovery, and avoiding atmospheric emissions that contribute to global climate change and health problems. A few key policy actions will help us move in this direction: complete licensing reforms, harmonize waste regulations with those for other similar hazards that we manage, legitimize aboveground storage as an interim solution for waste management, and focus more policy attention on reprocessing and the development of proliferation-resistant fuel cycles.

The most critical step is to build a consensus among energy planners and policymakers that “sustainability” as a policy goal should include nuclear power. Bringing nuclear power back into the mix for energy planning means shedding ideological biases. It means openness of thinking to resolve the tension between the human desire for modernization and the global need for sustainability. It means ceasing to deceive ourselves about what might be possible.

Genomics and Public Health

Breakthroughs in biology are changing our world. Just as chemistry and physics had broad ramifications in the preceding centuries, the New Biology unleashed by the Human Genome Project and associated developments will send ripples through many aspects of 21st-century life and will be influential in improving the health of the public. The public health sciences will be essential for interpreting the health significance of genetic variation and the gene/environment interactions at the core of most diseases and biological phenomena. The combination of genomics and public health sciences will be critical to achieve the vision of predictive, personalized, preventive health care and community health services (Table 1).

Public and media interest was intense when President Bill Clinton and Prime Minister Tony Blair jointly announced accelerated progress by the public- and private-sector sequencing programs in June 2000. The details of those “blueprints” for the 25,000 to 30,000 genes of the human genome sequence were published in February 2001. These genes, through a variety of pathways, produce an even larger number of proteins, which then undergo numerous structural modifications critical to their functions; thus, the “proteome” is much more numerous and complex than the genome.

Raw genome information reveals very little directly about human health. Genome sequence information must be linked with information about nutrition and metabolism; behaviors; diseases and medications; and microbial, chemical, and physical exposures from the environment in order to understand the environmental/genetic interactions that ultimately affect human health (Table 2). Broadly, genetics and public health share salient attributes. Both focus on populations. Both seek to elucidate the larger patterns to be found among individual variations in genetic predispositions to diseases, sensitivity to environmental exposures, and responsiveness to preventive and therapeutic interventions—within and across population subgroups. Both are aware of the legacy and risks of discrimination on social and racial grounds. Thus, both explicitly recognize the importance of cultural, societal, and ethnic contexts, often explored as part of the Human Genome Project’s trailblazing investments in its Ethical, Legal, and Social Implications (ELSI) component.

Viewing health care as a part of public health, we can see that genetics provides a bridge between medicine and community-based public health, most directly in the setting of clinical genetics. Counseling and treatment of individual patients often must be expanded to nuclear or extended families. Screening for genetic predispositions involves workers and other populations suspected of having higher-than-background risks because of their exposure to potential hazards, their family history, or their ethnic background. Community outreach should be aimed not only at earlier diagnosis and treatment but at preventing problems by reducing nongenetic risk factors. The accelerating pace of discoveries and applications has put a premium on education about genetics for health professionals and education about public health for geneticists. Both disciplines have critical roles to play.

Epidemiology and randomized trials

During the past 15 years, there has been a remarkable transformation of epidemiology, highlighted by a dramatic move beyond simple statistical associations to cause-and-effect research that aims to identify the mechanisms of disease. Gene expression and protein expression are likely to be very useful early indicators of a developing disease. If this research is successful, then it will be possible to use gene or protein expression as a biomarker that will allow clinicians to identify preliminary signs of diseases with long latency periods before clinical symptoms and signs become manifest. Once etiological hypotheses have been generated and tied to credible potential mechanisms, investigators can devise clinical trials of prevention strategies that modify or remove relevant risk factors. Possible interventions include behavior change, such as helping smokers to quit. Other options include the use of antioxidant vitamins, natural products such as folic acid, or pharmaceuticals such as COX-2 inhibitors for chemoprevention.

Finding effective chemical interventions that do not have undesirable side effects is not easy. In the past few years, postmenopausal estrogen medications to reduce heart attack risks, beta-carotene to prevent lung cancers, and COX-2 inhibitor drugs to prevent the recurrence of colon polyps have been found to produce serious adverse effects. On the positive side, several new anticancer drugs (Gleevec, Herceptin, Iressa, and Avastin) successfully target specific molecular mechanisms and help well-defined patient subgroups. When the target and mechanism are precisely identified, the likelihood of unintended consequences is greatly reduced. For infectious agents, vaccines can be especially effective preventive interventions. For diseases related to environmental and occupational exposures, an effective combination is monitoring aimed at reducing emissions and exposures plus periodic checkups for workers to identify early clinical or laboratory indications of disease.

Table 1. Grand challenges for genomics and public health

  • Strengthen prevention in the public health/ clinical medicine continuum
  • Recognize heterogeneity among patients and populations
  • Initiate large-scale population studies, like the UK BioBank
  • Integrate genetic, environmental, and behavioral factors in preventing and treating illnesses and injuries globally
  • Make healthcare and community health services predictive, preventive, and personal

Molecular epidemiology and clinical research studies using “gene chips” to conduct global analyses of gene expression have yielded patterns that reveal striking differences between normal cells and cancer cells of the prostate, lung, breast, and other tissues. Different patterns are found also between tumor specimens from patients with localized cancers as compared with patients with invasive or metastatic cancers of the same type. The ability to distinguish among tumors that appear identical to the pathologist and surgeon should make it easier to prescribe appropriate treatment much earlier in the progression of disease and thus improve prospects for the patient.

Table 2. A golden age for the public health sciences

  • Sequencing and analyzing the human genome is generating genetic information on variation among people that must be linked with information about:
    — Nutrition and metabolism
    — Lifestyle behaviors
    — Diseases and medications
    — Microbial, physical, and chemical exposures

The next critical step for epidemiology and clinical trials will be the identification of circulating biomarkers, particularly proteins, that make it feasible to diagnose cancers (or other diseases) much earlier. These protein biomarkers might be tumor proteins secreted or released during cell turnover. They also might be autoantibodies that the body makes to combat tumor antigens; in this case, the immune response represents a much-needed “biological amplification,” with much higher concentrations of the antibodies than of the original tumor antigen in the circulation. In fact, the most favorable outcome will be achieved when we can diagnose a developing cancer before it grows large enough to be visualized by x-ray or other imaging, have a molecularly targeted treatment for the particular mechanism of that tumor, and have sufficient confidence in the diagnosis and in the treatment’s effectiveness to treat the patient without waiting for the tumor to grow, develop additional mutations, and metastasize.

Information technology

Database structures, software tools, data mining, and molecular modeling are critical elements of modern molecular research. Generally, they need to be coupled with sophisticated statistical design and analyses. Thus, scientists from a wide swath of mathematical fields have been recruited to studies of the genome, gene expression, protein expression, and metabolic patterns in health and disease. Under the broad banner of “systems biology,” researchers seek to link studies at each of these molecular levels with physical manifestations of disease or health problems that can be observed by a physician. As the biomedical literature has grown, natural language processing has emerged as a valuable approach to automated searching of the literature and databases.

An example of the emergent properties of databases is a report by graduate student Dan Rhodes and his faculty colleagues at the University of Michigan. They merged results of genetic analyses of many different types of cancers and uncovered several types of gene overexpression that were common to more than one type of cancer. This opens the possibility of developing therapeutic interventions that could be successful in some patients across a variety of cancers. Drugs that might work on several different kinds of cancers are of particular interest in overcoming what has been called the “pharmacogenomics nightmare” for drug development companies. We are finding that individuals can differ significantly in their response to therapies. For example, the drug Herceptin is particularly effective for the 15 percent of breast cancer patients who overexpress a specific chemical receptor, and the drug Iressa works only in the 10 percent of lung cancer patients with a mutation in a particular part of a different receptor. Patients without these specific molecular features will not respond. From the company perspective, a drug that works on only 10 percent of lung cancer patients has a far less attractive market than one that works for all patients, or at least is prescribed for most patients on a trial basis. On the other hand, a drug that is effective in treating lung cancer in patients with a specific genetic characteristic might also be found to be effective against other types of cancers with that characteristic. In that case, a company might find that the market for the drug is attractively large, and the overall benefit for patients could also be much greater, with a far more favorable benefit/risk ratio.

Ecogenetics

After decades of polarized views pitting “genetic” versus “environmental” or “nature” versus “nurture” as the cause of various diseases, there is now widespread recognition that the more appropriate concept is “ecogenetics”—the realization that interacting genetic and environmental factors together influence predisposition to, or resistance to, developing specific diseases. The National Institute of Environmental Health Sciences has led the development of the subfields of “toxicogenomics” and now “toxicoproteomics.” The goal is to identify molecular signatures for exposures, early effects, and differential susceptibility to chemical agents that cause cancers, mutations, birth defects, and organ system dysfunction. This work is still in an early phase.

Ecogenetics fits into broad public health constructs for dealing with health risks of environmental origin. In fact, the statutory and regulatory rationale for ecogenetics studies is quite explicit. The Occupational Safety and Health Act of 1970 mandated that health standards be set “such that no worker shall suffer adverse effect … [even] if exposed at the maximal permissible level for a working lifetime [45 years].” Like other physicians who see patients with workplace exposure-related clinical conditions, I have heard patients ask the logical question, “Why me, Doc? I’m no less careful than the next guy.” That is a question that we hope to answer by exploring what factors determine individual susceptibility.

The Clean Air Act Amendments of 1977 required that allowable levels of ozone, nitrogen dioxide, carbon monoxide, sulfur dioxide, lead, and particulate matter be set “so as to protect the most susceptible subgroup in the population.” That proviso can be met only if there are studies to define the most susceptible subgroup and the levels of exposure that are hazardous for that subgroup. In the case of the ozone standard in inhaled air, the Environmental Protection Agency (EPA) based its update in 1979 on susceptibility for the large population subgroup with asthma, bronchitis, or emphysema (3 to 5 percent of the general population); it could instead have chosen persons with cystic fibrosis, a smaller and probably more susceptible subgroup that had not been studied. Finally, the Food Quality Protection Act of 1996 requires EPA and Food and Drug Administration regulators to address risks for vulnerable or unusually exposed subgroups. In response, the EPA has given special attention to the estimation of risks to children from exposures to pesticides or pesticide residues.

The place of individual differences in susceptibility to environmental agents was highlighted 25 years ago in a report from the White House Office of Science and Technology Policy (Table 3) and expanded in the much-cited 1983 National Research Council report Risk Assessment in the Federal Government: Managing the Process, commonly called “The Red Book.” Later, a Presidential/Congressional Commission on Risk Assessment and Risk Management created a six-stage Framework for Risk Management that introduced three enhancements: (1) putting each [new] problem into broader public health or ecological context; (2) engaging stakeholders from the start, in order to better inform the characterization of risks, development of options, decisions for risk reduction, and support for recommended actions; and (3) insisting on a post-implementation evaluation of benefits, costs, and unintended hazards.

One of the most active areas of genomic research has been the elucidation of genome sequences of many dozens of bacteria and fungi. Genomics has provided insights about gene content, repetitive sequences, sequence similarities, mobile genetic elements, and large numbers of genes of previously unknown function. Genes that are unique to a given species or virulent strains of a species are potential targets for selective therapy and vaccine development. In general, organisms that need to survive diverse environments have larger genomes with comprehensive biosynthetic pathways, whereas obligate parasites tend to have smaller genomes with adaptations that facilitate an existence entirely dependent on their hosts. Some of these adaptations are clearly important to public health. For example, Mycobacterium tuberculosis has genomic expansions of enzymes involved in lipid metabolism and cell wall biogenesis, which facilitate resistance to anti-tuberculosis (TB) drugs; this organism uses enzymes that seem to enhance its survival in the lung tissue of humans. Various disease-causing organisms contain “pathogenicity islands”: regions of 10 to 200 kilobase pairs with distinctive features that are determinants of bacterial virulence. Conversely, the human host has polymorphisms in genes that alter susceptibility to infections and response to antimicrobial drugs. There are quite prominent examples of ecogenetic relationships between variation in susceptibility and the infectious agents of malaria, TB, HIV-AIDS, cholera, and meningitis-otitis. With the combined Gates Foundation/National Institutes of Health Grand Challenges Initiative on Global Infectious Diseases, the biology, host variability, and targets of opportunity for new drugs and new vaccines are topics of greatly increased salience. The anticipation of bioterrorism threats puts a high premium on related studies of infectious agents and counterterrorism strategies.

Table 3. Framework for risk assessment & risk reduction

  • Hazard identification: epidemiology; lifetime rodent bioassays; short-term in vitro/in vivo tests; structure-activity relationships
  • Risk characterization: potency (dose/response); exposure analysis; variation in susceptibility
  • Risk reduction: information; substitution; regulation/prohibition

(Source: Calkins et al., Office of Science & Technology Policy, 1980; J Natl Cancer Inst 64: 169-175.)

Nutrition and genetics

The diet is a key source of environmental variables: both nutritive factors and contaminants. Genomics and proteomics can help bring modern biology, chemistry, toxicology, and epidemiology to nutritional sciences. We already know in a general sense that genetic factors are important in common diseases with substantial dietary influences, beginning with obesity, diabetes mellitus, and heart disease; and we know that components of foods can induce carcinogen-activating and detoxifying enzymes that exist in variant forms. We are learning more about specific enzymes. For example, research has revealed that a variant of one enzyme is associated with low folate levels and with related abnormalities that lead to increased risk of cardiovascular disease and death. Low folate levels are also associated with higher risk of colon cancer in women with a family history of common colon cancers. Treatment with folate is expected to reduce these risks. Administration of folic acid supplements to women before they are pregnant has been demonstrated in clinical trials in multiple countries to markedly reduce the incidence of very serious neural tube closure birth defects such as spina bifida. In fact, that benefit is so dramatic that in 1997 the United States and Canada mandated the fortification of flour products in the food supply with folic acid to ensure that the entire population gains the benefit. Genetic testing can also identify individuals with a predisposition to develop hemochromatosis, a tendency to absorb excessive amounts of dietary iron, with subsequent iron overload and deposition in various organs. Treatment is simple and inexpensive: Just donate a unit of blood every few months. Ironically, a large-scale trial at Kaiser Permanente in California showed that most people with genetic predisposition to this iron-overload disorder are minimally affected, putting a halt to plans for widespread testing of the population.

Evolutionary aspects of nutritional genetics are quite significant. Our current related epidemics of obesity and diabetes mellitus surely reflect the consequences of the common availability of excess calories for a species that evolved with a need to sustain glucose levels during prolonged and highly variable intervals between eating and to survive frequent famines. Humans evolved to survive famine, not feast. Population differences in digestion of the sugar lactose in milk reflect different times of domestication of milk-producing animals and the introduction of milk after weaning as a major component of the diet. To this day, large proportions of non-Caucasian populations continue to turn off the lactose-digestion enzyme after weaning and so tend to be lactose-intolerant, with digestive discomfort when drinking substantial quantities of milk.

We are also gaining insights into the role of genetic factors in behaviors. There are numerous publications about genetic polymorphisms in dopamine receptors and many other functions related to cigarette smoking and the complications of smoking. Genetic variation in alcohol metabolism and in immediate and long-term organ damage from excessive alcohol intake is known, as well. There can be no doubt that there is important genetic variation in other unhealthful behaviors, probably including a lack of interest in physical activity.

Health services research

An important part of public health research is focused on the organization, effectiveness, ethics, and costs of community-based and clinically based health care services. As many diagnostic and prognostic genetic and protein tests are introduced and clinical genetic services are extended to more people, society will need well-framed research on what works and what does not, what is safe and what is not, and how best to make useful tests and services cost-effective against the backdrop of burdensome total health care costs and inequitable access to health insurance and health care. In general, information will be needed about the heterogeneity of genetic predispositions and their interactions with nongenetic exposures and other factors that influence disease risks and responses to treatment and preventive interventions. In pursuing these objectives, we must anticipate and recognize cultural, social, ethnic, and racial context to avoid discrimination based on genetic and related information. Community-based research studies should be designed to embrace the following principles: involve community partners from the earliest stages; ensure that community partners have real influence on the project; invite community members to be part of the review, analysis, and interpretation of findings; make sure that relevant research processes and outcomes benefit the studied community; make productive partnerships last beyond a single project; and empower community members to initiate projects.

In the United Kingdom, discussion and planning for a BioBank with specimens and information from 500,000 individuals and their families began in 2001. In Iceland and more recently in Estonia, prospective genotype/phenotype studies are already under way, with quite a lot of speculation about the for-profit interests of deCode Genetics in Iceland and access by others to the proprietary findings. The European Union has instituted quality assurance and harmonization of genetic tests, though laboratory participation so far is disappointing. In the United States, the National Institute of Child Health and Human Development has launched a long-term children’s study, the National Human Genome Research Institute is in the early stages of planning for a gene/environment cohort study, the National Cancer Institute has held a workshop with a similar thrust, and the National Institute of Environmental Health Sciences is assessing personal and community exposure measures for such studies. Planning is likely to take at least several years, given the complexities and costs. Complex choices are required about numbers of individuals to enroll, of what ages, with or without other family members; and how to collect, store, analyze, and share the specimens and data while earning the confidence of participants that their personal data will be kept confidential. At the same time, some participants and their advocates demand that the researchers report in real time to participants any findings that might have serious clinical implications. The principles mentioned above for community-based research should be applied to planning for these prospective studies.

Policy choices

The broad public/private Partnership for Prevention issued a report in 2003 called Harnessing Genetics to Prevent Disease and Improve Health: A State Policy Guide (www.prevent.org). Its stated aims are to help state policymakers to protect consumers; monitor the implications of genetics and genomics for health, social, and environmental goals; and ensure that genetic advances will be tapped not only to treat medical conditions but also to prevent disease and improve health before people become ill. The report is optimistic that the genomic era can lead to personalized health care and pharmacogenetics-enhanced drug development to prevent or better manage chronic diseases, with products and services that include diagnostic tests, drug therapies, and drug-monitoring protocols.

Its key policy finding is that genetics and genomics should be integrated into existing health, social, and environmental policies, rather than establishing stand-alone genetics programs. Policymakers at the state and federal levels should follow the example of Michigan, where a Governor’s Commission on Genetic Policy and Progress adopted an integration perspective and urged that genetic issues be dealt with in the context of overall medical care values and principles.

The case for integration is strong. All health conditions have a genetic basis. Most common diseases result from gene/environment interactions, so genetic advances are likely to extend and expand, not supplant, current practices in medicine, public health, and environmental protection. Because there is wide variation in the extent to which genetic factors affect health risks, a one-size-fits-all policy is inappropriate. Decisions about genetic policies involve complex issues about ethics, costs, benefits, and individual and societal interests. Medical care decisions should be linked with research, insurance, and broader public health policies. Finally, the intersection between genetics and public policy is both immediate and long-term, warranting close monitoring and timely actions in a broad context. Nevertheless, special legislative action seems warranted to prevent discrimination based on genetic tests or traits by insurance companies or employers.

Table 4. Components of the vision of genomics and public health

  • An avalanche of genomic information
  • Better environmental and behavioral datasets linked with genetics for eco-genetic analyses
  • Credible privacy and confidentiality protections
  • Breakthrough tests, vaccines, drugs, behavior modification methods, and regulations to reduce health risks and cost-effectively treat patients in the United States and globally

As an Institute of Medicine committee recommended in Who Will Keep the Public Healthy? Educating Public Health Professionals for the 21st Century (2002), the states should invest in reciprocal training in genetics for health professionals and in public health for geneticists. The avalanche of genomic information that links specific genetic factors with disease risks will only grow larger. Thus, it is essential, even urgent, to identify, develop, and enhance environmental and behavioral data sets for eco-genetic analyses and make sure that they can be linked to data on genetic variation. If the genetic data are made anonymous, such linkages and the capacity to link genes with risks (genotypes with phenotypes) will be undermined. Simultaneously, a foundation of credible privacy and confidentiality protections, not just for genetic information but for all personal and family medical information, must be established.

We envision breakthrough tests, vaccines, drugs, behavior modification strategies, environmental exposure reductions, and epidemiological surveillance to reduce health risks and cost-effectively manage illnesses and disease risks in the United States and globally (Table 4). But we should not let our enthusiasm for these potential benefits distract us from the slow but necessary work of ensuring that we use these tools wisely for the benefit of the greatest number of people.

The View from California

If what is happening in California is a leading indicator, and it usually is, many critical science and technology (S&T) policy debates are migrating from Washington to state capitals and even to local polling places. Unfortunately, the procedural, institutional, and human capacity for informed policymaking at these levels is often not as well developed as it is at the national level, which can result in confusing, contradictory, and short-sighted policy outcomes. Therefore, some attention to the health and functioning of non-national political processes and institutions, especially in California, is warranted.

Although federal dollars will always keep a good number of S&T policy decisions in Washington, state and local government leaders will increasingly be called on to guide us through many tough public conversations. As the state with its face to the future and its home on the technological frontier, California will likely have considerable influence on the direction and scope of the local, national, and even global public response to innovation. The state’s leadership position in R&D—first among the states in federal, industry, and total R&D funding—means that new technology products and processes, and the concurrent social upheaval that inevitably comes with them, will likely happen here first.

S&T policymaking is already a staple of the California government’s diet. Over the past decade, state lawmakers introduced an average of 270 bills per session (about 12 percent of all legislation) with some science or technology angle. California has also helped push the national envelope on issues such as financial privacy and identity theft. This year, Sacramento will decide whether to use the Global Positioning System to track criminals, whether to make peer-to-peer software makers liable for the illegal uses of their technology, and whether to stop Internet service providers from scanning the content of their subscribers’ e-mail messages.

California’s citizens are in on the act as well. Although the 2004 statewide stem cell proposition dominated national headlines, there is also plenty of policymaking at the local level. For example, opponents of genetically modified (GM) products have been using a series of county-level ballot initiatives to institute what could eventually become a de facto statewide ban on GM foods, crops, and animals. Voters in Mendocino and Marin counties passed bans on GM products last year, and another dozen counties could be considering similar measures in the next few years.

California’s solutions to the public challenges posed by S&T will inevitably shape overall policy trends, possibly for centuries to come. The economic and legal impact of these measures will certainly not stop at the state border, which is why even those who live and work outside the state should be watching it carefully.

What’s at stake?

California’s S&T-based economic leadership is renowned. These industries contribute $159.7 billion to the gross state product and employ 1.29 million people in the state. Less widely known are the factors that drive California’s high-tech engine: the relative availability of funding for research and startup companies, the state’s education system and well-educated workforce, and the effects of industry clustering. Nearly all these inputs can be advanced or hindered by government policy, and some, such as research funding for state universities and the state education system, are directly attributable to policy and spending decisions made by state officials. Unenlightened thinking on the part of state leaders, especially as it relates to the research and education budget, could result in short-term solutions that ultimately jeopardize the state’s long-term prosperity.

In addition to the economic and investment policy questions that state leaders must grapple with, there are some very dark clouds building at the nexus of environment, social issues, and S&T advances in California. Public backlash against emerging technologies, as seen in the county-level movement to ban GM products and the several small local skirmishes over nanotechnology, remains as constant a factor as ever. And given the historic tendency for deep political polarization over some S&T advances and the pervasiveness and convergence of new technologies, numerous small fights breaking out all over the state are all but inevitable. If state leaders do not handle these clashes wisely while there is still time to make minor, relatively painless course corrections, they could grow into vast, intractable, and expensive battles.

One would think that California would have every reason to do it right, to have the most advanced public institutions and the very best decision support for S&T policy. The state sets global policy trends, it has an economy and state budget that depend on high-tech industries, and very few politicians actually like being torn between warring religious, business, and environmental constituencies. Yet the state’s S&T governance structures serve best as a model of what not to do.

Even though California is a global leader in S&T innovation, S&T policymaking ranks relatively low on the state government’s priority list. California spent the past five years gutting and then demolishing the Division of Science, Technology and Innovation (DSTI) at the now-closed state Technology, Trade and Commerce Agency. DSTI was the only state government unit charged with a broad mandate to track, support, inform, and provide coherence to state S&T policy. Though still a fledgling entity when it was axed, DSTI guided initiatives in biomass, next-generation Internet, rural e-commerce, high-tech manufacturing, and aerospace, among others.

Most important, DSTI served as an advocate for S&T policy within the executive branch, a function that is now nearly lost. This loss has been clearly felt in the relatively narrow agenda of California’s Governor Arnold Schwarzenegger. The governor’s S&T priorities have been limited almost entirely to hydrogen fuels and stem cells. When compared to the governors of other states, such as Michigan’s Jennifer Granholm and Ohio’s Bob Taft, or even to his predecessor, Gray Davis, Schwarzenegger meets only the bare minimum for vision and thoughtfulness in state S&T policy.

The state’s continual budget problems have undermined other tech-critical programs and policies. The state-funded research budget for the University of California was cut for three successive budget years by 10, 10, and 5 percent. Even more troubling, California now ranks 21st in per-capita university-based R&D in engineering, down from 12th five years ago. University-based research in engineering was a critical component of California’s technology leadership in the latter half of the 20th century.

The legislature

For its part, the state legislature, which would be the most appropriate platform for public discourse on the implications of S&T, is unequal to the task. S&T policy often requires sustained attention over long periods, yet California’s term-limits law has shortened average service time from more than eight years to less than three. Too many legislators leave just as they become conversant with complex S&T issues, and there is no formal mechanism for mentoring or training new members to help them get up to speed quickly. S&T policymaking in California now depends almost entirely on the random attentions of a rapidly churning body of members.

Further, unlike the 17 other states with some form of dedicated standing committee on S&T policy, California’s legislature shuttles S&T-related bills through multiple standing committees with multiple jurisdictions and competing priorities. On the surface, it might seem admirable that the legislature has incorporated S&T concerns so well into its process that there is no need for a specialized S&T committee. Unfortunately, this is not the case. Bills with specific S&T implications receive some analysis during the standing committee process, but they typically do not receive the formal or expert assessment that they often require.

Even on those occasions when a standing committee chair is personally interested in devoting adequate attention to S&T complexities, term limits make sustained attention impossible. As in the legislature as a whole, the average service time of committee chairs has dropped precipitously from roughly 10 years to less than 3 years.

And although some California legislators will jockey for positions on tech-centered study committees, they do so at a high cost in political capital for a questionable policy gain. Although these study committees do examine issues closely, as a rule they are created in reaction to already well-known issues such as privacy or genetics, their jurisdictions are narrowly defined, their power is limited, their reports often gather dust, and they invariably sunset after the lead legislator terms out.

Finally, other than the authoring of bills, which brings media attention, campaign contributions, and constituent approval, the legislative process for S&T policy in California offers few routine institutional rewards for members who are willing to engage tough long-term issues. In other policy areas, such as health or education, legislators who assume leadership are rewarded with signs of power, such as a committee chairmanship, a higher media profile, and a cadre of well-trained staff. Without such benefits, and within the context of California’s inefficient, ad hoc S&T policy process and term-limited environment, few state legislators are willing, or even able, to tackle California’s most challenging issues.

Without supportive institutions in place in the state government, tough policy decisions might increasingly fall to the least nuanced means of policymaking: the ballot initiative. Complex issues such as GM foods and stem cell research are not suited to up or down votes. There are too many trade-offs to balance and consider. Stem cell voters, for example, were not offered the chance to determine whether they would rather spend their $3 billion investment on alternative energy research or green technology, industries that, like stem cell research, also have the potential to profoundly improve lives. Neither were voters asked if they might be willing to increase taxes to pay for the investment, rather than to finance it with bonds, or if they would consider a larger or smaller investment. These kinds of negotiations are possible only in the legislative process.

Leaving tough S&T policy decisions to the initiative process also has a practical downside. If California counties establish a patchwork of regulations around the state, it will become increasingly difficult for researchers and businesses to figure out and abide by community norms and rules. Tech-based development will certainly slow and possibly stall out altogether.

Structural reform

The bottom line is that it is simply too easy for California policymakers to dismiss S&T policy as “not my concern.” California’s S&T policy failures, as well as its successes, can be blamed on everyone and no one, which makes it exceptionally difficult for voters to assign responsibility for some of the toughest and most intractable public challenges. When so much is at stake, this is simply unacceptable.

Any solutions to California’s S&T policy failures, however, must meet some basic good governance standards. First and foremost, state policymakers need the intellectual capacity to make informed decisions. They need expert advice presented in nontechnical terms, in real time, and with a clear understanding of public values, preferences, and flash points.

Second, citizens must be able to easily identify, and gain access to, elected officials who are directly and uniquely accountable for the state’s overall S&T policy. Those officials should provide a neutral public platform for safe, respectful, and thorough dialogue on controversial matters.

Third, the state must manage information more effectively. S&T policy questions are often long-term issues. Elected officials in California are ultimately short-term players. Some degree of institutional memory and a formal capacity for mentoring and transitioning leadership must be a priority. However, information is only useful when it is shared. The state needs a central router to serve as universal translator, hub, and distributor for information related to S&T policy issues within government.

Last, the best way to move S&T policy to the top of the agenda is to reward public officials for their efforts with staff, authority, and media attention.

When the Founding Fathers devised the U.S. system of government, they were painstakingly careful about process. They knew that the system would have to work whether the people in it were smart or stupid, experienced or naive, interested or not. California could improve its S&T policy system by adopting several former and current federal S&T governance structures:

Establish standing committees on S&T in the Assembly and the Senate. Committees are the heart and soul of the legislative policymaking process. They vet programs and policies, circulate information, clear out bad ideas and champion good ones, and maintain institutional memory about the arcane details of government. Committees also serve as a platform for public discourse and negotiation, and they provide political capital to their members. In a term-limited environment, committees are the key to moving issues over the long haul.

Host a corps of volunteer scientist-fellows in state government. The California government already has a widely respected, very competitive fellows program for recent college graduates; it should follow the model of the Jefferson Science Fellows program at the U.S. Department of State by creating fellowship positions for tenured faculty from the state’s universities. These fellows would advise various policy committees and targeted programs in the executive branch on the scientific and technical aspects of their work, thereby increasing staff capacity and providing a vital assessment-support function.

Incorporate S&T literacy in new member training at the state and local levels. California state legislators receive limited policy training when they enter service. The learning curve, especially on complicated matters, is steep. Nonpartisan S&T literacy training could increase the intellectual capacity of the legislature and result in better decisionmaking. Further, since a good number of state legislators begin their public service in local government and local officials are increasingly faced with controversial S&T issues in their own right, training for gateway offices is also necessary.

Create a California Office of Technology Assessment. Modeled after the now-defunct federal Office of Technology Assessment, this agency would serve as a research, analysis, and assessment unit, as well as a platform for public testimony. With additional funding, the California Council on Science and Technology, a state-chartered but underfunded nonprofit science advisory body, could grow into this role.

Appoint a governor’s S&T counsel. Appoint a cabinet-level S&T advisor to the inner circle of the governor’s personal staff. Similar to the director of the Office of Science and Technology Policy within the Executive Office of the President, the governor’s S&T counsel would champion S&T at the highest levels of the executive branch, bring unique expertise to policy development in the governor’s office, and serve as a single point of contact within the executive branch for the state’s S&T constituencies.

In the future, the battle over S&T policy will not be isolated to one field, one product, or one technology. Distinctions between scientific disciplines and engineering are disappearing, and technology is increasingly ubiquitous. Even a well-intentioned policymaker hoping to forestall a known negative outcome in one field could unwittingly set off storms of unintended consequences in others. Traditional divisions between policy areas such as health care and computing and between legal jurisdictions such as state, federal, and local government will be less meaningful. Policymakers at all government levels and in all fields are going to have to find ways to chart the brave new world now confronting them.

Real Numbers

With prospects for economic growth improving across the Organisation for Economic Co-operation and Development (OECD) region, renewed attention is being directed to ways of tapping into science, technology, and innovation to achieve economic and societal objectives. As OECD economies become more knowledge-based and competition from emerging countries such as China and India increases, OECD countries will become more reliant on the creation, diffusion, and exploitation of scientific and technological knowledge to enhance growth and productivity.

Weak economic conditions limited science and technology (S&T) investments at the turn of the century: Global R&D investments, for example, grew at a rate of less than 1 percent between 2001 and 2002, compared to 4.6 percent annually between 1994 and 2001. As a result, R&D spending slipped from 2.28 percent to 2.26 percent of gross domestic product (GDP) across the OECD. Nevertheless, many OECD countries have introduced new or revised national plans for science, technology, and innovation policy, and a growing number of countries have established targets for increased R&D spending.

Virtually all countries are seeking ways to enhance the quality and efficiency of public research, stimulate business investment in R&D, and strengthen linkages between the public and private sectors. Most OECD governments have successfully shielded public R&D investments from spending cutbacks and, in many cases, have been able to increase them modestly. Although they remain far below the levels of the early 1990s, OECD-wide government R&D expenditures rose from 0.63 percent to 0.68 percent of GDP between 2000 and 2002 as budget appropriations grew, most notably in the United States. In many countries, a growing share of funding is linked to public-private partnerships.

To enhance innovation capabilities, OECD governments will have to shape policy to respond to challenges related to supplies of S&T workers, service sector innovation, and globalization. Although demand for scientists and engineers continues to grow, many countries foresee declining enrollments in related academic fields. Services account for a growing share of R&D in OECD countries—23 percent of total business R&D in 2000 compared to 15 percent in 1991—and the ability of service firms to innovate will greatly influence overall growth, productivity, and employment patterns. In addition, science, technology, and innovation are becoming increasingly global. The combined R&D expenditures of China, Israel, and Russia were 15 percent of OECD country R&D spending in 2001, up from 6.4 percent in 1995, and in many OECD countries, the share of R&D performed by foreign affiliates of multinational enterprises (MNEs) has increased. Policymakers need to ensure that OECD economies remain strong in the face of growing competition and benefit from the expansion of MNE networks.

The source of this information as well as much more data and analysis of the policy environment is the OECD Science, Technology and Industry Outlook 2004, available at www.oecd.org/sti/sti-outlook.

Business R&D spending declines

Even though industry-funded R&D has increased sharply in Japan and modestly in the European Union (EU) in recent years, OECD-wide R&D has declined because of steep cutbacks in the U.S. business sector. In addition, venture capital investments plummeted from $106 billion to $18 billion in the United States between 2000 and 2003, and from 19.6 billion to 9.8 billion euros between 2000 and 2002 in the EU.

Is the patent boom slowing?

Driven by the information technology and biotechnology sectors, applications filed at the U.S., European, and Japanese patent offices surged to 850,000 in 2002, up from 600,000 in 1992. Yet for reasons that are not yet clear, growth in patent applications has slowed markedly in recent years, despite increased business R&D spending. In the United States, the growth rate of patent applications fell from 10 percent per year during the late 1990s to below 3 percent in 2001–2002. Measures of patent families—patents filed to protect the same invention—also slowed dramatically in the late 1990s.

Ensuring supplies of scientists and engineers

Employment in highly skilled occupations grew about twice as fast as overall employment between 1995 and 2002, and the number of researchers across the OECD grew from 2.3 million in 1990 to 3.4 million in 2000, or from 5.6 to 6.5 researchers per 1,000 employees. But the share of university-level graduates with degrees in science and technology dipped slightly between 1998 and 2001. The number of foreign first-time Ph.D. students enrolled in U.S. universities appears to have declined in 2003–2004, whereas the United Kingdom and Australia posted increases in foreign enrollments. In addition, more Chinese students are receiving their university educations at home.

Services account for more innovation

In 2000, services accounted for 70 percent of total value added in the OECD. In addition, two-thirds of the increase in value added between 1990 and 2001 came from services, as did most employment growth. In a recent survey of European firms, more than 60 percent of respondents from the business services sector and more than 50 percent from the financial services sector reported that they had introduced a new product or service in the previous three-year period—higher shares than the average among manufacturing firms. Services accounted for only about 20 percent of OECD-wide R&D spending in 2000, but growth rates have been considerably higher than in manufacturing.

Non–OECD country capabilities grow

China’s R&D investments climbed from $21 billion to $70 billion between 1996 and 2002, behind only those of the United States and Japan in absolute terms. The number of university graduates in China in 2000 (739,000) was equivalent to 13 percent of the OECD total, and half received science and engineering degrees. Graduates from Indian and Russian universities were equal to 12 percent and 11 percent, respectively, of OECD totals. Meanwhile, foreign R&D investments in emerging economies have grown rapidly as those nations’ technological capabilities have increased and markets have become more open.

Foreign affiliates play a larger role

The activities of foreign affiliates of multinational enterprises are fueling the globalization of R&D. R&D performed abroad by foreign affiliates increased by more than 50 percent in nominal terms between 1991 and 2001 and represented well over 12 percent of total expenditures on industrial R&D in the OECD in 2001. Policymakers must increasingly aim both to attract foreign investment in R&D and to extract economic benefits from foreign and domestic R&D.


Jerry Sheehan is a senior economist in the OECD’s Science and Technology Policy Division in Paris.

Syndromic Surveillance

Heightened awareness of the risks of bioterrorism since 9/11 coupled with a growing concern about naturally emerging and reemerging diseases such as West Nile, severe acute respiratory syndrome (SARS), and pandemic influenza have led public health policymakers to realize the need for early warning systems. The sooner health officials know about an attack or a natural disease outbreak, the sooner they can treat those who have already been exposed to the pathogen to minimize the health consequences, vaccinate some or all of the population to prevent further infection, and identify and isolate cases to prevent further transmission. Early warning systems are especially important for bioterrorism because, unlike other forms of terrorism, it may not be clear that an attack has taken place until people start becoming ill. Moreover, if terrorism is the cause, early detection might also help to identify the perpetrators.

“Syndromic surveillance” is a new public health tool intended to fill this need. The inspiration comes from a gastrointestinal disease outbreak in Milwaukee in 1993 involving over 400,000 people that was eventually traced to the intestinal parasite Cryptosporidium in the water supply. After the fact, it was discovered that sales of over-the-counter (OTC) antidiarrhea medications had increased more than threefold in the weeks before health officials knew about the outbreak. If OTC sales had been monitored, the logic goes, thousands of infections might have been prevented.

The theory of syndromic surveillance, illustrated in Figure 1, is that during an attack or a disease outbreak, people will first develop symptoms, then stay home from work or school, attempt to self-treat with OTC products, and eventually see a physician with nonspecific symptoms days before they are formally diagnosed and reported to the health department. To identify such behaviors, syndromic surveillance systems regularly monitor existing data for sudden changes or anomalies that might signal a disease outbreak. Syndromic surveillance systems have been developed to include data on school and work absenteeism, sales of OTC products, calls to nurse hotlines, and counts of hospital emergency room (ER) admissions or reports from primary physicians of certain symptoms or complaints. Current systems typically include large amounts of data and employ sophisticated information technology and statistical methods to gather, process, and display the information for decisionmakers in a timely way.

Figure 1. The theory of syndromic surveillance. (Source: Michael Wagner, University of Pittsburgh)

This theory was turned into a reality when some health departments, most notably New York City’s, began to monitor hospital ER admissions and other data streams. In 2001, the Defense Advanced Research Projects Agency funded four groups of academic and industrial scientists to develop the method. After 9/11, interest and activity in the method increased dramatically. The Centers for Disease Control and Prevention’s (CDC’s) BioSense project operates nationally and is slated for a major increase in resources. In addition, CDC’s multibillion-dollar investment in public health preparedness since 9/11 has encouraged and facilitated the development of syndromic surveillance systems throughout the country at the state and local levels. The ability to purchase turnkey surveillance systems from commercial or academic developers, along with the personnel ceilings and hiring freezes that have made it difficult for health departments in some states to hire new staff, has also made investing in syndromic surveillance systems an attractive alternative. As a result, nearly all states and large cities are at least planning a syndromic surveillance system, and many are already operational.

In the short time since the idea was conceived, there have been remarkable developments in methods and tools used for syndromic surveillance. Researchers have capitalized on modern information technology, connectivity, and the increasingly computerized medical and administrative databases to develop tools that integrate vast amounts of disparate data, perform complex statistical analyses in real time, and display the results in thoughtful decision-support systems. The focus of these efforts is on identifying reliable and quickly collected data that are generated early in the disease process. Statisticians and computer scientists have adapted ideas from the statistical process control methods used in manufacturing, Bayesian belief networks, statistical pattern-recognition algorithms, and many other areas. Syndromic surveillance has also become an extraordinarily active research area. Since 2002, an annual national conference (www.syndromic.org) has drawn hundreds of researchers and practitioners from around the country and the world.

Many city and state public health agencies have begun spending substantial sums to develop and implement these surveillance systems. Despite (or maybe because of) the enthusiasm for syndromic surveillance, however, there have been few serious attempts to see whether this tool lives up to its promise, and some analysts and health officials have been skeptical about its ability to perform effectively as an early warning system. The balance between true and false alarms, and how syndromic surveillance can be integrated into public health practice in a way that truly leads to effective preventive actions, must be carefully assessed.

Practical concerns

Syndromic surveillance systems are intended to raise an alarm, which then must be followed up by epidemiologic investigation and preventive action, and all alarm systems have intrinsic statistical tradeoffs. The most well-known is that between sensitivity (the ability to detect an attack when it occurs) and the false-positive rate (the probability of sounding an alarm when there in fact is no attack). For instance, thousands of syndromic surveillance systems soon will be running simultaneously in cities and counties throughout the United States. Each might analyze data from 10 or more data series—symptom categories, separate hospitals, OTC sales, and so on. Imagine if every county in the United States had in place a single syndromic surveillance system with a 0.1 percent false-positive rate; that is, the alarm goes off inappropriately only once in a thousand days. Because there are about 3,000 counties in the United States, on average three counties a day would have a false-positive alarm. The costs of excessive false alarms are both monetary, in terms of resources needed to respond to phantom events, and operational, because too many false events desensitize responders to real events.
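
To make the arithmetic concrete, here is a minimal sketch of that back-of-the-envelope calculation. The county count and the 0.1 percent false-positive rate come from the example above; the ten-data-streams-per-county variant is a hypothetical extension.

# Expected number of daily false alarms when many surveillance systems run in parallel,
# each alarming independently with a fixed daily false-positive probability.

def expected_daily_false_alarms(n_systems: int, daily_false_positive_rate: float) -> float:
    """Expected false alarms per day across all systems."""
    return n_systems * daily_false_positive_rate

# One system per U.S. county at a 0.1 percent daily false-positive rate:
print(expected_daily_false_alarms(3_000, 0.001))        # 3.0 false alarms per day

# Hypothetical extension: each county monitors 10 separate data series at the same threshold.
print(expected_daily_false_alarms(3_000 * 10, 0.001))   # 30.0 false alarms per day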

Syndromic surveillance adds a third dimension to this tradeoff: timeliness. The false-positive rate can typically be reduced, but only by decreasing sensitivity or timeliness or both. Analyzing a week’s rather than a day’s data, for instance, would help improve the tradeoff between sensitivity and false positives, but waiting a week to gather the data would reduce the timeliness of an alarm.
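
The three-way tradeoff can be made concrete with a small worked example. The sketch below uses purely hypothetical numbers—a Poisson background of 5 cases per day and a sustained excess of 2 cases per day—to compare a one-day and a seven-day detection window held to roughly the same false-positive budget per alarm opportunity. The longer window detects the same excess far more reliably, but its alarm cannot sound until the full week of data is in.

import math

def poisson_tail(k: int, mean: float) -> float:
    """P(X >= k) for X ~ Poisson(mean)."""
    return 1.0 - sum(math.exp(-mean) * mean**i / math.factorial(i) for i in range(k))

baseline_per_day = 5.0   # hypothetical background count
excess_per_day = 2.0     # hypothetical sustained excess caused by an outbreak

for window in (1, 7):
    background_mean = baseline_per_day * window
    outbreak_mean = (baseline_per_day + excess_per_day) * window
    # Smallest alarm threshold whose false-positive rate per window is below 1 percent.
    threshold = next(k for k in range(200) if poisson_tail(k, background_mean) < 0.01)
    print(f"{window}-day window: threshold {threshold}, "
          f"false-positive rate {poisson_tail(threshold, background_mean):.3f}, "
          f"detection probability {poisson_tail(threshold, outbreak_mean):.2f}")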

Beyond purely statistical issues, the value of syndromic surveillance depends on how well it is integrated into public health systems. The detection of a sudden increase in cases of flulike illness—the kind of thing that syndromic surveillance can detect—can mean many things. It could be a bioterrorist attack but is more likely a natural occurrence, perhaps even the beginning of annual flu season. An increase in sales of flu medication might simply mean that pharmacies are having a sale. A surge in absenteeism could reflect natural causes or even a period of particularly pleasant spring weather.

Although the possibility of earlier detection and more rapid response to a bioterrorist event has tremendous intuitive appeal, its success depends on local health departments’ ability to respond effectively. When a syndromic surveillance system sounds an alarm, health departments typically wait a day or two to see if the number of cases continues to remain high or if a similar signal is found in other data sources. Doing so, of course, reduces both the timeliness and sensitivity of the original system. If the health department decides that an epidemiological investigation is warranted, it may begin by identifying those who are ill and talking to their physicians. If this does not resolve the matter, additional tests must be ordered and clinical specimens gathered for laboratory analysis. Health departments might choose to initiate active surveillance by contacting physicians to see if they have seen similar cases.

A syndromic surveillance system that says only “there have been 5 excess cases of flulike illness at hospital X” is not much use unless the 5 cases can be identified and reported to health officials. If there are 55 rather than the 50 cases expected, syndromic surveillance systems cannot say which 5 are the “excess” ones, and all 55 must be investigated. Finally, health departments cannot act simply on the basis of a suspicion. Even when the cause and route of exposure are known, the available control strategies—quarantine of suspected cases, mass vaccination, and so on—are expensive and controversial, and often their efficacy is unknown. Coupled with the confusion that is likely during a terrorist attack or even a natural disease outbreak, making decisions could take days or weeks.

Research questions and answers

Much of the current research on syndromic surveillance focuses on developing new methods and demonstrating how they work. Although impressive, this kind of research stops short of evaluating the methods from a theoretical or practical point of view. Comparing the promise of syndromic surveillance with practical concerns about its implementation leads to two broad research questions.

First, does syndromic surveillance really work as advertised? This includes questions about trade-offs among sensitivity, false-positive rates, and timeliness, as well as more practical concerns about what happens after the alarm goes off. Somewhat more positively, one can also ask how well syndromic surveillance works in detecting bioterrorism and natural disease outbreaks, and how this performance depends on the characteristics of the outbreak or attack. The performance likely depends on variables such as the pathogen involved, the number of people exposed, and the size, extent, and timing of the outbreak.

The second question, and the focus of most current research, is about how the performance of syndromic surveillance systems can be improved. This includes gaining access to more, different, and timelier data, as well as identifying data streams with a high signal-to-noise ratio. Researchers are developing sophisticated statistical detection algorithms to elicit more from existing data and more accurate models that describe patterns in the data when there are no outbreaks, as well as detection algorithms that focus on particular kinds of patterns, such as geographical clusters, in the data. Other areas of exploration include methods for integrating data from a variety of sources and displaying it for decisionmakers in a way that enables and effectively guides the public health response.

In response to these two broad questions, one line of research focuses on the quality and timeliness of the data used in syndromic surveillance systems. When patients are admitted to the emergency room, for instance, their diagnoses are not immediately known. How accurately, researchers can ask, does the chief complaint at admission map to diseases of concern? Are there more delays or incomplete reporting in data stream A than in B? Although such studies might help decisionmakers decide which data to include in syndromic surveillance systems, “good” data neither guarantee nor are necessarily required for timely detection. The most error-free and timely data will be useless if the responsible pathogen causes different symptoms than are represented in the data. On the other side, a sudden increase in nonspecific symptoms might indicate something worth further investigation.

A second line of research considers the epidemiologic characteristics of the pathogens that terrorists might use. Figure 2, for instance, illustrates the difference between an attack in which many people are exposed at the same time and one in which the contagious agent might cause large numbers of cases in multiple generations. Example A illustrates what might be found if 90 people were exposed to a noncontagious agent such as anthrax, and symptoms first appeared an average of 8 days after exposure. Example B illustrates the impact of a smaller number of people (24) exposed to a contagious agent such as smallpox with an average incubation period of 10 days. Two waves of cases appear, the second three times larger and 10 days after the first. The challenge—and the promise—of syndromic surveillance is to detect the outbreak and intervene by day 2 or 3. But the public health benefits of an early warning depend on the pathogen. In example A, everyone would already have been exposed by the time that the attack was detected; the benefits would depend on the ability of health officials to quickly identify and treat those exposed and on the effect of such treatment. On the other hand, if the agent were contagious, as in example B, intervention even at day 10 could prevent some or all of the second generation of cases.

Figure 2. The solid line represents when 90 people exposed to a noninfectious pathogen such as anthrax would develop symptoms. The broken line represents two generations of an infectious pathogen with a generation time of 10 days, such as smallpox. Twenty-four individuals are exposed in the first wave, resulting in 72 cases in the second wave.
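
Curves qualitatively like those in Figure 2 can be reproduced with a few lines of simulation. The sketch below is not the model used to produce the figure; the gamma-distributed incubation periods and their variances are illustrative assumptions, with only the case counts (90 exposed; 24 exposed with a threefold second wave) and the mean delays (8 and 10 days) taken from the text and caption.

import numpy as np

rng = np.random.default_rng(0)

def onset_histogram(onset_days: np.ndarray, horizon: int = 40) -> np.ndarray:
    """Count new symptomatic cases per day out to the horizon."""
    return np.bincount(np.clip(onset_days.astype(int), 0, horizon), minlength=horizon + 1)

# Example A: 90 people exposed on day 0 to a noncontagious agent, mean incubation 8 days.
onsets_a = rng.gamma(shape=4.0, scale=2.0, size=90)        # mean = 4 * 2 = 8 days
curve_a = onset_histogram(onsets_a)

# Example B: 24 people exposed to a contagious agent with a mean delay of 10 days;
# each first-wave case infects three others, producing a second wave of 72 cases.
first_wave = rng.gamma(shape=5.0, scale=2.0, size=24)      # mean = 10 days
second_wave = np.repeat(first_wave, 3) + rng.gamma(shape=5.0, scale=2.0, size=72)
curve_b = onset_histogram(np.concatenate([first_wave, second_wave]))

print("Example A, new cases per day:", curve_a[:20])
print("Example B, new cases per day:", curve_b[:35])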

The retrospective analysis of known natural outbreaks represents a third approach to evaluation. One such study involved four leading research teams and compared the sensitivity, false-positive rate, and timeliness of their detection algorithms in two steps. First, an outbreak detection team identified actual natural disease outbreaks—eight involving respiratory illness and seven involving gastrointestinal illness—in data from five metropolitan areas over a 23-month period but did not reveal them to the research teams. Second, each research team applied its own detection algorithms to the same data, to determine whether and how quickly each event could be detected. When the detection threshold was set so that the system generated false alarms every 2 to 6 weeks, each research team’s best algorithms were able to detect all of the respiratory outbreaks. For two of the four teams, detection typically occurred on the first day that the outbreak detection team determined as the start of the outbreak; for the other two teams, detection occurred approximately three days later. For gastrointestinal illness, the teams typically were able to detect six of seven outbreaks by one to three days after onset. If the threshold were raised to make false alarms less frequent, however, sensitivity and timeliness would suffer.

A fourth approach to evaluation relies on statistical or Monte Carlo simulations. Researchers “spike” a data stream with a known signal, run detection algorithms as if the data were real, and record whether the signal was detected, and if so, when. They then repeat this process multiple times to estimate how the sensitivity (the probability of detection), the false-positive rate, and the timeliness depend on the size, nature, and timing of the signal and other characteristics.
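
The sketch below illustrates the mechanics of such a simulation under stated assumptions: a Poisson background of 5 cases per day, an injected spike of 3 extra cases on each of 3 consecutive days, and a simple one-day threshold detector. The background rate, the spike, and the detector are hypothetical stand-ins for the real data streams and algorithms that these studies evaluate.

import numpy as np

rng = np.random.default_rng(1)

def simulate_detection(n_runs=10_000, baseline=5.0, spike=(3, 3, 3), threshold=11):
    """Inject `spike` extra cases on consecutive days into Poisson background counts
    and record whether, and on which day, a single day's count crosses `threshold`."""
    detected, delays = 0, []
    for _ in range(n_runs):
        counts = rng.poisson(baseline, size=len(spike)) + np.array(spike)
        alarm_days = np.nonzero(counts >= threshold)[0]
        if alarm_days.size:
            detected += 1
            delays.append(alarm_days[0] + 1)   # day of first alarm, 1-indexed
    return detected / n_runs, (np.mean(delays) if delays else None)

sensitivity, mean_delay = simulate_detection()
print(f"Probability of detection: {sensitivity:.2f}; mean day of first alarm: {mean_delay:.1f}")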

One simple example of this approach used flu-symptom data from a typical urban hospital, to which a hypothetical number of extra cases spread over a number of days was added to mimic the pattern of a potential bioterror attack. The results indicate the size and speed that outbreaks must attain before they are detectable. These results are sobering: Even with an excess of nine cases over two days (the first two days of the “fast” outbreak), three times the daily average, there was only about a 50 percent chance that the alarm would go off. When 18 cases were spread over nine days, chances were still no better than 50/50 that the alarm would sound by the ninth day. Moreover, this finding holds true only outside of the winter flu season. In the winter, the detection threshold must be set high so that the flu itself does not sound an alarm, which would make a terrorist attack presenting with flulike symptoms even harder to detect.

This simulation study also assessed how quickly four specific detection algorithms would detect an attack. The first was based on one day’s data; the others used data from multiple days. The analysis found that all of the algorithms were equally effective in detecting a fast-spreading agent (one in which all simulated new cases were spread over three days—see Figure 3). However, detection algorithms that integrate data from multiple days had a higher probability of detecting a slow-moving attack (in which simulated new cases were spread over nine days—see Figure 4).

Figure 3. Shaded bars correspond to four detection algorithms: the first using only one day’s data, the other three combining data from multiple days. All four syndromic surveillance methods worked equally well for fast-spreading bioterror attacks but had only about a 50-50 chance of detecting the outbreak by day 2.

Figure 4. Methods that combine data from multiple days (the screened bars) were more effective at detecting slow-spreading attacks, but even the best method took until day 9 to have a 50-50 chance of detecting a slow outbreak.

Can this performance be improved?

Researchers are now exploring ways of improving system performance with better methods. Current syndromic surveillance systems, for instance, typically compare current cases to the number of cases in the previous day or week, the number in the same day in the previous year, or some average of past values. More sophisticated approaches use statistical models to “filter” or reduce the noise in the background data so that a signal will be more obvious. For instance, if a hospital ER typically sees more patients with flu symptoms on weekend days (when other facilities are not open), a statistical model can be developed to account for this effect.
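
As a concrete, purely illustrative example of such filtering, the sketch below estimates an expected count for each day of the week from historical data and flags days that sit far above that expectation. The baseline data, the weekend effect, and the three-standard-deviation rule are assumptions chosen for illustration, not a description of any deployed system.

import numpy as np

def day_of_week_alarms(history: np.ndarray, recent: np.ndarray, z_threshold: float = 3.0):
    """history: past daily counts (length a multiple of 7, aligned to weekday 0);
    recent: new daily counts to screen, aligned to the same weekday cycle."""
    by_weekday = history.reshape(-1, 7)
    mean = by_weekday.mean(axis=0)
    std = by_weekday.std(axis=0, ddof=1)
    alarms = []
    for i, count in enumerate(recent):
        weekday = i % 7
        z = (count - mean[weekday]) / max(std[weekday], 1e-9)
        if z > z_threshold:
            alarms.append((i, count, round(float(z), 1)))
    return alarms

# Hypothetical example: weekend counts (the last two slots) routinely run higher than weekdays.
rng = np.random.default_rng(2)
weekly_pattern = np.array([20, 20, 20, 20, 20, 35, 35])
history = rng.poisson(np.tile(weekly_pattern, 52))    # one year of baseline counts
recent = rng.poisson(weekly_pattern).astype(float)
recent[2] += 25                                       # inject a midweek excess
print(day_of_week_alarms(history, recent))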

Monitoring multiple data streams will increase the frequency of alarms, but the number of false positives will also increase with the number of series monitored. A simple way to address this is to pool data on the number of people with particular symptoms across all of the hospitals in a city, and indeed that is what cities such as Boston and New York are currently doing. If both the signal and the background counts increase proportionally, this approach will result in a more effective system. On the other hand, if all of the extra cases appeared at one hospital (say, for instance, the one closest to the Democratic or Republican national convention in 2004), this signal would be lost in the noise of the entire city’s cases. Researchers have developed multivariate detection algorithms to combine the available data to achieve the optimal tradeoff among sensitivity, specificity, and timeliness. Simulation studies, however, show that the payoff of these algorithms is marginal.
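
The dilution problem is easy to quantify. In the hypothetical sketch below (the hospital count, baseline rate, and size of the excess are invented for illustration), ten extra cases at a single hospital stand out clearly against that hospital’s own baseline but barely register against the pooled citywide count.

import math

n_hospitals = 20
per_hospital_baseline = 5.0   # expected daily count of flulike cases at each hospital
excess_cases = 10             # all occurring at one hospital

def z_score(observed: float, expected: float) -> float:
    """Approximate z-score for a Poisson count."""
    return (observed - expected) / math.sqrt(expected)

# Per-hospital view: the affected hospital alone.
print(round(z_score(per_hospital_baseline + excess_cases, per_hospital_baseline), 1))   # about 4.5

# Citywide pooled view: the same 10 extra cases against the whole city's baseline.
citywide_baseline = n_hospitals * per_hospital_baseline
print(round(z_score(citywide_baseline + excess_cases, citywide_baseline), 1))            # 1.0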

Another approach is to carefully tune detection algorithms to detect syndromes that are less common than flu symptoms. The combination of fever and rash, for instance, is rare and suggests the early stages of smallpox infection. A syndromic surveillance system set up to look at this combination would likely be more effective than the results above suggest, but it would be sensitive only to smallpox and not to terrorist agents that have other symptoms.

Ultimately, there really is no free lunch. As in other areas of statistics, there is an inherent tradeoff between sensitivity and specificity. The special need for timeliness makes this tradeoff even more difficult for syndromic surveillance. Every approach to increasing sensitivity to one type of attack is less specific for some other outbreak or attack scenario. To circumvent this tradeoff, we would have to have some knowledge about what to expect.

Where do we go from here?

Concerned about the possibility of bioterrorist attacks, health departments throughout the United States are enthusiastically developing and implementing syndromic surveillance systems. These systems aim to detect terrorist attacks hours after people begin to develop symptoms and thus enable a rapid and effective response. Evaluation studies have shown, however, that unless the number of people affected is exceptionally large, it is likely to be a matter of days before enough cases accumulate to trigger detection algorithms. Of course, if the number of people seeking health care is exceptionally large, no sophisticated systems are needed to recognize that. The window between what is barely detectable with syndromic surveillance and what is obvious may be small. Moreover, an early alert may not translate into quick action. Thus, the effectiveness of syndromic surveillance for early detection and response to bioterrorism has not yet been demonstrated.

Syndromic surveillance is often said to be cost-effective because it relies on existing data, but to my knowledge there have been no formal studies of either cost or cost-effectiveness. The costs of syndromic surveillance fall into three categories. First, the data must be acquired. Some early systems required substantial and costly human intervention for coding, counting, and transmitting data. In many cases, these costs have been reduced by the use of modern information technology to automate the process. Second, the information technology itself must be paid for, including installation, maintenance, and training of staff. There are various options here, and the cost will be in the range of thousands of dollars per year per data source. Although this is not expensive per source, the total costs for a local or state health department can mount rapidly because of the number of possible sources. Finally, and most importantly, there is the cost of responding to system alarms. Setting a higher threshold can control this cost, but that defeats the purpose of the system. The New York City Department of Health and Mental Hygiene estimates the annual cost of operating its syndromic surveillance system, including maintenance and routine followup of signals but not R&D costs, at $150,000. President Bush’s proposed 2005 federal budget included over $100 million for CDC’s BioSense project, which conducts syndromic surveillance on a national scale, and this does not include the cost of responding to alarms.

Evaluating syndromic surveillance is not as simple as deciding whether or not it “works.” As with medical screening tests and fire alarms, it is important to know which situations will trigger a syndromic surveillance alert and which will be missed. Characterizing the performance of syndromic surveillance systems involves estimating the sensitivity, false-positive rate, and timeliness for the pathogens and outbreak types (defined by size, extent, timing, and other characteristics) that are expected. Doing this for specific syndromic surveillance data and detection algorithms can help health officials determine what combination of data and methods is most appropriate for their jurisdiction. It can also help health officials understand the meaning of a negative finding: If the system doesn’t raise an alarm, how sure can one be that there truly is no outbreak?

The search for new and better syndromic data, statistical detection algorithms, and approaches to integrating data from a variety of sources should also continue. The field is but a few years old, and it seems quite possible that system performance could be substantially improved or other areas identified where current methods work especially well.

Simultaneously, alternatives and supplementary approaches also should be explored. One possibility has been called “active syndromic surveillance.” A system called RSVP developed at Sandia National Laboratories takes a more interactive approach to syndromic surveillance, focusing on the relationship between physicians and public health epidemiologists. RSVP is a Web-based system that uses touch-sensitive computer screens to make it easy for physicians to report cases falling into one of six predefined syndromic categories without full clinical details or laboratory confirmation. The reports are transmitted electronically to the appropriate local health department, which may elect to follow up with the physician for more details. RSVP also includes analytical tools for state and local epidemiologists. To encourage participation, physicians get immediate feedback from the system on other similar cases in the region as well as guidelines for treating patients with the condition.

Another area that is ripe for research is the public health process that ensues when the alarm goes off. Certain types of syndromic surveillance data, or ways of presenting the results, might facilitate epidemiological investigations more effectively than do others and thus lead to a swifter or more appropriate response. Health systems might consider a formal system of triggers and responses, where the first step upon seeing a syndromic surveillance alarm would be to study additional existing data sources, the second step would involve asking physicians to be on the lookout for certain types of cases, and so on. Considering how and in what circumstances personal health data might be shared in the interest of public health goals while preserving patient confidentiality would be an important part of this research.

Finally, it is important to characterize the benefits of syndromic surveillance beyond the detection of bioterrorism. One possible use is to offer reassurance that there has been no attack, when there is reason to expect one. Such a reassurance is only legitimate, of course, if the surveillance system has been shown to be able to find outbreaks of the sort expected. Syndromic surveillance systems are also subject to false alarms when the “worried well” or people with other illnesses appear at hospitals after the news reports a possible problem. Although only five people in the Washington, D.C., metropolitan area were known to suffer health consequences of exposure to anthrax in 2001, syndromic surveillance systems set off alarms when people flooded area emergency rooms to be evaluated for possible exposure.

Syndromic surveillance systems can serve a variety of public health purposes. The information systems and the relationships between public health and hospitals that have been built in many cities and states will almost surely have value for purposes other than detecting a terrorist attack. For many public health issues, knowing what is happening in a matter of days rather than weeks or months would indeed be a major advance. Compared to bioterrorism alerts that try to detect events hours after symptoms occur, the time scale for natural disease outbreaks would allow for improvements in the sensitivity/false-positive rate tradeoff.

Syndromic surveillance might prove to be most useful in determining the annual arrival of influenza and in helping to determine its severity. Nationally, influenza surveillance is based on a network of sentinel physicians who report weekly on the proportion of their patients with influenza-like symptoms, plus monitoring deaths attributed to influenza or pneumonia in 122 cities. Laboratory analysis to determine whether a case is truly the flu, or to identify the strain, is only rarely done. Whether the flu has arrived in a particular state or local area, however, is largely a matter of case reports, which physicians often don’t file. Pandemic influenza, in which an antigenic shift causes an outbreak that could be more contagious and/or more virulent and to which few people are immune by virtue of previous exposure, is a growing concern. Syndromic surveillance of flulike symptoms might trigger more laboratory analysis than is typically done and in this way hasten the public health response.


Postdoctoral Training and Intelligent Design

“Kids, I’m here today to tell you why you should become scientists. In high school, while your friends are taking classes such as the meaning of the swim suit in contemporary TV drama, you can be taking biology, chemistry, physics, and calculus. College will be the same, except the labs will take a lot more time. After that, it gets better. While your classmates who go to law school or business school will be out on the street in three years looking for work, you can look forward to seven to eight years of graduate study and research. Sure, many of your college buddies will be earning more than $100,000 a year, but a few will be scraping by on $60,000.

“Don’t be impatient, because your day will come. When you earn your Ph.D. and celebrate your 30th birthday, you still don’t have to get a real job. You can be a post-doc. This means you can spend an additional three to five years working in a campus lab, but now you will be paid for it. That’s right, $30,000 or even $40,000 a year—almost half what your 23-year-old little sister with a B.E. in chemical engineering will be earning. You won’t have health benefits, but you will be so hardened by your Spartan lifestyle that you will never get sick. And you won’t be eligible for parental leave, but you won’t have the time or the money to have a baby anyway.

“When the postdoc finally ends and you’re wondering if you’ll ever spend any time away from a big research university, salvation is nigh. You see, there are very few tenure track positions at the university, so you will have the opportunity to develop new skills and look for other types of jobs. While your hapless contemporaries are already becoming bored with their careers, anxious about their teenage children, and worried about the size of their retirement accounts, you will be fresh, childless, and free of the burden of wealth. There used to be a TV ad that promised that ‘Life begins at 40.’ For you it could be true.”

The past decade has seen a rising tide of concern about postdoctoral research appointments, and with good reason. The fundamental promise of postdoctoral study—that one would move into a tenure track faculty position at a research university after completing what was essentially a research apprenticeship—has been broken. For far too many talented and hardworking young scientists, the postdoctoral appointment has become an underpaid and overworked form of indentured service that seldom leads to a faculty job and is poor preparation for alternative careers. Although no one has bothered to collect detailed information on what happens to these young people, the best estimate is that only 10 percent grab the golden ring of a faculty position in a major research university. What happens to the rest is open to conjecture.

In a country where everyone believes that science and engineering are vital to the nation’s economic prosperity, national security, and personal health, where we make enormous efforts to give the very young the skills to succeed in these fields, where we agonize at our inability to attract enough students (particularly women and minorities) to scientific careers, and where we provide a demanding undergraduate and graduate education to weed out the less qualified and less motivated, how is it possible that we treat this rare and precious resource of gifted, disciplined, and motivated Ph.D.s as so much worthless flotsam and jetsam? Is this an elaborate and extended practical joke, a case of monumental cruelty, an instance of collective insanity, or simply a stunning example of human stupidity?

And what about the noncitizens who comprise 60 percent of the postdocs? Is this the latest version of inviting the Chinese to build U.S. railroads? They slave in U.S. labs for five years and then are sent home. Are they finding research faculty jobs in their home countries? Are they helping their domestic industry innovate? Who knows?

The responsible response

Well, perhaps this is a bit overstated. Fortunately, you can find a much more level-headed and rigorous discussion of the topic in Enhancing the Postdoctoral Experience for Scientists and Engineers, a report from the National Academies’ Committee on Science, Engineering, and Public Policy (COSEPUP). This report acknowledges the critical role that postdocs play in university research, the need for extensive training to prepare researchers for the demands of modern science, the understandable desire of principal investigators to make the most of limited research funds, and the pleasure and satisfaction that young scientists derive from devoting all their time to cutting-edge research. But it also emphasizes that in too many cases postdocs are exploited—underpaid and under-trained. The report finds that a successful postdoctoral system provides postdocs with the training they need to become successful professionals, with adequate compensation, benefits, and recognition, and with a clearly specified understanding of the nature and purpose of their appointment.

In the four years since the COSEPUP report was published, universities, federal agencies, and professional societies have taken actions to improve working conditions and compensation and to acquire more information about the treatment and career trajectories of postdocs. But as COSEPUP chair Maxine Singer points out in an article in Science (8 October 2004), stipends and benefits are still often inadequate, information about postdocs is still lacking, and far too many postdocs are not receiving the training and mentoring they need to be prepared for independent careers.

The lack of independence is of particular concern. In the early 1970s, the number of postdocs with fellowships to conduct their own research was about equal to the number that worked for a principal investigator. Today, about 80 percent of postdocs work for a principal investigator. This can be a valuable experience and good preparation for an independent career if it is properly managed, but we do not know how many postdoctoral appointments are well managed, and we hear too many reports of those that are not. Some scientists can still be stuck in postdoctoral positions when they turn 35. That’s old enough to be president of the United States. It should be old enough to manage a small research project.

No one intentionally designed the current postdoctoral system. It grew by accretion in response to short-term needs or opportunities, and the result is evidence that natural evolution does not always produce ideal outcomes. Perhaps we’ve finally found a place for “intelligent design” in education.

Forum – Winter 2005

Save our seas

As Carl Safina and Sarah Chasis point out in their article, “Saving the Oceans” (Issues, Fall 2004), public awareness about the condition of our oceans is growing, in part because of the release of reports by two blue ribbon oceans commissions. Providing a thorough comparison of the two commission reports, the authors drive home the fact that both commissions, despite their differences, reached the same conclusion: Our oceans are in dire straits. As cochairs of the bipartisan U.S. House of Representatives Oceans Caucus, we believe that the federal government is obligated to protect and sustainably cultivate our oceans.

Both the U.S. Commission on Ocean Policy and the Pew Oceans Commission support the need for broad ocean policy changes. Without a more comprehensive approach, our nation is sorely limited in its ability to address issues like climate change, ecosystem-based management, shipping, invasive species, fisheries, water quality, human health, coastal development, and public education. Federal agencies need to coordinate better with one another as well as with state and regional agencies and groups working on oceans-related issues.

In July 2004, the House Oceans Caucus introduced the Oceans Conservation, Education and National Strategy for the 21st Century Act (Oceans-21). Ours is a big bill with a big goal: compelling the government to rethink how it approaches oceans. The last time the government seriously considered its ocean management practices was more than 30 years ago, following the release of the Stratton Commission report. Since then, scientific understanding has grown by leaps and bounds, challenging policymakers to keep pace. The current ocean regulatory framework is piecemeal, ignoring interrelationships among diverse species and processes. Oceans-21 reworks this regulatory framework and puts ecosystem management at the forefront.

We, like the authors, believe that instilling an oceans stewardship ethic across the country is fundamental. Oceans-21 creates an education framework to promote public awareness and appreciation of the oceans in meeting our nation’s economic, social, and environmental needs. Only by understanding the critical role of oceans in our lives will people begin to understand the magnitude of our current crisis. The future of our oceans is a jobs issue, a security issue, and an environmental issue. How we deal with this crisis will determine what kind of world we pass on to our children and grandchildren.

The 109th Congress is now beginning. This session holds great promise for oceans legislation, especially with the president’s public response to the U.S. Commission’s report. The House Oceans Caucus will continue to tirelessly drive oceans issues into the limelight by expanding discussions and reintroducing Oceans-21, as well as other oceans-related legislation. Congress has heard the call for action, and we are answering.

REPRESENTATIVE TOM ALLEN

Democrat of Maine

REPRESENTATIVE SAM FARR

Democrat of California

REPRESENTATIVE CURT WELDON

Republican of Pennsylvania

Co-Chairs of the U.S. House of Representatives Oceans Caucus


The article by Carl Safina and Sarah Chasis presents a good summary of findings and recommendations from the Pew Ocean Commission and the U.S. Commission on Ocean Policy. Unfortunately, it was written before an election that may have condemned the good work of both commissions to the dustbin of history. It appears unrealistic to expect positive, meaningful (i.e., effective) action from Washington on ocean stewardship issues during the next several years. Sound science and proof of the need for action exist; probably lacking are the leadership vision and will to act. However, some action can be expected relative to increased emphasis on global ocean monitoring and observing systems in a continuing effort to increase understanding of ocean dynamics, ecosystems, and atmospheric interactions, including climate change. In any event, adequate funding for ocean initiatives will be problematic.

Perhaps the best hope for keeping both reports alive is to make the most of what little Washington is prepared to do, guard against regressive national ocean legislation, and focus energy and efforts on progressive initiatives at local, state, and international levels. California is a good example.

On October 18, 2004, Governor Schwarzenegger unveiled California’s Action Strategy, which seeks to restore and preserve the biological productivity of ocean waters and marine water quality, ensure an aquatic environment that the public can safely enjoy, and support ocean-dependent economic activities. It advances a strong policy for ocean protection and calls for effective integration of government efforts to achieve that goal. Accordingly, it improves the way in which California governs ocean resources and sets forth a strategy for ocean-related research, education, and technological advances. The action strategy includes the establishment of an Ocean Protection Council, funding to support council actions, implementation of a state ocean current monitoring system, and carrying out the state’s Marine Life Protection Act, which includes the establishment of marine protected areas. The strategy will be coordinated with the state’s coastal management, fisheries, and coastal water quality protection programs; the National Marine Sanctuary; the National Estuarine Research Reserve; and the Environmental Protection Agency’s National Estuary programs, among others. This pragmatic and active ocean agenda is consistent with Pew and U.S. Ocean Commission recommendations and can be emulated by other coastal states.

International efforts to advance ocean conservation programs should be supported. Much can also be done at the local level. Mirroring local, state, and international initiatives to address global climate change (initiatives taken despite inaction by the United States), this approach acts locally while embracing global concerns. It promotes public education about ocean issues and fosters coalition constituency-building that is vital to future campaigns to enact national ocean stewardship programs and policies. It may even serve to inspire, if not compel, national action.

The Pew and U.S. Ocean Commission reports were clarion calls to action that may well fall on deaf leadership ears in Washington. The crises and threats facing oceans will only grow in magnitude and intensity. Contrary to the implication in the title “Saving the Oceans,” oceans and coasts, like coveted geography everywhere, are never finally saved—they are always being saved. This is another reason why the work of ocean and coastal conservation supporters and advocates is never done, and why we can never give up the struggle.

PETER DOUGLAS

Executive Director

California Coastal Commission

San Francisco, California


Biotech relations

In “Building a Transatlantic Biotech Partnership” (Issues, Fall 2004), Nigel Purvis suggests that it is time for the United States and Europe to look toward their mutual interests in biotechnology, thus avoiding further harm from the current impasse. He proposes that the United States and Europe jointly address the needs of developing nations as one step toward a more productive relationship.

I fully support this recommendation. As Purvis notes, the U.S. Agency for International Development (USAID) has already renewed its focus on agriculture programs, and I want to assure him that biotechnology is fully a part of this focus. Our renewed emphasis includes a more than fourfold increase in support for biotechnology to contribute to improving agricultural productivity. USAID currently supports bilateral biotechnology programs with more than a dozen countries and several African regional research and intergovernmental organizations.

In addition to these bilateral efforts, we are already working with our European colleagues and other donors to support multilateral approaches. The Consultative Group on International Agricultural Research recently launched two new programs, on the biofortification of staple crops and on genetic resources, which include the use of modern biotechnology. The United States has worked through the G8 process to include biotechnology as one tool in the arsenal for addressing economic growth and hunger prevention.

Where I differ from Purvis’s analysis is over his characterization of developing countries’ interests. First, developing countries are not bystanders in this debate. They were active participants long before the recent controversy over U.S. food aid. The outcome of the Cartagena Protocol negotiations in 2000 was due in large part to the strong participation of developing countries, whose negotiating positions were independent of those of the United States or the European Union. Second, a growing number of developing countries, such as India, South Africa, the Philippines, and Burkina Faso, are also becoming producers, and thus potential exporters, of these crops. The United States and Europe cannot chart the way forward for biotechnology alone; developing countries are already engaged in the technical and policy discussions.

As is evident in these positions, developing countries are not likely to accept assistance “directed primarily to . . . keep their markets open to biotech imports and respect global norms on intellectual property rights.” USAID’s highest priority is to ensure that developing countries themselves have access to the tools of modern biotechnology to develop bioengineered crops that meet their agricultural needs. Many crops of importance to developing countries—cassava, bananas, sorghum, and sweet potatoes—are not marketed by the multinational seed companies and thus require public support. This will help us realize our first goal for biotechnology, which is economic prosperity and reduced hunger through agricultural development.

Tangible experience with biotechnology among more developing countries is a prerequisite to achieving Purvis’s goals of global scientific regulatory standards and open markets. We will not succeed until developing countries have more at stake than acceptance of U.S. and European products and have the scientific expertise to implement technical regulations effectively. This can be achieved, as evidenced by the Green Revolution, which turned chronically food-insecure countries into agricultural exporters who now flex their muscles in the World Trade Organization. Ensuring that the current impasse between the United States and Europe does not cause broader harm will require that we recognize that developing countries may have as great a role in ensuring the future of biotechnology as the United States and Europe.

ANDREW S. NATSIOS

Administrator

U.S. Agency for International Development

Washington, D.C.


As Nigel Purvis points out, public attitudes about biopharmaceuticals in Europe and the United States have more in common than not.

Yet there the similarities end. The regulatory approaches pursued by governments on both continents differ significantly, adversely affecting patient care and, in part, accounting for the departure of many of Europe’s best scientists to the United States.

Both European Commission and European Union member state regulations deny European consumers access to valuable information about new biotech drugs. By limiting consumer awareness, the European Commission and member states limit the ability of patients and doctors to choose from the best medically available therapies.

When setting drug reimbursement policies, some countries place restrictive limits on the ability of physicians to prescribe innovative biopharmaceuticals and then pay inflated prices for generic products. In the end, patient care suffers.

A recent report by the German Association of Research-Based Pharmaceutical Companies highlights the extent of the problem. The report found that in any given year, nearly 20 million cases can be identified—including cases of hypertension, dementia, depression, coronary heart disease, migraines, multiple sclerosis, osteoporosis, and rheumatoid arthritis—in which patients either received no drug therapy or were treated insufficiently.

Patients in both the United States and Europe are optimistic about the benefits and improved health care available today through biotechnology. But how government policies limit or encourage access to those benefits affects patients everywhere.

ALAN F. HOLMER

President and Chief Executive Officer

Pharmaceutical Research and Manufacturers of America

Washington, D.C.


Science advising

Lewis M. Branscomb’s penetrating and comprehensive article “Science, Politics, and U.S. Democracy” (Issues, Fall 2004) ends with the sentence “Policymaking by ideology requires reality to be set aside; it can be maintained only by moving toward ever more authoritarian forms of governance.” This should be read as a warning that what has gone awry at the intersection between science and politics is dangerous not only because it can lead to policies that are wasteful, damaging, or futile, but because this development contributes to forces that can, over time, endanger American democracy itself.

As Branscomb emphasizes, in the United States the paths by which science feeds into government form a fragile organism that cannot withstand sustained abuse by the powers that be. The law is too blunt an instrument to provide appropriate protection. The Whistleblower Protection Act illustrates the problem, for it only applies if an existing statute or regulation has been violated, not if government scientists allege that their superiors have engaged in or ordered breaches of the ethical code on which science is founded. Furthermore, it is difficult to construct legislation that would provide such protection without unduly hampering the effectiveness of the government’s scientific institutions.

Democratic government depends for its survival not only on a body of recorded law but equally on an unwritten code of ethical conduct that the powerful components of the body politic respect. If that code is seen as a quaint annoyance that can be forgotten whenever it stands in the way, the whole body is threatened, not just its scientific organs.

The primacy of ideology over science to which Branscomb refers is just one facet of the growing primacy of ideology in American politics. This trend appears to have deep roots in American culture and is not about to disappear. The friction that this trend is producing is so visible at the interface between politics and science because this is where conflicts between ideology and reality are starkly evident and most difficult to ignore. For that reason, scientists have a special responsibility to make clear what is at stake to their fellow citizens. The scientific community has the potential to meet this responsibility because it enjoys the respect of the public, and established scientists are relatively invulnerable to political retribution. Whether this potential will be transformed into sufficient resolve and energy to face the challenge, only time will tell.

KURT GOTTFRIED

Professor of Physics, Emeritus

Cornell University

Chairman, Union of Concerned Scientists

Ithaca, New York


Lewis M. Branscomb’s article tells instructive stories about presidents from both parties who have violated the unwritten rules of science advice: rules about balance, objectivity, and freedom of expression. Most of the stories involve presidents who felt wounded by scientists, and scientists who were punished for violating the unwritten rules of political loyalty.

This discussion could usefully separate science advice into two streams, traditionally called policy for science and science for policy. The unwritten rules in policy for science are macroshaping with microautonomy. Elected officials shape the allocation of research funds at a broad level and make direct decisions on big-ticket facilities, but are supposed to leave the details of what gets funded to researchers, particularly at the level of project selection. In his focus on presidential interventions, Branscomb does not point out that a growing number of members of Congress have been violating these unwritten rules during the past few decades, with the active cooperation of many major research universities, through earmarking funding for specific projects and facilities. Even though these activities take money away from strategically important projects that have passed rigorous standards of quality control, the activities not only continue but grow.

For the public, the stakes may be even higher in science for policy: the use of scientific expertise in regulatory and other policy decisions. Most of Branscomb’s stories describe times when researchers and presidents disagreed on the policy implications of scientific evidence. The research community has consistently and rightly maintained that the public is served best when researchers can speak out on such matters with impunity. Public debate on important issues such as climate change and toxic substances needs to be fully informed if democracy—decision-making by the people, for the people—is to survive in an age of increasing technical content in public policy decisions.

The most disturbing of Branscomb’s stories tell about a mixing of these two streams of policy, of times when speaking out on policy issues has brought retribution in funding. Branscomb even seems to sanction this mixing by stressing the symbiosis of science and politics, including the need for science to make friends in the political world in order to maintain the flow of money into laboratories. This is a dangerous path to follow. The Office of Management and Budget’s first draft of its new rules on the independence of regulatory peer reviewers incorporated a particularly corrosive version of this mixing by declaring that any researcher who had funding from a public agency was not independent enough to provide scientific advice on its regulatory actions. As many observers rightly pointed out, this rule would have allowed technical experts from the private firms being regulated to serve as peer reviewers, while eliminating publicly funded researchers. This aspect of the proposed rule has fortunately been removed.

The public needs to protect its supply of balanced, objective scientific advice and knowledge from threats in both policy for science and science for policy. Although Branscomb’s article is aimed at the research community, broader publics should also be organizing for action in both areas.

SUSAN E. COZZENS

Director, Technology Policy and Assessment Center

Georgia Institute of Technology

Atlanta, Georgia


Lewis M. Branscomb proposes four rules to “help ensure sound and uncorrupted science-based public decisions.” I judge the key rule to be that “The president should formally document the policies that are to govern the relationship between science advice and policy.”

In George W. Bush’s second term, this would be the opportunity to quell overzealous staff in the White House, departments, and agencies, who, in the absence of explicit documented presidential policy, rely on their own predilections and readings of administration policy and admissibility.

Bush’s director of the Office of Science and Technology Policy has maintained that it is certainly not the policy of President Bush to disregard or distort science advice or to appoint any but the most competent people to advisory committees. But where is the presidential directive against which the administration, the Congress, and the public can hold government officials to account?

Explicit presidential policy should incorporate the 1958 code of ethics for government employees (provided to me many times as a consultant or special government employee):

Code of Ethics for Government Service

Any person in Government service should:

  1. Put loyalty to the highest moral principles and to country above loyalty to persons, party, or Government department.
  2. Uphold the Constitution, laws, and regulations of the United States and of all governments therein and never be a party to their evasion. . . .
  5. Never discriminate unfairly by the dispensing of special favors or privileges to anyone, whether for remuneration or not; and never accept, for himself or herself or for family members, favors or benefits under circumstances which might be construed by reasonable persons as influencing the performance of governmental duties.
  6. Make no private promises of any kind binding upon the duties of office, since a Government employee has no private word which can be binding on public duty. . . .
  9. Expose corruption wherever discovered.
  10. Uphold these principles, ever conscious that public office is a public trust.

(The Code of Ethics for Government Service can be found at 5 C.F.R., Part 2635. This version of the code was retrieved on 10/22/04 from www.dscc.dla.mil/downloads/legal/ethicsinfo/government_service.doc.)

The national interest lies in getting the best people into government and advisory positions. Although there is some benefit in having officials at various levels who have good channels of communication with the White House as a result of friendship or political affiliation, it seems to me that the appropriate way to assemble a slate of candidates for each position is through nonpartisan (rather than bipartisan) staffing committees, not the White House personnel office. The appointments and the ensuing conduct should be governed by the code above.

RICHARD L. GARWIN

IBM Fellow Emeritus

Thomas J. Watson Research Center at IBM

Yorktown Heights, New York

Richard L. Garwin was a member of the President’s Science Advisory Committee under Presidents Kennedy and Nixon.


Fisheries management

“Sink or Swim Time for U.S. Fishery Policy” (Issues, Fall 2004) is a helpful contribution to the continuing debate over U.S. fishery policy. However, James N. Sanchirico and Susan S. Hanna might give readers the impression that policymakers needed the reports of the two recent ocean policy commissions in order to understand the root cause of the problems facing our fisheries. That misperception might lead to expectations that appropriate policy will naturally follow the illumination of the problem.

Readers should recognize that the fishery problems outlined by the Pew Oceans Commission and the U.S. Commission on Ocean Policy were even more thoroughly explained in the 1969 report of the U.S. Commission on Marine Science, Engineering, and Resources (the Stratton Commission). That report led to many significant changes in government structure and policy related to the oceans. In terms of fundamental fishery policy, however, one must conclude that policymakers have essentially ignored the findings of the Stratton Commission concerning the root cause of fishery management problems.

The Stratton Commission clearly explained the biological and economic destructiveness that results from competition among fishermen for catch shares that are up for grabs. The commission recognized the joint biological and economic benefits that could be obtained for and from our fishery resources by having an objective of producing the largest net economic return consistent with the biological capabilities of the exploited stocks. If the recommendations of the Stratton Commission had been followed by fishery managers over the past 35 years, our fisheries would not be at the critical juncture they face today.

Ecosystem management and aligning incentives toward sustainability are not new ideas whose discovery was needed to allow progress on fishery management. As early as 1969, the Stratton Commission had explained the incentives facing fishermen under the open-access common-property regime that characterized most U.S. fisheries. Most of our current fishery management problems reflect the failure to adopt policies that align the incentives of fishermen with the broader interests of society. Sanchirico and Hanna offer specific policy actions that can be taken now to align incentives. But we should not assume that that knowledge will be acted on. The same politically oriented cautions that were offered in the Stratton Commission report are in play today. The public at large exhibits a “rational ignorance” concerning fishery policy. And fishery bioeconomics is too deep a subject for mass media treatment. Necessary changes in policy will require a continuing education effort aimed at the fishing community and their representatives. These fishery representatives include public officials who are nominal representatives of the public, with responsibility for the management of public-trust fishery resources.

As a commercial fisherman, I spent about half of my 40-year career fighting against the ideas that Sanchirico and Hanna put forth. When I finally convinced myself that the fishing industry’s opposition to those ideas was self-destructive, I became an advocate for policies that align the incentives facing fishermen with the interests of society. I welcome the support provided by the two ocean commissions, but I know that their pronouncements will not end the struggle.

RICHARD B. ALLEN

Commercial fisherman and independent fishery conservationist

Wakefield, Rhode Island


James N. Sanchirico and Susan S. Hanna have identified the important issues facing U.S. and world fisheries managers, and I agree with the major points they make. However, few have recognized that the problem with U.S. fisheries is primarily economic, not biological. There is no decline in the total yields from U.S. fisheries, whether measured economically or in biomass; the decline is in the profitability of fishing. U.S. fisheries are currently producing, on a sustainable basis, 85 percent of their potential biological yield. The crisis is not from overfishing but from how we manage the social and economic aspects of our fishery.

Although I agree that we could do better in terms of biological production, increasing U.S. biological yields by 15 percent is not going to solve any problems. We are going to cure our fisheries problems by solving the economics, not by fine-tuning biological production. Sanchirico and Hanna are right on target when they list ending the race for fish and aligning incentives as the highest priorities, and both of these items were included in the recommendations of the two ocean commissions. However, the Pew Commission was almost totally mute on how to achieve this and emphasized a strongly top-down approach to solving biological problems, without discussing or evaluating incentives in any detail. The U.S. Commission was much more thorough in looking at alternatives for aligning incentives.

There remains a strong thread through the reports of both commissions: the idea that the solutions for U.S. fisheries will come from better science, stricter adherence to catch limits, marine protected areas, and ecosystem management. I refer to these solutions as Band-aids, stopping superficial bleeding while ignoring the real problems. The U.S. Commission recommended the adoption of “dedicated access privileges,” including individual quotas, community quotas, formation of cooperatives, and territorial fishing rights. Movement to these forms of access and the associated economic rationalization that comes with them should be the highest priority. In U.S. fisheries, the biological yield is good and the economic value of the harvest is good, but the profitability of fishing is terrible.

Finally, the United States has adopted a model of fisheries regulation that includes centralized control through regional fisheries management councils or state agencies and almost total public funding of research and management. The more successful models from the rest of the world suggest that more active user involvement in science and decisionmaking, and requiring those who profit from the exploitation of fish resources to pay all the costs of management, are much more likely to result in good outcomes.

The more time we spend restructuring the agencies and trying to decide what ecosystem management is, the longer we will delay curing the problems afflicting U.S. fisheries.

RAY HILBORN

School of Aquatic and Fishery Sciences

University of Washington

Seattle, Washington


James N. Sanchirico and Susan S. Hanna are on target in saying that we are at a critical time in U.S. fishery policy, but I would expand that view to include ocean policy internationally. The problems of degradation of the marine environment, overexploitation of resources, and insufficiency of current governance for ocean and coastal areas are global. Our oceans are under serious threat, and major changes in policy are urgently needed.

A central feature of the U.S. Commission on Ocean Policy and the Pew Oceans Commission reports is the call for the implementation of ecosystem-based management: management of human impacts on marine ecosystems that is designed to conserve ecosystem goods and services. Ecosystem-based management needs to explicitly consider interactions among ecosystem components and properties, the cumulative impacts of human and natural change, and the need for clarity and coherence in management policy. Fisheries management must be part of this overall move toward ecosystem-based management, not remain as an isolated sector of policy.

The U.S. Commission recommends some needed changes in fisheries policy, but perhaps the most important change is instituting greater accountability for conservation in the management system. U.S. fisheries management, despite its problems and failures, has some successful features: 1) there is a strong scientific advisory structure, 2) there is clear stakeholder involvement in management decisions, and 3) there is a governance structure that has the potential to deal with emerging issues. In order for this system to live up to its potential, accountability must be improved by ensuring that there is a positive obligation to implement strong management even if the stakeholder process of plan development fails. Under the current system, regional councils prepare management plans for approval or disapproval by the National Marine Fisheries Service (NMFS). If a plan is not developed in a timely manner or doesn’t meet conservation needs and is rejected, then usually no new management is implemented until a council develops a new plan, even if current management is clearly failing to conserve vital resources. In other words, the NMFS is presented with the choice of whether a submitted plan is better than nothing. Is that really the perspective we want for management? Alternatively, the U.S. Commission recommends a strong default rule: If a plan doesn’t meet conservation standards, no fishing should occur until one that does is available. In other words, shift the burden of conservation onto the management system, rather than the resource. Similarly, there must be an absolute and immediate obligation to adjust a plan if it doesn’t perform as intended.

Just managing fisheries is not enough to protect the marine environment. A broad suite of conservation measures is needed. The U.S. Commission calls for ecosystem-based management to be developed regionally and locally in a bottom-up approach to management. But in all cases there must be a positive, timely obligation for conservation. Participatory processes take time, and we need to remember that often the fish can’t wait.

ANDREW A. ROSENBERG

Professor of Natural Resources

Institute for the Study of Earth, Oceans and Space

University of New Hampshire

Durham, New Hampshire

Andrew A. Rosenberg is a former deputy director of the National Marine Fisheries Service.


Public anonymity

“Protecting Public Anonymity,” by M. Granger Morgan and Elaine Newton (Issues, Fall 2004), deals with one problem by exacerbating another. If someone breaks into my home, I don’t expect the authorities to punish me for carelessness but to punish the perpetrator. Yet most of the methods for protecting anonymity put the burden on those who collect or manage databases. Why not a clearer definition of what an abuse is and of punishments for the abusers?

We already allow the merging of databases with information about individuals, because a great deal of research requires a lot of information about each individual, not for revelation but for statistical purposes. It is true that even statistical findings can lead to stereotyped conclusions about subgroups in society, but that can be reduced by proper presentations of results.

Important survey research uses personal interviews to collect much information directly from individuals, but highly productive improvements in the data can be made economically by adding information from other sources, ranging from data sets with individuals identified to those containing information about the area where a person lives or the nature of his or her occupation. And great reductions can be made in respondents’ burdens if some information can be made available from other sources. Methodologically, we can learn about response error and improve the data by comparing data from more than one source. Explanations of situation or behavior must allow for complex interaction effects.

We already have protections for cases in which personal data are merged, as well as prohibitions against ransacking data to reveal individuals. At the University of Michigan’s Institute for Social Research, we have been collecting rich individual data for years, including the use of reinterview panels, without any case of loss of anonymity.

JAMES N. MORGAN

Senior Research Scholar Emeritus

Institute for Social Research

Professor of Economics Emeritus

University of Michigan

Ann Arbor, Michigan


The challenge to our society is to calibrate the balance between personal privacy and society’s security in accord with the constant evolution of technology. This public policy debate has to include the full participation of academics, business leaders, civil libertarians, law enforcement and national security officials, and technologists, together with our elected political leaders, who reflect the attitudes of the citizens.

The challenge is global because technology erases national borders but cannot eliminate the cultural and historical attitudes on the individual issues of personal privacy and national security as well as their convergence. Europe’s attitudes, for example, on the convergence of these issues are shaped by the historic experiences of Nazi occupation and by recent domestic terrorism in England, Ireland, Italy, Germany, France, and Spain. Other areas such as Hong Kong, Australia, and Japan have distinct national ideas about privacy.

Companies such as EDS are engaged in dialogues and partnerships with the U.S. government as well as governments in Europe, Asia, and Latin America and with multilateral governmental organizations to determine a process that reflects the consensus of all the participants in the robust debate about the “balance” between personal privacy and security. This global conversation is vertical and horizontal, because some information—personal financial and health records, for example—is particularly sensitive and is therefore more regulated. EDS has been involved in this discussion for well over 10 years and plans to continue its engagement in these public/private dialogues for years to come.

The article by M. Granger Morgan and Elaine Newton was troublesome because it suggested that anonymity is somehow a “right” in the United States. I disagree. In an era of search engines and digitization of records, people aren’t anonymous. That’s a reality. Controls can be put in place to provide privacy protections and punish actual abuses and serious crimes such as identity theft, but the idea that complete personal anonymity is possible, much less a “right” in the United States, is naïve and simplistic. Frankly, after September 11, every passenger and crew member on an airplane feels more secure knowing that every other passenger was “screened” by the same regime and that no one is really anonymous to the authorities.

At the same time, the article was constructive because it strongly suggested that a privacy/security regime could be instituted voluntarily in partnership with business, which frankly is more sensitive to the realities of the market, technology, and our customers’ concerns than government regulation is.

Sometimes, there is amnesia about a central fact: The customer sets the rules, because the customer is the final arbiter. Remember: If privacy is the issue, as in the financial and healthcare sectors, then the processes adapt to that concern. If security is the issue, as in airline travel, then the processes adapt to that concern as was demonstrated in the recent negotiations between the United States and the European Union on airline passenger lists. If there is customer concern about data from radio frequency identification devices, then the rules and business practices will evolve to address those concerns. Sometimes, the government will prod the process forward. In this space of privacy and security convergence with technology deployment, the odds are that government regulation is a lagging indicator.

At the same time, the article raises the legitimate concern about governmental abuse of its powers. History has certainly provided plenty of examples for the concern to be warranted. However, the lesson to be drawn from history is that regulation should be a reaction to demonstrated abuses rather than an attempt to anticipate and proscribe abuse. The marketplace can generate its own more powerful and immediate remedy, especially with an issue where consumer confidence is key to market success.

The article raises a number of points but fails to recognize the current and robust engagement of all participants—academic, business, and government—in the pursuit of a balance. As a participant in many of these dialogues and forums, EDS remains committed to the global dialogue to provide privacy and security simultaneously to our customers and our customers’ customers in full partnership with elected and appointed leaders of governments.

WILLIAM R. SWEENEY, JR.

Vice President, Global Government Affairs

EDS

Dallas, Texas


Developing-country health

Michael Csaszar and Bhavya Lal (“Improving Health in Developing Countries,” Issues, Fall 2004) have done a service by drawing attention to the need for more research on global health problems. The key issue is how to institutionalize appropriate health R&D financing.

The governments of the United States, Japan, and the European Union fund nearly half of the world’s health research. Although much of that research eventually benefits poor countries, many global health problems are underfunded. Unfortunately, it is hard to convince legislators in rich countries, who answer to their domestic constituencies, to allocate funds for research on the diseases of the poor abroad. The Grand Challenges in Global Health initiative of the Gates Foundation and the National Institutes of Health offers a model for tapping governmental health research capacity for the diseases of the poor.

The pharmaceutical industry last year provided more than 40 percent of world health R&D expenditures—some $33 billion. The industry brings to market only a small percentage of the products it studies, earning enough from a tiny percentage of very successful products to pay for its entire R&D, manufacturing, and marketing enterprise. Research-intensive pharmaceutical firms are not more profitable than other companies (or the stock market would drive up their prices). Yet their successful model for financing R&D is under attack as overly costly to the consumer. Moreover, the low-cost preventive measures that are most appropriate to the needs of developing countries are unattractive to the pharmaceutical industry. People will pay less to prevent than to cure disease. Tax inducements and regulatory reform should be considered to stimulate industrial R&D.

Ultimately, pharmaceutical companies need strong markets for their products in developing nations. The Interagency Pharmaceutical Coordination Group (IPC) offers one approach to creating these markets. Similarly, the Global Alliance for Vaccines and Immunizations and the Global Fund to Fight AIDS, Tuberculosis and Malaria are providing money to buy vaccines and pharmaceuticals for developing nations.

Philanthropic foundations, including the Howard Hughes Medical Institute, the Wellcome Trust, and the Gates Foundation, fund less than 10 percent of world health research. Yet their leadership has been and is critically important.

After the creation of the Tropical Disease Research Program in 1975, new institutions were created to further encourage research on global health problems, notably the Global Forum for Health Research, the Council on Health Research and Development, the International AIDS Vaccine Initiative, and the Initiative on Public-Private Partnerships for Health. Still, the key to providing more technological innovations appropriate to developing nations and to building their health science capacity probably lies in creating more public and political support for existing institutions while improving their policies and programs.

JOHN DALY

Rockville, Maryland


Michael Csaszar and Bhavya Lal raise important concerns relating to health in developing countries. By focusing on a systems approach, they identify one of the most critical factors that accounts for the success or failure of project activities in developing countries.

The most common source of failure in health innovation systems is the lack of focus on specific missions. Even where research missions exist, they tend to be formulated in the developed countries and extended to developing countries. This common practice often erodes the potential for local ownership and undermines trust in the health systems being promoted.

A second cause of failure is the poor choice of collaborating institutions in developing countries. Many of the international research programs do not make effective use of knowledge nodes such as universities in developing countries. Knowledge-based health innovation systems that are not effectively linked to university research are unlikely to add much value to long-term capacity-building in developing countries.

Probably the most challenging area for health innovation systems is the creation of technological alliances needed to facilitate the development of drugs of relevance to the tropics. A number of proposals have been advanced for increasing research investment in this area. They range from networks of existing institutions to new technology-development alliances, many of which focus on vaccine development. Although these arrangements seek to use a systems approach in their activities, the extent to which they involve developing-country universities, research institutions, and private enterprises is not clear. The design of such incomplete health innovation systems can only guarantee failure.

CALESTOUS JUMA

Professor of the Practice of International Development

Kennedy School of Government

Harvard University

Cambridge, Massachusetts


A systems approach to building research capacity and finding ways to apply the research findings to benefit the health of a population is an attractive proposition. I would like to highlight two fundamental issues that must be addressed if the proposed concept is to be successful. My response is based on my experience at SATELLIFE (www.healthnet.org), a nonprofit organization serving the urgent health information needs of the world’s poorest countries through the innovative use of information technology for the past 15 years.

First, what are the mechanisms by which networks will be created for the sharing of research results with health practitioners in developing countries? What are the formal, reliable systems for knowledge dissemination leading to an evidence-based practice of health care in a country? How does the knowledge move from the capital cities, where it is generated, to rural areas, where health care providers are scarce and 90 percent of the population lives? In these rural areas, nurses and midwives are the frontline health workers who see most patients. These are challenging questions with no easy answers, but clearly information and communications technology can play a significant role.

Second, information poverty plagues researchers and health practitioners in emerging and developing countries. Many medical libraries cannot afford to subscribe to journals that are vital and indispensable informational resources for conducting research. How does one gain access to the most current, reliable, scientifically accurate knowledge that informs research and data for decisionmaking? Poor telecommunications infrastructure, expensive Internet access, poor bandwidth to surf the Web, and the lack of computers and training in their use often work against the researcher in resource-poor countries. Timely, affordable, and easy access to relevant knowledge has a profound impact on policy formulation and the delivery of health care in a country. On October 22, 2004, a subscriber from Sri Lanka sent a message to our email-based discussion group on essential drugs, trying to locate a full-text article: “We don’t have Vioxx here in Sri Lanka but there are about 12 registered brands of rofecoxib in the market. I would be thankful if anyone having access to that article can mail it to me as an attachment. (We don’t have access to many medical journals!)” The digital divide is not only about computers and connections to the Internet but also about the social consequences of the lack of connectivity.

The systems approach to developing research capacity and disseminating findings most likely addresses these crucial barriers in an implicit manner. But they need to be made more explicit so as to garner the necessary resources at the social/governmental, organizational, physical, and human levels to make a real difference.

LEELA MCCULLOUGH

Director of Information Services

SATELLIFE

Watertown, Massachusetts


Democratizing science

David H. Guston (“Forget Politicizing Science. Let’s Democratize Science!” Issues, Fall 2004) rightly argues that public discussion should move beyond bickering over the politicization of science and consider how science can be made more compatible with democracy. But that may be difficult without some discussion of what politicization is. One useful concept says that politics is the intersection of power and conflict. So if conflicts of opinion on a science advisory committee are resolved through fair discussion, they are not political. Voting on advisory committees, however, amounts to the political resolution of conflicts through the equal distribution of power. Similarly, even though good advice may enhance the power of public officials, it would be odd to call appointing the best scientists to an advisory committee political. But such appointments may become political if they become matters of conflict or if power is used to keep latent conflicts from emerging. Science is thus rarely entirely political, but it is usually political in part, and it always has the potential to become more so.

This view of politics suggests that the Bush administration and its critics are each only half right when accusing the other of politicizing science: The administration has apparently used its power to dominate selected advisory processes, and its critics have publicly contested that use of power. From this perspective, the politicization of science might be compared to the politicization of other social institutions once deemed essentially private. The workplace and the family, for example, have been politicized to a certain extent as part of efforts to fight discrimination and domestic violence, respectively. In each case, politicization was a necessary part of alleviating injustices, and coping with politics proved better than trying to suppress it.

The best way of coping with politics is democracy, and Guston’s suggestions promise a more just distribution of the costs and benefits of science. Pursuing these suggestions effectively will require careful consideration of what democratization means. Guston refers to ideals of accessibility, transparency, accountability, representation, deliberation, participation, and the public interest. These ideals are not always compatible. Creating spaces for public deliberation on science policy, for example, may require limits on transparency and participation, since media scrutiny or too many participants may hinder productive deliberation. And although interest groups are usually not representative of all citizens, they can often enhance participation more effectively than deliberative forums. Democratizing science thus requires a wide variety of institutions, each focused on a limited set of ideals.

More generally, some modes of democratizing science distribute power far more equally than others. If “democratic” means open to public view, accountable to professional associations, and representative of public interests, science has been democratic for much of its history. But if scientists are to be held accountable to elected officials or lay citizens, and if representing the public interest depends on public input, then democratizing science becomes both more controversial and more difficult. Democratizing science thus requires a willingness to politicize not only science but also democracy.

MARK B. BROWN

Assistant Professor

Department of Government

California State University

Sacramento, California


David H. Guston is correct to assert that science is political, and his proposals for increasing accessibility, transparency, and accountability in science point us in a positive direction. However, the success of Guston’s proposals will depend on two fundamental reforms. First, comprehensive scientific literacy initiatives must not only emphasize the “facts” of science but also teach citizens to think critically about science. Second, scientists need to be offered incentives to collaborate with lay citizens in the scientific enterprise.

We need to understand—and teach—that science is not just political in the sense that elected officials engage in the process of setting science policies and funding priorities. The ways in which scientists understand the phenomena they study also reflect an array of social and political factors. Thus, for example, the use of the techniques of the physical sciences in biology beginning in the early 1930s did not come about because nature called on scientists to think about biological phenomena in physical terms, but because the Rockefeller Foundation had the resources to push biologists in this direction. Likewise, nature doesn’t tell scientists to prefer false negatives to false positives in their research. This is a well-established social norm with political implications. Today, a scientist who claims that a phenomenon is real when it is not (a false positive) may hurt her or his professional reputation. By contrast, lay citizens who are concerned about carcinogen exposure in their local environment would probably prefer to be incorrectly informed that they were exposed (a false positive) than that they were not (a false negative). In short, science is thoroughly political, reflecting the interplay of actors with varying degrees of power and diverse interests.

To give citizens the sense that science is political in its everyday practice demands that we rethink what it means to be scientifically literate. We must not only teach our children how experiments are done, what a cell is, and the elements that make up water, but also that the phenomena scientists study, the way they study them, and what scientists accept as competent experimental designs all reflect social and political processes. This kind of scientific literacy is the necessary bedrock of a truly democratic science.

Democratizing science also demands that we alter the incentive structure for scientists. Guston points to the virtues of organizations that offer lay citizens the chance to shape research agendas. What motivation do academic scientists have to work with citizens to craft research agendas in such arenas? Will doing so improve the prospect that a junior faculty member will get tenure? Will the results of the citizen-prompted research be publishable in scholarly journals? To successfully democratize science demands that universities broaden their criteria for tenure so that scientists get credit from their colleagues for working with citizens.

I fully endorse Guston’s proposals, but to thoroughly democratize science, we will need to broaden what it means to be scientifically literate and work to alter the structure of incentives scientists have for doing their work.

DANIEL LEE KLEINMAN

Associate Professor of Rural Sociology

University of Wisconsin–Madison


Science education

Evidence of the need to improve science education in elementary school, especially in the lower grades, is not far to seek. The recently released results of the Trends in International Mathematics and Science Study (TIMSS) 2003 show that achievement by U.S. fourth-grade students is not what this nation expects. Between 1995 and 2003, fourth-graders in the United States did not improve their average science scores on TIMSS. In “Precollege Science Teachers Need Better Training” (Issues, Fall 2004), John Payne poses the question: Could part of U.S. students’ problem with science achievement have its roots in the way and extent to which elementary science teachers are being trained to teach science while in their college programs?

The short answer must be yes. Although many factors influence student achievement, the preparation of science teachers is certainly one critical factor. One analysis, based on the Bayer Facts of Science Education, suggests that elementary teachers do not teach science daily, do not feel “very qualified” to teach science, and do not rate their school program very highly. What could an undergraduate program do to help alleviate these problems?

Beginning in 2007-2008, the No Child Left Behind legislation requires school districts to assess all students in science at least once in the elementary grades, thus elevating science to the same tier as literacy and mathematics. The result: More science will be taught in elementary schools. So we have a response to the first issue, but it is not a result of teacher education.

What about the second issue? One of the limiting factors for elementary teachers feeling qualified to teach science is their understanding of science. I suggest that colleges design courses specifically for elementary teachers. Often, the response to such a suggestion is that they should take the standard courses such as introductory biology, chemistry, physics, and geology. Well, at best they will take only two of these courses. And these courses are usually not in the physical sciences, where our teachers and students have the greatest deficits. Colleges and universities can design courses that develop a deep conceptual understanding of fundamental science concepts and provide laboratory experience based on core activities from elementary programs. There is research supporting this recommendation, mostly from mathematics education, but in my view it applies to science teacher education as well.

The third issue, exemplary science programs for elementary schools, could be addressed by an emphasis on National Science Foundation (NSF) programs in future teacher education programs. The reality is that undergraduate teacher education has some, but not substantial, impact on the actual program used by a particular school district. State standards and the economics and politics of commercial publishers all play a much more significant role in the adoption and implementation of exemplary programs.

In the NSF Directorate for Education and Human Resources, programs related to the issue of teachers’ professional development and exemplary programs have been severely reduced because of recent budget reallocations. Without such external support, the likelihood of major reforms such as those envisioned by Payne and proposed here is very low.

RODGER W. BYBEE

Executive Director

Biological Sciences Curriculum Study

Colorado Springs, Colorado


I completely agree with John Payne’s comments about the success of efforts by the National Science Foundation (NSF) and others to improve the quality of in-service teacher education activities in science, technology, engineering, and mathematics (STEM) fields. However, he seems unaware of the equally aggressive efforts by NSF to improve the quality of pre-service teacher education in STEM fields.

Between 1991 and 2002, I served as a program officer and later as division director in NSF’s Division of Undergraduate Education. That division was assigned responsibility for pre-service education programs in 1990 in recognition that teacher preparation is a joint responsibility of STEM faculty and departments as well as schools and colleges of education. The division incorporated attention to teacher preparation in all of its programs for curriculum, laboratory, instructional, and workforce development. The flagship effort was the Collaboratives for Excellence in Teacher Preparation (CETP) program, which made awards from 1993 to 2000. The CETP program was predicated on the realization that effective teacher preparation programs require institutional support and the concerted effort of many stakeholders, including faculty and administration from two-year, four-year, and research institutions; school districts; the business community; and state departments of education. Funded projects were expected to address the entire continuum of teacher preparation, including recruitment, instruction in content, pedagogy, classroom management, early field experiences, credentialing, and induction and support of novice teachers. Attention was also given to the preparation of teachers from nontraditional sources.

Two evaluations were done of the CETP program. The first was an evaluation of the first five funded projects released in March 2001 by SRI International. The report concluded that CETP was “highly successful” in exposing pre-service teachers to improved STEM curricula, more relevant and innovative pedagogy, and stronger teacher preparation programs. The program was also judged “very successful” in involving STEM faculty. It also noted that “the potential for institutionalization looks positive.” The other evaluation was performed by the Center for Applied Research and Educational Improvement at the University of Minnesota and was a summative evaluation of the entire project. This report, released in March 2003, concluded that “the establishment and institutionalization of the reformed courses stand out as do improved interactions within and among STEM and education schools and K-12 schools.” Furthermore, when comparing graduates of CETP projects with graduates of other projects, the report noted, “CETP[-trained] teachers were clearly rated more highly than non-CETP[-trained] teachers on nine of 12 key indicators.” These indicators included working on problems related to real-world or practical issues, making connections between STEM and non-STEM fields, designing and making presentations, and using instructional technology. I wish STEM faculty were as well prepared for their instructional responsibilities; but that’s a topic for an article in itself.

It’s unfortunate that the CETP program was ended before we could obtain rich longitudinal data that might inform us about the actual classroom performance of the CETP-trained teachers. Of greater concern has been the volatility that has followed the expiration of CETP. The CETP program made new awards over an eight-year period (or two undergraduate student lifetimes). CETP was followed, briefly, by the STEM Teacher Preparation program, which was later folded into the Teacher Professional Continuum along with the previously separate program for in-service teacher enhancement lauded by Payne. This compression was necessary in order to pay for the Math and Science Partnership (MSP) program at NSF, an ambitious effort that focuses on partnerships between institutions of higher education and K-12 school districts. After three rounds of awards, there is now an effort to remove MSP from NSF and add funds to a similarly named program at the Department of Education that now functions more by block grant than by competitive peer review. So on balance, Payne’s call for new efforts is entirely appropriate, as long as we amend it to ask that programs shown to be successful also be sustained.

NORMAN L. FORTENBERRY

Director

Center for the Advancement of Scholarship on Engineering Education

National Academy of Engineering

Washington, D.C.


John Payne correctly identifies the most serious problem in science education: the poor learning of science in the elementary school years. He also recognizes that the poor teaching of science by elementary school teachers is at the core of poor learning by students. I applaud him for calling for better educating those who will become elementary school teachers. Finally, I extend my appreciation and congratulations to him and his company for their long-term commitments to helping improve the situation.

Having said these things, I would like to make some observations and take exception to a few of his claims. Having followed the reforms in Pittsburgh, I suggest that the early and dramatic improvements in student performance and attitudes toward science there should be attributed to the use of elementary science specialists. These teachers have uncommonly strong backgrounds in science from their undergraduate years, and they make up a small percentage of all elementary school teachers. By contrast, most elementary teachers and teacher candidates are fearful of science, many to the point of anxiety and dislike, and took only a few science courses in college (often large lecture classes in the general education curriculum).

Many of us have long noted that science (and mathematics) anxiety in elementary school teachers is one more consequence of poor teaching in the elementary (and often in the secondary) years of a teacher’s education. Bad attitudes and practices are passed from generation to generation. I assert that meaningful progress in reforming early science education would be best served by converting to the use of elementary science specialists, parallel to how specialists are used for instruction in art and music.

The practice of inquiry-based science deserves further comment. I don’t doubt that Payne accurately quoted published figures: that 95 percent of deans (of education, I presume) and 93 percent of teachers say that students learn science best through experiments and discussions in which they defend their conclusions, and that 78 percent of new teachers say they use inquiry-based science teaching most often (compared with 63 percent 10 years ago). However, based on my personal observations over many years, the observations of many colleagues who visit classrooms regularly, and the continuing poor performance of elementary students in science nationwide (selected communities like Pittsburgh excepted), these figures simply cannot be believed. I have administered many surveys to teachers myself, and one has to expect that most teachers report what they wish they were doing rather than what they actually do. Learning by inquiry is difficult for most science majors in college. Expecting most elementary school teachers to become comfortable and skilled at teaching this way is completely unrealistic unless the budget for teacher professional development activities in science is increased a hundredfold.

Investing in and requiring the use of elementary science specialists is a cheaper and more reliable solution to the K-8 learning problems.

DAN B. WALKER

Professor of Biology and Science Education

San Jose State University

San Jose, California


Staying competitive

In “Meeting the New Challenge to U.S. Economic Competitiveness” (Issues, Fall 2004), William B. Bonvillian offers a concise statement of many of the challenges now facing the U.S. economy and especially its technology-intensive sectors. He reminds us of the concerted efforts during the 1980s of business, government, organized labor, and academia to find new ways of innovating and producing that led in large measure to the boom times of the 1990s. He recommends returning to this formula to search again for new ways to stay “on top.”

This is certainly a wise prescription and one that leaders in every sector should embrace. Today, Americans are sharply divided not only on their politics but also on their understanding of the causes and consequences of current economic ills. The debate about offshore outsourcing and whether it is good or bad for U.S. jobs is only one illustration of how far we are from a shared understanding of the problem, let alone a solution. A fresh dialogue is essential to help us move forward as a nation.

2004 is not 1984, however, and it is not obvious that the old formula for dialogue would succeed today. Many more and different kinds of legitimate stakeholders need to be in the conversation. Part-time, contract, and self-employed workers, as well as the new generation of knowledge and service workers, have as great a stake as do the members of the old manufacturing trade unions. “New economy” companies view the challenges and opportunities of the global economy in quite a different light from those from an earlier era. Resource scarcity, environmental challenges, and global climate change are just as important as the balance of trade and productivity growth in defining the next American future. Any process of national dialogue must incorporate all of these perspectives, and more, if it is to succeed.

I see two highly promising pathways for a fruitful new American dialogue, in addition to Bonvillian’s wise suggestion of a new “Young Commission.” The first is for Congress to reassert its traditional role as the forum within which the United States openly examines its most pressing problems. During the past decade, Congress has lost much of its real value, turning from rich and open inquiry directed at solving problems to sterile partisan exercises intended to preserve the status quo or score points against the political opposition. Our country can no longer afford to squander our precious representative institution in this way. Congress must go back to real work.

The second is for the organizers of a new American dialogue to find ways to take advantage of the immensely rich Internet-based communications culture, which barely existed when the first Young Commission was doing its work in the 1980s. All the tools of the new forms of information exchange—Web pages, email, listservs, chat rooms, blogs, data mining, and all the other new modes—offer unprecedented opportunities, not only to tap into the chaotic flow of information and misinformation that characterizes the 21st-century world but also to pulse that flow in ways that yield new insights that can help build the new competitive nation that Bonvillian and I and others like us are seeking.

CHRISTOPHER T. HILL

Vice Provost for Research

George Mason University

Fairfax, Virginia


William B. Bonvillian states well the key issues related to U.S. economic competitiveness: “If the current economy faces structural difficulties, what could a renewed economy look like? Where will the United States find comparative advantage in a global economy?” After a brief review and history of competitiveness, he focuses on innovation as a major factor and discusses the appropriate role for government in support of innovation in the context of five key issues: R&D funding, talent, organization of science and technology (S&T), innovation infrastructure, and manufacturing and services.

Indeed, well-crafted government policies and programs in these areas could significantly improve the ability of U.S.-based companies to innovate and excel in the global economy. I found it particularly noteworthy that Bonvillian’s proposals represent a positive agenda. His proposals for funded government programs do not have the appearance of corporate welfare, and his S&T proposals acknowledge the limits of federal R&D budgets and the need to prioritize investments. Bonvillian also avoids protectionist recommendations and emphasizes the need for U.S. companies, individuals, and institutions, including the government, to innovate in order to compete. This positive agenda is one that could muster bipartisan support within Congress and the Executive Branch.

Manufacturing is an area primed for a public/private partnership. Bonvillian mentions several public policy actions that could help our manufacturing sector, including trade, tax, investment, education, and Department of Defense program proposals. However, he identifies innovation in manufacturing as the most important element. Bonvillian calls for a revolution in manufacturing that exploits our leadership and past investments in technology. He calls for “new intelligent manufacturing approaches that integrate design, services, and manufacturing throughout the business enterprise.” Such an approach is worthy of a public/private partnership.

As we embark on new public/private partnerships, we must realize that globalization has significantly altered the playing field. Consider the case of SEMATECH, which Bonvillian correctly identifies as a government/industry partnership success of the 1980s. SEMATECH was originally established as a public/private partnership to ensure a strong U.S. semiconductor supplier base (especially for lithography) in light of a strong challenge from Japan. The creation of SEMATECH, along with effective trade and tax policies, S&T investments, and excellent management in U.S. companies, helped the U.S. semiconductor industry recover and thrive. However, during the late 1990s, in response to the globalization of the semiconductor industry, SEMATECH evolved from a U.S.-only consortium working to strengthen U.S. suppliers into a global consortium with a global supply chain focus. Today, SEMATECH has members from the United States, Europe, and Asia, and works with global semiconductor equipment and material suppliers. Among SEMATECH’s most significant partnerships is one with TEL, the largest Japanese semiconductor equipment supplier and a major competitor of U.S. suppliers. Applied Materials, a U.S. company that is now the world’s largest semiconductor equipment supplier, achieved its growth by making large investments in R&D, aggressively pursuing global customers, and purchasing companies (hence technology) throughout the world. And though Applied Materials is the world’s largest semiconductor equipment supplier, there are no longer any U.S. suppliers of leading-edge lithography. In today’s global economy, U.S. semiconductor manufacturers view a diverse global supply chain as a strength, not a threat. U.S. policymakers must develop new policies and programs that acknowledge the realities of the global economy and recognize that to maximize benefit to the United States, government investments in innovation may need to include the participation of global companies and yield benefits beyond our borders.

Bonvillian has established an excellent framework for a reasoned debate on meeting new challenges to U.S. economic competitiveness. And as he asserts, it is time to go from analysis to action.

GILBERT V. HERRERA

Director, Manufacturing Science and

Technology

Sandia National Laboratories

Albuquerque, New Mexico

Gilbert V. Herrera is the former CEO of SEMI/SEMATECH, a consortium of U.S. semiconductor equipment and material suppliers.


William B. Bonvillian spells out a series of challenges to long-term U.S. competitiveness. The response to those challenges will go a long way toward determining America’s 21st-century prosperity and capacity for international leadership.

In the past 15 years, China, India, and the former Soviet Union have brought 2.5 billion people into the global economy. China is already producing technologically sophisticated products, and India is a growing force in providing information technology and other services. Korea has emerged as a power in advanced electronics, and Brazil is the third largest manufacturer of civilian aircraft.

The digital revolution continues to change the playing field for many occupations that were formerly shielded from international competition. Europe, Japan, and much of the world are seeking to emulate the successful U.S. model of innovation and are actively recruiting students and scientists who used to think of America as the preferred destination.

What then must the United States do to retain its leadership in the global economy? First, we need to move past the debate on government versus the market and focus on developing the right mix of public policies and private initiative to ensure an innovative future.

Second, we must establish the right macroeconomic context. That means reducing the fiscal deficit without endangering needed investments in R&D. It also means striking a global bargain with the world’s major economies to gradually reduce the size of our current account deficit that has helped erode the country’s manufacturing base.

Third, we need to adjust our national research portfolio to ensure adequate funding for the physical sciences and to help bridge the gap between the private sector and basic research.

Fourth, we must adopt an aggressive strategy to prepare Americans for the careers of the future and continue to welcome international students and scientists.

Finally, we need to forge a durable political consensus that supports a strategy for 21st-century innovation. National security played that role in the 1960s and 1970s, and international competition was an added force in the 1980s. We need to articulate a national mission that will galvanize popular support and, like the space program, excite young Americans about careers in science and technology. The president’s proposed mission to Mars might be the answer. I would suggest two others: new forms of energy that will reduce and eventually end dependence on the Middle East while better preserving the environment, and renewed U.S. leadership in making a global attack on tropical and other threatening diseases.

Hats off to Bonvillian for clearly spelling out some critical American choices. Working on Capitol Hill, Bonvillian is in a position to help turn good ideas into timely legislation. We all need to wish him well.

KENT HUGHES

Director

Project on America and the Global Economy

Woodrow Wilson Center

Washington, D.C.

Kent Hughes was an Associate Deputy Secretary of Commerce in the Clinton administration.


Like Tom Paine demanding attention for “Common Sense,” William B. Bonvillian makes a persuasive and eloquent argument that the U.S. economy faces grave and unprecedented threats—a situation that cries out for an immediate creative response.

He argues cogently that we’ve never been able to measure our ability to remain at the forefront of innovation with any precision. It’s hard to attract attention to problems you can’t see. It’s fair to ask whether, at the end of the 19th century, Britain could have seen signs that it was about to blow a two-century lead in innovation. Alarm bells did not ring, even as huge amounts of capital flowed to upstart projects in the United States, nor as Americans started dozens of universities that were admitting smart American rustics and granting degrees in “agricultural and mechanical arts” and other topics not considered suitable for young gentlemen. Politics in Britain focused on the burdens of empire, not on whether local steel mills were decades out of date.

The recent presidential campaign was particularly disappointing in that the debate on the United States’ declining status in innovation was scarcely joined. This was painful. Federal research investment is essential because these investments provide a stream of radically new ideas and the sustained investments needed to engage in bold projects such as sequencing the genome. It is outrageous that this investment continues to decline as a fraction of the nation’s economy, and it is vulnerable to even more dramatic new cuts when post-election budget writers face the reality of ballooning defense costs and declining revenues. As the long knives come out, it will be a battle to see who screams the loudest, and it will be hard for the arguments of the research community to be heard in the din.

As Bonvillian points out, the success of the federal research investment depends not just on its size but on the skill with which it’s managed. We can only succeed if federal managers find a way to move adroitly to set new priorities and ensure that investments are made where they are most likely to yield results. They must also ensure that the process rewards high-risk proposals whose success can yield high potential impacts (the old DARPA style). Many of these concepts will not come with familiar labels but will operate at the interface between disciplines such as biology, mathematics, physics, and engineering. Bonvillian’s insight that technical innovation must now be coupled with “an effective business model for using the technology” means that many innovations will involve both products and services. And his observation that “a skilled workforce is no longer a durable asset” demands that we find new, more productive ways of delivering education and training.

Loss of technical leadership is an enormous threat to our economic future. It cripples our ability to meet social goals such as environmental protection or universal education at an affordable cost. It undermines a central pillar of national and homeland security. What I fear most is that instead of being remembered as Paine, Bonvillian will be remembered as Cassandra—completely correct and completely ignored.

HENRY KELLY

President

Federation of American Scientists

Washington, D.C.


Women in science

I was dismayed to see your magazine publish an article that advocates discrimination. This is Anne E. Preston’s “Plugging the Leaks in the Scientific Workforce” (Issues, Summer 2004), where she says that universities should make “stronger efforts to employ spouses of desired job candidates.” Because universities have finite resources, such efforts inevitably reduce job prospects for candidates who lack the “qualification” of a desirable spouse. Favoring spouses thus amounts to the latest version of the old-boy system, where hiring is based on connections rather than on merit. When a couple cannot get jobs in the same city, it is unfortunate. But when a single person is denied a job because the spouse of a desirable candidate is favored, it is not only unfortunate but also unjust. It is particularly ironic when favoring women who are married to powerful men is somehow felt to serve the cause of feminism.

FELICIA NIMUE ACKERMAN

Department of Philosophy

Brown University

Providence, Rhode Island


Future of the Navy

Robert O. Work’s “Small Combat Ships and the Future of the Navy” (Issues, Fall 2004) makes a much-needed contribution to the debate over the transformation of the U.S. armed forces to meet the threats of the future.

As Work notes, the case in favor of acquiring at least some Littoral Combat Ships (LCSs) is strong. The U.S. Navy has conducted, and will continue to conduct, a range of missions that would benefit from the capabilities of a ship such as the LCS. Moreover, the development of these ships can foster innovation within the naval services. The Australian, Norwegian, and Swedish navies, among others, have fielded highly innovative small craft in recent years. The U.S. Navy can benefit from many of these developments through the LCS program. Finally, regardless of whether one believes that the era of the aircraft carrier is at an end, there is a strong argument for diversifying the Navy’s portfolio of capabilities.

Although the case for investment in LCSs is strong, Work correctly notes that there is opposition to even a limited buy in parts of both the Navy and Congress. The fact that the Navy envisions LCSs undertaking missions that it considers marginal, such as mine warfare, demonstrates that to some, small combatants are themselves peripheral.

This is not the first time that the Navy has considered a prominent role for small combatants. In the early 1970s, Chief of Naval Operations Elmo Zumwalt envisioned a fleet that would include a number of new models of small combatants, including missile-armed hydrofoils. His plans came to naught, however, because of a combination of organizational opposition within the Navy and uncertainty over how such ships would fit U.S. strategy. Supporters of LCS would do well to heed this experience. The LCS program will succeed only if supporters can demonstrate that it will have value as an instrument of U.S. national power.

THOMAS G. MAHNKEN

Visiting Fellow

Philip Merrill Center for Strategic Studies

Paul H. Nitze School of Advanced International Studies

The Johns Hopkins University

Washington, D.C.


Robert O. Work’s assessment of the U.S. Navy’s ongoing transformation and the Littoral Combat Ship (LCS) program captures the essential technical and doctrinal challenges facing the Navy as it transitions to a 21st-century fleet postured to meet U.S. national security requirements in a dangerous and uncertain world. Work’s article is a summary of a masterful study he completed early in 2004 for the Center for Strategic and Budgetary Assessments in Washington, D.C.

Today’s Navy, Marine Corps, and Coast Guard are proceeding on a course of true transformation. The term runs the risk of becoming shopworn in the Bush administration’s national security lexicon, but it is undeniable that the U.S. sea services are being transformed in a way comparable to the transition to modern naval power that began roughly 100 years ago. Work’s article highlights the key attributes of this transformation, notably the development of highly netted and more capable naval platforms.

His contemplation of the Navy of tomorrow resembles the experience of naval reformers in ages past. As Bradley Allen Fiske wrote at the Naval War College in 1916, “What is a navy for? Of what parts should it be composed? What principles should be followed in designing, preparing, and operating it in order to get the maximum return for the money expended?”

Chief of Naval Operations (CNO) Admiral Vern Clark grapples with the same issues that Fiske pondered 88 years ago. Clark seeks to build a balanced fleet encompassing potent platforms and systems at both the high- and low-end mix of the Navy’s force structure—a force able to meet all of its requirements in both coastal waters and the open ocean.

Tomorrow’s Navy will be able to project combat power ashore with even higher levels of speed, agility, persistence, and precision than it does today. But Clark also faces the stark challenge of affordability in recapitalizing the fleet: funding for Navy recapitalization is unlikely to increase and, because of a variety of factors, could decrease if wiser heads do not prevail.

At a time when the number of warships in the Navy is falling to the lowest level since 1916, the need for a significantly less expensive, modular, and mission-focused LCS is obvious. Today’s ships are far more capable than hulls of just a decade ago, but in a world marked by multiple crises and contingencies, numbers of ships have an importance all their own. “There is no substitute for being there,” is how one former CNO expressed this consideration. LCS will help the Navy to achieve the right number of ships in the fleet by providing a capable and more affordable small combat ship suitable for a wide range of missions.

Clark has spoken eloquently of the shared responsibilities faced by navies and coast guards around the world in keeping the oceans free from terror to allow nations to prosper. “To win this 21st-century battle on the 21st-century battlefield, we must be able to dominate the littorals,” Clark said last year. “I need LCS tomorrow.”

Work offers some useful cautions regarding LCS design considerations (notably the tradeoff between high speed and payload), and his recommendation that the Navy evaluate its four first-flight LCS platforms carefully before committing to a large production run makes sense.

It should be noted, however, that the Navy has conducted extensive testing and experimentation in recent years using LCS surrogate platforms, including combat operations during the invasion of Iraq. It has a good grasp of its mission requirements in the littorals. As for the Navy’s requirement for a high-speed LCS, no less an authority than retired Vice Admiral Arthur Cebrowski, director of the Office of Force Transformation in the Department of Defense, supports the Navy’s position. As he observed earlier this year, speed is life in combat.

GORDON I. PETERSON

Retired Captain, U.S. Navy

Technical Director

Center for Security Strategies and Operations

Anteon Corporation

Washington, D.C.

From the Hill – Winter 2005

Federal R&D spending to rise by 4.8 percent; defense dominates

The federal R&D budget for fiscal year (FY) 2005 will rise to $132.2 billion, a $6 billion or 4.8 percent increase over the previous year. Eighty percent of the increase, however, will be devoted to defense R&D programs, primarily for weapons development. The total nondefense R&D investment will rise by $1.2 billion or 2.1 percent to $57.1 billion, better than the 1 percent increase overall for domestic programs but far short of previous increases.

Perhaps the biggest surprise was a cut in the budget of the National Science Foundation (NSF). This comes just two years after Congress approved a plan to double the agency’s budget over five years.

Most R&D funding agencies will see modest increases in their budgets. The National Institutes of Health (NIH) budget will increase by 2 percent. Although the National Aeronautics and Space Administration (NASA) budget will increase by 4.5 percent to $16.1 billion, the bulk of the increase will go to returning the space shuttle to flight, leaving NASA R&D up just 2 percent.

There are some clear winners in the nondefense R&D portfolio. U.S. Department of Agriculture (USDA) R&D received a 7.8 percent boost to $2.4 billion because of new laboratory investments and R&D earmarks. R&D in the National Oceanic and Atmospheric Administration will climb 10.7 percent to $684 million because of support for the U.S. Commission on Ocean Policy’s recommendation to boost ocean R&D. The National Institute of Standards and Technology’s (NIST’s) support of its intramural laboratory R&D will increase 16.2 percent to $328 million. NIST’s Advanced Technology Program won another reprieve from administration plans to eliminate it.

R&D earmarks total $2.1 billion in FY 2005, up 9 percent from last year, according to an American Association for the Advancement of Science analysis of congressionally designated, performer-specific R&D projects in the FY 2005 appropriations bills. Although these projects amount to only 1.6 percent of total R&D, they are concentrated in a few key agencies and programs. Four agencies (USDA, $239 million; NASA, $217 million; Department of Energy, $274 million; and Department of Defense, $1 billion) will receive 85 percent of the total R&D earmarks, whereas NIH, NSF, and the new Department of Homeland Security remain earmark-free. In some programs, earmarks make up one out of every five program dollars.

FY 2005 R&D earmarks are up more than a third from 2002 and 2003 after a dramatic jump last year. The total number of earmarks is increasing faster than dollar growth, suggesting that the size of the average earmark is shrinking in an era of tight budgets but increasing constituent demand.

Federal S&T appointees must be impartial and independent, report says

A report by the National Academies’ Committee on Science, Engineering and Public Policy (COSEPUP) released in November 2004 urges policymakers to ensure that the presidential appointment process for senior science and technology (S&T) posts and the process for appointing experts to federal S&T advisory committees operate more quickly and transparently.

The report’s release comes on the heels of criticism by scientists and others that the Bush administration has selected candidates for advisory committees more on the basis of their political and policy preferences than of their scientific knowledge and credibility. In addition, a recent Government Accountability Office (GAO) report warned that the perception that committees are biased may be disastrous to the advisory system. GAO has also found, in response to a request from Rep. Brian Baird (D-Wash.), that several statutes prohibit the use of political affiliation as a factor in determining members of advisory committees. Baird has called for a Justice Department investigation of instances in which advisory candidates have been asked about their political preferences by agency employees.

At a press conference accompanying the release of the report, John E. Porter, a former member of Congress and chair of the committee that wrote the report, cited the need for scientific advisory committees to be free from politicization and to “be and be seen as impartial and independent.” Although COSEPUP representatives said that they had not examined the recent specific allegations and that their guidelines make no reference to actions of the current administration, the report recommends that any committee requiring technical expertise should nominate persons on the basis of their knowledge, credentials, and professional and personal integrity, noting that it is inappropriate to ask nominees to provide “non-relevant information, such as voting record, political party affiliation, or position on particular policies.”

Total R&D by Agency
Final Congressional Action on R&D in the FY 2005 Budget
(budget authority in millions of dollars)

Columns (left to right): FY 2004 Estimate | FY 2005 Request | FY 2005 Approved (House-Senate Conference) | Change from Request: Amount, Percent | Change from FY 2004: Amount, Percent
Defense (military) 65,656 68,759 70,285 1,526 2.2% 4,630 7.1%
(“S&T” 6.1,6.2,6.3 + Medical) 12,558 10,623 13,550 2,928 27.6% 993 7.9%
(All Other DOD R&D) 53,098 58,136 56,735 -1,402 -2.4% 3,637 6.8%
National Aeronautics & Space Admin. 10,909 11,334 11,132 -201 -1.8% 224 2.0%
Energy 8,804 8,880 8,956 76 0.9% 152 1.7%
(Office of Science) 3,186 3,172 3,324 152 4.8% 138 4.3%
(Energy R&D) 1,374 1,375 1,339 -37 -2.7% -36 -2.6%
(Atomic Energy Defense R&D) 4,244 4,333 4,293 -40 -0.9% 49 1.2%
Health and Human Services 28,469 29,361 29,108 -253 -0.9% 639 2.2%
(National Institutes of Health) 27,220 27,923 27,771 -152 -0.5% 551 2.0%
National Science Foundation 4,077 4,226 4,063 -162 -3.8% -14 -0.3%
Agriculture 2,240 2,163 2,414 252 11.6% 174 7.8%
Homeland Security 1,037 1,141 1,243 102 9.0% 206 19.9%
Interior 675 648 672 24 3.6% -3 -0.5%
(U.S.Geological Survey) 547 525 545 20 3.8% -2 -0.3%
Transportation 707 755 718 -37 -4.9% 10 1.5%
Environmental Protection Agency 616 572 598 26 4.6% -17 -2.8%
Commerce 1,131 1,075 1,183 108 10.1% 52 4.6%
(NOAA) 617 610 684 73 12.0% 66 10.7%
(NIST) 471 426 468 42 9.9% -3 -0.5%
Education 290 304 258 -46 -15.2% -32 -11.1%
Agency for Int’l Development 238 223 243 20 9.0% 5 2.1%
Department of Veterans Affairs 820 770 813 43 5.6% -7 -0.8%
Nuclear Regulatory Commission 60 61 61 0 -0.8% 1 0.9%
Smithsonian 136 144 141 -3 -1.9% 5 3.8%
All Other 311 302 311 9 2.8% 0 -0.2%
Total R&D 126,176 130,717 132,200 1,484 1.1% 6,024 4.8%
Defense R&D 70,187 73,499 74,976 1,477 2.0% 4,790 6.8%
Nondefense R&D 55,989 57,218 57,224 6 0.0% 1,234 2.2%
Nondefense R&D minus DHS 55,239 56,484 56,378 -105 -0.2% 1,139 2.1%
Nondefense R&D minus NIH 28,770 29,295 29,453 158 0.5% 683 2.4%
Basic Research 26,552 26,770 26,954 184 0.7% 402 1.5%
Applied Research 29,025 28,841 30,016 1,175 4.1% 991 3.4%
Total Research 55,578 55,611 56,970 1,359 2.4% 1,392 2.5%
Development 66,192 70,287 70,480 193 0.3% 4,289 6.5%
R&D Facilities and Capital Equipment 4,407 4,818 4,750 -68 -1.4% 343 7.8%
“FS&T” 60,613 60,380 61,804 1,424 2.4% 1,191 2.0%

AAAS estimates of R&D in FY 2005 appropriations bills. Includes conduct of R&D and R&D facilities. All figures are rounded to the nearest million. Changes calculated from unrounded figures.

FY 2005 Approved figures adjusted to reflect across-the-board reductions in the FY 2005 omnibus bill. November 24, 2004 – AAAS estimates of final FY 2005 appropriations bills.

The report also recommends the expeditious identification and appointment of a confidential “assistant to the president for science and technology” soon after the presidential election, to provide immediate science advice and to serve until a director of the Office of Science and Technology Policy is confirmed by the Senate, which often takes many months. Part of the advisor’s duties would be to seek input from a diverse set of “accomplished and recognized S&T leaders” when seeking nominees for advisory committees.

To reduce the often arduous nature of the appointment process for nominees, as well as to make the positions more attractive, the report recommends that the president and Senate “streamline and accelerate the appointment process for S&T personnel,” including a simplification of the appointment procedures. This could be done through more efficient background checks, a standardization of pre- and post-employment requirements, simplified financial disclosure reporting, and a continuation of health benefits.

To increase the visibility and transparency of the process, the report recommends that searches for appointees should be widely announced in order to obtain recommendations from all interested parties. Conflict-of-interest policies for committee members should be clarified and made public. In addition, agency employees who manage committee operations should be properly trained and held “accountable for its implementation.”

It does not currently appear that the administration will implement the committee’s recommendations. In a recent Science article, an administration spokesperson was quoted as praising the report but saying that he saw no need to change how scientific advisory candidates are vetted.

House, Senate examine ways of creating stable vaccine supply

In the wake of an October 5, 2004, decision by British officials to shut down a Chiron plant in England that produces half of the U.S. flu vaccine supply, committees in both the House and Senate met to examine how future vaccine shortages could be prevented.

At a House Government Reform Committee hearing, Rep. Henry Waxman (D-Calif.) charged that the Food and Drug Administration (FDA) could have averted the crisis, claiming that a contamination problem found at the plant as early as June 2003 was never rectified. Acting FDA chief Lester Crawford and Chiron CEO Howard Pien disputed Waxman’s assertion, stating that all problems with the facility had been fixed and that the more recent problems were unrelated to the contamination that occurred in 2003, when the plant was owned by another company. They pointed out that Chiron had produced viable vaccines after the initial incident.

Waxman also charged that the FDA had become too passive in its oversight. As evidence, he cited fewer FDA warnings to pharmaceutical companies and the lack of enforcement of laws governing TV drug ads and food labeling. Crawford maintained that FDA policy was properly followed and that the real problem is an economic and legal climate that prompts companies to make their products overseas.

Pien urged Congress to take steps to encourage more vaccine manufacturing in the United States. To accomplish that goal, he recommended increasing the price the government pays for vaccine doses, offering financial incentives, and reforming liability laws. However, he said that the most effective way of generating enough vaccines for a broad spectrum of flu viruses would be to guarantee government buyout of any surplus vaccines. This would create a constant demand and stabilize production decisions for manufacturers that today must gamble on which of the countless existing viruses will emerge in any given year, he said.

To illustrate the need for a diverse portfolio of vaccine manufacturers, some panelists outlined a worst-case scenario: a devastating pandemic that leads the United Kingdom to appropriate all British-manufactured vaccines intended for the United States. Pien urged Congress to take the current shortage as a warning and to begin discussing the pandemic scenario with the British government before it happens.

Many of these concerns were also echoed during a hearing of the Senate Special Committee on Aging. Peter Paradiso of Wyeth Pharmaceuticals said his company withdrew its FluShield product from the market because of what it perceived to be a harsh regulatory environment. He suggested that the committee consider the entire vaccine industry, which he claims is hampered by low government prices, high risk, and cumbersome liability laws. Paradiso cited the growing number of lawsuits claiming links between autism and vaccinations as an example of the need for immediate reform before vaccine shortages for other childhood diseases create an even bigger crisis for the country.

Support for private-sector incentives has split along party lines. In the House, Rep. John Mica (R-Fla.) argued that tort reform was the highest priority. Waxman vociferously disagreed, arguing that the Vaccine Injury Compensation Program had effectively solved most flu vaccine liability issues.

Though both hearings focused primarily on private-sector strategies, the importance of basic research was addressed during the House Government Reform Committee hearing. Anthony Fauci, director of the National Institute of Allergy and Infectious Diseases (NIAID), said that federal funding for influenza research alone has risen from $21 million to $66 million in the past few years. The top priorities have been advances in recombinant DNA technology, the genetic sequencing of several thousand flu viruses, and the development of vaccines derived from cell cultures. Another critical research goal is to establish a more robust development pipeline for new antiviral drugs in case human resistance to current drugs develops, which, Fauci warned, is inevitable.

Regardless of whether U.S. policymakers seek new scientific or private-sector solutions to the current vaccine shortage, U.S. reliance on overseas manufacturers will need to be addressed. The British government recently extended the suspension of Chiron’s license to produce vaccines, greatly reducing the likelihood that the company will be able to manufacture doses in time for next year’s flu season.

McCain continues push for climate change legislation

Sen. John McCain (R-Ariz.) used his last hearing as chairman of the Senate Committee on Commerce, Science and Transportation to continue to push for legislation dealing with the causes of climate change. McCain called the hearing to review the sobering conclusions of a new study on climate change in the Arctic. McCain called the study, which encapsulates the work of 300 scientists from around the world over four years, the canary in the coal mine of climate change. Sen. Frank Lautenberg (D-N.J.) agreed, calling the report’s conclusions “chilling.”

In testimony before the committee, Robert Corell, chair of the group that produced the Arctic Climate Impact Assessment report and a senior fellow at the American Meteorological Society, listed some of the expected effects of global warming on the Arctic region and on Earth as a whole. He said that between 1990 and 2090, it is estimated that the global surface air temperature will increase by 15° to 18°F. Consequently, glaciers will melt at an accelerated pace, leading to a one-meter rise in sea level and a decrease in oceanic salinity.

Such a dramatic change in snow cover would mean a reduction in the reflectivity of the Arctic region, Corell said. He explained that about 80 percent of the Sun’s rays are reflected away from Earth’s surface by snow cover. A decrease in the total surface area of glaciers and other snow-covered regions would result in more landmass being exposed and more of the Sun’s rays being absorbed by Earth, thus speeding the melting process.

Furthermore, a decrease in salinity could hamper the ocean’s circulation system, leading to cooling trends in Europe. Corell emphasized that even if action is taken now, it might take a few hundred or a thousand years to put the brakes on the relentless “supertanker” of global warming.

The hearing also provided an opportunity to glimpse the leadership style of the incoming Commerce chairman, Sen. Ted Stevens (R-Alaska), who has been fixated on the impact of climate change in his state. Corell stated that parts of Alaska are warming 8° to 10°F more than the average global rate, leading to a recession of the ice sheets that used to protect the shoreline of coastal towns. Once exposed, the villages will no longer have a buffer against the usually severe summer storms. Also, rising temperatures have started to melt permafrost, destabilizing foundations and in some cases causing entire buildings to collapse. Stevens acknowledged witnessing the devastation that many of these coastal villages have experienced and vowed to hold future hearings on the subject in the upcoming session of Congress.

Susan Hassol, an independent science writer and lead author of the report, described the negative effects of warming in more human terms. For example, she stated that the 10,000-year-old Inuit language has no word for robin, yet the bird is now thriving in the warmer Arctic climates. Furthermore, in just the past 30 years, the average amount of Arctic sea ice lost would equal the size of Arizona and New York combined.

The report is available at: www.acia.uaf.edu or www.cambridge.org.

The second part of the hearing focused on the federal government’s climate monitoring programs in Antarctica. Ghassem Asrar, deputy associate director for science missions at NASA, stated that advancements in remote sensing technology have helped to improve the accuracy of the measurements of the changes that have occurred in glaciers and sea ice. He noted that although the South Pole has recently grown cooler as a result of ozone depletion, the trend is expected to reverse in the next few decades.


“From the Hill” is prepared by the Center for Science, Technology, and Congress at the American Association for the Advancement of Science (www.aaas.org/spp) in Washington, D.C., and is based on articles from the center’s bulletin Science & Technology in Congress.

Economics, Computer Science, and Policy


Perhaps as little as a decade ago, it might have seemed far-fetched for scientists to apply similar methodologies to problems as diverse as vaccination against infectious disease, the eradication of email spam, screening baggage for explosives, and packet forwarding in computer networks. But there are at least two compelling commonalities between these and many other problems. The first is that they can be expressed in a strongly economic or game-theoretic framework. For instance, individuals deciding whether to seek vaccination against a disease may consider how infectious the overall population is, which in turn depends on the vaccination decisions of others. The second commonality is that the problems considered take place over an underlying network structure that may be quite complex and asymmetric. The vulnerability of a party to infectious disease or spam or explosives depends strongly on the party’s interactions with other parties.

The growing importance of network views of scientific and social problems has by now been well documented and even popularized in books such as Malcolm Gladwell’s The Tipping Point, but the central relevance of economic principles in such problems is only beginning to be studied and understood. The interaction between the network and economic approaches to diverse and challenging problems, as well as the impact that this interaction can have on matters of policy, are the subjects I will explore here. And nowhere is this interaction more relevant and actively studied than in the field of computer science.

Research at the intersection of computer science and economics has flourished in recent years and is a source of great interest and excitement for both disciplines. One of the drivers of this exchange has been the realization that many aspects of our most important information networks, such as the Internet, might be better understood, managed, and improved when viewed as economic systems rather than as purely technological ones. Indeed, such networks display all of the properties classically associated with economic behavior, including decentralization, mixtures of competition and cooperation, adaptation, free riding, and tragedies of the commons.

I will begin with simple but compelling examples of economic thought in computer science, including its potential applications to policy issues such as the management of spam. Later, I will argue that the power and scale of the models and algorithms that computer scientists have developed may in turn provide new opportunities for traditional economic modeling.

The economics of computer science

The Internet provides perhaps the richest source of examples of economic inspiration within computer science. These examples range from macroscopic insights about the economic incentives of Internet users and their service providers to very specific game-theoretic models for the behavior of low-level Internet protocols for basic functionality, such as packet routing. Across this entire range, the economic insights often suggest potential solutions to difficult problems.

To elaborate on these insights, let us begin with some background. At practically every level of detail, the Internet exhibits one of the most basic hallmarks of economic systems: decentralization. It is clear that the human users of the Internet are a decentralized population with heterogeneous needs, interests, and incentives. What is less widely known is that the same statement applies to the organizations that build, manage, and maintain what we call monolithically the Internet. In addition to being physically distributed, the Internet is a loose and continually changing amalgamation of administratively and economically distinct and disparate subnetworks (often called autonomous systems). These subnetworks vary dramatically in size and may be operated by institutions that simply need to provide local connectivity (such as the autonomous system administered by the University of Pennsylvania), or they may be in the business of providing services at a profit (such as large backbone providers like AT&T). There is great potential for insight from studying the potentially competing economic incentives of these autonomous systems and their users. Indeed, formal contractual and financial agreements between different autonomous systems specifying their connectivity, exchange of data and pricing, and other interactions are common.

Against this backdrop of decentralized administration, a number of prominent researchers have posited that many of the most common problems associated with the Internet, such as email spam, viruses, and denial-of-service attacks, are fundamentally economic problems at their core. They may be made possible by networking technology, and one may look for technological solutions, but it is often more effective to attack these problems at their economic roots.

For example, many observers argue that problems such as spam would be best addressed upstream in the network. They contend that it is more efficient to have Internet service providers (ISPs) filter spam from legitimate mail, rather than to have every end user install spam protection. But such purely technological observations ignore the question of whether the ISPs have an economic incentive to address such problems. Indeed, it has been noted that some ISPs have contractual arrangements with their corporate customers that charge fees based on the volume of data carried to and from the customer. Thus, in principle, an ISP could view spam or a denial-of-service attack as a source of potential revenue.

An economic view of the same problem is that spam has proliferated because the creation of a nearly free public resource (electronic mail) whose usage is unlimited has resulted in a favorable return on investment for email marketing, even under infinitesimal take rates for the products or services offered. One approach is to accept this economic condition and pursue technological defenses such as spam filters or whitelists and blacklists of email addresses. An alternative is to seek to alter the economic equation that makes spam profitable in the first place, by charging a fee for each email sent. The charge should be sufficiently small that email remains a nearly free resource (aside from Internet access costs) for nearly all non-spammers, but sufficiently large to eradicate or greatly reduce the spammer’s profitability. There are many challenging issues to be worked out in any such scheme, including who is to be paid and how to aggregate all the so-called micropayments. But the mere fact that computer scientists are now incorporating real-world economics directly into their solutions or policy considerations represents a significant shift in their view of technology and its management.
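
To make that tradeoff concrete, here is a back-of-envelope sketch with purely hypothetical numbers; the take rate, sale value, and fee are invented for illustration and are not drawn from any study.

```python
# Back-of-envelope spam economics with purely hypothetical numbers: a spammer's
# expected profit per message is (take rate x revenue per sale) minus the cost
# of sending; a per-message fee shifts that cost.
def profit_per_message(take_rate, revenue_per_sale, send_cost, fee=0.0):
    return take_rate * revenue_per_sale - (send_cost + fee)

# With a one-in-100,000 take rate and $20 per sale, sending is profitable when
# effectively free, but a tenth-of-a-cent fee flips the sign.
print(profit_per_message(1e-5, 20.0, 0.00001))             # about +$0.0002
print(profit_per_message(1e-5, 20.0, 0.00001, fee=0.001))  # about -$0.0008
```

For an ordinary correspondent sending a few dozen messages a day, the same tenth-of-a-cent fee amounts to a few cents, which is the sense in which email would remain a nearly free resource for non-spammers.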

As an example of economic thought at the level of the Internet’s underlying protocols, consider the problem of routing, the multi-hop transmission of data packets across the network. Although a delay of a second or two is unimportant for email and many other Internet operations, it can be a serious problem for applications such as teleconferencing and Internet telephony, where any latency in transmission severely degrades usefulness. For these applications, the goal is not simply to move data from point A to point B in the Internet, but to find the fastest possible route among the innumerable possible paths through the distributed network. Of course, which route is the fastest is not static. The speed of electronic traffic, like the speed of road traffic, depends on how much other traffic is taking the same route, and the electronic routes can be similarly disrupted by “accidents” in the form of temporary outages or failures of links.

Recently, computer scientists have begun to consider this problem from a game-theoretic perspective. In this formulation, one regards a network user (whether human or software) as a player in a large-population game in which the goal is to route data from one point to another in the network. There are many possible paths between the source and destination points, and these different paths constitute the choice of actions available to the player. Being “rational” in this context means choosing the path that minimizes the latency suffered in routing the data. A series of striking recent mathematical results has established that the “price of anarchy”— a measure of how much worse the overall latency can be at competitive equilibrium in comparison to the best “socialist” or centrally mandated nonequilibrium choice of routes—is surprisingly small under certain conditions. In other words, in many cases there is not much improvement in network behavior to be had from even the most laborious centralized network design. In addition to their descriptive properties, such results also have policy implications. For example, a number of plausible schemes for levying taxes on transmission over congested links of the network have been shown to significantly reduce the price of anarchy.
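
The flavor of these results can be seen in the textbook two-link example usually attributed to Pigou; the sketch below is my own illustration, not one of the specific networks analyzed in that literature. One unit of traffic chooses between a link with fixed latency 1 and a link whose latency equals the fraction of traffic using it.

```python
# Classic two-link selfish-routing example: one unit of traffic splits between a
# constant-latency link (latency 1) and a variable link whose latency equals the
# fraction x of traffic on it. Compare the selfish equilibrium with the optimum.
def total_latency(x):
    # x users experience latency x on the variable link; the rest experience 1.
    return x * x + (1.0 - x) * 1.0

equilibrium = total_latency(1.0)   # selfish traffic all crowds onto the variable link
optimum = min(total_latency(k / 1000.0) for k in range(1001))
print(equilibrium, optimum, equilibrium / optimum)   # 1.0, 0.75, ratio ~4/3
```

Here the price of anarchy is 4/3, and results in this literature show that for linear latency functions the ratio is never worse than that, which is the precise sense in which the cost of anarchy can be surprisingly small.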

These examples are just some of the many cases of computer scientists using the insights of economics to solve problems. Others include the study of electronic commerce and the analysis and design of complex digital markets and auctions.

The computer science of economics

The flow of ideas between computer science and economics is traveling in both directions, as some economists have begun to apply the insights and methods of computer science to new and old problems. The computer scientist’s interest in economics has been accompanied by an explosion of research on algorithmic issues in economic modeling, due in large part to the fact that the economic models being entertained in computer science are often of extraordinarily large dimension. In the game-theoretic routing example discussed above, the number of players equals the number of network users, and the number of actions equals the number of routes through the network. Representing such models in the so-called normal form of traditional game theory (where one explicitly enumerates all the possibilities) is infeasible. In recent years, computer scientists have been examining new ways of representing or encoding such high-dimensional models.
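
A little arithmetic shows why the normal form breaks down; the player and action counts below are purely illustrative.

```python
# The normal form stores, for every joint choice of actions, one payoff entry
# per player: n * k**n numbers for n players with k actions each.
def normal_form_entries(n_players, n_actions):
    return n_players * n_actions ** n_players

print(normal_form_entries(10, 5))    # 10 players, 5 routes each: ~10**8 entries
print(normal_form_entries(100, 5))   # 100 players: ~8 x 10**71 entries
```

Compact representations exploit the fact that each player's payoff typically depends directly on only a few others, which is exactly the kind of structure a network provides.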

Such new encodings are of little value unless there are attendant algorithms that can manipulate them efficiently (for instance, performing equilibrium and related computations). Although the computational complexity of certain basic problems remains unresolved, great strides have been made in the development of fast algorithms for many high-dimensional economic models. In short, it appears that from a computational perspective, many aspects of economic and game-theoretic modeling may be ready to scale up. We can now undertake the construction and algorithmic manipulation of numerical economic models whose complexity greatly exceeds that of anything one could have contemplated a decade ago.

Finally, it also turns out that the analytical and mathematical methods of computer science are extremely well suited to examining the ways in which the structure of an economic model might influence the expected outcomes in the models; for instance, the way in which the topology of a routing network might influence the congestion experienced at game-theoretic equilibrium, the way in which the connectivity pattern of a goods exchange network might influence the variation in prices or the distribution of wealth, or (as we shall see shortly) the way in which transfers of passengers between air carriers might influence their investment decisions for improved security.

Interdependence in computer security

To illustrate some of these computational trends, I will examine a case study drawn from my own work on a class of economic models known as interdependent security (IDS) games, which nicely capture a wide range of commonly occurring risk management scenarios. Howard Kunreuther of the Wharton School at the University of Pennsylvania and Geoffrey Heal of Columbia University introduced the notion of IDS games, which are meant to capture settings in which decisions to invest in risk mitigation may be heavily influenced by natural notions of risk “contagion.” Interestingly, this class is sufficiently general that it models problems in areas as diverse as infectious disease vaccination, corporate compliance, computer network security, investment in research, and airline baggage screening. It also presents nontrivial computational challenges.

Let us introduce the IDS model with another example from computer science, the problem of securing a shared computer resource. Suppose you have a desktop computer with its own software and memory, but you also keep your largest and most important data files on a hard disk drive that is shared with many other users. Your primary security concern is thus that a virus or other piece of malicious software might erase the contents of this shared hard drive. Your desktop computer and its contents, including all of your email and any programs or files you download, constitute a potential point of entry for such “malware,” but of course so do the desktop machines of all the other users of the hard disk.

Now imagine that you face the decision of whether to download the most recent updates to your standard desktop security software, such as Norton Anti-Virus. This is a distinct investment decision, not so much because of the monetary cost but because it takes time and energy for you to perform the update. If your diligence were the only factor in protecting the valued hard drive, your incentive to suffer the hassle would be high. But it is not the only factor. The safety of the hard drive is dependent on the diligence of all of the users whose desktop machines present potential points of compromise, since laziness on the part of just a single user could result in the breach that wipes the disk clean forever. Furthermore, some of those users may not keep any important files on the drive and therefore have considerably less concern than you about the drive’s safety.

Thus, your incentive to invest is highly interdependent with the actions of the other players in this game. In particular, if there are many users, and essentially none of them are currently keeping their security software updated, your diligence would have at best an incremental effect on an already highly vulnerable disk, and it will not be worth your time to update your security software. At the other extreme, if the others are reliable in their security updates, your negligence would constitute the primary source of vulnerability, so you can have a first-order effect on the disk’s safety by investing in the virus updates.

Kunreuther and Heal propose a game-theoretic model for this and many other related problems. Although the formal mathematical details of this model are beyond our current scope, the main features are as follows (a schematic numerical sketch follows the list):

  • Each player (such as the disk users above) in the game has an investment decision (such as downloading security updates) to make. The investment can marginally reduce the risk of a catastrophic event (such as the erasure of the disk).
  • Each player’s risk can be decomposed into direct and indirect sources. The direct risk is that which arises because of a player’s own actions or inactions, and it can be reduced or eradicated by sufficient investment. The indirect risk is entirely in the hands of the rest of the player population. In the current example, your direct risk is the risk that the disk will be erased by malware entering the system through your own desktop machine. Your remaining risk is the indirect risk that the disk will be erased by malware entering through someone else’s machine. You can reduce the former by doing the updates, but you can do nothing about the latter.
  • Rational players will choose to invest according to the tradeoff presented by the two sources of risk. In the current example, you would choose to invest the least update effort when all other parties are negligent (since the disk is so vulnerable already that there is little help you alone can provide) and the most when all other parties are diligent (since you constitute the primary source of risk).
  • The predicted outcomes of the IDS model are the (Nash) equilibria that can arise when all players are rational; that is, the collective investment decisions in which no player can benefit by unilateral deviation. In such an equilibrium, every party is optimizing their behavior according to their own cost/benefit tradeoff and the behavior of the rest of the population.
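
To make these features concrete, here is a minimal numerical sketch of a single user's decision in the shared-disk story; the probabilities and costs are invented for illustration and are not part of the formal Kunreuther-Heal model.

```python
# Minimal sketch of one player's interdependent security decision.
# All numbers are invented for illustration.
def expected_cost(invest, my_direct_risk, others_breach_prob, loss, update_cost):
    # Investing eliminates the direct risk; the indirect risk is untouched.
    direct = 0.0 if invest else my_direct_risk
    p_catastrophe = 1.0 - (1.0 - direct) * (1.0 - others_breach_prob)
    return (update_cost if invest else 0.0) + loss * p_catastrophe

# If the rest of the population is negligent, indirect risk dominates and the
# update is not worth the hassle; if the others are diligent, it clearly is.
for others_breach_prob in (0.9, 0.05):
    print(others_breach_prob,
          {invest: round(expected_cost(invest, my_direct_risk=0.1,
                                       others_breach_prob=others_breach_prob,
                                       loss=100.0, update_cost=5.0), 2)
           for invest in (False, True)})
```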

 

Baggage screening unraveled

In the shared disk example, there is no interesting network structure per se, in the sense that users interact with each other solely through a shared resource, and the effect of any given user is the same on all other users: By being negligent, you reduce the security of the disk by the same amount for everyone, not differentially for different parties. In other words, there are no network asymmetries: All pairs of parties have the same interactions, even though specific individuals may influence the overall outcome differently by their different behaviors.

Kunreuther and Heal naturally first examined settings in which such asymmetries are absent, so that all parties have the same direct and indirect risks. Such models permit not only efficient computation but even the creation of simple formulas for the possible equilibria. But in more realistic settings, asymmetries among the parties will abound, precluding simple characterizations and presenting significant computational challenges. It is exactly in such problems that the interests and strengths of computer science take hold.

A practical numerical and computational example of IDS was studied in recent work done in my group. In this example, the players are air carriers, the investment decision pertains to the amount of resources devoted to luggage screening for explosives, the catastrophic event is a midair explosion, and the network structure arises from baggage transfers between pairs of carriers.

Before describing our experiments, I provide some background. In the United States, individual air carriers determine the procedures and investments they each make in baggage screening for explosives and other contraband, subject to meeting minimum federal requirements. Individual bags are thus subjected to the procedures of whichever carrier a traveler boards at the beginning of a trip. If a bag is transferred from one carrier to another, the receiving carrier does not rescreen according to its own procedures but simply accepts the implicit validation of the first carrier. The reasons for this have primarily to do with efficiency and the cost of repeated screenings. That carriers are free to apply procedures exceeding the federal requirements is illustrated by El Al Airlines, which is also exceptional in that it does screen transferred bags.

As in the shared disk example, there is thus a clear interdependent component to the problem of baggage screening. If a carrier receives a great volume of transfers from other carriers with lax security, it may actually have little incentive to invest in improved security for the bags it screens directly: The explosion risk presented by the transferred bags is already so high that the expense of the marginal improvement in direct check security is unjustified. (Note: For simplicity, I am not considering the expensive proposition of rescreening transferred bags, but only of improving security on directly checked luggage.) Alternatively, if the other airlines maintain extremely high screening standards, a less secure carrier’s main source of risk may be its own checked baggage, creating the incentive for improved screening. Kunreuther and Heal discuss how the fatal explosion aboard Pan Am flight 103 over Lockerbie, Scotland, in 1988 can be viewed as a deliberate exploitation of the interdependent risks of baggage screening.

The network structure in this case arises from the fact that there is true pairwise interaction between carriers (as opposed to the shared disk setting, where all interactions were indirect and occurred via the shared resource). Since not all pairs of airlines may transfer bags with each other, or may not do so in equal volume, strong asymmetries may emerge. Within the same network of transfers, some airlines may find themselves receiving many transfers from carriers with lax security, and others may receive transfers primarily from more responsible parties. On a global scale, one can imagine that such asymmetries might arise from political or regulatory practices in different geographical regions, demographic factors, and many other sources. Such a network structure might be expected to have a strong influence on outcomes, since the asymmetries in transfers will create asymmetries of incentives and therefore of behavior.

In the work of my group, we conducted the first large-scale computational and simulation study of IDS games. This simulation was based on a data set containing 35,362 records of actual civilian commercial flight reservations (both domestic and international) made on August 26, 2002. Each record contains a complete flight itinerary for a single individual and thus documents passenger (and therefore presumably baggage) transfers between the 122 commercial air carriers appearing in the data set. The data set contained no identifying information for individuals. Furthermore, since I am describing an idealized simulation based on limited data, I will not identify specific carriers in the ensuing discussion.
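
As a rough indication of the preprocessing involved, pairwise transfer counts can be accumulated by walking each itinerary leg by leg; the record layout below (a sequence of carrier codes per itinerary) is my own guess for illustration, not the actual format of the reservation data.

```python
# Sketch of accumulating pairwise transfer counts from itinerary records.
from collections import defaultdict

def transfer_counts(itineraries):
    """itineraries: iterable of per-passenger carrier-code sequences, in flight order."""
    counts = defaultdict(int)
    for legs in itineraries:
        for giving, receiving in zip(legs, legs[1:]):
            if giving != receiving:          # consecutive legs on different carriers
                counts[(giving, receiving)] += 1
    return counts

# Hypothetical carrier codes, purely for illustration.
print(transfer_counts([["XX", "YY", "YY"], ["ZZ", "XX"]]))
# {('XX', 'YY'): 1, ('ZZ', 'XX'): 1}
```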

I will begin by discussing the raw data itself—in particular, the counts of transfers between carriers. Figure 1 shows a visualization of the transfer counts between the 36 busiest carriers (as measured by the total flight legs on the carrier appearing in the data). Along each of the horizontal axes, the carriers are arranged in order of number of flight legs (with rank 1 being the busiest carrier, and rank 36 the least). At each grid cell, the vertical bar shows the number of transfers from one particular carrier to another. Thus, transfers between pairs of the busiest (highest-rank) carriers appear at the far corner of the diagram; transfers between pairs of the least busy carriers in the near corner; and so on.


Despite its simplicity, Figure 1 already reveals a fair amount of interesting structure in the (weighted) network of transfers between the major carriers. Perhaps the most striking property is that an overwhelming fraction of the transfers occur among the handful of largest carriers. This is visually demonstrated by the “skyscrapers” in the far corner, which dominate the landscape of transfers.

Scientists and travelers know that the hub-and-spoke system of U.S. airports naturally leads to a so-called “heavy-tailed” distribution of flights in which a small number of major airports serve many times the volume of the average airport. Here we are witnessing a similar phenomenon across air carriers rather than airports: The major carriers account for almost all the volume, as well as almost all the transfers. This is yet another example of the staggering variety of networks—transportation, social, economic, technological, and biological—that have been demonstrated in recent years to have heavy-tailed properties of one kind or another. Beyond such descriptive observations, less is known about how such properties influence outcomes. In a moment, we will see the profound effect that the imbalanced structure of the carrier transfer network has on the outcome predicted by our IDS simulation, and how simple models can show how such structure leads rather directly to policy recommendations.

In order to perform the simulations, the empirical number of transfers in the data set from carrier A to carrier B was used to set a parameter in the IDS model that represents the probability of transfer from A to B. The numerical IDS model that results does not fall into any of the known classes for which the computation of equilibria can be performed efficiently. However, this is not a proof of intractability, because we are concerned here with a specific model and not general classes. We thus performed simulations on the numerical model in which each carrier gradually adapts its investment behavior in response to its current payoff for investment, which depends strongly on the current investment decisions of its network neighbors in the manner we have informally described. (See “IDS Models and Their Computational Challenges” at the end for a detailed explanation of the model.)

The most basic question about such a simulation is whether it converges to a predicted equilibrium outcome. There is no a priori reason why it must, since the independent adaptations of the carriers could, for instance, result in cyclical investment behavior. This question is easily answered: The simulation quickly converges to an equilibrium, as do all of the many variants we examined. This is a demonstration of a common phenomenon in computer science: the empirical effectiveness of a heuristic on a specific instance of a problem that may be computationally difficult in general. Further, it is worth noting that the particular heuristic here—the gradual adaptation of investment starting from none—is more realistic than a “black-box” equilibrium outcome that identifies only the final state, because it suggests the dynamic path by which the carriers might actually arrive at equilibrium starting from natural initial conditions.
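
The following is a minimal sketch of the kind of incremental adaptation just described, under an illustrative payoff form of my own; the variable names, the payoff expression, and the clipping of investment to the range [0, 1] are assumptions made for the sketch, not the experiments' actual code.

```python
# Minimal sketch of gradual investment adaptation in an IDS game.
# direct[a] plays the role of D(A), trans[a, b] of T(A,B), inv_cost[a] of I(A),
# and loss[a] of E(A); trans should have a zero diagonal. The payoff form is an
# illustrative assumption, not the study's exact implementation.
import numpy as np

def adapt_investments(direct, trans, inv_cost, loss,
                      frozen=None, eta=0.05, steps=2000):
    """Incrementally adjust each carrier's investment level in [0, 1].

    frozen: optional {carrier_index: fixed_level} for subsidized carriers whose
    investment is clamped and never adapted (used in the tipping experiments).
    """
    n = len(direct)
    x = np.zeros(n)                          # everyone starts with no investment
    frozen = dict(frozen or {})
    for a, level in frozen.items():
        x[a] = level
    history = [x.copy()]
    for _ in range(steps):
        new_x = x.copy()
        for a in range(n):
            if a in frozen:
                continue
            # Probability that no bag transferred from another carrier causes
            # the catastrophe, given the others' current investment levels.
            safe_from_transfers = np.prod(1.0 - trans[a] * (1.0 - x))
            # Incentive signal: expected loss averted through the direct
            # channel, minus the cost of the investment itself.
            incentive = loss[a] * direct[a] * safe_from_transfers - inv_cost[a]
            new_x[a] = np.clip(x[a] + eta * incentive, 0.0, 1.0)
        x = new_x
        history.append(x.copy())
    return x, history
```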

The more interesting question, to which we now turn, is this: What are the properties of the predicted equilibrium? And if we do not like those properties, what might we do about them?

The answer, please

Figure 2 shows the results of the simulation described above. The figure shows a 6-by-6 grid of 36 plots, one for each of the 36 busiest (again, according to overall flight traffic in the data set) out of the 122 carriers. The plot in the upper left corner corresponds to the 36th busiest carrier, and the plot in the lower right corner corresponds to the busiest. The x axis of each plot corresponds to time in units of simulation steps, and the y axis shows the level of investment between 0 (no investment) and 1 (the hypothetical maximum investment) for the corresponding carrier as it adapts during the simulation. As noted above, all carriers start out at zero investment.


Examining the details of Figure 2, we find that within approximately 1,500 steps of simulation, the population of carriers has converged to an equilibrium and no further adaptation is taking place; carrier 18 is the last to converge. From the viewpoint of societal benefit, the outcome we would prefer to emerge is that in which all carriers fully invest in improved screening. Instead, the carriers converge to a mixture of those who invest fully and those who invest nothing. In general, this mixture obeys the ordering by traffic volume: The less busy carriers tend to converge to full investment, whereas the larger carriers never move from their initial position of no investment. This is due to the fact that, according to the numerical model, the larger carriers generally face a large amount of indirect or transfer risk and thus have no incentive to improve their own screening procedures. Smaller carriers can better control their own fate with improved screening, since they have fewer transferred bags. There are exceptions to this simple ordering. For instance, the carriers of rank 32 and 33 do not invest despite the fact that carriers with similar volume choose to invest. These exceptions are due to the specific transfer parameters of the carriers. The carriers of rank 37 to 122 (not shown) all converge to full investment.

Figure 2 thus shows that the price of anarchy in our numerical IDS baggage screening model is quite high: The outcome obtained by letting carriers behave independently and selfishly is far from the desired societal optimum of full investment. The fact that “only” 22 of the 122 carriers converge to no investment is little consolation, given that they include all the largest carriers, which account for the overwhelming volume of flights. The model thus predicts that an insecure screening system will arise from the interdependent risks.

Even more interesting than this baseline prediction are the policy implications that can be derived by manipulating the model. One way of achieving the desired outcome of full investment by all carriers would be for the federal government to subsidize all carriers for improved security screening. A natural economic question is whether the same effect can be accomplished with minimal centralized intervention or subsidization.

Figure 3 shows the results of one such thought experiment. The format of the figure is identical to that of Figure 2, but one small and important detail in the simulation was changed. In the simulation depicted in Figure 3, the two largest carriers have had their investment levels fixed at the maximum of 1, and they are not adapted from this value during the simulation. In other words, we are effectively running an experiment in which we have subsidized only the two largest carriers.
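
In terms of the earlier adapt_investments sketch, the change amounts to clamping two entries of the investment vector; the fragment below assumes that sketch, along with hypothetical model inputs, is already in scope.

```python
# Re-run the adaptive dynamics with the two busiest carriers "subsidized":
# their investment is clamped at the maximum of 1 and never adapted.
# direct, trans, inv_cost, loss, and busiest (carrier indices ordered by
# traffic volume) are hypothetical inputs assumed to be defined already.
two_largest = {busiest[0]: 1.0, busiest[1]: 1.0}
subsidized_outcome, trajectory = adapt_investments(direct, trans, inv_cost, loss,
                                                   frozen=two_largest)
```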


The predicted effects of this limited subsidization are quite dramatic. Most notably, all of the remaining carriers now evolve to the desired equilibrium of full investment. In other words, the relatively minor subsidization of two carriers has created the economic incentive for all other carriers to invest in improved security. This is an instance of the tipping phenomenon first identified by Thomas Schelling and recently popularized by Malcolm Gladwell: a case in which a behavioral change by a small collection of individuals causes a massive shift in the overall population behavior.

Figure 3 also nicely demonstrates cascading behavior among the non-subsidized carriers. The subsidization of the two largest carriers does not immediately cause all carriers to begin investing from the start of the simulation. Rather, some carriers (again mainly the larger ones) begin to invest only once a sufficient fraction of the population has invested enough to make their direct risk primary and their transfer risk secondary. Indeed, the seventh largest carrier has an economic incentive to invest only toward the end of the simulation and is the last to converge. This cascading effect, in which the tipping phenomenon occurs sequentially in a distinct order of investment, was present in the original simulation but is much more pronounced here.

Of course, the two largest carriers form only one tipping set. There may be other collections of carriers whose coerced investment, either through subsidization or other means, will cause others to follow. Depending on the more detailed economic assumptions we make about the investment in question, some tipping sets may be more or less expensive to implement than others. Natural policy questions include which means of inducing full investment are the most cost-effective and practical, and such models facilitate the exploration of a large number of alternative answers.

The model can also predict necessary conditions for tipping. In Figure 4, we show the results of the simulation in which only the largest carrier is subsidized. Although this has salutary effects, stimulating investment by a number of carriers (such as carrier 3) that would not otherwise have invested, it is not sufficient to cause the entire population to invest. The price of anarchy remains high, with most of the largest carriers not investing. As a more extreme negative example, we found that subsidizing all but the two largest of the 122 carriers is still insufficient to induce the two largest to invest anything; the highly interdependent transfer risk between just these two precludes one of them investing without the other.


 

What next?

The IDS case study examined above is only one example in which a high-dimensional network structure, an economic model, computational issues, and policy interact in an interesting and potentially powerful fashion. Others are beginning to emerge as the dialogue between computer scientists and economists heats up. For instance, in my group we have also been examining high-dimensional network versions of classical exchange models from mathematical economics, such as those studied by Kenneth Arrow and Gerard Debreu. In the original models, consumers have endowments of commodities or goods and utility functions describing their preferred goods; exchange takes place when consumers trade their endowments for more preferred goods. In the variants we have studied, there is also an underlying network structure defining allowable trade: Consumers are allowed to engage in trade only with their immediate neighbors in the network.

The introduction of such natural restrictions on the models radically alters basic properties of their price equilibria. The same good can vary in price across the economy due entirely to network asymmetries, and individual consumers may be relatively economically advantaged or disadvantaged by the details of their position in the overall network. In addition to being an area that has seen great strides in efficient algorithms for equilibrium computation, it is also one that again highlights the insights that computer science can bring to the relationship between structure and outcome. For example, it turns out that a long-studied structural property of networks known in computer science as “expansion” offers a characterization of which networks will have no variation in prices and which will have a great deal of variation. Interestingly, expansion properties are also closely related to the theory of random walks in networks. The intuition is that if, when randomly wandering around a network, there are regions where one can become stuck for long periods, these same regions are those where economic imbalances such as price variation and low wealth can emerge. Thus, there is a direct relationship between structural and economic notions of isolation.
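
A standard and easily computed proxy for expansion is the spectral gap of a network's normalized Laplacian; the sketch below uses it only to illustrate the structural quantity in question, not to reproduce the precise characterization established in that work.

```python
# Spectral gap of the normalized Laplacian as a rough proxy for expansion:
# well-connected graphs have a large gap (random walks mix quickly), while
# graphs with bottlenecked regions have a small one.
import numpy as np

def spectral_gap(adjacency):
    a = np.asarray(adjacency, dtype=float)
    d_inv_sqrt = np.diag(1.0 / np.sqrt(a.sum(axis=1)))
    laplacian = np.eye(len(a)) - d_inv_sqrt @ a @ d_inv_sqrt
    return np.sort(np.linalg.eigvalsh(laplacian))[1]   # second-smallest eigenvalue

ring = np.roll(np.eye(8), 1, axis=1) + np.roll(np.eye(8), -1, axis=1)  # 8-node cycle
complete = np.ones((8, 8)) - np.eye(8)                                 # 8-node clique
print(spectral_gap(ring), spectral_gap(complete))   # roughly 0.29 versus 1.14
```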

We have also performed large-scale numerical experiments on similar models derived from United Nations foreign exchange data. Such experiments demonstrate the economic power derived purely from a nation’s position in an idealized network structure extracted from the data. The models and algorithms again support thought-provoking predictive manipulations. For instance, in the original network we extracted, the United States commanded the highest prices at equilibrium by a wide margin. When the network was modified to model a truly unified, frictionless European Union, the EU instead became the predicted economic superpower.

Looking forward, the research dialogue between the computer science and economics communities is perhaps the easy part, since they largely share a common mathematical language. More difficult will be convincing policymakers that this dialogue can make more than peripheral contributions to their work. For this to occur, the scientists will need to pick their applications carefully and to work hard to understand the constraints on policymakers in those arenas. This sociological step, when scientists wade into the often messy waters where their methods must prove useful despite political, budgetary, and other constraints, is not likely to be easy. But it seems that the time for the attempt has arrived, since the computational, predictive, and analytical tools for considerably more ambitious economic models are quickly falling into place.

As I have discussed, within computer science the influence of economic models is already beginning to inform policy. This is a particularly favorable domain, since so many of the policy issues have technology at their core; the scientists and policymakers are often close or even the same individuals. Similarly promising areas include epidemiology and transportation, the latter including topics such as our application of IDS to baggage screening. That case study exemplifies both the opportunities and challenges. It provides compelling but greatly oversimplified evidence for the potential policy implications of rich models. The missing details—the specifics of plausible security screening investments, the metrics of the carriers’ direct risks based on demographics and history, and many others—must be filled in for the model to be taken seriously. But regardless of the domain, all that is required to start is a scientist and a policymaker ready to work together in a modern and unusual manner.

IDS Models and Their Computational Challenges

When one formalizes the IDS baggage screening problem, the result is a model for the payoffs of a game determined by the following parameters:

I. For each carrier A, a numerical parameter D(A), quantifying the level of the direct risk of A; intuitively, the probability that this particular carrier directly checks a bag containing an explosive onto a flight. Obviously, this parameter might vary from carrier to carrier, depending (among other things) on the ambient level of risk presented by the demographics of its customer base or the geographic region of the carrier.

II. For each pair of carriers A and B, a numerical parameter T(A,B), quantifying the indirect risk that A faces due to transferred bags from B; intuitively, the probability that a bag transferred from a flight of B to a flight of A contains an explosive device. This parameter might vary for different carrier pairs, depending (among other things) on the volume of transfers from B to A and the direct risk of B.

III. Parameters, possibly varying from carrier to carrier, quantifying the required investment I(A) for improved screening technology or procedures and the cost E(A) of an in-flight explosion.

The resulting multiparty game is described by a payoff function for each carrier A that will depend on E(A), I(A), D(A), and the parameters T(A,B) for all other carriers B.
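
One natural way to instantiate such a payoff, written here as an expected cost, is the following; this is a simplified illustration consistent with the parameters above, not necessarily the exact functional form used in the study. Writing x_A (between 0 and 1) for carrier A's investment level:

```latex
C_A(x) \;=\; I(A)\,x_A \;+\; E(A)\left[\,1 - \bigl(1 - (1 - x_A)\,D(A)\bigr)\prod_{B \neq A}\bigl(1 - T(A,B)\,(1 - x_B)\bigr)\right]
```

Investment drives the residual direct risk (1 - x_A)D(A) toward zero, whereas the product term, the probability that no transferred bag causes the catastrophe, is entirely outside A's control; A prefers to invest exactly when the expected loss averted through the direct channel exceeds I(A).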

For the numerical experiments we describe, the empirical number of transfers in the data set from carrier B to carrier A was used to set the parameter T(A,B). Note that despite the large number of records in the data set, it is actually rather small compared to the number of pairs of carriers, thus leading to many transfer counts that are zero. However, our simulation results appear robust even when standard forms of “smoothing” are applied to these sparse estimates.

Although the data set contains detailed empirical evidence on intercarrier transfers, it provides no guidance on the setting of the other IDS model parameters (for direct risks and for investment and explosion costs). These were thus set to common default values for the simulations. In future work, they could clearly be replaced by either human estimates or a variety of sources of data. For instance, direct risks could be derived from statistics regarding security breaches at the individual carriers or at the airports where they receive the greatest direct-checked volume.

Let us briefly delve into the computational challenges presented by such models. The sheer number of parameters is dominated by those in category II. There is one such parameter per pair of carriers, so the number of parameters in this category grows roughly with the square of the number of carriers. For instance, in our model with over 100 carriers, the number of parameters of the model already exceeds 10,000. We are thus interested in algorithmically manipulating rather high-dimensional models.

From the theoretical standpoint, the computational news on such models is mixed, but in an interesting way. If we consider the completely general case given by parameter categories I, II, and III above, it is possible to prove formally that in the worst case, there may be certain equilibria that are computationally intractable to find. On the other hand, various restrictions on or assumptions about the parameters (particularly the transfer parameters in category II) allow one to develop sophisticated algorithms that can efficiently compute all of the possible outcomes. Such mixed results—in which the most ambitious variant of the problem is computationally infeasible, but in which nontrivial algorithms can tackle nontrivial special cases—are often a sign of an interesting problem in computer science.

Of course, the real world also typically lies somewhere in between the provably solvable and worst cases. And one often finds that simple and natural heuristics can be surprisingly effective and yield valuable insights. In particular, in the simulations we describe, a heuristic known as gradient descent was employed. More precisely, according to the IDS model, the numerical payoff that carrier A will receive from investment in improved screening depends on the current investments of the other carriers, weighted by their probability of transferring passengers to carrier A. This payoff could be either positive (incentive for increased investment) or negative (disincentive for increased investment). In our simulations, carrier A simply incrementally adjusts its current investment up or down according to this incentive signal, and all other carriers do likewise. All carriers begin with no investment, and we assume that there is a maximum possible investment of 1. Such gradient approaches to challenging computational problems are common in the sciences. There are many possible natural variants of this simulation that can be imagined.
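
Under the illustrative cost form given above (again a hedged reconstruction rather than the experiments' exact update rule), the incentive signal and the clipped incremental adjustment can be written as:

```latex
x_A \;\leftarrow\; \min\Bigl(1,\ \max\Bigl(0,\ x_A + \eta\Bigl[\,E(A)\,D(A)\prod_{B \neq A}\bigl(1 - T(A,B)\,(1 - x_B)\bigr) - I(A)\Bigr]\Bigr)\Bigr)
```

where eta is a small step size; the bracketed incentive is positive, pushing investment up, precisely when the expected loss averted through A's own checked bags outweighs the cost of the investment.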

Agricultural Biotechnology: Overregulated and Underappreciated

The application of recombinant DNA technology, or gene splicing, to agriculture and food production, once highly touted as having huge public health and commercial potential, has been paradoxically disappointing. Although the gains in scientific knowledge have been stunning, commercial returns from two decades of R&D have been meager. Although the cultivation of recombinant DNA-modified crops, first introduced in 1995, now exceeds 100 million acres, and such crops are grown by 7 million farmers in 18 countries, their total cultivation remains but a small fraction of what is possible. Moreover, fully 99 percent of the crops are grown in only six countries—the United States, Argentina, Canada, Brazil, China, and South Africa—and virtually all the worldwide acreage is devoted to only four commodity crops: soybeans, corn, cotton, and canola.

Attempts to expand “agbiotech” to additional crops, genetic traits, and countries have met resistance from the public, activists, and governments. The costs in time and money to negotiate regulatory hurdles make it uneconomical to apply molecular biotechnology to any but the most widely grown crops. Even in the best of circumstances—that is, where no bans or moratoriums are in place and products are able to reach the market—R&D costs are prohibitive. In the United States, for example, the costs of performing a field trial of a recombinant plant are 10 to 20 times those of the same trial with a virtually identical plant that was crafted with conventional techniques, and regulatory expenditures to commercialize a plant can run tens of millions of dollars more than for a conventionally modified crop. In other words, regulation imposes a huge punitive tax on a superior technology.

Singled out for scrutiny

At the heart of the problem is the fact that during the past two decades, regulators in the United States and many other countries have created a series of rules specific for products made with recombinant DNA technology. Regulatory policy has consistently treated this technology as though it were inherently risky and in need of unique, intensive oversight and control. This has happened despite the fact that a broad scientific consensus holds that agbiotech is merely an extension, or refinement, of less precise and less predictable technologies that have long been used for similar purposes, and the products of which are generally exempt from case-by-case review. All of the grains, fruits, and vegetables grown commercially in North America, Europe, and elsewhere (with the exception of wild berries and wild mushrooms) come from plants that have been genetically improved by one technique or another. Many of these “classical” techniques for crop improvement, such as wide-cross hybridization and mutation breeding, entail gross and uncharacterized modifications of the genomes of established crop plants and commonly introduce entirely new genes, proteins, secondary metabolites, and other compounds into the food supply.

Nevertheless, regulations in the United States and abroad, which apply only to the products of gene splicing, have hugely inflated R&D costs and made it difficult to apply the technology to many classes of agricultural products, especially ones with low profit potential, such as noncommodity crops and varieties grown by subsistence farmers. This is unfortunate, because the introduced traits often increase productivity far beyond what is possible with classical methods of genetic modification. Furthermore, many of the recombinant traits that have been introduced commercially are beneficial to the environment. These traits include the ability to grow with lower amounts of agricultural chemicals, water, and fuel, and under conditions that promote the kind of no-till farming that inhibits soil erosion. Society as a whole would have been far better off if, instead of implementing regulation specific to the new biotechnology, governments had approached the products of gene splicing in the same way in which they regulate similar products—pharmaceuticals, pesticides, and new plant varieties—made with older, less precise, and less predictable techniques.

But activist groups whose members appear to fear technological progress and loathe big agribusiness companies have egged on regulators, who need little encouragement to expand their empires and budgets. The activists understand that overregulation advances their antibiotechnology agenda by making research, development, and commercialization prohibitively expensive and by raising the barriers to innovation.

Curiously, instead of steadfastly demanding scientifically sound, risk-based regulation, some corporations have risked their own long-term best interests, as well as those of consumers, by lobbying for excessive and discriminatory government regulation in order to gain short-term advantages. From the earliest stages of the agbiotech industry, those firms hoped that superfluous regulation would act as a type of government stamp of approval for their products, and they knew that the time and expense engendered by overregulation would also act as a barrier to market entry by smaller competitors. Those companies, which include Monsanto, DuPont-owned Pioneer Hi-Bred, and Ciba-Geigy (now reorganized as Syngenta), still seem not to understand the ripple effect of overly restrictive regulations that are based on, and reinforce, the false premise that there is something uniquely worrisome and risky about the use of recombinant DNA techniques.

The consequences of this unwise, unwarranted regulatory policy are not subtle. Consider, for example, a recent decision by Harvest Plus, an alliance of public-sector and charitable organizations devoted to producing and disseminating staple crops rich in such micronutrients as iron, zinc, and vitamin A. According to its director, the group has decided that although it will continue to investigate the potential for biotechnology to raise the level of nutrients in target crops above what can be accomplished with conventional breeding, “there is no plan for Harvest Plus to disseminate [gene-spliced] crops, because of the high and difficult-to-predict costs of meeting regulatory requirements in countries where laws are already in place, and because many countries as yet do not have regulatory structures.” And in May 2004, Monsanto announced that it was shelving plans to sell a recombinant DNA-modified wheat variety, attributing the decision to changed market conditions. However, that decision was forced on the company by the reluctance of farmers to plant the variety and of food processors to use it as an ingredient: factors that are directly related to the discriminatory overregulation of the new biotechnology in important export markets. Monsanto also announced in May that it has abandoned plans to introduce its recombinant canola into Australia, after concerns about exportability led several Australian states to ban commercial planting and, in some cases, even field trials.

Other companies have explicitly acknowledged giving up plans to work on certain agbiotech applications because of excessive regulations. After receiving tentative approval in spring 2004 from the British government for commercial cultivation of a recombinant maize variety, Bayer CropScience decided not to sell it because the imposition of additional regulatory hurdles would delay commercialization for several more years. And in June 2004, Bayer followed Monsanto’s lead in suspending plans to commercialize its gene-spliced canola in Australia until its state governments “provide clear and consistent guidelines for a path forward.”

Another manifestation of the unfavorable and costly regulatory milieu is the sharp decline in efforts to apply recombinant DNA technology to fruits and vegetables, the markets for which are minuscule compared to crops such as corn and soybeans. Consequently, the number of field trials in the United States involving gene-spliced horticulture crops plunged from approximately 120 in 1999 to about 20 in 2003.

Setting matters aright

The public policy miasma that exists today is severe, worsening, and seemingly intractable, but it was by no means inevitable. In fact, it was wholly unnecessary. From the advent of the first recombinant DNA-modified microorganisms and plants a quarter century ago, the path to rational policy was not at all obscure. The use of molecular techniques for genetic modification is no more than the most recent step on a continuum that includes the application of far less precise and predictable techniques for genetic improvement. It is the combination of phenotype and use that determines the risk of agricultural plants, not the process or breeding techniques used to develop them. Conventional risk analysis, supplemented with assessments specific to the new biotechnology in those very rare instances where they were needed, could easily have been adapted to craft regulation that was risk-based and scientifically defensible. Instead, most governments defined the scope of biosafety regulations to capture all recombinant organisms but practically none developed with classical methods.

In January 2004, the U.S. Department of Agriculture (USDA) announced that it would begin a formal reassessment of its regulations for gene-spliced plants. One area for investigation will include the feasibility of exempting “low-risk” organisms from the permitting requirements, leading some observers to hope that much needed reform may be on the horizon. However, regulatory reform must include more than a simple carve-out for narrowly defined classes of low-risk recombinant organisms.

An absolutely essential feature of genuine reform must be the replacement of process-oriented regulatory triggers with risk-based approaches. Just because recombinant DNA techniques are involved does not mean that a field trial or commercial product should be subjected to case-by-case review. In fact, the introduction of a risk-based approach to regulation is hardly a stretch; it would merely represent conformity to the federal government’s official policy, articulated in a 1992 announcement from the White House Office of Science and Technology Policy, which calls for “a risk-based, scientifically sound approach to the oversight of planned introductions of biotechnology products into the environment that focuses on the characteristics of the . . . product and the environment into which it is being introduced, not the process by which the product is created.”

One such regulatory approach has already been proposed by academics. It is, ironically, based on the well-established model of the USDA’s own plant quarantine regulations for nonrecombinant organisms. Almost a decade ago, the Stanford University Project on Regulation of Agricultural Introductions crafted a widely applicable regulatory model for the field testing of any organism, whatever the method employed in its construction. It is a refinement of the “yes or no” approach of national quarantine systems, including the USDA’s Plant Pest Act regulations; under these older regimens, a plant that a researcher might wish to introduce into the field is either on the proscribed list of plant pests, and therefore requires a permit, or it is exempt.

The Stanford model takes a similar, though more stratified, approach to field trials of plants, and it is based on the ability of experts to assign organisms to one of several risk categories. It closely resembles the approach taken in the federal government’s handbook on laboratory safety, which specifies the procedures and equipment that are appropriate for research with microorganisms, including the most dangerous pathogens known. Panels of scientists had stratified these microorganisms into risk categories, and the higher the risk, the more stringent the procedures and isolation requirements. In a pilot program, the Stanford agricultural project did essentially the same thing for plants to be tested in the field: A group of scientists from five nations evaluated and, based on certain risk-related characteristics, stratified a number of crops into various risk categories. Importantly, assignment to one or another risk category had nothing to do with the use of a particular process for modification or even whether the plant was modified at all. Rather, stratification depended solely on the intrinsic properties of a cultivar, such as potential for weediness, invasiveness, and outcrossing with valuable local varieties.

What are the practical implications of an organism being assigned to a given risk category? The higher the risk, the more intrusive the regulators’ involvement. The spectrum of regulatory requirements could encompass complete exemption; a simple “postcard notification” to a regulatory authority (without prior approval required); premarket review of only the first introduction of a particular gene or trait into a given crop species; case-by-case review of all products in the category; or even prohibition (as is the case currently for experiments with foot-and-mouth disease virus in the United States).

Under such a system, some currently unregulated field trials of organisms modified with older techniques would likely become subject to regulatory review, whereas many recombinant organisms that now require case-by-case review would be regulated less stringently. This new approach would offer greater protection and, by decreasing research costs and reducing unpredictability for low-risk organisms, encourage more R&D, especially on noncommodity crops.

The Stanford model also offers regulatory bodies a highly adaptable, scientific approach to the oversight of plants, microorganisms, and other organisms, whether they are naturally occurring or “non-coevolved” organisms or have been genetically improved by either old or new techniques. The outlook for the new biotechnology applied to agriculture, especially as it would benefit the developing world, would be far better if governments and international organizations expended effort on perfecting such a model instead of clinging to unscientific, palpably flawed regulatory regimes. It is this course that the USDA should pursue as it reevaluates its current policies.

At the same time as the U.S. government begins to rationalize public policy at home, it must stand up to the other countries and organizations that are responsible for unscientific, debilitating regulations abroad and internationally. U.S. representatives to international bodies such as the Codex Alimentarius Commission, the United Nations’ agency that sets food-safety standards, must be directed to support rational science-based policies and to work to dismantle politically motivated unscientific restrictions. All science and economic attachés in every U.S. embassy and consulate around the world should have biotechnology policy indelibly inscribed on their diplomatic agendas. Moreover, the U.S. government should make United Nations agencies and other international bodies that implement unscientific policies, or that collude or cooperate with them in any way, ineligible to receive funding or other assistance from the United States. Flagrantly unscientific regulation should be made the “third rail” of U.S. domestic and foreign policy.

Uncompromising? Aggressive? Yes, but so is the virtual annihilation of entire areas of R&D; the trampling of individual and corporate freedom; the disuse of a critical, superior technology; and the disruption of free trade.

Strategies for action

Rehabilitating agbiotech will be a long row to hoe. In order to move ahead, several concrete strategies can help to reverse the deteriorating state of public policy toward agricultural biotechnology.

First, individual scientists should participate more in the public dialogue on policy issues. Perhaps surprisingly, few scientists have demanded that policy be rational; instead, most have insisted only on transparency or predictability, even if that delivers only the predictability of research delays and unnecessary expense. Others have been seduced by the myth that just a little excess regulation will assuage public anxiety and neutralize activists’ alarmist messages. Although defenders of excessive regulation have made those claims for decades, the public and activists remain unappeased and technology continues to be shackled.

Scientists are especially well qualified to expose unscientific arguments and should do so in every possible way and forum, including writing scientific and popular articles, agreeing to be interviewed by journalists, and serving on advisory panels at government agencies. Scientists with mainstream views have a particular obligation to debunk the claims of their few rogue colleagues, whose declarations that the sky is falling receive far too much attention.

Second, groups of scientists—professional associations, faculties, academies, and journal editorial boards—should do much more to point out the flaws in current and proposed policies. For example, scientific societies could include symposia on public policy in their conferences and offer to advise government bodies and the news media.

Third, reporters and their editors can do a great deal to explain policy issues related to science. But in the interest of “balance,” the news media often give equal weight to all of the views on an issue, even if some of them have been discredited. All viewpoints are not created equal, and not every issue has “two sides.” Journalists need to distinguish between honest disagreement among experts, on the one hand, and unsubstantiated extremism or propaganda, on the other. They also must be conscious of recombinant DNA technology’s place in the context of overall crop genetic improvement. When writing about the possible risks and benefits of gene-spliced herbicide-tolerant plants, for example, it is appropriate to note that herbicide-tolerant plants have been produced for decades with classical breeding techniques.

Fourth, biotechnology companies should eschew short-term advantage and actively oppose unscientific discriminatory regulations that set dangerous precedents. Companies that passively, sometimes eagerly, accept government oversight triggered simply by the use of recombinant DNA techniques, regardless of the risk of the product, ultimately will find themselves the victims of the law of unintended consequences.

Fifth, venture capitalists, consumer groups, patient groups, philanthropists, and others who help to bring scientific discoveries to the marketplace or who benefit from them need to increase their informational activities and advocacy for reform. Their actions could include educational campaigns and support for organizations such as professional associations and think tanks that advocate rational science-based public policy.

Finally, governments should no longer assume primary responsibility for regulation. Nongovernmental agencies already accredit hospitals, allocate organs for transplantation, and certify the quality of consumer products ranging from seeds to medical devices. Moreover, in order to avoid civil legal liability for damages real or alleged, the practitioners of agricultural biotechnology already face strong incentives to adhere to sound practices. Direct government oversight may be appropriate for products with high-risk characteristics, but government need not insinuate itself into every aspect of R&D with recombinant DNA-modified organisms.

The stunted growth of agricultural biotechnology worldwide stands as one of the great societal tragedies of the past quarter century. The nation and the world must find more rational and efficient ways to guarantee the public’s safety while encouraging new discoveries. Science shows the path, and society’s leaders must take us there.

Unleashing the Potential of Wireless Broadband

Broadcast TV, once vilified by former Federal Communications Commission (FCC) chairman Newton Minow as a “vast wasteland,” can now also be characterized as a vast roadblock—specifically, a roadblock to the rapid expansion of digital wireless broadband technologies that could produce great economic and social benefits for the United States. In a nutshell, TV broadcasters have thus far been reluctant to vacate highly desirable parts of the electromagnetic spectrum that were lent to them by the federal government in the 1930s and 1940s in order to broadcast TV signals over the air. But the broadcasters no longer need this analog spectrum, because most Americans today receive TV signals from cable or satellite. Meanwhile, purveyors of services using new wireless broadband technologies are locked into inefficient parts of the spectrum that are severely hindering their development. These new technologies are capable of delivering data, video, and voice at vastly higher speeds than today’s cable or DSL connections and consequently could speed the development of a wealth of new applications that could transform society. They also could help reignite the telecommunications boom of the 1990s and create billions of dollars of value and thousands of new jobs. It is time for Congress and the FCC to take the steps needed to free up suitable parts of the spectrum—starting with the spectrum used to broadcast analog TV signals—to pave the way for the expansion of digital wireless broadband.

To understand the issue of spectrum allocation, it is important to understand what spectrum is. Electromagnetic waves all move at the same speed, at least for all purposes relevant to daily life and business activity. They oscillate, however, at varying frequencies. When the FCC sells or gives away spectrum, it actually is granting a license to use certain frequencies, either exclusively or in conjunction with other users.

All waves can be interrupted and modified in various ways. These changes in waves, like the taps of a telegraph key, can be used as a code that conveys information. The code can be music, as in the case of radio; or pictures, as in the case of broadcast and satellite TV; or email, as in the case of a Blackberry; or anything at all that can be appreciated by the eyes or ears. The senses of taste, smell, and touch are not well evoked by code, as of this writing.

Waves of different frequencies have different propagation characteristics. At some frequencies waves can travel without being absorbed or distorted by material objects; in other words, they go through buildings. Broadcast TV and radio use such waves. By contrast, most cellular telephones use waves that do not easily pass through walls.
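
As a rough back-of-the-envelope illustration (the band figures below are typical values assumed for the sake of example, not numbers drawn from this article), the propagation difference tracks wavelength, which shrinks as frequency rises:

\[
\lambda = \frac{c}{f}, \qquad
\lambda_{600\,\text{MHz (UHF TV)}} \approx \frac{3\times 10^{8}\,\text{m/s}}{6\times 10^{8}\,\text{Hz}} = 0.5\,\text{m}, \qquad
\lambda_{1.9\,\text{GHz (PCS cellular)}} \approx \frac{3\times 10^{8}\,\text{m/s}}{1.9\times 10^{9}\,\text{Hz}} \approx 0.16\,\text{m}.
\]

The longer waves bend around obstacles and are attenuated less by walls, which is a large part of why the frequencies below 1 gigahertz are so prized.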

The best reason to have the government grant licenses for frequencies rather than to treat them like water, which one can scoop up or drill for or collect from the skies without government permission, is that if two people make machines that emit waves at the same frequency, the waves can cancel each other out so that neither succeeds at transmitting its coded content. Some argue that those who interfere with each other can go to court or negotiate their conflict, just as neighbors may sue each other or compromise out of court concerning irritating behavior, such as the use of a leaf blower. But the transaction costs that would ensue are high, and on balance it seems practical to have a license regime.
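
To make the cancellation point concrete, here is the textbook superposition identity (a generic illustration, not an analysis from this article): two equal-strength signals on the same frequency that arrive exactly out of phase sum to zero at the receiver,

\[
A\sin(2\pi f t) + A\sin(2\pi f t + \pi) = 0,
\]

and even when the phase offset is less than \(\pi\), the overlapping transmissions can garble each other badly enough that neither coded message is recoverable.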

However, in order to promote competitive markets and permit freedom of expression, it makes sense for government to grant as many licenses as can be issued without creating intolerable conflicts of use. For those who wish to emit messages (for example, to send broadcast TV or enable cell phone calls) there is a cost to using a frequency. The frequencies that penetrate buildings (which are lower on the spectrum) are particularly valuable because it is less costly to use them to send messages than it is to use the frequencies that do not penetrate buildings as well. Broadcast TV and radio have the best spectrum for most commercial purposes.

In the 1930s and 1940s, the federal government gave those media that spectrum because of the historical accident that they were developed before the microprocessor and digitization made the modern cell phone possible. No one ever decided that TV was more worthy than cellular telephony, much less that it was more important economically or socially than wireless broadband access to the Internet. Indeed, by any measure broadcast TV is now less important than wireless broadband. The imperative for policy now is to translate this hierarchy of value into the government’s frequency license allocation decisions. In brief, government’s job is to take the frequencies used for analog TV broadcasting and give them to wireless broadband or any other use a truly efficient market would demand.

You would think that this would be an easy mission, principally because Americans make little use of broadcast TV today. Instead, about 90 percent of all households resort to cable or satellite TV to watch video. No rational person can disagree that the economic purpose of communications policy is to promote the welfare of our citizens, and making the most productive use of the electromagnetic spectrum provides benefits to all. Increased productivity translates into decreases in the price of transmission and increases in the amount of information moved per second from place to place.

This story played out in the mobile communications market in the 1990s. Voice communication over wireless networks generated many new firms, hundreds of thousands of new jobs, and billions of dollars of consumer benefits. Multiple users of spectrum have taken advantage of the absence of retail price regulations and of cheap interconnection mandated by government. Mobile communications firms created a market that delivers high growth, high usage, high penetration, and a high rate of technological innovation.

In fact, the original licenses for cellular telephony, granted in the 1980s, were repurposed UHF TV licenses. However, Congress and the FCC did not have the vision or the political courage to favor the emerging cellular industry over the existing broadcast industry. Therefore, the additional licenses auctioned for mobile communications were at a much higher frequency than those allotted for broadcast TV. The consequence is poorer performance, greater energy consumption, and higher network cost. Still, because voice communication requires much less bandwidth than video or Web browsing does, the penalty for using higher frequencies has not seriously hampered the development of a robust mobile communications market.

Wireless broadband, however—access to the voice, video, and data of the Web through electromagnetic waves traveling over the air with sufficient capacity to carry many megabits of information per second—will incur much greater cost if it develops in higher frequencies than if it uses the lower frequencies now occupied by broadcast TV. According to a number of studies, including one by the Brookings Institution, the total cost of providing wireless broadband access could be five times higher if the new communications devices soon to reach the market cannot use the optimal spectrum. Manufacturers, of course, must design radios tuned to the frequencies the government permits. The burden on government, then, is to act quickly to tell entrepreneurs in companies big and small what frequencies they can use in their wireless broadband designs.

The good news is that government decided in the early 1990s to move all analog broadcasting to digital broadcasting and to shrink greatly the amount of spectrum allocated to the broadcasters. Key decisions to this effect were made in Congress and at the FCC while I was chairman. The bad news is that so far in this century, Congress and the FCC have not taken adequate steps to make this move away from the analog broadcast spectrum actually happen.

TV broadcasters simply say they need more time to complete the move from analog to digital broadcasting, because they do not want to abandon any users who have only analog TV reception. But this would allow them to hold on to their spectrum indefinitely, because there will always be people who, for whatever reason, won’t switch to digital reception.

Speeding the transition

A number of ways exist to expedite the move from analog to digital broadcasting. Indeed, government could simply buy for every household a digital converter box that would make it possible to view a digital broadcast on an analog TV set. Then there would be no reason at all for analog broadcasting to continue. Moreover, the new boxes could be designed to be compatible with cable and telephone networks as well, giving consumers significant choices for Internet access. The new boxes could even be personal computers that underpin home entertainment and communications services. Presumably, a modest government voucher, coupled with a defined date for the termination of analog broadcasting late in 2005, would suffice to move the country en masse from analog to digital broadcast access. In fact, probably not more than 10 to 20 percent of the country would even notice, given that so many depend on cable and satellite for video delivery.

The FCC needs to adopt a clear and systematic approach for spectrum that is currently available and to set forth an immutable policy for the treatment of spectrum that will come to the market in the future. In November 2002, the FCC’s Spectrum Policy Task Force issued a report that said the commission should generally rely on market forces and gave an outline for how to increase the amount of spectrum in the market and how to use market forces to govern the use of spectrum and to increase flexibility. That report did not go far enough in its ambitions for spectrum management. Therefore, the Bush administration should now issue an executive order creating an independent commission charged with developing alternative solutions, including the one cited above, for clearing analog broadcast spectrum. That commission’s recommendations should be passed by Congress and implemented by the FCC in 2005.

Currently, the FCC is considering auctions of various blocks of spectrum as well as designating certain bands for unlicensed operations. It is possible to put this spectrum on the market at the same time and to facilitate the clearing of incumbents.

Although the FCC should auction spectrum, the current plan lacks specific dates for auctions and in general is an inadequate smorgasbord of spectrum offerings. No method appears to lie behind the auction madness. Indeed, it is not even clear that the FCC understands that its goal should be to auction so much spectrum so quickly that the price goes down. The most important goal is not to maximize auction income for the government but to open as much spectrum as possible to the productive uses that will have a ripple effect throughout the economy.

After all, people do not consume spectrum; they do not eat electromagnetic waves. Spectrum is an input into other services. The highest and best current use of waves below 1 gigahertz, where TV broadcasting occurs, is wireless broadband. Consequently, Congress and the FCC should make that spectrum available on a defined date and thereby permit firms to make the investments that the market will bear.

People familiar with politics see spectrum issues as invariably bound into Gordian knots of special-interest pleading. One outcome of single-party government ought to be that the White House has a sword that can cut any political knot. With U.S. technological leadership in the Internet at stake and hundreds of thousands of new jobs to be created, that sword should be wielded to clear broadcasters out of analog spectrum.

Underage drinking

Alcohol use by young people is dangerous, not only because of the risks associated with acute impairment, but also because of the threat to their long-term development and well-being. Traffic crashes are perhaps the most visible of these dangers, with alcohol implicated in nearly one-third of youth traffic fatalities. Underage alcohol use is also associated with violence, suicide, educational failure, and other problem behaviors. All of these problems are magnified by early onset of teen drinking: the younger the drinker, the worse the problem. Moreover, frequent heavy drinking by young adolescents can lead to mild brain damage. The social cost of underage drinking has been estimated at $53 billion, including $19 billion from traffic crashes and $29 billion from violent crime.

More youth drink than smoke tobacco or use illicit drugs. Yet federal investment in preventing underage drinking pales in comparison with the resources devoted (mostly to youths) to preventing illicit drug use. In fiscal 2000, the U.S. Departments of Health and Human Services (HHS), Justice, and Transportation together directed $71.1 million at preventing underage alcohol use. In contrast, the fiscal 2000 federal budget authority for drug abuse prevention (including prevention research) was roughly 25 times as large, at $1.8 billion. For tobacco prevention, funding for the Office on Smoking and Health, only one of several HHS agencies involved with smoking prevention, was approximately $100 million, and states spent a great deal more out of the proceeds of their Medicaid reimbursement suits against the tobacco companies.
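
The gap is easy to verify from the figures just cited (simple arithmetic, nothing more):

\[
\frac{\$1.8\ \text{billion}}{\$71.1\ \text{million}} = \frac{1{,}800}{71.1} \approx 25.
\]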

Respect your elders

Youth drink within the context of a society in which alcohol use is common and images of the pleasures of alcohol are pervasive. Efforts to reduce underage drinking, therefore, need to focus on adults and must engage the society at large.

Early learners

Drinking alcohol begins for some youth at an age when their parents are still worried about how much Coke to let them have, and it spreads like a virus as kids age. By age 15, one in five has tried alcohol, and by age 18, three in ten have engaged in heavy drinking (more than five servings at a time).

A persistent problem

The prevalence of drinking among 12th graders peaked in the late 1970s, declined slowly during the 1980s, and has remained essentially constant since then. In 2003, almost half of 12th graders reported drinking in the previous 30 days, compared with 21.5 percent who used marijuana and 26.7 percent who smoked in the same period.

Gender equity we can do without

More girls than boys begin drinking at a very early age, and although the boys soon catch up, the number of girls who drink—and who drink heavily—is close to the number of boys.

White fright

Underage drinking is one social problem that the white majority cannot dismiss as someone else’s worry. White youth are more likely than their African-American or Hispanic peers to consume alcohol.