The Boom in Industry Research

Meetings on science and technology (S&T) policy or innovation in Washington, D.C., or elsewhere around the country usually find at least one speaker lamenting that industry has abandoned longer-term high-risk research. Nothing could be further from the truth. Industry is doing more long-range, high-risk, discovery-type research than ever before. Indeed, as Robert Buderi points out in Engines of Tomorrow, “the extended time horizon of central labs is why many [industrial research] directors insist that basic research is alive and well–if not thriving.” We can expect the recent strong growth in this type of research to continue as we move further into a new knowledge-driven economy.

Recent data from the National Science Foundation (NSF) show remarkable strength in total industrial research and development (R&D) investment during the past five years. This R&D investment by industry has risen from $97.1 billion (all spending is given in current dollars) in 1994 to a projected $166 billion in 1999, an increase of 71 percent and double-digit annual growth. The major change that few people are aware of is that directed basic research in industry (that is, research directed toward potential future products, processes, or services) has grown even faster than aggregate R&D investment over the past five years, rising from $6 billion in 1994 to a projected $10.9 billion in 1999, an increase of 79 percent, or nearly 15 percent a year. Applied research increased an even more striking 91 percent in this period, whereas development grew only 65 percent. Thus, the trend is clearly toward an increased emphasis on research.
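As a rough check, and assuming steady year-over-year compounding between the 1994 and 1999 endpoints, the total-R&D figures imply an annual growth rate of

\[
\left(\frac{166}{97.1}\right)^{1/5} - 1 \approx 0.113,
\]

or a little more than 11 percent a year, the double-digit pace cited above.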

Industry support of university research has likewise been strong, growing from $1.45 billion in 1994 to $2.16 billion in 1999, an annual increase of nearly 10 percent. Industry’s support of research in universities as a fraction of the total academic effort grew from 6.7 percent to 7.6 percent over this period. The increase is not dramatic, but projections show continued growth of this funding by industry and increasingly close links between industry and universities. The recent proposal by President Clinton for significant increases in federally funded research in fiscal year 2001, particularly in the basic sciences supported by NSF, will complement industry’s growing support of academic research and lay the foundation for continued U.S. economic growth.

Inside the companies

The strong growth of industrial R&D is being driven by global competition and supported by healthy corporate profits and cash flows. As one should expect, a large fraction of industry spending on R&D is for the D. Of the estimated $166 billion that was invested in R&D in 1999 by industry, slightly more than 70 percent was for development activities such as engineering, prototypes, and testing to meet today’s immediate technological challenges. But nearly 30 percent of the total was for research–some 22 percent for applied research on tomorrow’s products, processes, or services, and nearly 7 percent for discovery-type research to provide viability for the day after tomorrow. “Tomorrow” in this context might be considered a few years, so the fundamental research being funded by industry in its own labs and at universities is actually quite long-range, on the order of 5 to 15 years–and risky.

Although it is true that some companies, such as W. R. Grace and Eastman Chemical, have closed the doors of their corporate labs, many companies without central labs are carrying on discovery research in their business-unit labs, and other companies such as Pfizer and Corning are expanding their corporate labs significantly. Bell Labs has increased its R&D investment from 8 or 9 percent of sales before divestiture to 12 percent of Lucent Technologies’ sales today. Nearly 10 percent of its R&D budget is allocated for discovery research. The goal at Bell Labs is to increase patent production to 6 or 7 each working day from the 3 or 4 it receives now. IBM already is granted more than 10 patents per day. These companies, as well as companies such as Intel, Motorola, and Hewlett-Packard, do not acquire this kind of intellectual property without investing in discovery research. According to IBM, the difference today is that purely curiosity-driven studies have been virtually eliminated, and science is no longer considered an end in itself.

A significant change over the past 15 years is the degree to which life science and information technology (IT) companies have increased their R&D investments. The Industrial Research Institute’s R&D Leaderboard of the top 100 R&D investors for 1998 shows that these companies clearly dominate the list. Eight of the top 10 investors were in these industries, as were 19 of the top 25 and 35 of the top 50. Investing 17 percent of its sales in R&D, Microsoft made the top 10 in 1998 and is expected to be close to the top 5 this year after a planned 28 percent increase in its R&D. The IT and life science industries are clearly being driven by R&D, including investment in discovery research. This trend has paid off handsomely for most investors in those industries, for the U.S. economy and its competitiveness, and for society.

Many industrial companies have a 1990s-style skunk works and/or make venture capital available for startups to stimulate potential breakthroughs in S&T. Intel, Hewlett-Packard, Lucent, IBM, Chevron, P&G, Texas Instruments, and Xerox are examples. These actions have prompted a renewed interest in stimulating creativity and idea generation among research scientists and engineers. Studies have shown that 3,000 or more ideas are needed for one commercial success, demonstrating that R&D is indeed a risky business. Good planning is an essential element of the process, especially at the “fuzzy front end” of innovation.

Economists point out that about half of U.S. gross domestic product (GDP) growth in the past 50 years was due to technological innovation. This percentage is likely to be considerably higher for the last half of the 1990s as a result of the rapid pace of technological innovation. R&D investment by industry has helped total U.S. R&D investment as a proportion of GDP rise sharply since 1994. Although still below the 1964 high of 2.87 percent of GDP, U.S. R&D was expected to reach 2.79 percent of GDP in 1999 and, if a forecast from the Battelle Memorial Institute is on target, will be even higher in 2000. Battelle’s forecast shows continued strong investment in industrial R&D. The projected total for industry in 2000 is $184 billion, up 10.5 percent over 1999, bringing industry’s funding to more than 70 percent of the total R&D effort in the United States.

R&D investment abroad by U.S.-based companies is also increasing at a healthy pace. This globalization of R&D has followed the globalization of markets and is essential for customizing products to meet local demands as well as for gaining new knowledge and effectively utilizing cultural differences. Thus, the question is not when or why but how best to organize and conduct R&D efforts in other countries in concert with domestic R&D efforts.

In an update of its study Globalizing Industrial Research and Development, the Office of Technology Policy at the U.S. Department of Commerce (DOC) reported in 1999 that R&D investments in other countries by foreign-owned and U.S. companies approximately tripled from 1987 to 1997. Foreign-owned companies spent $19.7 billion on R&D in the United States in 1997, up from $6.5 billion in 1987 and $14.6 billion in 1993. This work was conducted in 715 R&D laboratories owned by 375 foreign firms. Japanese companies had the highest number of these labs with 251, followed by German companies with 107, and French companies with 44.

More than 80 U.S.-based companies invested $14.1 billion in R&D in other countries during 1997, up from $5.2 billion in 1987 and $9.6 billion in 1993. These companies had more than 200 R&D labs, primarily in Europe (88), Japan (45), and Canada (26). The DOC report concluded that this acceleration of industry investment in global R&D shows that firms believe they need a presence in foreign markets if they are to grow. To be effective in these markets, they also need to benchmark themselves against the best performers in the world. Globalization of R&D is an excellent way for firms to utilize the world’s growing stock of resources and knowledge and to support business growth.

Policy impact

Forty years ago, the federal government funded 65 percent of U.S. R&D. Now industry funds 70 percent, about the same percentage that industry funds in Japan. The shift from government-funded dominance of U.S. R&D occurred in 1980. At that time, the globalization of markets, particularly in the chemicals, petroleum, and automotive sectors, was well under way. U.S. industry, viewing R&D investments as an excellent way to address the growing competition from abroad, doubled its R&D spending in just six years, from $30 billion in 1980 to $60 billion in 1986. The White House had completed its Domestic Policy Review of Industrial Innovation, which had been undertaken in response to a concern that the United States was losing its technological edge, that productivity improvements were lagging, that government regulations were stifling innovation, and that industry was not doing enough basic research. The venture capital market, at an estimated $750 million in 1978, was growing rapidly because of a 1978 cut in the maximum capital gains tax from 49 percent to 28 percent, which encouraged entrepreneurs inside and outside of large companies to begin new businesses.

These actions were followed by a series of other initiatives by Congress, the administration, and the private sector to promote much closer cooperation among industry, government, and universities and to address the growing concern over misdirected research in universities. Many of these initiatives built on previous studies. For example, the 1983 President’s Commission on Industrial Competitiveness, chaired by John Young of Hewlett-Packard, used much of the information developed in the 1978 Domestic Policy Review of Industrial Innovation. But attention had shifted from innovation to the larger issue of competitiveness, which led to the creation of the private-sector Council on Competitiveness, with John Young as its chairman, in 1986. The council was supposed to have a three-year life, but it still exists because of strong involvement by leaders of business and universities as well as the continuing importance of global economic competitiveness.

Creation of the Malcolm Baldrige National Quality Award in 1987 was also a watershed for U.S. industry. The award has helped transform the way manufacturing and business processes are managed in the United States. A recent study of Baldrige Award winners found that total quality management (TQM) pays off handsomely, with TQM award winners averaging 44 percent higher stock-price returns, 48 percent higher growth in income, and 37 percent higher sales growth than a comparable control group. Adoption of quality management programs is now a major driving force for employees at Motorola, AlliedSignal, General Electric, and hundreds of other major firms. Japan has also created its own Baldrige-style quality award to complement its long-standing and highly visible Deming Prize.

A global perspective

Although ranked only number three in the Council on Competitiveness’ most recent Innovation Index, the United States has been the most competitive–and innovative–nation on Earth for the past seven years. This vitality is reflected by the recent dramatic growth in venture capital, which rose to $48.3 billion in 1999, up 150 percent over the $19.2 billion invested in 1998 and an order of magnitude higher than just four years earlier. Entrepreneurial activity stimulated by venture capital as well as R&D has helped the United States create five times as many fast-growing companies and generate four times as many initial public offerings as has Europe in recent years. The bottom line of innovation is job creation, productivity, and profitability. No other country comes close to the United States in these areas, nor in its current record-breaking economic expansion.

Other countries, especially Japan, are closely studying new U.S. management practices, including those for R&D and innovation. Japan has also carefully studied the U.S. government’s S&T policymaking infrastructure and has developed a plan to restructure its S&T decisionmaking agencies along similar lines. In fact, it is going one step further by bringing together technology policy formulation and economic policy under a reorganized Ministry of International Trade and Industry (MITI). (Better coordination of economic and technology policies is something that could be given more attention by our federal government.)

This interest was evident when the Japanese chemical industry and MITI established the Japan Chemical Innovation Institute (JCII) in 1998. JCII will expand on the initiative of U.S. chemical companies and the U.S. Department of Energy’s “Industries of the Future” program to establish a chemical industry vision for the year 2025. The Japanese plan includes not only the roadmapping of chemical technologies but also the design of a new model for the management of technology (MOT) to support the creation of added economic value deemed critical to Japan’s future competitiveness. Reflecting this new interest in technology management, some Japanese chemical companies have established research-on-research departments to improve the effectiveness as well as the efficiency of their R&D investments. Another Japanese organization, the Japan Productivity Center for Socio-Economic Development, which administers the Japanese version of the Baldrige Award, has initiated a feasibility study with MITI for a new International Center for the Study of MOT. The message from Japan is clear: It intends to recover from its recent economic distress through effective management of technological innovation.

China has also recently placed heavy emphasis on innovation as a key to its transformation into a market economy. At a national conference on technological innovation in Beijing in 1999, President Jiang Zemin emphasized that technological innovation must play a much more important role than it has in the past as the driving force behind social and economic progress in the coming century. Premier Zhu Rongji at the same conference demanded that technological innovation be intensified in traditional industries and that a new innovation system be established in which state-owned technology-intensive enterprises play the leading role. The Communist Party of China (CPC) provided guidelines to encourage R&D innovation, covering infrastructure development, R&D investment levels, stimulation of venture capital, and other matters crucial to competitiveness. The CPC indicated that a long-term perspective in the development of technology was essential and that great importance should be attached to basic research and technology projects to meet strategic national needs.

Speaking at a Sino-U.S. Joint Science Policy Summit in October 1999, Zhang Cunhao, president of the National Natural Science Foundation of China, emphasized the importance of “applied” basic research in China’s national goals. He also argued that total R&D investment in China should be raised from the recent level of considerably less than 1.0 percent of its GDP to 1.5 percent in 2000. This goal is unlikely to be achieved, but China is using a variety of other techniques to advance S&T, such as recruiting foreign scientists for research at the Chinese Academy of Sciences, creating a new top R&D award that is to be regarded as China’s Nobel Prize, and inducing foreign companies to establish R&D centers in China that will focus on fundamental as well as applied S&T.

Innovation and productivity

As previously noted, productivity is a key factor in competitiveness. For much of the past decade, economists seriously questioned the economic payoff in improved productivity from our growing investment in IT. Robert Solow, a Nobel laureate from MIT, has said, “You can see the computer age everywhere but in the productivity statistics.” Productivity had been growing at around 1 percent a year until 1996, when inflation began to drop significantly. Since then, nonfarm productivity growth has more than doubled, approaching 3 percent a year in the past two years. These strong increases in productivity, concurrent with rising wages and low inflation, now have some economic experts believing that our rising investment in IT is finally producing real returns. Leonard Nakamura of the Federal Reserve Bank of Philadelphia believes that the rising investment in intangible assets such as R&D and software helps explain the rising value of U.S. equities, because their earning power is understated by conventional accounting methods. If these assumptions prove true over the long run, the investment in IT has serious policy implications for the future.

The United States is investing much more in IT than are other nations. At an estimated 4.5 percent of GDP for 1999, the U.S. investment is more than double Japan’s 2 percent and well ahead of Europe’s 2.5 to 3 percent. The number of computers for each worker in U.S. companies is two to four times that of companies in Japan and Europe. An excellent example of the impact of IT is IBM’s recent announcement of “Blue Gene,” a $100 million research project to build the world’s fastest computer. This petaflop machine will initially be used to model the folding of human proteins, making this fundamental study of biology an important milestone in the future of health care.

With IBM’s recent entry into a biotech consortium and new alliances concerning patient care, we could be seeing a shift in its business strategy, influenced by the great potential of directed basic research.

Moving forward

The strong investment in industrial R&D and the even stronger investment in IT and venture capital should enable the United States to maintain its momentum in technological innovation well into the new decade. One potential problem is the recent increase in real interest rates in the United States and Europe. Should they continue to rise, economic growth will lag, and investment along with it.

The Clinton administration’s 21st Century Research Fund, with its growing support of basic research in the hard sciences and focus on IT, is especially timely in this respect. Some economists say that the United States is grossly underinvesting in R&D and that we should be spending three or four times as much. However, the amount that is spent on R&D is not as important as how well the spending is managed. The quality of R&D is particularly important. The United States is investing nearly as much as the rest of the world in basic S&T as well as development, and perhaps more than the rest of the world combined on IT and venture capital. All signs point to continued strong investment in these areas, but this investment needs to be leveraged by effective management of continuous adaptation to the rapid changes in technology and markets.

It is critical that Congress and the administration provide the appropriate economic climate for continued investment–and risk-taking–in the new millennium. The recent action by Congress to extend the R&E tax credit for five years is nothing short of a miracle. Congress is beginning to think longer term, which is especially important for the directed basic research in industry, for curiosity-driven basic research in universities, and for our nation’s future.

Archives – Spring 2000

James Craig Watson

In the mid-19th century, a slight anomaly in Mercury’s orbit proved a bafflement to astronomers.

Unable to reconcile observation with what Newtonian mechanics predicted regarding the planet’s path, French astronomer U. J. J. Le Verrier, whose calculations resulted in the discovery of Neptune in 1846, hypothesized the existence of a planet or planets between Mercury and the Sun. At Le Verrier’s request, U.S. astronomer and National Academy of Sciences member James Craig Watson (1838-1880) tried to observe the proposed planet.

Viewing the 1878 solar eclipse from Separation, Wyoming, in the Rocky Mountains, Watson claimed to have seen not one but two planets between Mercury and the Sun. Watson’s claim was rejected by most scientists, and we know now that there are no planets in this area. What Watson saw was probably a pair of comets. But Watson’s driven belief in his discovery survived him. His estate included funds for astronomical research, and the first grant went to a researcher hunting for the phantom planets.

Shaping Science Policy

This is a path-breaking book, destined to influence subsequent academic discussion and historical interpretation of its important topic. David Hart is the first to analyze the history of public policy debates over the government’s role in U.S. science and technology in the first half of the 20th century.

Hart sorts the major historical figures involved in U.S. science and technology policymaking over the period 1921 to 1953 into five bins. There are the “conservatives,” who hold that the state has no role in science and technology other than guaranteeing intellectual property rights and getting out of the way. Then there are the “associationalists,” who believe that because markets sometimes fail as a result of poor information, we need to create hybrid publicly supported but privately controlled institutions that develop this information and transmit it to the private sector. There are “reform liberals,” who see dangers in the private concentration of economic power and call for the government to step in to regulate or even directly participate in the market.

“Keynesians,” in Hart’s view, believe that the economy suffers from occasional mechanical malfunctions and that the role of government is to shove it back on track when required. He further distinguishes between “commercial Keynesians,” who believed that appropriate technological innovation would naturally emerge from firms if the government got the macro stuff right, and “social Keynesians,” who believed that “market failures in provision of science and technology for economic purposes appeared far more widespread.” Hart argues that “commercial” and “social” Keynesianism merged in the late 1940s and that the resulting merged Keynesian prescription for science and technology policy was for federal funding for academic science and the provision of subsidized funding to small business.

The last box belongs to partisans of the “national security state.” In this view, national security comes before all other goals of government. The development of military technologies to support security trumps all other arguments, and any effects in the economic sphere are the incidental (if not entirely unwelcome) consequence of the security imperative. Hart argues that the security statists and the converged Keynesians found common ground and formed the coalition that put into place the policies of the early 1950s, the so-called “postwar consensus.”

Back to the beginning

Hart’s history begins with the little-known story of U.S. science and technology policy before World War II. His account revolves around, of all people, Herbert Hoover, the “Great Associationalist,” who took the first pass at an activist technology policy in the 1920s while serving as Secretary of Commerce. Beefing up the National Bureau of Standards (NBS) (predecessor of today’s National Institute of Standards and Technology), Hoover beat back opposition from conservatives in the Treasury and elements of the private sector to make the NBS the centerpiece of a major technology partnership program with U.S. industry.

The Great Depression and the New Deal in the 1930s, Hart argues, saw the associationalist program evolve into crude protectionism. With the death of the National Recovery Administration at the hands of the Supreme Court in 1935, the associative state created by Hoover and continued by Roosevelt ground to a halt. With conservatives on the run in Washington, says Hart, liberal reformers seized the opportunity to develop new government programs to stimulate the economy through technological progress. The brightest star in this firmament was the Tennessee Valley Authority and its efforts to build a technologically advanced fertilizer industry in the South.

By the late 1930s, however, the center of gravity for technology policy, which Hart sees moving into the hands of the “reform liberals,” had shifted to the Department of Justice. There, reform liberal and assistant attorney general for antitrust Thurman Arnold embarked on a crusade to “break the patent bottleneck” and prevent entrenched corporations with a few key patents from blocking the emergence of new technologies and industries and thus suppressing the expansion and modernization of markets. Antitrust enforcement was combined with new financing and R&D programs in the construction industry, which Hart describes as a key example of New Deal reform liberal policies in action.

With the rise of the Keynesians in the early 1940s, with their focus on macroeconomic forces and demand management, what at first had been a loose collaboration evolved into a policy competition. “War mobilization briefly delayed the estrangement of the reform liberals from the Keynesians,” writes Hart, because “national security was a potent weapon in the battle for both public investment and antitrust enforcement.” With the war on, however, the military and corporate America joined forces and gained operational control over a vastly expanded government investment apparatus that “shoved ‘Thurman Arnoldism’ aside,” while embracing the Keynesian macroeconomic agenda. Hart says that in the early 1940s, social Keynesians, particularly Alvin Hansen, argued that “secular stagnation”–a characteristic of an economy dependent on technological innovation but incapable of generating a sufficient quantity of it–was a serious problem and supported the reform liberal agenda of antitrust, R&D funding, and public investment. By 1948, notes Hart, Paul Samuelson had turned this idea on its head, writing in his introductory economics text that “secular exhilaration,” not stagnation, was the problem of the postwar years. With the focus now squarely on aggregate demand management, social Keynesianism and commercial Keynesianism had fused into a new synthesis with little interest in technology policy, or for that matter, any policy based on microeconomic analysis of particular industries or problems.

The Cold War and Korea saw the rise of the national security state–in some sense, the institutionalization of an alliance between associationalists and committed believers in technological investment as the key to military advantage that was forged in the 1940s. Converged Keynesians played a supporting but largely passive role as managers of aggregate demand. This latter period, which included the struggles over control (in Hart’s analysis, largely between Keynesians, associationalists, and reform liberals) that delayed the creation of the National Science Foundation, takes us to more familiar ground.

But Hart is surely right–in fact, probably does not make the point strongly enough–that the history of the alleged postwar consensus in subsequent decades is far from the stable, smooth execution of an agreed-upon consensus model. He sees constant tugs and pushes, zigs and zags in U.S. science and technology policy, propelled by policy entrepreneurs promoting agendas that are the lineal descendants of the same forces he documents in the first half of the 20th century.

Tight squeeze

One problem with Hart’s analysis is that he squeezes a very fuzzy and amorphous reality into five very sharply bounded boxes in classifying policy and policy entrepreneurs. Although the five viewpoints he enumerates are, on reflection, useful in defining the sometimes antagonistic forces shaping U.S. science and technology policy, they also confuse and confound historical understanding at times. For example, Vannevar Bush, a gigantic figure in the policy debates, is portrayed as an associationalist in the late 1930s, a social Keynesian in the mid-1940s, and a converged Keynesian in the early 1950s. I question whether Bush would have identified himself as any one of these things, or whether his views changed as sharply as these reclassifications suggest. When Bush made common cause with Frank Jewett of AT&T, Hart’s archetypal “conservative,” were they simply cutting a deal, or is it not possible that both men actually had relatively complicated and nuanced views of the world, with convergence on some concrete questions reflecting genuine agreement outside the sharply drawn bounds of Hart’s simple taxonomy?

Another problem is that the study does not target a wide audience. The book originated as Hart’s Ph.D. thesis and bears the resulting baggage. The author drags the reader down occasional meandering byways as he labors mightily to show the novelty of his theoretical framework, which boils down to the observation that policy is made by a political process rather than being the determinate outcome of a rational and optimal choice. This may be hot stuff in political science departments, but most policy-savvy readers will be distinctly unimpressed when he writes that “I theorize science and technology policymaking as a political process.” Nonetheless, the book works well as an excellent and path-breaking political history and deserves a wide readership.

There is one glaring omission in this important book. Hart completely overlooks a major development in the economic theory of science and technology investment that occurred in the late 1950s and early 1960s and that profoundly altered the nature of the policy debate. Over this period, economists (in particular, Richard Nelson, Charles Hitch, and Kenneth Arrow) for the first time precisely articulated a reason for market failure and a theoretical rationale for active government policy: the inability of those investing in science or technology to completely capture the economic fruits of this investment. This was a precise and persuasive diagnosis of market failure, far removed from vague hand waving about secular stagnation. Other economists (notably Zvi Griliches and Edwin Mansfield) then produced pioneering empirical work demonstrating that this theoretical argument appeared to be significant in real life, reflected in large gaps between social and private returns on technology investments. This research directly motivated a new interest in a more active role for government in funding research in the 1960s and many more such proposals over the years, as the tides of policy in an increasingly high-tech economy ebbed and flowed.

Indeed, so great was the impact of this new scholarship that the meaning of “conservative” changed. In the 1920s, it was indeed possible for a reasonable conservative to argue that the government should simply get out of the way and do as little as possible in the realm of science other than to strengthen and enforce intellectual property rights. Today, one is hard pressed to find a conservative, or at least a conservative economist, who does not concede that the government has some appropriate role in supporting basic scientific research, and particularly academic basic research, as the least easily appropriated investment in technology.

Magical Thinking

What are we doing wrong? Or more to the point, what is it we’re not doing? Science, the ultimate product of rational thought, is on a roll: One ancient scourge after another is being eliminated, hunger has been reduced to a political problem, life spans have doubled, all the knowledge of the world has been put at the fingertips of ordinary citizens, and the deepest mysteries of the cosmos are unfolding before our eyes. We are even beginning to get a handle on how to keep our planet healthy.

Why then is the public turning away from science? Nay, not just turning away, but fleeing in the opposite direction. My bookcase overflows with wonderful, reductionist accounts of how the world works, written by brilliant scientists for nontechnical audiences–Gould, Dawkins, Sagan, Goodenough–but I look in vain for their names on bestseller lists. Instead, I find such pathetic drivel as Deepak Chopra’s Ageless Body, Timeless Mind: The Quantum Alternative to Growing Old and Harvard psychiatrist John Mack’s Abduction. There is James Van Praagh, Talking to Heaven, while Neale Walsch is having Conversations with God.

How are we to account for such widespread nuttiness? Is it indelibly coded into our DNA? Perhaps. In her latest book, Sleeping with Extra-Terrestrials: The Rise of Irrationalism and Perils of Piety, Wendy Kaminer turns to psychologist James Alcock, who points out that all of us, scientists and professional skeptics included, engage at times in magical thinking. It’s not surprising. Evolution is a slow business. All of recorded history covers a mere 4,000 years, the space age a mere four decades–far too little time to influence our genes. We are all saddled with genes selected for life in a Pleistocene wilderness. In a time before science, there was no gene for scientific thinking, and there still isn’t. It must be learned.

As the physicist Richard Feynman put it, science is what we have learned about how to keep from fooling ourselves. But none of us think scientifically all the time. We instinctively re-create the conditions our memory associates with some rewarding experience, or we avoid the things that surrounded some unpleasant experience. We often find ourselves almost compelled to go through these rituals, even when the higher centers of the brain are telling us that a causal connection is highly implausible. Nor do we bother to test the effectiveness of these rituals. Kaminer, who is not a scientist, makes known on the first page her own indulgence in magical thinking. With characteristic self-deprecating humor, she confesses that she consults a homeopath. She is aware on an intellectual level that this conduct is foolish, and yet she persists in it.

This is the Wendy Kaminer we discovered in I’m Dysfunctional, You’re Dysfunctional: the same clear-eyed, witty insights into the human condition and the same wonderful use of language. She buttresses her arguments with just the right quotes and she constructs dozens of one-liners of her own that deserve to be in Bartlett’s. She is able to take a detached, amused view even of herself. Still, homeopathy is so completely off the wall that I found myself wishing she had chosen a different quirk to make her point.

From faith to foolishness

Sleeping with Extra-Terrestrials is fun, but it is also deeply troubling. In an age of science, we are reminded, irrationalism is raging out of control. Kaminer quotes from polls showing that almost all Americans profess a belief in God, with 76 percent picturing God as a heavenly figure that pays attention to their prayers. Society honors faith. Belief in that which reason denies is associated with steadfastness and courage, whereas skepticism is often identified with cynicism and weak character. That’s pretty hard to fit into a scientific worldview, but few scientists are willing to say so publicly. Kaminer writes: “In this climate of faith in the most ridiculous propositions–with belief in guardian angels commonplace–mocking religion is like burning a flag in an American Legion hall.”

And when you open the door to one supernatural belief, all sorts of demons rush in. Today these demons are most often cloaked in the symbols and language of science. If you attend a conference on cold fusion, for example, you will almost certainly find yourself in a room with people who speak authoritatively on past lives and psychokinesis. Those who are convinced of alien abductions have no problem with accepting homeopathy. They wear their gullibility proudly. Indeed, I sometimes think they have a secret handshake or something that permits them to recognize one another. They gather together to provide reinforcement. “The more limited your understanding of science,” Kaminer tells us, “the more that scientists resemble masters of the occult, and the more that paranormal phenomena seem likely to reflect undiscovered scientific truths . . . A persistent irony of scientific progress is its encouragement of pseudoscientific claims.”

Science begets pseudoscience. That single insight is worth the price of admission to this book. Scientists are eager to tell the public what it’s like on the frontier. We regale them with speculations about parallel universes, quantum teleportation, wormholes through spacetime and 10-dimensional superstrings. But what the nonscientist may be taking away from this is just that the universe is so strange that anything can happen.

This opens the way for New Age gurus to sprinkle their message with words drawn from science. Alternative healers speak of energy, resonance, and balance, but the words have no substance. They lack any definition. Kaminer recounts a talk by Matthew Fox, who is described as a “postmodern theologian,” on “the physics of angels.” Angels, he explains, move at the speed of light, like photons. “People seem to listen intently and very respectfully to Fox’s rambling exposition, although I cannot imagine they can make much sense of it. Reviewing my notes and a transcript of his remarks, I’m struck by his incoherency.”

I recently had the same reaction to a talk by Deepak Chopra on the release of his new book, How to Know God: The Soul’s Journey into the Mystery of Mysteries. He was introduced as treating spirituality with a rigorous scientific approach. The packed, well-to-do dinner audience in the National Press Club listened worshipfully as he explained that, “GOD is not a person but a process.” He paused, and then repeated it for emphasis: “GOD is not a person but a process.” There were literally gasps from the audience as the importance of this message sank in. “The essential nature of this material world is that it is not material,” he continued. “Things must be seen by the eye of the mind and the eye of the soul.” The eye of the soul, it seems, can be explained by quantum nonlocality. It is “unbounded by space or time.” What was he talking about, and why was everyone nodding their heads as if they understood?

In his wonderful 1996 book Conjuring Science, Christopher Toumey complains that science “carelessly allows its meanings and values to be eviscerated when it permits mischief-makers to hijack its symbols for the benefit of other values and meanings.” In the end, Kaminer offers no solutions. My solution would be for everyone to read her book, but I suspect that those most in need of its message will be the least likely to pick it up.

After the Cold War

In late 1992, Ashton Carter and William Perry joined John Steinbruner in writing A New Concept of Cooperative Security, a seminal study published by the Brookings Institution. The study’s thesis, subsequently refined in the hefty volume Global Engagement: Cooperation and Security in the 21st Century, edited by Janne Nolan, was that in the aftermath of the Cold War and the collapse of the Soviet Union it was necessary to fashion a new formula for international security based on conflict prevention and cooperation. To a large extent, Carter and Perry’s Preventive Defense represents an extension of that thesis, with reference to a number of key security challenges confronted by the authors during their service in the Department of Defense in the Clinton administration. (Perry was secretary of defense, and Carter was assistant secretary for international security policy.)

The great merit of Preventive Defense is its accessibility for a lay audience. In less than 250 pages, readers are provided with a cogent and very sober assessment of five dangers that have the potential to become “A-list” threats to U.S. security, followed by a set of thoughtful and practical policy recommendations to forestall their development. The dangers, which a strategy of preventive defense is designed to mitigate, are that (1) Russia might follow Germany’s course after World War I and devolve into chaos, isolation, and aggression; (2) Russia might lose control of its nuclear assets; (3) China could emerge as a major U.S. adversary; (4) weapons of mass destruction will proliferate and directly threaten U.S. security; and (5) “catastrophic terrorism” involving weapons of mass destruction might occur on U.S. territory.

A chapter is devoted to each of these very real dangers, and a concluding chapter addresses what must be done to meet what the authors regard as the greatest challenge to preventive defense: “the threat within.” This threat is post-Cold War complacency.

The authors succeed admirably in providing snapshots of preventive defense in action regarding the nuclear legacy of the Soviet Union and North Korean nuclear brinkmanship. They also make a persuasive case for engaging Russia and China in a variety of military-to-military relationships ranging from regular high-level talks to nuclear arms reduction and security negotiations. Especially instructive is the extended discussion of the process by which the Russian military was recruited as a partner in NATO peacekeeping operations in Bosnia. The discussion exploits the authors’ unique vantage points. Also revealing are the behind-the-scenes glimpses of U.S. responsiveness to the 1996 Chinese “missile tests” in the Taiwan Strait.

It is a shame that the book does not provide more such novel insights informed by the authors’ unusual firsthand experiences. The account of the Clinton administration’s decision to support rapid expansion of NATO, for example, sheds little light on the domestic and bureaucratic political determinants of that watershed event. The authors also choose not to explore the damaging implications of the NATO decision for U.S.-Russian cooperation in the sphere of nonproliferation. One could argue that NATO enlargement, more than any other single event, has led policymakers in Moscow to question the wisdom of close consultation and cooperation with the United States in the sphere of nuclear nonproliferation–cooperation that persisted during much of the Cold War. Similarly, the book’s succinct description of “Project Sapphire,” the successful removal of approximately 600 kilograms of highly enriched uranium from Ust Kamenogorsk, Kazakhstan, in November 1994, highlights the positive outcome of the initiative but ignores the toll taken by the accompanying interagency battle. An aversion to repeating that bureaucratic struggle accounts in large part for the failure to mount similar preventive defense operations immediately thereafter to remove other stocks of bomb-grade material from Belarus, Georgia, Kazakhstan, Uzbekistan, and Ukraine. By the time another airlift was undertaken to remove a small quantity of highly enriched uranium from Tbilisi, Georgia, in April 1998, at least one cache of weapons-grade material had been diverted from its storage site in Sukhumi, Georgia (located in the breakaway region of Abkhazia). The location of the diverted material remains unknown.

Missile defense

Perhaps most conspicuous by its absence is more than a passing reference in the book to the role of national missile defense in a strategy of preventive defense. The failure to confront this complex and controversial issue head-on weakens the otherwise very constructive recommendations for a new agenda for arms control. That agenda correctly identifies the need to broaden the nuclear reduction process to include tactical nuclear weapons and, ultimately, other nuclear powers. The prospects for implementing this new agenda, however, are severely impeded by U.S. plans to accelerate the development and deployment of a national missile defense system. Although it may yet be possible through creative diplomacy to gain Russian acquiescence in modifying the 1972 ABM Treaty, it is doubtful that the United States can simultaneously embrace early deployment of national missile defense and expect to engage China further in multilateral arms control.

If Preventive Defense is less than comprehensive, it nevertheless offers a host of very prudent policy recommendations, which deserve careful consideration by U.S. national security policymakers and the public. Perry and Carter were on the losing side of the debate within the Clinton administration in 1995 regarding the timetable for NATO expansion. At that time, they maintained that NATO’s Partnership for Peace (PFP) should be the cornerstone of the alliance’s eastward policy. They were correct then and are right again in arguing for a pause in NATO enlargement and, instead, a revitalization of the PFP. Time is needed to repair relations with Russia and to more fully integrate Poland, Hungary, and the Czech Republic into the alliance.

Renewed effort also must be invested in providing substance to the NATO-Russian Founding Act, which was heralded in 1997 as the embodiment of a new NATO-Russian relationship but has yielded few tangible results. As the authors note, the Permanent Joint Council that was established pursuant to the act “has been more of a diplomatic debating society than a catalyst for practical NATO-Russia cooperation.” Although one should not underestimate the difficulties of implementing the Founding Act given the general state of U.S.-Russian relations, Carter and Perry identify a list of practical measures that should be pursued, including cooperation in the areas of counterterrorism and nonproliferation. One modest step to enhance cooperation that is consistent with the authors’ approach would be to create and maintain, under the auspices of the Permanent Joint Council, a joint database on international terrorist incidents involving the acquisition, use, or threat to use weapons of mass destruction.

Both authors are well known for their creative implementation of the Nunn-Lugar Cooperative Threat Reduction program, and it is not surprising that the book recommends that this form of “defense by other means” not only continue but grow. The authors also correctly identify the areas of fissile-material safeguards and chemical and biological weapons dismantlement and destruction as priority areas for further Nunn-Lugar initiatives. Not mentioned directly by the authors but also vital to the long-term effectiveness of the Nunn-Lugar program is the development in the Soviet successor states of a nonproliferation safeguards culture and an incentive structure that encourages prudent safeguards practices.

Two of the greatest challenges to U.S. national security at the dawn of the new millennium are ignorance and complacency. These tendencies find expression in Congress, in the Russian Duma, and in most national parliaments, which remain woefully uneducated about nonproliferation and other security issues and generally are unprepared to exercise the political will or to allocate the resources commensurate with the threat. Today, more often than not, parliamentarians and their constituents are preoccupied with pressing domestic problems and display scant interest in international issues, especially those that are not directly related to economics. For many, this disposition is reinforced by the mistaken perception that with the end of the Cold War and the diminution of superpower conflict, there are no longer any real nuclear security threats.

Preventive Defense is an excellent antidote for anyone with symptoms of this post-Cold War malaise. It provides in short, easy-to-digest doses more than enough factual nuggets to counter such complacency. National security addicts already exposed to the A-list dangers of the new millennium, however, await a more comprehensive treatment by scholar/practitioners Perry and Carter.

Nuclear Weapons

Since India and then Pakistan exploded nuclear devices in May 1998, the world has been grappling with the consequences. In this hefty history of Indian nuclear policy from its origins until the end of 1998, George Perkovich has provided an indispensable guide for those trying to sort out the ramifications of the nuclear tests. Perkovich, one of our most distinguished scholars of South Asian security issues and their global impact, and currently director of the W. Alton Jones Foundation’s Secure World Program, demonstrates that the primary factors bearing on India’s nuclear policy today have been present from the very creation of its nuclear program shortly after the country’s independence in 1947.

Two interrelated factors have favored nuclear weapons. First has been the driving impulse among many in India’s elites to take actions aimed at transcending the country’s colonial past. Nuclear science and nuclear weapons have been seen as essential in gaining India the respect and standing in international life that it feels it deserves because of its size and ancient civilization. These elites regard the international nuclear nonproliferation regime as “nuclear apartheid,” structured to keep India (and other non-Caucasian countries) out and down. Perkovich shows that it is this “political narrative” and not a “security-first narrative” that has dominated India’s discourse with itself and the world on nuclear issues. To be sure, Indian hawks have cited the security threat from China and, less frequently, from Pakistan as reasons to go forward. But Perkovich richly documents his contrary view that elite hunger for status and recognition has counted for much more.

The chief fomenters of this impulse have been an interconnected band of scientists in government and thinkers and publicists in think tanks and in the media who have pushed hard for nuclear weapons. Perkovich calls this group the “strategic enclave” (a term borrowed from the Indian scholar Itty Abraham). The members of this enclave have been cavalier about the military (which every Indian government has kept isolated from nuclear matters), about cost, and about Pakistan’s scientific and nuclear potential. They have also been extraordinarily prickly about status and pugnacious toward the United States. India’s nuclear explosions of May 1974 and May 1998 were triumphs for them.

The probomb crowd, however, has rarely been ascendant, because, Perkovich argues, four powerful antibomb factors have almost constantly been in play. The most powerful is what he calls “the normative interest in positioning India as morally superior to the international system’s major powers who possess and threaten to use nuclear weapons.” Like the hunger for status, this factor derives from India’s colonial past, specifically the Gandhian/Nehruvian ideology that so influenced the freedom struggle. Three other factors have reinforced it: 1) reluctance to involve the military and hence to build a bomb that the military would have to help deploy; 2) economic constraints (because India is poor, there has always been a conflict between development and security needs); and 3) the high political and economic costs that India might suffer in the international community. India’s Nuclear Bomb is essentially the story of how the two probomb and four antibomb factors weighed against each other over time to produce Indian policy.

Gandhi’s legacy

India’s nuclear research and energy program, with its embedded weapons option, was launched soon after independence. But while its sponsor, Prime Minister Jawaharlal Nehru, was alive, the weight of the Gandhian inheritance and the need to develop economically kept India hostile to nuclear weapons. In 1964, however, Nehru died, and China tested a nuclear bomb, ushering in, by Perkovich’s account, a confusing but fascinating two years. Between 1964 and 1966, the United States three times considered supplying India with nuclear weapons to counter China. And Homi Bhabha, the scientist father of the Indian nuclear program, asked the United States for nuclear help. But pronuclear sentiment reversed itself quickly: By the end of the 1960s, the United States had firmly embarked on a nonproliferation path. And India, though it now had undeclared nuclear weapons capability (with its perceived deterrent advantages) and refused to sign the 1968 Nuclear Non-Proliferation Treaty (NPT), could still keep the moral high ground by forgoing the production of actual weapons.

Perkovich argues that this “recessed deterrence” provided India with a satisfactory long-term policy compromise for dealing with its competing needs. Consequently, he believes that Prime Minister Indira Gandhi’s 1974 authorization of a “peaceful nuclear explosion” was an aberration. Aberration or not, it was certainly a failure by any criterion except postcolonial pride: The political bounce soon faded, sanctions were costly, China appeared not to notice, and Pakistan continued to move ahead with its own weapons program.

Contrary to expectations, successive Indian prime ministers then adhered to the “nuclear option” strategy for another generation, although it was always under pressure. By 1980, India and Pakistan could both assemble and deliver a limited number of nuclear weapons quickly, and thus their intermittent crises–in 1983, 1986, and 1990–now had nuclear weapons at the top of the escalation ladder. Perkovich reveals that Mrs. Gandhi authorized and then canceled another test in 1982 or 1983, and that in 1986 her son and successor Rajiv considered and then rejected an attack on Pakistan’s nuclear facilities. In every crisis, however, the two countries drew back (as they would again in 1999). And both stayed “recessed” even after each lost its main foreign support–Pakistan with the cancellation of U.S. aid in 1990 and India with the dissolution of the Soviet Union in 1991. Both countries were now more on their own than ever before, more driven by domestic impulses; and yet, for another seven years, the “nuclear option” held.

Nuclear era begins

Why then did it cease to hold in 1998? After all, as Perkovich demonstrates, so little had changed. To be sure, postcolonial resentment was rekindled by indefinite extension of the NPT in 1995 and negotiation of the Comprehensive Test Ban Treaty (CTBT) in 1996. Many Indians saw these moves as reaffirmations of nuclear apartheid, and this built new pressures for weapons testing. P. V. Narasimha Rao’s abortive test authorization in 1995 is well known, but Perkovich reveals that during his first 12-day tenure as Prime Minister in 1996, A. B. Vajpayee also ordered tests and then rescinded the orders. But the countervailing pressures were also still in force. Neither China nor Pakistan was rattling sabers at India. The United States was obliquely signaling that it knew the South Asian weapons and missile programs were facts of life for a long time to come. Further, there were new glimmerings of Indian attraction to the Chinese path to power: giving primacy to economic development while stabilizing relations with one’s closest neighbors.

Perkovich argues that “a handful of Bharatiya Janata Party (BJP) leaders made the decision to test.” The decision was made shortly after the party came to power in March 1998 and “without consulting other political parties . . . and without conducting a previously advertised strategic defense review.” Although the Chinese “threat” and a Pakistani missile test on April 6 were later cited as motives for the tests, “if strategic considerations had been paramount, the decision could have awaited the defense strategy review and still enabled the scientists to act prior to the anticipated entry into force of the [CTBT] in late 1999.” Domestic politics and postcolonial phobia were what really counted. The United States was not told beforehand, Perkovich writes, because “Indian leaders intended the tests to display India’s autonomy and security but were somehow afraid that they would not be able to withstand U.S. pressure had Washington been warned. This combination of defiant assertiveness and diffident timidity may have been a price paid for the colonial experience.” He adds, “Given the political interests of the BJP and the drive of the weaponeers, Washington probably could not have prevented India from testing in 1998.”

As in 1974, Perkovich shows, the 1998 tests failed to meet any of India’s objectives beyond showing prowess. The political boost to the BJP faded quickly. Although not crippling, sanctions hurt. India lost international status; the months after the tests echoed with the sound of slamming doors. And by responding with its own tests, “Pakistan . . . practically matched India in effective nuclear power and recast India not as the preeminent South Asian state but as part of an unstable Indo-Pak dyad whose conflict over Kashmir and potential nuclear and missile competition must be managed above all else.” Finally, the tests brought the United States back into South Asia as the necessary negotiating partner for both India and Pakistan as they grope their ways toward the “minimum deterrence” they profess to seek.

Perkovich is a fine historian, but inside him a political scientist and a policy advisor are struggling to get out, and in the book’s conclusion, they do. In political science, structural realists argue that states act mainly to ward off external threats; Perkovich, by contrast, stresses that in India, domestic factors have been at least as important. Much nonproliferation theory holds that removing the original causes will suffice to roll proliferation back; Perkovich argues that acquiring nuclear weapons capability changes domestic political balances and that democracy, as in India’s case, “appears to obstruct efforts to control and eliminate nuclear weapons once they have been acquired.” Perkovich believes that both democracy and “unproliferation” should be promoted, and he argues that “averting the potential clash between [them] requires clearer commitments to eliminating nuclear weapons from all states.”

Perkovich’s operating assumption is that the South Asian tests have changed much and that the damage to the nonproliferation regime can be repaired only by drastic action. He may be right, but we will never know. In today’s world “clearer commitments to eliminating nuclear weapons” are not in the cards. Hence, the proposition that is more likely to be tested is that the South Asian tests have in fact changed little, as his own account suggests. If that is the case, what is required to restabilize the security situation is a series of negotiated steps that would allow India to return to something like the balance among competing factors that has prevailed throughout most of its history. That is precisely what the United States is seeking in its negotiations with India and Pakistan.

Recessed deterrence is, of course, gone forever, but all other historical factors remain in play. The factors that will surely shape India’s (and hence Pakistan’s) minimum deterrence are still the same. To be sure, the strategic enclave will keep trying to tweak postcolonial hangover into national migraine. But there will still be resistance to a major role for the Indian military; lingering moral distaste for nuclear weapons; international pressure not to go further; and even some awareness that “to achieve greatness India must integrate itself into the international political economy” with its U.S. gatekeeper. And if it can be suitably defined, minimum deterrence can resemble recessed deterrence like a brother, as the keeper of the middle ground where Indian national interests meet and lie down together. That middle ground is also where the postcolonial pathologies that have helped drive the program can expire at last. Then India can take her rightful place as a strong, responsible modern power in the world community.

The Endocrine Disrupter Hypothesis

Concerns about chemicals possibly affecting human and animal health through mimicry of or interference with normal hormonal processes (so-called “endocrine disruption”) have grown among environmental scientists and toxicologists and have increasingly been reported in the popular press. These are deeply emotional issues relating to our health, our children’s health, our reproduction, and, some would argue, to the health and reproduction of wildlife the world over. Because of their potential significance, the concerns that have led to the endocrine disrupter hypothesis demand attention. Many in the scientific community who now pursue research in this area may not be fully aware of the history of the endocrine disrupter concept and how it came to be so prominent. Hormonal Chaos describes that history. It also describes the difficulties involved in establishing policy when there are scientific uncertainties, as there are regarding endocrine disrupters.

This is an eminently readable book. When it arrived in my office, I opened it half expecting to find another discourse arguing that we are facing an impending disaster because of chemical effects on or through hormonal systems. I seldom find such discourse engaging. However, the book is primarily a narrative, without polemic, and I found it to be a page-turner (perhaps because of my own involvement in the issue). The first two chapters recount the history of the idea of endocrine disruption and how it rose to prominence. The first section, and indeed the entire book, is peppered with details about the efforts of many of the individuals who have figured prominently in the emergence of this concept. As those familiar with this topic know, Theo Colborn, an environmental scientist currently with the World Wildlife Fund, played a critical role in defining this issue, and her involvement and what led her to the issue are discussed extensively.

A balanced view

Concerns about endocrine disruption are based on observations regarding health problems in humans and wildlife that could involve some aspect of endocrine systems. Sheldon Krimsky, a professor in the Department of Urban and Environmental Policy at Tufts University, discusses most of these in terms accessible to most lay readers. Some are mentioned briefly; others are considered at length. Among the latter are three highly publicized concerns regarding humans: the possible environmental chemical effects on sperm count, breast cancer, and behavior or neurophysiology. In considering conflicting views, Krimsky draws not just on the scientific literature but also on the mass media. For example, his discussion of differing views in the literature regarding possible effects on human sperm count refers to an article in the New Yorker, which raised doubts as to whether sperm counts are indeed declining or, if so, whether chemicals can be strongly implicated.

The author uses detailed examples to describe how the issues gained visibility and how governmental and nongovernmental organizations have responded in the United States and around the world. He writes largely as an observer, and there is little with which one might disagree. Congressional hearings, the publication of Colborn’s Our Stolen Future (co-authored with Diane Dumanoski and J.P. Myers), many news reports, and the activities of the Environmental Protection Agency (EPA) and the National Research Council (NRC) are all described from a “behind the scenes” perspective. Two television documentaries that differed somewhat in their views of the threat exemplify the author’s discussion of media involvement. The BBC documentary “Assault on the Male” was considered too biased to be accepted for broadcast in the United States by Nova. The PBS Frontline documentary “Fooling with Nature” presented both sides as a scientific debate. Both were widely viewed, but Krimsky does not reveal which he prefers and concludes that their real impact is not clear.

The perspective in the book is up to the minute. For example, in the section titled “Executive Branch Initiatives” Krimsky relates how an NRC committee resolved internal committee disputes and managed to complete its report Hormonally Active Agents in the Environment, which was released just prior to the publication of Hormonal Chaos. The book and the report are different in nature. The report is largely an evaluation of the primary scientific literature on the subject, whereas the book is a narrative of events with a policy perspective. Yet both stress that more research is needed to assess the nature and extent of possible problems.

Policy implications

The second part of the book deals broadly with social and policy matters in a chapter (“Uncertainty, Values and Scientific Responsibility”) that discusses how we come to know what we know, the responsibilities of science to society, and how scientists may differ in their views on the endocrine disruption issue–indeed, on any such issue. The discussion of differing views of causality and the differing positions one might take regarding the nature of evidence required (in a section titled “Skepticism versus the Precautionary Principle”) will be quite valuable to policymakers and to investigators who may be confronted with a need to consider these topics as they relate to the results of their own research.

One factor that Krimsky suggests may influence a scientist’s views on this issue is association with industry. He describes in a less-than-favorable light the views of some researchers who have at times received research money from industry sources. The implication seems to be that objectivity is no longer possible when research dollars come from industry. If this is true, then is it not also true that investigators who depend on federal research funds benefit when funding for particular research topics grows, which would be likely to occur when those topics are viewed as being of greater concern? Might such scientists then engage in hyperbole in describing the potential problems? The implication, by Krimsky or anyone else, that either group might be so disingenuous is disconcerting. People should judge the science first on its own merits.

In “The Policy Conundrum,” Krimsky rightly concludes that policy decisions regarding environmental chemicals must most often be made in the face of nagging scientific uncertainty. He implies that acting on “weight of evidence” in advance of conclusive data (that is, before the detailed mechanisms are known) can be worthwhile, citing examples of the benefits of acting to control exposures to radon, lead, and other hazards. However, the examples cited refer to single agents with well-defined effects, whereas endocrine disruption involves a broad mix of chemicals with numerous and sometimes poorly connected effects.

The NRC report recommended the development of screening and testing protocols, and the EPA had already established the Endocrine Disrupter Screening and Testing Advisory Committee to consider ways to accomplish such testing. Krimsky describes the proposed system, however, as “ponderous, complex, and replete with ambiguities.” Clearly, the complexity of the possible effects, chemicals, species, and combinations thereof makes this an unusually difficult problem, and uncertainty is likely to plague policymakers for some time to come.

There are some errors in the book. For example, in the prepublication copy reviewed, the author offers some explanations as to why developing individuals might be more sensitive to xenobiotic exposures than newborns or young children. First among these is that detoxification mechanisms do not develop until after birth. This is factually incorrect.

Through most of the book, Krimsky succeeds in striking a balance among differing views of specific issues that are part of the hypothesis. There is a subtle bias, however, that is most evident in the concluding chapter. Here Krimsky views the endocrine disrupter hypothesis as contributing to an entirely new way of thinking about chemical effects. He also refers to the “hypothesis,” the term used throughout most of the book, as a theory, thus raising its factual status; and he criticizes those who regard it as a hypothesis only.

On the last page of the text, Krimsky asserts that “The fact that a particular chemical or a particular species fails to corroborate one mechanism does not invalidate the utility of the general framework but suggests . . . mechanistic variations of signaling transductions by xenobiotics.” Certainly this is true, but if different mechanisms might be involved, does it make sense to lump effects together under a term that implies related mechanisms? An example appears in an earlier section regarding cognitive function, where Krimsky says that “Although no mechanism has been proposed to account for the cognitive and developmental effects associated with exposure to PCBs, . . . factors suggest increased vulnerability of fetuses to low level xenobiotic exposure, which is consistent with other findings associated with the environmental endocrine hypothesis.” To me, this implies that any developmental effect might be associated with the endocrine disrupter hypothesis, even in the absence of mechanistic information that might link the effect in question to disruption of an endocrine system. The effects might just as well occur through other toxicological processes that do not primarily involve the endocrine system. Obviously, proving the mechanism(s) and the causal agent(s) is important to health, with or without the endocrine disrupter hypothesis.

If a supposed endocrine effect results from some non-endocrine mechanism, it does not necessarily mean that the endocrine disrupter “hypothesis” is diminished, as Krimsky correctly asserts. But does it mean that the hypothesis needs to be stated more carefully? Probably. Given the untidy nature of the terminology, it is surprising that Krimsky seems almost disdainful of the NRC committee’s use of the term “hormonally active agents” as an alternative to endocrine disrupters. He fails to mention that the committee’s careful search for another term was undertaken largely because endocrine disrupter is not a neutral term but instead presumes an adverse effect.

Few issues in science have galvanized so many so quickly, and Krimsky has accomplished the difficult task of chronicling the history of this contentious idea without being drawn too far into the fray. Minor problems in no way diminish my enthusiasm for this book. It is full of insights that apply not only to how we consider endocrine disrupters but also to environmental toxicology in general. Indeed, as a detailed case study, the book could well form the basis for a course dealing with environmental toxicology issues.

Paying for the Next Big One

The devastation caused by recent large-magnitude earthquakes in Turkey and Taiwan provides a chilling preview of what could happen in a major urban disaster in the United States. The damage to modern buildings and infrastructure cannot be simply written off as bad construction practices. For the first time in human history, we are approaching a moment when more people worldwide live in cities than in rural areas, and many of those cities are located in areas prone to earthquakes, hurricanes, and other natural disasters.

The United States’ own experience with disasters in the past decade has focused some attention on pre- and post-disaster policy. After a relatively quiet century, major earthquakes, hurricanes, and floods have become more frequent, and urbanization has contributed to increased damage and dramatically increased costs for repairs. The five largest U.S. disasters between 1989 and 1994 caused $75 billion in damage, half of it in residential structures. In just those five events (two earthquakes, two hurricanes, and a major flood), more than 800,000 housing units–approximately the number of housing units in metropolitan Seattle–were damaged or destroyed.

Because relatively few people have been killed in these disasters, Americans tend to underestimate the real risk of disasters. But Hurricane Andrew, which hit South Dade County, Florida, in 1992, and the Northridge earthquake, which hit the San Fernando Valley in California in 1994, were financial disasters for insurance companies, and each required federal appropriations 5 to 10 times greater than those for any previous event. Capital losses were about $25 billion in each event. In both cases, half of the total losses were in residential structures, and in both cases, insurance paid for about half the losses, with the federal government picking up the difference. This does not include the indirect costs to businesses and individuals without insurance and/or access to federal programs. Analysts estimate the total price tag for each disaster at about $40 billion.
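
The back-of-the-envelope arithmetic behind these estimates can be laid out explicitly. The sketch below simply restates the approximate figures given above (the indirect-cost line is implied by the $40 billion total rather than reported separately); it is illustrative, not a loss model.

    # Rough reconstruction of the Andrew/Northridge figures cited above,
    # in billions of current dollars. These are the article's approximations;
    # the indirect-cost line is simply what remains of the ~$40 billion total.
    capital_losses = 25.0            # direct capital losses per event
    residential_share = 0.5          # about half in residential structures
    insured_share = 0.5              # insurance paid roughly half of the losses

    insured_payout = insured_share * capital_losses      # ~12.5
    federal_share = capital_losses - insured_payout      # ~12.5, picked up by federal programs
    implied_indirect = 40.0 - capital_losses             # ~15 in uninsured and indirect costs

    print(f"Residential losses:         ~${residential_share * capital_losses:.1f}B")
    print(f"Insurance payout:           ~${insured_payout:.1f}B")
    print(f"Federal share:              ~${federal_share:.1f}B")
    print(f"Implied indirect/uninsured: ~${implied_indirect:.1f}B of the ~$40B total")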

These two suburban disasters demonstrated the potential for financial loss in a major urban event. If Hurricane Floyd in 1999 had hit central Miami or if a Kobe-caliber earthquake were to occur in San Francisco or Los Angeles, capital losses could easily range from $50 to $100 billion, two to four times those of Andrew or Northridge. Unfortunately, in the aftermath of those two disasters, access to the two primary funding sources for repairs–insurance and federal assistance–has been severely limited. Residential insurance has become expensive and hard to obtain, and balanced budget requirements now cap federal disaster spending. Thus, when the next disaster hits a major urban area, there will be no deep pockets to fund repairs.

Private insurance companies were shocked by losses from Andrew and Northridge, which drove nine companies into insolvency. Most companies no longer offer disaster insurance along with a traditional homeowner policy in California, Florida, and Hawaii. Coverage is only available through state-managed disaster insurance pools with high premiums, high deductibles, and limited coverage. Fewer than 20 percent of California homeowners carry disaster insurance now. Although there has been increased federal funding for disaster response and recovery, it was never intended to take the place of insurance. Between 1989 and 1998, the number of federal disaster declarations ranged from 29 in 1989 to 72 in 1996. Federal supplemental appropriations for disasters totaled $35.5 billion, with $6 billion in 1992 and $8.4 billion in 1994, each reflecting large expenditures after Andrew and Northridge. Until 1996, these appropriations were designated as emergency funds and were therefore exempt from budget limitations. The 104th Congress changed that rule, and now supplemental disaster appropriations require compensating cuts from other domestic programs. Thus, although the public believes that insurance is unnecessary because the Federal Emergency Management Agency (FEMA) will be there to pick up the pieces, the reality is that the federal disaster recovery programs will be subject to political whims and partisan deals.

Who will now pay for the repair and rebuilding of hundreds of thousands of homes, apartments, and commercial buildings in a major urban disaster is an open question. The cost of urban disasters is increasing, private insurance is not available, government agencies are caught between public pressure to do more and congressional unwillingness to pay more, and property owners have no incentives to take actions that would reduce losses and costs. To bolster disaster recovery finance, new policies that promote shared risk and responsibility are urgently needed.

The Hayward fault scenario

In 1999, the U.S. Geological Survey released research that suggests a 70 percent probability for a magnitude 6.7 or greater event in the San Francisco Bay Area during the next 30 years. In the 10 years since the Loma Prieta earthquake (magnitude 7.1, with the epicenter 60 miles south of San Francisco), the damage to freeways and bridges has still not been fully repaired. It is clear that in a magnitude 7.0 earthquake on the Hayward fault, which is closer to San Francisco, there would be widespread damage, including bridge collapses and mass disruption in the region’s transportation system resulting from hundreds of road closures. Repairs could cost $50 billion, six times more than the Loma Prieta repair program, and take longer to complete. Power outages would be widespread, telecommunications would be overwhelmed, and lifelines and critical facilities would be severely affected. Buildings of all types would suffer, but most important, housing damage could reach the levels experienced in Kobe, Japan, in 1995, where 400,000 units became uninhabitable.

According to reports by the Association of Bay Area Governments and my own research, more than 100,000 dwellings would be uninhabitable and as many as 400,000 could sustain some damage. In a region where rents and home prices are at a premium and vacancies are extremely low, damage to one third of the housing stock in the counties closest to the fault rupture (combined with the business disruption and the inability to travel around the region) would create a social and financial disaster.

The potential for massive disruption is a function of the physical conditions in the region. The building stock and the infrastructure are old. The geography of the region has concentrated urban development between the hills and the bay, forcing limited transit corridors with little redundancy and creating significant distances between the urban core immediately surrounding the bay and outlying communities.

The potential for economic disruption is a function of the changing financial climate. Traditionally, the assumption of risk in any disaster was in the following order: owner, insurance company, federal government, and lender. Today, the burden is largely on the owner. Insurance availability is limited. The government is trying to reduce emergency spending. Lenders are selling mortgages in a secondary market and insuring themselves against owners who default, not against property damage.

The evolution of federal disaster policy

The U.S. model for providing disaster relief and recovery assistance has always been a mixture of charity, private insurance, and federal programs. Before World War II, the Red Cross provided emergency relief to disaster victims throughout the country, and federal assistance was limited to specific appropriations designated for financial assistance to local governments on a case-by-case basis. After the war, federal involvement in disaster assistance grew with each succeeding disaster, but it usually was limited to providing a safety net that paralleled but did not substitute for charitable relief or private insurance.

The programs and policies that evolved over the course of a century are the product of the country’s experience and lack thereof with certain types of disasters. Although some disaster relief acts were passed in response to specific floods and hurricanes in the 1920s and 1930s, the beginning of the federal role in supplementing state and private disaster relief efforts came with the Federal Disaster Act of 1950. This legislation laid the groundwork for the federal government to provide supplementary assistance to states, usually to support infrastructure repair or replacement. The losses of individual victims were not covered by this law.

The concentration of the U.S. population in disaster-prone areas has dramatically escalated the recovery costs of the next big natural disaster.

After Hurricane Camille in 1969, Congress instituted an assistance program for individual disaster victims. Over the years, individual assistance has grown to include temporary housing, individual and family grants, low-interest home loans, long-term rental assistance, unemployment compensation, food stamps, crisis counseling, and legal services. In Disasters and Democracy, Rutherford Platt describes the past half century of federal laws and programs designed to soften the financial and social impacts of natural disasters as a transition from compassion to entitlement. He suggests that federal generosity has served to diminish the natural caution that individuals, businesses, and communities might otherwise exercise in adjusting to the risk of natural hazards in their investment and location decisions.

In 1988, the Stafford Act reorganized emergency management within FEMA but did not streamline previous legislation or change the activities of other agencies. Then, in September 1989, Hurricane Hugo, a category 4 storm, devastated large sections of eastern South Carolina. One month later, a magnitude 7.1 earthquake rocked the San Francisco Bay Area, severely damaging buildings in the cities of Watsonville and Santa Cruz and also destroying freeways, bridges, and housing in Oakland and San Francisco 60 miles away.

In 1991, wildfires burned 3,000 homes in the Oakland hills. In 1992, Hurricane Andrew slammed into south Florida, damaging 1,100 square miles, while Hurricane Iniki hit Hawaii. In 1993, the big flood on the Mississippi River affected communities in nine states, and more wildfires broke out in southern California. In 1994, the magnitude 6.8 Northridge earthquake struck in the San Fernando Valley in northwest Los Angeles. From 1995 to 1997 major floods hit California, North Carolina, Ohio, Minnesota, and North Dakota. In 1998, tornadoes left their mark across Texas and Oklahoma, and in 1999 Hurricane Floyd missed Florida but engulfed the Carolinas. Between the big disasters, there were scores of smaller ones. For federal agencies, there was unprecedented pressure to do better and to do more.

The most significant programmatic improvements came in the area of emergency response. In California, communication problems after Loma Prieta and the Oakland hills fire resulted in the development of statewide satellite communications and standardized emergency response procedures. The inequities and problems in sheltering low-income victims after Loma Prieta and Andrew led to a review of procedures by charitable and government agencies. Volunteers and staff were trained in cultural sensitivity, foreign languages, and specialized services. At FEMA, regulations were changed to allow federal agencies to take action on catastrophic disasters even before states officially requested help.

The housing recovery problems after Hugo and Loma Prieta were met with the standard range of services even though the scale of housing loss was greater than had ever been experienced. Homeowners were frustrated by the maze of programs, each of which required that an owner be rejected by one program before he or she could work through another application and inspection process. Lower-income homeowners often found that they were not creditworthy and were thus ineligible for Small Business Administration (SBA) loans. Apartment owners had equal trouble qualifying for SBA loans. Low-income renters could not find any alternative housing and often received no assistance because they could not produce a traditional lease. The post-disaster affordable housing problems were complicated by general problems of housing supply and affordability.

The Loma Prieta experience raised the federal consciousness about housing loss in disasters, and federal services were expanded. After Hurricane Andrew, disaster programs were combined with existing housing programs to help thousands of victims in south Dade County. FEMA provided more than 3,600 mobile homes and travel trailers, the U.S. Department of Agriculture developed mobile home parks for farm workers, and the Department of Housing and Urban Development (HUD) made 8,000 Section 8 rental vouchers available for victims and provided reconstruction funds through the acceleration of Community Development Block Grants and other funds.

It is not surprising then that within hours of the Northridge earthquake, in January 1994, HUD secretary Henry Cisneros was on an airplane to Los Angeles, ready to offer HUD’s resources to assist with the problems of temporary shelter and reconstruction of damaged housing. The combined resources of FEMA and HUD quickly rehoused displaced residents of all income levels. There were still no special disaster programs for the repair of multifamily housing, and apartment owners faced certain rejection by SBA loan programs, but HUD filled a significant gap with extra moneys in its funding programs. SBA made 99,000 loans to creditworthy owners of single-family homes, amounting to $2.4 billion, one-eighth of the value of all its loans for the previous 40 years.

The entry of nondisaster agencies such as HUD, along with the continued expansion of disaster agencies into new areas, has been an attempt to solve the new problems posed by large-scale urban disasters. But compassion and politics have pushed federal disaster spending to new levels. The actual magnitude of federal spending on disasters is difficult to assess, because although FEMA accounts for its outlays under the Stafford Act, other agencies combine FEMA transfers with their own programs and resources. Still, of the $6 billion in 1992 supplemental appropriations, at least $2.2 billion was spent on Andrew, and of that, $1.2 billion was spent on housing. Similarly, of the $8.4 billion appropriated in 1994, $4.7 billion was spent on housing by FEMA, HUD, and SBA, half in loans and half in direct grants.

The expansion of agency roles and the large expenditures have led conservative members of Congress from states outside the earthquake and hurricane belts to question the federal role in disaster assistance. In 1996, the 104th Congress offset supplemental appropriations for disaster assistance with cuts of prior domestic appropriations, including low-income housing and bilingual education.

Rising expectations

The Loma Prieta earthquake was a big-time news event, in part because the national media was already in the area for the World Series and because the damage to the Cypress Freeway, the Bay Bridge, and the Marina district was particularly dramatic and photogenic. With the introduction of 24-hour cable TV news, Loma Prieta moved disasters into the same category as wars and (later) murder trials.

The constant broadcasting and updating of disaster footage intensified the politicization of disasters light-years beyond the ordinary dimensions of pork barreling. In response to every unchallenged tirade against the federal government for failure to deliver more services, federal agencies tried to improve their tattered images by offering more dollars and more services than ever before. President Bush promised to rebuild Homestead Air Force Base (leveled by Hurricane Andrew) in the midst of base closures around the nation. Although Bush could not make good on the Homestead promise, he paid all of Florida’s cleanup costs instead of the customary 75 percent. He extended this largesse to Louisiana and Hawaii (after Hurricane Iniki), and when Senator Ernest Hollings (D-S.C.) complained, South Carolina was retroactively included. Only in California, where Bush was unlikely to receive additional votes, were funds allocated less generously.

In this era, gone was the notion that a federal declaration of disaster signified that state and local resources were overwhelmed. With TV cameras rolling, the federal government was the first on the scene, and in the public mind, the federal programs designed to assist victims in need were transformed into entitlement programs for aggressive individuals and local governments. Constant TV broadcasting has forced the federal government to promise more and more assistance, and the visibility of those promises has built unrealistic expectations, especially now that disaster assistance is supposed to be traded off against regular domestic programs.

The insurance problem

Just as disasters in the past decade have focused attention on government’s role, they have also focused attention on the role of private insurance. Earlier in the century, the government faced the problem of having insurers leave the market because of repeated flood losses, which accounted for 70 to 80 percent of all U.S. disasters. The federal government first attempted to address flood loss through a program of flood control measures, such as dams, seawalls, and levees, begun in the 1920s. The first federal flood insurance was instituted in 1956, followed by the National Flood Insurance Program (NFIP) in 1968. Flood insurance was designed to serve as an alternative to disaster relief, and participation was tied to local mitigation efforts, but many communities did not and still do not participate, because local governments were and are unwilling to adopt any development regulations.

Tax laws that prevent insurers from creating multiyear risk pools for hurricanes and earthquakes should be changed.

At the time of the Midwest flood in 1993, only 20 to 30 percent of insurable buildings were covered by federal flood insurance, and communities choosing not to participate in the NFIP received substantial assistance, despite regulations to the contrary. Since then, the program has been reorganized but problems remain. State and local governments are more likely to defer to real estate and development interests than to enforce good building and zoning practices. For its part, the federal government has a hard time refusing assistance to any disaster victim.

Nothing similar to federal flood insurance has ever been proposed for hurricanes or earthquakes. During this century, the infrequency and unpredictability of these events suggested that a combination of private insurance and one-time appropriations was adequate. The past decade has changed that view, but the only proposed remedy was a national multihazard insurance program, a proposal with few supporters. Most critics fear another government-sponsored insurance program would simply condone bad building practices and create a moral hazard, a situation in which an insured party has lower incentive to avoid risk because an enhanced level of protection is provided.

At the time of Loma Prieta, only about 20 percent of Californians had earthquake coverage as a rider on their home insurance. Why so few? The reasons were varied but generally included the following, drawn from survey data published by Risa Palm:

  • Homeowners did not think “it” would happen to them.
  • Homeowners perceived premiums and deductibles as too high.
  • Banks did not require earthquake insurance.
  • Insurers did not market coverage, because it appeared underpriced relative to potential losses.
  • Homeowners assumed that the federal government would come to their aid.

Despite the fact that California had required insurance companies to offer earthquake coverage with homeowner policies since 1985, purchases of the coverage did not increase until after the Loma Prieta earthquake. By 1994, more than 40 percent of California homeowners had earthquake coverage. With more homeowners purchasing the insurance, the companies saw their exposure increase without adequate underwriting. The Oakland hills fire made insurance companies recognize the scale of potential losses. The insured value of many older homes did not reflect the replacement costs, and insurers faced tremendous pressure to settle for more than the policy value.

Even though insurers believed the product was underpriced and consumers thought it was overpriced, insurance companies were under pressure because the worldwide capacity for reinsurance had not grown with demand. Reinsurance is the insurance bought by insurance companies to protect themselves against extreme losses. Lloyds of London and Swiss RE are two of a small number of such global reinsurance companies. When Lloyds faced bankruptcy in 1992 and the market was concentrated in only a few companies, rate increases made reinsurance hard to obtain. Hurricane Andrew’s insured losses were $16.3 billion, and after the Northridge earthquake the insured losses were $12.5 billion. In the Northridge case, the insured losses were more than three times the total direct premiums collected for all earthquake insurance policies in the United States from 1990 to 1993.

Raising rates was not the answer. Insurers preferred to leave the disaster market entirely. In California and Florida, private insurers quit offering earthquake or hurricane insurance as part of a homeowner policy. In each case, a semiautonomous nonprofit agency [the California Earthquake Authority (CEA) and the Joint Underwriting Authority in Florida] was created to offer limited residential disaster insurance coverage. These agencies do something a private company cannot. They provide a tax-free, state-backed reserve fund set aside over a period of years for disasters. However, these funds are relatively small ($12 billion in California), so coverage is limited, making the policies unattractive for consumers. If CEA had been in place at the time of the Northridge earthquake, only about half of the payments would have been made to half the number of claimants. This is because CEA policies raise the deductible from 10 to 15 percent, limit contents losses to $5,000, and limit coverage to the main structure only. Together these policy restrictions cut coverage dramatically.
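
The combined effect of those restrictions is easy to see in a stylized example. The house value and damage figures below are hypothetical; only the 10-to-15 percent deductible change, the $5,000 contents cap, and the main-structure-only limit come from the CEA terms described above.

    # Illustrative only: how CEA-style policy terms shrink a claim payout.
    # House value, damage amounts, and the pre-CEA terms are hypothetical;
    # the 15% deductible, $5,000 contents cap, and exclusion of outbuildings
    # are the CEA restrictions described in the text.
    def payout(structure_damage, contents_damage, insured_value,
               deductible_rate, contents_cap, cover_outbuildings,
               outbuilding_damage=0.0):
        covered = structure_damage + min(contents_damage, contents_cap)
        if cover_outbuildings:
            covered += outbuilding_damage
        deductible = deductible_rate * insured_value
        return max(covered - deductible, 0.0)

    # A hypothetical $200,000 home with moderate earthquake damage.
    pre_cea = payout(structure_damage=60_000, contents_damage=15_000,
                     insured_value=200_000, deductible_rate=0.10,
                     contents_cap=15_000, cover_outbuildings=True,
                     outbuilding_damage=10_000)
    cea = payout(structure_damage=60_000, contents_damage=15_000,
                 insured_value=200_000, deductible_rate=0.15,
                 contents_cap=5_000, cover_outbuildings=False)

    print(pre_cea)   # 65000.0 under the older, broader terms
    print(cea)       # 35000.0 under CEA-style terms

In this stylized case the payout is roughly halved, which is consistent with the estimate that CEA terms would have cut Northridge payments by about half.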

It is a mistake to think that private insurance will never return. Lenders require it on commercial building loans, and small companies are beginning to target “good risk” customers with an alternative to state policies. But in order for private insurance to grow significantly, government will need to look seriously at changing the corporate tax law. Currently, corporations are taxed on their annual assets, so that a pool of reserve funds (for a future disaster) is taxed as an asset. Insurance companies find it impossible to create multiyear risk reserves for hurricanes and earthquakes if the fund is subject to annual taxation.
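
A deliberately simplified sketch makes the point. It treats the entire reserve balance as a taxable asset each year, in line with the description above; the contribution, tax rate, and horizon are hypothetical.

    # Simplified illustration: annual taxation of a reserve treated as an asset
    # erodes a fund meant to accumulate over many years. Numbers are hypothetical.
    def reserve_after(years, annual_contribution, tax_rate):
        reserve = 0.0
        for _ in range(years):
            reserve += annual_contribution
            reserve -= tax_rate * reserve    # whole balance taxed as an asset each year
        return reserve

    print(round(reserve_after(20, 100.0, 0.00)))   # 2000: untaxed reserve after 20 years
    print(round(reserve_after(20, 100.0, 0.05)))   # 1219: same contributions, taxed annually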

At the same time, insurers need to do the research necessary to better tie the price of disaster insurance to the real risks, based on location and building conditions. Insurers could also consider a variety of insurance products. Instead of a high-premium policy that covers catastrophic loss after the owner pays a $30,000 to $50,000 deductible, insurers might offer a “minor loss” policy, capped at $50,000, to cover the typical minor damage and contents damage most homes experience after hurricanes and earthquakes. For insurance to make a comeback, it needs to be part of a series of recovery options available to home and building owners, and not the sole source of recovery funding.

Predisaster mitigation

Because of the limited availability of insurance and caps on federal spending, FEMA has advocated establishing “disaster resistant communities.” Under this strategy, seed funds would be provided to cities to promote mitigation of either earthquake or hurricane hazards by building owners before disaster strikes in order to limit federal and personal recovery costs. In theory, this is reasonable, because there clearly are cases of individual buildings or bridges that have been retrofitted and thus spared some damage. However, the success of individual projects does not necessarily translate into regional or national programs. Despite the fact that mitigation can reduce losses, it is clear from past experience that the real estate market does not reward a building owner for such expenditures in higher rents or higher property values.

Similarly, FEMA has a buyout program in which homes and property in flood-prone communities are purchased and converted to parks or wetlands. Although this program has been a success in a few small towns in Illinois, Wisconsin, and Oklahoma, most owners simply refuse to move or even elevate their houses.

For earthquakes, only brick buildings have been singled out for mitigation. In 1981, Los Angeles mandated that unreinforced masonry (URM) buildings be seismically retrofitted because of the overwhelming collapse hazard. In 1985, California passed legislation requiring all local jurisdictions to inventory and mitigate the URM problem. But cities had a hard time requiring property owners to undertake expensive repairs if they were unable to increase rents. No mitigation regulations have ever been proposed to retroactively require hurricane clips when a house is re-roofed, even though it is clear that most Florida houses were built without them. If mitigation by regulation is difficult, mitigation without incentives is almost impossible.

Mitigation advocates cite damage avoided as a social benefit, but there has been little exploration of who should bear the cost. It may be relatively easy to justify retrofitting a highway system that is publicly owned and maintained. It will be more difficult to justify any regulation that orders hazard mitigation in privately owned structures, when the cost to be incurred by individuals is intended for a public good. Is it reasonable to ask that every structure in California be brought up to some newer standard? If not all, then which ones, in what order of priority, and at what cost?

Safety assessments of buildings should be incorporated into real estate transactions.

Mitigation as a concept is not always realistic in terms of engineering solutions. The development of seismic standards for existing buildings is relatively new and much debated. The technical standards are currently only guidelines and are far from being adopted as part of a universally accepted building code. The structural engineering community is promoting the concept of “performance-based design,” in which a building owner designates how his/her building should function after an earthquake (such as immediate occupancy, repairable damage, or collapse prevention). Although engineers have met such performance standards in specialized new buildings, the technical capacity to actually retrofit an existing building to such standards is still being developed and tested.

Before any mitigation program is adopted as policy, a good deal more technical and economic research is needed. Standards and enforcement procedures for seismic improvements to existing buildings, infrastructure systems, and lifelines need further development, including definitions of vulnerability based on soil conditions, fault proximity, construction type, and maintenance quality. The quantification of risk needs to be improved by reassessing past losses, identifying statistically relevant risk types, and integrating the mapping of soil and building conditions. Serious economic evaluations of mitigation incentives and programs need to be undertaken as well as comparisons between the costs of mitigation and the costs of public and private recovery programs such as insurance and federal assistance.
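
One standard framing for that kind of comparison is expected avoided loss versus up-front mitigation cost. The sketch below is not from the article; the retrofit cost, damage figures, and discount rate are hypothetical, and the 70-percent-over-30-years probability is borrowed from the USGS Bay Area estimate cited earlier and spread evenly across years as a simplification.

    # Minimal sketch of a mitigation cost-benefit comparison. All inputs are
    # hypothetical except the 70%/30-year event probability, which is taken from
    # the USGS estimate cited above and spread evenly across years (a simplification).
    def present_value(annual_amount, discount_rate, years):
        """Present value of a constant annual amount over a planning horizon."""
        return sum(annual_amount / (1 + discount_rate) ** t
                   for t in range(1, years + 1))

    annual_event_probability = 0.70 / 30        # ~2.3% per year
    loss_without_retrofit = 150_000             # hypothetical repair cost if the event occurs
    loss_with_retrofit = 50_000                 # hypothetical reduced loss after retrofit
    retrofit_cost = 25_000                      # hypothetical up-front mitigation cost

    avoided_annual_loss = annual_event_probability * (loss_without_retrofit - loss_with_retrofit)
    benefit = present_value(avoided_annual_loss, discount_rate=0.05, years=30)

    print(f"Expected avoided losses (present value): ${benefit:,.0f}")
    print(f"Retrofit cost:                           ${retrofit_cost:,.0f}")
    print(f"Mitigation pays off: {benefit > retrofit_cost}")

Whether such a comparison actually favors mitigation depends, as the paragraph above argues, on far better data about vulnerability, retrofit costs, and the losses a retrofit would really avoid.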

Forging a new disaster recovery policy

At first glance, the prognosis for any disaster recovery policy appears bleak, with little that makes political or economic sense. The costs incurred in the past decade as a result of disasters shocked the insurance industry and the federal government, and the first reaction of both was to limit future payouts. FEMA director James Lee Witt has tried to offer predisaster mitigation as a policy mechanism for reducing future costs, but there has been no systematic reevaluation of the problem of financing postdisaster repair and rebuilding costs after large urban disasters.

A new recovery policy incorporating realistic costs for urban disasters will require a comprehensive revision of the government’s role, new insurance instruments, and the involvement of the lending community. For government, humanitarian disaster relief services ought to be separated from financing for repairs. After Northridge, SBA loans went to single-family homeowners, whereas HUD funds went largely to apartment owners and low-income homeowners. In both cases, existing nondisaster programs were used to finance and expedite recovery. This is a good starting point for future government involvement in recovery finance. Another opportunity would be to build on funding programs for nonprofit housing corporations and social service agencies. Tax credits and other forms of construction and management funding could be increased to allow these locally based agencies to take control of community recovery.

However, if government also wants to promote predisaster mitigation, it will take a combination of regulation and incentives. Tax credits are an obvious incentive, but they tend to go to those who would do the mitigation anyway. To reach a large number of homeowners and some apartment owners, it is important to devise a policy that taps into the real estate marketplace. What most of these properties have in common is a loan, but the lenders sell loans to other financial institutions, spreading their risk and keeping capital in their business. Consequently, in order to affect real estate lending, one must go not to the banks and savings and loan companies but to the secondary market–the Federal Housing Administration (FHA), Fannie Mae, and Freddie Mac (officially, the Federal National Mortgage Association and the Federal Home Loan Mortgage Corporation). These are the quasi-governmental agencies that underwrite and purchase residential loans.

A new role for mortgage underwriters might be to require building safety inspections in order to qualify for federally backed mortgages. The inspections could be based on an engineering rating system in which a rating change could lower the mortgage rate and perhaps even influence insurance premiums. If the secondary mortgage market requires a safety inspection as part of a loan transaction, all lenders would be using the same information and would be able to price their loans accordingly. The same rating system could provide the basis for insurance rates for a variety of insurance products.
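
How an inspection rating might actually feed into loan and premium pricing is left open here; the sketch below is one hypothetical schedule, with invented rating tiers, rate adjustments, and premium factors.

    # Hypothetical sketch of the rating-to-pricing idea described above. The
    # tiers, rate adjustments, and premium multipliers are invented; the article
    # does not specify any particular schedule.
    RATE_ADJUSTMENTS = {     # change to the mortgage rate, in percentage points
        "A": -0.25,          # retrofitted or meets current standards
        "B": 0.00,           # typical existing construction
        "C": +0.25,          # known vulnerabilities, no mitigation
    }
    PREMIUM_FACTORS = {"A": 0.8, "B": 1.0, "C": 1.3}   # multiplier on a base insurance premium

    def price_loan(base_mortgage_rate, base_premium, safety_rating):
        """Return (mortgage rate, insurance premium) for a given inspection rating."""
        rate = base_mortgage_rate + RATE_ADJUSTMENTS[safety_rating]
        premium = base_premium * PREMIUM_FACTORS[safety_rating]
        return rate, premium

    print(price_loan(base_mortgage_rate=7.00, base_premium=1200, safety_rating="A"))
    # (6.75, 960.0): a retrofitted house gets a lower rate and a lower premium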

Instituting federal requirements tied to mortgage origination offers many financial advantages. For consumers, national standards would force all loan transactions to recognize disaster risk as part of the loan. For lenders, common rules would keep a level playing field. For private insurance companies as well as state-managed insurance pools, tying insurance to mortgage origination would guarantee sufficient participation and long-term financial capacity. There are also enormous political obstacles to both ambitious and modest interventions in the complex world of real estate mortgage finance, insurance, and tax codes. However, real solutions to funding postdisaster building repairs can happen only in the arenas where capital already exists.

Ultimately, any disaster recovery policy will require political commitment, but the economic future of California, Florida, and other hazard-prone regions depends on an anticipatory approach to policy and planning. As FEMA promotes mitigation as a cost cutter, the agency should promote policies to help insurance companies return to the market and promote programs that incorporate building safety assessments into real estate transactions. In the future, we need pre- and postdisaster policies that are safe, fair, and cost-effective.

Let Them Eat Pixels

President Clinton says, “Our big goal should be to make connection to the Internet as common as connection to telephones is today . . . We want to join with the private sector to bring more computers and Internet access into the homes of low-income people.” He would provide $2 billion in tax incentives to encourage corporate donations to community technology centers, $150 million to train all new teachers to use computers effectively, $50 million for a public/private partnership to expand home access to computers and the Internet for low-income people, and much more. The Department of Commerce (Why that agency?) has even created a special Web site (digitaldivide.gov) devoted completely to the topic of how to ensure that low-income people, rural dwellers, and minorities, particularly students, have increased access to computers and the Internet. Considering the school dropout rate, the crumbling condition of school buildings, the lack of teacher training in critical subjects such as science and math, and the shortage of textbooks, not to mention a host of nonschool problems that interfere with learning, why in the world are we getting exercised about the need for computers and Internet access? Is that anywhere near the top of the list of needs for the disadvantaged? Is it even on the first page?

Yes, it’s true that the poor and some other groups do have less access to computers and the Net, but access is spreading so rapidly that this problem might disappear before we can create a presidential commission to study it. The percentage of schools with an Internet connection has grown from 35 percent in 1994 to 95 percent in 1999. Almost two out of three public school instructional rooms had Internet access in 1999, up from 3 percent in 1994. Home access is increasing also, growing at a healthy, albeit somewhat slower, pace. In one aspect of computer use, black students are actually ahead of the curve. The 1998 Reading Report Card from the National Assessment of Educational Progress (NAEP) included survey data that indicated that black students are more likely than white students to use a computer in doing their homework.

On the other hand, researchers have found that compared to white and affluent students, poor and minority students are more likely to use computers for drill and practice exercises and less likely to use them for higher-order intellectual tasks. Similarly, poor schools are likely to have slower Internet connections than their more affluent counterparts. And since teachers in low-income schools are likely to be less well trained, it’s likely that they make less effective use of computers in the classroom.

But is this a national crisis? These students who have less access to computers and the Net also have less access to everything else. Why among all their deprivations should we focus on their lack of computers? Is this what separates the underclass from the upwardly mobile? Hardly. In fact, when I join other Washington policy wonks and opinion shapers on the sidelines for our kids’ soccer games, the conversation often turns to how to keep kids off their computers and into their books. We understand very well that books are still the best technology for teaching kids how to acquire information, understand the human psyche, and argue a point of view. We also know that when kids get online, they spend most of their time exchanging instant messages, hanging out in chat rooms, playing games, or downloading music files. Never mind the other things that we don’t want to even think about. Yes, they also do some research and word processing, but we know that the most important part of their education does not occur in front of a screen.

If I had to guess how well a student was doing in school on the basis of a single piece of home information, I’d be looking at bookcases, not computers. Of course, we don’t know nearly as much about books in the home, but the 1998 NAEP study found that about a quarter of the homes of U.S. black and Hispanic fourth graders have fewer than 25 books. And that’s not even close to what I think would be necessary to stimulate a child’s imagination. Other findings from the NAEP study suggest that those concerned with low student achievement would do well to look at factors other than computers. For example, only 26 percent of 12th graders study more than an hour a day, whereas 37 percent watch 3 or more hours of TV per day.

The Clinton administration wants every new teacher to be trained in how to use computers effectively in the classroom. That would be fine if we knew what to tell them. As William L. Rukeyser, the coordinator of Learning in the Real World, a Woodland, California-based organization that awards grants to researchers studying the costs and benefits of classroom computers, told the National Journal, “Most people are of the opinion that there are areas where it’s going to be worthwhile and cost effective [to use computers in schools], but we don’t have the evidence yet to determine what those areas are.” This spring, the Department of Education plans to launch a comprehensive study on the effectiveness of classroom technology. Wouldn’t it make sense to learn a little more about what works before inflicting the technology on students and teachers?

We know quite a bit about what works in chalk-and-blackboard science and math education, and we know that only 11 percent of middle school math teachers and 21 percent of middle school science teachers majored in math, science, or math/science education. Isn’t that a more pressing problem? Likewise, there are undoubtedly numerous teachers who could be trained to be more effective writing teachers and librarians who could do a better job of giving students the skills to locate, evaluate, and use information. These could be the beginning of a long list of skills that could be imparted to teachers with a $150-million teacher-training budget. We are constantly hearing how important computer skills are becoming in the workplace. Sure, but the computers that today’s elementary school students will eventually use at work will be completely different from what they might use now in a classroom. And ask any employer if they would prefer to have to teach new hires computer skills or basic writing and math.

There is a divide in this country, but it’s not digital. It’s in basic academic skills and in high-quality schools. The educated upper middle class is not choosing schools on the basis of how many computers they have or how fast their Internet connection is. They are looking at the qualifications of the teachers and the rigor of the curriculum. Operating a computer is simple compared to designing a scientific experiment, solving a challenging math problem, or writing a forceful and coherent paragraph. For young people who have these skills, the computer is a useful tool. For those who don’t, the computer is a diversion, an entertainment device, perhaps an alternative to TV. It’s better than TV, but it’s not nearly good enough. At this stage in the development of educational technology, the computer and Net are a condiment or a dessert on the educational menu. If we want to help disadvantaged students, we should be providing more nourishing help: the bread of well-trained teachers and a rigorous curriculum.

The Illusion of Integrated Pest Management

The U.S. Department of Agriculture (USDA) and the Environmental Protection Agency (EPA) have struggled to come up with a workable definition of integrated pest management (IPM) and a suitable way to assess its level of adoption. This is not surprising, given the apparent confusion among policymakers as to what IPM is all about. The most recent attempt came in October 1998, when USDA announced that a given farm should have in place a management strategy for “prevention, avoidance, monitoring, and suppression” (PAMS) of pests. To qualify as IPM under these guidelines, a farmer must use tactics in at least three PAMS components. USDA defines “prevention” as the practice of keeping a pest population from ever infesting a crop. “Avoidance” may be practiced when pest populations exist in a field, but their impact on the crop can be avoided by some cultural practice. “Monitoring” refers to regular scouting of the crop to determine the need for suppressive actions. “Suppression” is used where prevention and avoidance have failed and will typically mean application of a chemical pesticide.
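
Under that guideline, qualification reduces to a simple count, which can be sketched in a few lines. The example tactics below are hypothetical; the rule that a farm must use tactics in at least three of the four PAMS components is the one stated above.

    # The October 1998 USDA rule described above: a farm qualifies as practicing
    # IPM if it uses tactics in at least three of the four PAMS components.
    # The example tactics are hypothetical placeholders.
    PAMS_COMPONENTS = ("prevention", "avoidance", "monitoring", "suppression")

    def qualifies_as_ipm(tactics_by_component):
        """tactics_by_component maps each PAMS component to a list of tactics used."""
        components_used = sum(
            1 for c in PAMS_COMPONENTS if tactics_by_component.get(c)
        )
        return components_used >= 3

    farm = {
        "prevention": ["certified pest-free seed"],
        "avoidance":  ["crop rotation"],
        "monitoring": ["weekly field scouting"],
        "suppression": [],           # no suppressive treatments this season
    }
    print(qualifies_as_ipm(farm))    # True: tactics in three of the four components

Nothing in such a count, of course, captures whether the tactics are compatible with one another, which is precisely the problem taken up next.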

The major problem with the PAMS approach is that it does not recognize the concept of integration or compatibility among pest management tactics as envisioned by the founders of IPM. Simply mixing different management tactics does not constitute IPM. Mixing the tactics arbitrarily may actually aggravate pest problems or produce other unintended effects. For example, studies have documented antagonistic relationships between genetically resistant crop cultivars and biological control agents of insect pests. It is naïve to assume that nonchemical or reduced-risk alternatives can be mixed and deployed in the same way in which pesticide “cocktails” have commonly been used in the past. Combining tactics to achieve the best long-term results requires considerable ecological finesse. Many potentially effective alternatives will provide only disappointment if they are used in the same way as conventional pesticides and are applied without good knowledge of how they affect other control agents.

A federal policy that promotes IPM without a proper understanding of IPM is doomed to failure. But just understanding IPM is not enough: Federal policy also must address how farmer adoption will be measured and provide incentives to encourage such adoption. This is not likely to happen in the foreseeable future. In view of this, we believe that the time has come for a major policy change at the federal level: Dispense with what has become an “IPM illusion” and shift the focus to a more definable goal, such as pesticide reduction (including risk reduction). Because policymakers in Washington seem to have a fuzzy view of the origin of IPM, we will begin with a brief historical account.

Historical perspective

Shortly after World War II, when synthetic organic insecticides became available, applied entomologists in California developed the concept of “supervised insect control.” Entomologists in cotton-belt states such as Arkansas were advocating a similar approach. Under this scheme, insect control was “supervised” by qualified entomologists, and insecticide applications were based on conclusions reached from periodic monitoring of pest and natural-enemy populations. This was viewed as an alternative to calendar-based insecticide programs. Supervised control was based on a sound knowledge of the ecology and analysis of projected trends in pest and natural-enemy populations.

Supervised control formed much of the conceptual basis for the “integrated control” that California entomologists articulated in the 1950s. Integrated control sought to identify the best mix of chemical and biological controls for a given insect pest. Chemical insecticides were to be used in a manner least disruptive to biological control. The term “integrated” was thus synonymous with “compatible.” Chemical controls were to be applied only after regular monitoring indicated that a pest population had reached a level (the economic threshold) that required treatment to prevent the population from reaching a level (the economic injury level) at which economic losses would exceed the cost of the artificial control measures.
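
The treatment rule embedded in integrated control can be stated compactly. The sketch below follows the threshold logic described above; the numerical thresholds, the monitored densities, and the simple discount for natural-enemy suppression are hypothetical, added only in the spirit of supervised control’s attention to natural-enemy populations.

    # The integrated-control treatment rule described above: treat only when
    # monitoring shows the pest at or above the economic threshold, which is set
    # below the economic injury level so that treatment prevents economic loss.
    # All numbers, and the natural-enemy discount, are hypothetical.
    ECONOMIC_INJURY_LEVEL = 12.0   # pests per plant at which losses exceed control costs
    ECONOMIC_THRESHOLD = 8.0       # action level, set below the injury level

    def should_treat(monitored_density, natural_enemy_density, enemy_suppression=0.5):
        """Decide whether a chemical application is warranted at this scouting visit."""
        # Discount the pest count by the suppression expected from natural enemies.
        projected = monitored_density - enemy_suppression * natural_enemy_density
        return projected >= ECONOMIC_THRESHOLD

    print(should_treat(monitored_density=9.0, natural_enemy_density=4.0))    # False: enemies likely to hold the pest in check
    print(should_treat(monitored_density=11.0, natural_enemy_density=1.0))   # True: projected to reach the threshold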

IPM extended the concept of integrated control to all classes of pests and was expanded to include tactics other than just chemical and biological controls. Artificial controls such as pesticides were to be applied as in integrated control, but these now had to be compatible with control tactics for all classes of pests. Other tactics, such as host-plant resistance and cultural manipulations, became part of the IPM arsenal. IPM added the multidisciplinary element, involving entomologists, plant pathologists, nematologists, and weed scientists. In the United States, IPM was formulated into national policy in February 1972 when President Nixon directed federal agencies to take steps to advance the concept and application of IPM in all relevant sectors. In 1979, President Carter established an interagency IPM Coordinating Committee to ensure development and implementation of IPM practices.

Illusion of success

There is now a growing awareness that IPM as envisioned by its initial proponents is not being practiced to any significant extent in U.S. agriculture. According to estimates by the Consumers Union, true IPM is probably being practiced on only 4 to 8 percent of the U.S. crop acreage. Globally, the percentage is probably even lower. This is true almost three decades after President Nixon’s directive, which, among other things, stimulated major IPM research and extension programs in the land grant colleges of agriculture (LGCA).

Much of what is being billed as modern IPM is really nothing more than a reinvention of the supervised control of 50 years ago. Today, pest consultants typically monitor crops and determine when to treat with a pesticide–often as the primary or only tactic. This approach is sometimes derisively referred to as “integrated pesticide management.” In many situations, pest consultants do not even practice supervised control because they may not monitor at all, or if they do, there is no effort to monitor natural-enemy populations. Pesticides are the “magic bullet” for the risk-averse farmer and pest consultant; these materials are easy to apply, provide a quick fix, and require little or no ecological understanding of the target system. Many pest consultants are also employed by the pesticide industry and thus have a built-in conflict of interest. There is not much economic incentive to try (much less integrate) alternative methods, even when these are available. Also, alternative methods often require relatively more effort to implement as well as greater ecological understanding of the target system.

Finally, there is virtually no integration of tactics. Integration is critical both within a class of pests and among classes of pests. For example, an insecticide should not destroy natural enemies of insect pests, nor should a fungicide destroy microbial antagonists of plant pathogens (vertical integration); the same insecticide should not destroy insects that suppress weeds, nor should the same fungicide destroy predatory mites that help control other mites and smaller insect pests (horizontal integration). This was the original meaning of the term “integration.” Unfortunately, in U.S. agriculture today there is usually no “I” in IPM.

This does not mean that the IPM movement was a complete failure. Considerable benefit was passed on to farmers in the form of models for predicting pest occurrence, plans to monitor pests, and treatment guidelines for use in supervised control. As a result, there has been some pesticide reduction in certain crops. However, little benefit was passed on in terms of truly integrated programs as envisioned by the founders of IPM. In some ways, the chief beneficiaries of the IPM movement have been research scientists, extension agents, and government bureaucrats instead of the farmers. The IPM movement was successful in that it generated the fuel (that is, funding) needed to operate the research engines of mission-oriented scientists, not to mention the research engines of other scientists engaged in curiosity-driven research that could be justified by the national need for IPM. But complete IPM programs did not result, largely because not enough research effort was devoted to vertical and horizontal integration of tactics that could be implemented by the farmer.

Problems with implementation

The implementation of IPM is hampered in several ways. The monitoring schemes developed for pest and natural enemy populations may be too sophisticated and expensive to be a practical tool for the pest consultant. In other cases, they may be too simplistic to be used in making rational decisions on how to harmonize tactics for best immediate and long-term results. Predicting pest and natural enemy population trends is difficult because of “chaos” in agro-ecosystems (that is, the extreme sensitivity to initial conditions). Economic thresholds and injury levels have proven to be primarily of academic interest; in practice, they can become operationally intractable. Because of the complexity of crop systems and the site-specific nature of many pest problems, relying on predetermined static treatment thresholds is problematic. Dynamic thresholds are needed. But developing a dynamic threshold for a single pest can take years of field research, which is a disincentive for LGCA scientists. What’s more, the concept of economic threshold may not apply to many pest problems. For example, there is no threshold in the case of a single weed that can produce enough seed to infest the entire field or for a pathogen population that cannot be monitored in any practical way.

Mainstream post-World War II entomologists blazed the trails with the new synthetic organic insecticides but became preoccupied with treating the symptom rather than developing an ecological understanding of pest-antagonist relationships. Other pest disciplines seem to have followed the same trail. Plant pathologists and weed scientists have been slow to develop a good ecological understanding of naturally occurring antagonistic agents of pathogens and weeds. Soil fumigation–an ecological blunt instrument–is still used for “management” of nematodes and soil-borne pathogens in certain crops. In short, there is minimal vertical integration of tactics; horizontal integration is in its infancy. The pest disciplines have failed to integrate on behalf of the farmer. Or to quote Philip H. Abelson in his Science editorial of August 8, 1997: “. . . society has problems; universities have departments.”

Pest consultants typically have a B.S. or M.S. degree in an agricultural discipline. Some are trained in pest management. At best, most are able to diagnose pest problems and determine when to treat with a pesticide. However, their training is simply inadequate for dealing with the ecological complexity and challenge of IPM. The research, extension, and regulatory agencies have been slow (or unable) to recognize this problem, and the LGCAs have failed to assume proper responsibility for a job that should be part of their mission.

Achievable goals

A fundamental shift in federal policy relative to IPM is in order. We suggest the following for the immediate future.

  • Set aside the Clinton administration’s IPM Initiative and concentrate instead on the percentage of crop acreage that is under supervised control. Once 75 percent of U.S. crop acreage is under supervised control, policymakers can set a goal for the next step: the vertical integration of tactics (first-level IPM). Although this can be done by executive order, it would be better for Congress to act. Executive orders are usually unfunded mandates and may expire at the end of a president’s term. IPM needs financial support and a long-term commitment.
  • Shift the debate to pesticide reduction, with particular emphasis on compounds that pose significant hazards to humans and the environment and/or destroy naturally occurring antagonists of pests. Pesticide reduction can be quantified and analyzed statistically, whereas IPM cannot. We also have to pay attention to “phantom reductions.” For example, switching to a different pesticide may make it possible to reduce the amount of pesticide applied, but certain compounds (such as synthetic pyrethroids) may pose greater environmental risk even though less is applied. Reducing the risk associated with pesticide use (risk reduction) must therefore also be considered.
  • Increase funding for research on naturally occurring antagonists of pests in agro-ecosystems, including ways to exploit crop plants that favor the antagonists. Increased funding is especially critical for antagonists of plant pathogens, weeds, and nematodes. Priority should be given to field-oriented research projects that seek to establish the relative importance of the suite of antagonists that exist in a given crop system and integrate the more important antagonists with the standard artificial control tactics for that system.
  • Reassess the academic strategy in the LGCAs for training the next generation of pest consultants. A new model is required to address the simultaneous need for field scouts to assess pest populations and determine when to treat and for broadly trained, interdisciplinary specialists to deal with the challenge of IPM and the complexity of crop systems. In the latter case, a doctoral degree in plant health is in order. This would be a professional nonresearch degree comparable to the D.V.M., D.D.S., and M.D. degrees. The University of Florida recently announced such a program.

It is tempting to simply admit defeat and drop the IPM acronym from production agriculture. However, this would probably add to the confusion that already exists. Although true IPM has yet to be substantially realized in U.S. agriculture, it does remain a worthy goal for the early part of the 21st century. This goal is attainable in principle but will most likely require innovative partnerships among scientists, extension agents, pest consultants, progressive farmers, farm workers, and consumers to see it to fruition. The focus should be at the local or grassroots level. The federal government should be a facilitator by providing as much incentive as possible, while not getting in the way of innovation.

Forum – Spring 2000

Airline deregulation

I agree with John R. Meyer and Thomas R. Menzies that the benefits of deregulation have been substantial (“Airline Deregulation: Time to Complete the Job,” Issues, Winter 2000). I also agree that some government policies need to be changed to promote competition.

I strongly opposed the 1985 decision to allow the carriers holding slots at the four high-density airports to sell those slots. Our experience has been that incumbent airlines have often refused to sell slots at reasonable prices to potential competitors. I am pleased that both the House and Senate Federal Aviation Administration reauthorization bills, now pending in conference committee, seek to eliminate slots at three of the four slot-controlled airports. According to a 1995 Department of Transportation (DOT) study, eliminating these slots will produce a net benefit to consumers of over $700 million annually from fare reductions and improved service.

Certain airport practices, such as the long-term leasing of gates, also limit opportunities for new entrants. This problem can correct itself over time if airports make more gates available to new entrants as long-term leases expire.

However, I disagree with the authors’ suggestion that we should adopt methods such as congestion-based landing fees to improve airport capacity. Higher fees would discourage smaller planes from taking off or landing during peak hours. In reality, there are few small private planes using large airports at peak hours, so this solution would not significantly reduce delays.

Nor would it be sound policy to discourage small commuter aircraft from operating at peak hours. This could disrupt the hub-and-spoke system, which has benefited many passengers under deregulation. Higher fees could prompt carriers to terminate service to communities served by smaller aircraft.

It is important for the government to protect low-fare carriers from unfair competition because they have been major providers of deregulation benefits to the traveling public. A 1996 DOT study estimated that low-fare carriers have saved passengers $6.3 billion annually.

Unlike the authors, I believe that it is important for DOT to establish guidelines on predatory pricing. DOT investigations and congressional hearings have uncovered a number of cases of predatory airline practices. These include incumbent airlines reducing fares and increasing capacity to drive a new competitor out of the market, then recouping their lost revenue by later raising fares.

The Justice Department’s investigation of alleged anticompetitive practices by American Airlines sends a strong message that major established carriers cannot resort to anticompetitive tactics. However, as I explained in a speech in the House of Representatives (Congressional Record, October 2, 1998, page H9340), DOT has broader authority to proceed against such practices and should exercise this authority, taking care not to inhibit legitimate fare reductions that benefit consumers.

The major airlines’ response has been to scream “re-regulation” at every opportunity when solutions are proposed. It is time that government and industry work constructively to find real solutions that expand the benefits of deregulation and “complete the job.”

REP. JAMES L. OBERSTAR

Democrat of Minnesota

The author is ranking Democratic member of the House Transportation and Infrastructure Committee.


The article by John R. Meyer and Thomas R. Menzies reflects a recent Transportation Research Board report, Entry and Competition in the U.S. Airline Industry. The principal conclusion of the study is that airline deregulation has been largely beneficial and most markets are competitive. I strongly agree. Meyer and Menzies also correctly observe that airline deregulation has been accompanied by a paradox: However successful it can be shown to have been in the aggregate, there are enough people sufficiently exercised about one aspect or another of the deregulated industry to have created a growing cottage industry of critics and would-be tinkerers bent on “improving” it.

These tinkerers say they do not want reregulation. But I agree with Meyer and Menzies’ assertion that this tinkering is very unlikely to improve things. Only rarely do we get to choose between imperfect markets and perfect regulation or vice versa. That’s easy. The overwhelmingly common choice is between imperfect markets and flawed regulation administered by all-too-human regulators. And the overwhelming evidence is that when a policy has done as much good as airline deregulation, tweaking it to make it perfect is likely to create other distortions that will require other regulatory responses, and so on back toward where we came from.

I think it might further this important public policy debate to better understand the sources of two of the most important complaints about the deregulated airline system: its deteriorated levels of amenity and its highly segmented pricing system.

The Civil Aeronautics Board (CAB) did not ever have the statutory power to regulate conditions of service for mainline air transportation. But it did engage in practices that encouraged service competition. It kept fares uniform and high at levels that allowed more efficient carriers enough margins to offer extra amenities to attract passengers. This forced less-efficient carriers to match in order to keep customers. This encouraged ever-better passenger service at prices more and more divorced from those necessary to support bare-bones service. This effect became particularly sharp when CAB forced jet operators to charge more than piston operators even though the jets had lower unit costs than the piston-engined aircraft they replaced. The result was a race to provide more and more space and fancier and fancier meals to passengers. This was the famous pre-deregulation era of piano bars and mai-tais.

In this mode, air travel wasn’t accessible to the masses, but it defined a level of amenity that was beyond what most consumers would choose to pay for. When deregulation allowed lower prices and new lower-amenity service alternatives, traffic burgeoned. As airlines struggled to reduce costs to match the fares that competition forced them to charge, they stripped out many of the service amenities passengers had become used to. Airline managements used to high service levels tried to offer higher levels of service at slightly higher prices, but consumers rejected virtually all of these efforts.

Coupled with this lower level of service were the highly differentiated fares that replaced the rigidly uniform fare structure of the regulated era. Competition meant that most passengers paid less than they had during the regulated era, but segmentation meant that some paid much more. These segmented fares were necessary to provide the network reach and frequency desired by business travelers. The expanded system generated more choices for business travelers and more bargain seats for leisure travelers.

Those paying high prices were accommodated in the same cabin with those flying on bargain fares, and they received the lower standard of service that most passengers were paying for. Airlines tried to accommodate the most lucrative of their business customers with the possibility of upgrades to first class and a higher level of amenity, but they could not do so for all payers of high fares.

So we have seen a potent political brew: 1) Higher-paying passengers sit next to bargain customers and receive, and resent, a lower level of service than they enjoyed before deregulation. 2) Bargain customers remember the “good old days” (and forget that they couldn’t afford to fly very often) and are unhappy with the standard of service they are receiving but are unwilling to pay more. 3) Business passengers and residents of markets supported by the higher fares (frequent service to smaller cities) appreciate the frequency but resent the fare. In my view, the contrast between new service and old, and between segmented fares and the old uniform fare structure, has been a potent factor in feeding resentment among customers who take the network for granted but hate both the selectively high fares that support it and the service that the average fare will support.

Menzies and Meyer are right: It would certainly help improve things to “complete the job” by ending artificial constraints on infrastructure use and adopting economically sound ways of rationing access where infrastructure is scarce. But I am afraid that the clamor to do something about airlines won’t die down until passengers recognize the price/quality distortions created by the previous regulatory regime and somehow erase the memory of a “fair” but artificially uniform pricing system that priced many of them out of the market, suppressed customer choice, and inhibited network growth.

MICHAEL E. LEVINE

Harvard Law School

Cambridge, Massachusetts


I agree with John R. Meyer and Thomas R. Menzies that it is time to complete the job started more than 20 years ago by the Airline Deregulation Act of 1978. But it is one thing to advocate that remaining government interventions such as slot controls, perimeter rules, and de facto prohibitions on market pricing be done away with. It is quite another thing to figure out ways of bringing about these results. I suggest that we have some evidence from other countries that changing the underlying institutions can be an effective way of bringing about further marketization.

About 16 countries have corporatized their air traffic control systems during the past 15 years, shifting from tax support to direct user charges. Although most of these charging systems are a long way from the kind of marginal-cost or Ramsey-pricing systems that economists would like to see, they at least link service delivery to payment and create a meaningful customer-provider relationship that was generally lacking before the change. And these reforms have already led to increased productivity, faster modernization, and reduced air service delays.

In addition, more than 100 large and medium-sized airports have been privatized in other countries during the same 15-year period. In most cases, privatized airports retain control of their gates, allocating them on a real-time basis to individual airlines. This is in marked contrast to practice at the typical U.S. airport, where most of the gates are tied up under long-term, exclusive-use lease arrangements with individual airlines. Hence, the privatized airports offer greater access to new airlines than do their typical U.S. counterparts.

Currently, the prospects for corporatizing the U.S. air traffic control system appear fairly good. The record level of airline delays in 1999 has led a growing number of airline chief executive officers to call for corporatizing or privatizing the system, and draft bills are in the works in Congress. A modest Airport Privatization Pilot Program permits up to five U.S. airports to be sold or leased by granting waivers from federal regulations that would otherwise make such privatization impossible. Four of the five places in that program have now been filled–but all by small airports, not by any of the larger airports where access problems exist. An expanded program is clearly needed.

Incentives matter. That is why we want to see greater reliance on pricing in transportation. But institutional structures help to shape incentives. Airports and air traffic control will have stronger incentives to meet their customers’ needs (and to use pricing to do so) when they are set up as businesses, rather than as tax-funded government departments.

ROBERT W. POOLE, JR.

Director of Transportation Studies

Reason Public Policy Institute

Los Angeles, California


Plutonium politics

“Confronting the Paradox in Plutonium Policies” (Issues, Winter 2000) by Luther J. Carter and Thomas H. Pigford contains a number of interesting ideas and proposals, some of which may merit consideration. However, others are quite unrealistic. For example, the successful establishment of international and regional storage facilities for spent nuclear fuel could offer fuel cycle and nonproliferation advantages in some cases. However, the suggestion by the authors that the French, British, and presumably others should immediately stop all of their reprocessing activities is likely to be strongly resisted and, if pursued aggressively by the United States, could prove very damaging to our nonproliferation interests. In addition, it is neither fair nor accurate to say, as the authors have, that the international nonproliferation regime will be incomplete unless and until all spent fuel is aggregated in a limited number of international or regional storage facilities or repositories. These suggestions and assertions, in my view, are too sweeping and grandiose, and they overlook the complexity of the international nuclear situation as it exists today.

I certainly agree with the authors that it is important for the international community to come to better grips with the sizeable stocks of excess separated plutonium that have been built up in the civil fuel cycle and to try to establish a better balance between the supply of and demand for this material. We also face major challenges in how best to dispose of the vast stocks of excess weapon materials, including highly enriched uranium and plutonium, now coming out of the Russian and U.S. nuclear arsenals. It also would improve the prospects for collaborative international action on plutonium management if there were a far better understanding and agreement between the United States and the Western Europeans, Russians, and Japanese as to the role plutonium should play in the nuclear fuel cycle over the long term.

However, it must be recognized that there remain some sharply different views between the United States and other countries over how this balance in plutonium supply and demand can best be achieved and over the role plutonium can best play in the future of nuclear power. Many individuals in France, the United Kingdom, Japan, and even in the United States (including many people who are as firmly committed to nonproliferation objectives as are the authors) strongly believe that over the long term, plutonium can be an important energy asset and that it is better either now or later to consume this material in reactors than to indefinitely store it in some form, let alone treat it as waste.

I believe that it is in the best interest of the United States to continue to approach these issues in a nondoctrinal manner that recognizes that we may need to cope with a variety of national situations and to work cooperatively with nations who may differ with us on the management of the nuclear fuel cycle. In some cases, the best solution may be for the United States to continue to encourage the application of extremely rigorous and effective safeguards and physical security measures to the recycling activities that already exist. In other cases, we might wish to encourage countries who strongly favor recycling to ultimately wean themselves off of or avoid the conventional PUREX reprocessing scheme that produces pure separated plutonium and to consider replacing this system with potentially more proliferation-resistant approaches that allow some recycling to occur, while always avoiding the presence of separated plutonium. However, to be credible in pursuing such an option, which still needs to be proven, the United States will have to significantly increase its own domestic R&D efforts to develop more proliferation-resistant fuel cycle approaches. In still other cases, it might prove desirable to try to promote the establishment of regional spent fuel storage facilities.

National approaches to the nuclear fuel cycle are likely to continue to differ, and what the United States may find tolerable from a national security perspective in one foreign situation may not apply to other countries. We should avoid deluding ourselves into believing that there is one technical or institutional approach that will serve as the magic solution in all cases.

HAROLD D. BENGELSDORF

Bengelsdorf, McGoldrick and Associates

Bethesda, Maryland

The author is a former senior official at the U.S. Departments of State and Energy.


Time will tell whether Luther J. Carter and Thomas H. Pigford’s vision for reducing plutonium inventories by ending commercial reprocessing and disposing of used nuclear fuel by developing a network of waste repositories will become reality.

Without question, this vision embodies over-the-horizon thinking. Nonetheless, Carter and Pigford are solidly real-world in their belief that the United States must build a repository without delay to help ensure nuclear energy’s future and to dispose of surplus weapons plutonium in mixed oxide (MOx) reactor fuel.

The United States does not reprocess commercial spent fuel, but France, Great Britain, and Japan have made the policy decision to use this technology. As the global leader in nuclear technology, however, we can also lead the world in a disposal solution. Moreover, we who have enjoyed the benefits of nuclear energy have a responsibility to ensure that fuel management policy is not simply left to future generations.

So why are we at least 12 years behind schedule in building a repository? The answer is a lack of national leadership. Extensive scientific studies of the proposed repository at Yucca Mountain, Nevada, are positive, and the site is promising. But, as Carter and Pigford correctly note, it is “a common and politically convenient attitude on the part of…government…to delay the siting and building of [a] repository.”

Delay and political convenience are no longer acceptable. Nuclear energy is needed more than ever to help power our increasingly electrified, computerized economy and to protect our air quality. In addition, Russia has agreed to dispose of surplus weapons plutonium only if the United States undertakes a similar effort. Without timely construction of a repository, these important goals will be difficult to realize.

Leadership from the highest levels of government is needed to ensure the following conditions for building a repository that is based on sound science and has a clear, firm schedule:

  1. Involvement by the U.S. Nuclear Regulatory Commission (the preeminent national authority on radiation safety) in setting a radiation standard for Yucca Mountain that protects the public and the environment.
  2. Moving used fuel safely and securely to the repository when construction begins.
  3. Prohibiting the government from using repository funds to pay legal settlements arising from its failure to move fuel from nuclear power plants.

Likewise, governmental leadership is vital to move the U.S. MOx program forward. Congress and the administration must provide funding for the infrastructure for manufacturing MOx fuel; a continued commitment to the principles and timetables established for the program; and leadership in securing international support for Russia’s disposition effort.

It has been said that the future is not a gift, but an achievement. A future with a cleaner environment and greater security of nuclear weapons materials will not be a gift to the United States, but we can achieve this future by using nuclear energy. Governmental leadership is the key to ending the delays that have held up timely development of a repository for the byproducts of this important technology.

JOE F. COLVIN

President and Chief Executive Officer

Nuclear Energy Institute

Washington, D.C.

www.nei.org


In Luther J. Carter and Thomas H. Pigford’s excellent and wise article, I find the following points to be of critical importance: (1) that the United States and Russia expedite their disposition of separated weapons-grade plutonium; (2) that all the countries with advanced nuclear power programs move to establish a global system for spent fuel storage and disposal; and (3) that the nuclear industries in France, Britain, and Russia halt all civil fuel reprocessing.

Of these, the third point is certainly the most controversial and will require the most wrenching change in policy by significant parts of the nuclear industry in Europe and elsewhere. But the authors’ central argument is, in my view, compelling: that the separation, storage, and recycling of civil plutonium pose unnecessary risks of diversion to weapons purposes. These risks are unnecessary because the reprocessing of spent fuel offers no real benefits for waste disposal, nor has the recycling of plutonium in MOx fuel in light water reactors any real economic merit. To this reason, I would add one other: An end to civilian reprocessing would markedly simplify the articulation and verification of a treaty banning the production of fissile material for weapons–an often-stated objective of disarmament negotiations over the past several years. With civil reprocessing ongoing, such a treaty would ban any unsafeguarded production of plutonium (and highly enriched uranium) but, of necessity, allow safeguarded production, vastly complicating verification arrangements.

I am more ambivalent about the authors’ prescription to give priority to disposition of the separated civil plutonium, especially if this is done through the MOx route. Such a MOx program would put the transport and use of separated civil plutonium into widespread commercial or commercial-like operation. At the end of the disposition program, the plutonium would certainly be in a safer configuration (that is, in spent fuel) than at present. But while the program is in progress, the risks of diversion might be heightened, particularly compared to continued storage at Sellafield and La Hague, where the plutonium (I imagine) is under strong security. This may not be true of the civil plutonium in Russia, however, and there a program of disposition looks more urgent and attractive. Also, to tie up most reactors in the burning of the civil plutonium could substantially slow the use of reactors to burn weapons plutonium, a more important task.

Immobilization, the alternative route of disposition of civil plutonium that the authors discuss, would not have these drawbacks, at least to the same degree. One reason why the MOx option has received the most attention in the disposition of weapons plutonium is that the Russians have generally objected to immobilization, partly on the grounds that immobilization would not convert weapons-grade plutonium to reactor-grade as would be done by burning the plutonium in a reactor. This criticism, at least, would be moot for the disposition of civil plutonium.

HAROLD A. FEIVESON

Senior Research Policy Analyst

Princeton University

Princeton, New Jersey


Although many aspects of “Confronting the Paradox in Plutonium Policies” deserve discussion, I limit my comments to the fundamentally inaccurate premise that “A major threat of nuclear weapons proliferation is to be found in the plutonium from reprocessed nuclear fuel.” Just because something is stated emphatically and frequently does not make it valid. First, let’s recall why there is a substantial difference (not “paradox”) in plutonium policies. Some responsible countries that do not have the U.S. luxury of abundant inexpensive energy made a reasoned choice to maximize their use of energy sources–in this case, by recycling nuclear fuel. The recycling choice is based on energy security, preservation of natural resources, and sustainable energy development. When plutonium is used productively in fuel, one gram yields the energy equivalent of one ton of oil! Use it or waste it? Some countries, such as France and Japan, have chosen to use it. MOX fuel has been used routinely, safely, and reliably in Europe for more than 20 years and is currently loaded in 33 reactors.
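As a rough back-of-the-envelope check of that energy comparison (my own arithmetic, not the author’s, assuming complete fission of plutonium-239 at about 200 MeV per fission and roughly 42 GJ per tonne of oil equivalent):

    # Back-of-the-envelope check of the "one gram of plutonium ~ one ton of oil"
    # comparison. Assumptions: complete fission of Pu-239 at ~200 MeV per fission;
    # one tonne of oil equivalent taken as ~42 GJ.
    AVOGADRO = 6.022e23   # atoms per mole
    MEV_TO_J = 1.602e-13  # joules per MeV

    atoms_per_gram = AVOGADRO / 239.0                 # Pu-239 is ~239 g/mol
    energy_joules = atoms_per_gram * 200 * MEV_TO_J   # ~8.1e10 J, about 80 GJ
    tonnes_oil_equivalent = energy_joules / 42e9      # ~1.9 tonnes of oil

    print(round(energy_joules / 1e9), "GJ released per gram")
    print(round(tonnes_oil_equivalent, 1), "tonnes of oil equivalent")

Complete fission is an idealization; with realistic burnup and conversion losses the practical figure comes down toward the one-ton-per-gram value cited, so the comparison is at least the right order of magnitude.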

In recycling, separation of reactor-grade plutonium is only an intermediate step in a process whose objective is to use and consume plutonium. Recycling therefore offers a distinct nonproliferation advantage: The plutonium necessarily generated by production of electricity in nuclear power plants is burned in those same plants, thereby restricting the growth of the world’s inventories of plutonium contained in spent fuel.

As the authors recognize, recycling minimizes the amount of plutonium in the waste. It also results in segregation and conditioning of wastes to permit optimization of disposal methods. As one of the authors previously wrote, one purpose of reprocessing is “to convert the radioactive constituents of spent fuel into forms suitable for safe, long-term storage.” Precisely!

The authors take for granted that civil plutonium is the right stuff to readily make nuclear weapons, without mentioning how difficult it would be. There are apparently much easier ways to make a weapon than by trying to fashion one from civil plutonium (an overwhelming task) diverted from responsible, safeguarded facilities (another overwhelming task). One might ask why a terrorist or rogue state would take an extremely difficult path when easier paths exist. Those truly concerned about nonproliferation need to focus resources on real, immediate issues such as “loose nukes”: weapons-grade materials stockpiled in potentially unstable areas of the world.

Although the authors state that safeguards “reduce the risk of plutonium diversions, thefts, and forcible seizures to a low probability,” one concludes from the thrust of the article that the authors consider national and international safeguards and physical protection measures to be, a priori, insufficient or inefficient. The impeccable record belies this idea.

The authors speak of the need for action by “the peace groups, policy research groups, and international bodies that together make up the nuclear nonproliferation community.” However, these entities participate in the nonproliferation community, but they do not constitute it. Industry is a lynchpin of the nonproliferation community. We not only believe deeply in nonproliferation, but the very existence of the industry depends on it. Those who prefer to criticize the industry should first reflect on that. Right now, industry is making substantial contributions to the international efforts to dispose of excess weapons-grade plutonium. Without the industrial advances made in recycling, neither MOx nor immobilization could be effective.

Open dialogue is irreplaceable for nonproliferation, as well as for optimization of the nuclear industry’s contribution to global well-being. A Decisionmakers’ Forum on a New Paradigm for Nuclear Energy sponsored by Sen. Pete Domenici and the Senate Nuclear Issues Caucus recognized the importance of not taking action that would preclude the possibility of future recycling. (This does not challenge current U.S. policy or programs but would maintain for the future an option that might become vital.) The forum included representatives of the political, nonproliferation, and academic communities, as well as national labs and industry.

I sincerely hope that we, as Americans, can progress from preaching to dialogue and from “Confronting the Paradox in Plutonium Policies” to “Recognizing the Validity of Different Energy Choices.” Let’s solve real problems, rather than building windmills to attack.

MICHAEL MCMURPHY

President and Chief Executive Officer

COGEMA-U.S.A.

Washington, D.C.


Luther J. Carter and Thomas H. Pigford’s view is tremendously thought-provoking. The idea of creating a global network of storage and disposal centers for spent nuclear fuel and high-level radioactive waste is certainly worthwhile as a way of providing a healthier and more comfortable nuclear future for the world. In particular, their proposal that the United States should establish a geologic repository at home that would be the first international storage center seems extremely attractive for potential customers.

Spent fuel in a great number of countries, including Japan, South Korea, and Taiwan, originated in the United States, and under U.S. law it cannot be transferred to a third-party state without prior U.S. consent. The easiest way to gain U.S. approval is for the United States to become a host country. There might be several standards and criteria for hosting the center. The host organization involved would have to have the requisite technical, administrative, and regulatory infrastructure to safely accommodate the protracted management of spent fuel. To be politically acceptable, the host nation will have to possess solid nonproliferation credentials and be perceived as a stable and credible long-term custodian. The United States undoubtedly satisfies these conditions.

The challenges associated with establishing an international center are somewhat different, depending on whether it is aimed at final disposal or interim storage. To be viable, a final repository will have to be in a suitable technically acceptable geologic location, and the host will have to be prepared to accept spent fuel from other countries on a permanent basis. A proposal by a state to receive and store spent fuel from other states on an interim basis entails less of a permanent obligation but leaves for later determination and negotiation what happens to the spent fuel at the end of the defined storage time.

The international network proposed by Carter and Pigford is obviously based on the final disposal concept. In view of the risk that it could take an extremely long time to fix the location of a final repository that is both technically and socially acceptable, however, the concept of interim storage looks more favorable for moving an international solution forward. It might be preferable from the viewpoint of customer nations as well, because to keep options open, some customers do not want to make an impetuous choice between reprocessing and direct disposal but would rather store their own spent fuel as a strategic material under appropriate international safeguards. The most difficult obstacle would be gaining world public support. Without a clear-cut statement about why the network is necessary, people would not willingly support the idea. My view is that the network must be directly devoted to an objective of worldwide peace–that is, the further promotion of nuclear disarmament as well as nuclear nonproliferation. If the network is closely linked to an international collaborative program for further stimulation of denuclearization, then it might have much broader support from people worldwide.

ATSUYUKI SUZUKI

University of Tokyo

Tokyo, Japan


Higher education

As we try to think carefully about the extravagantly imagined future of cyberspace, it is helpful to sound a note of caution amid the symphony of revolutionary expectations [like those expressed in Jorge Klor de Alva’s “Remaking the Academy in the Age of Information” (Issues, Winter 2000)]. For over a century, many thoughtful observers have suggested that the ever-advancing scientific and technological frontier either had or inevitably would transform our world, its institutions, and our worldview in some extraordinary respects. In retrospect, however, despite the substantial transformation of certain aspects of society, actual events have both fallen somewhat short of what many futurists had predicted (for example, we still have nation-states and rather traditional universities) and often moved in unanticipated directions. In particular, futurists once again underestimated the strength and resilience of existing belief systems and the capacity of existing institutions to transform themselves in a manner that supported their continued relevance and, therefore, existence.

Nevertheless, institutions such as universities that deal at a very basic level with the creation and transmission of information will certainly experience the impact of the new information age and its associated technologies in a special way. Even if we assume the continued existence of universities where students and scholars engage in face-to-face conversation or work side by side in its libraries and laboratories, there is no question that new communications and computation technologies will lead to important changes in how we teach and learn, how we conduct research, and how we interact with each other and the outside world. Nevertheless, I believe that these developments are unlikely to undermine the continued social relevance of universities as we understand them today.

Despite the glamour, excitement, and opportunities provided by the new technologies, I believe that the evidence accumulated to date regarding how students learn indicates that the particular technologies employed do not make the crucial difference. If the quality of the people (and their interaction), the resources, and the overall effort are the same, one method of delivery can be substituted for another with little impact on learning, providing, of course, that those designing particular courses are aware of how people learn. The principal inputs in the current model are faculty lectures or seminars, textbooks, laboratories, and discussions among students and between students and faculty. In other words, students pursuing degrees learn from each other, from faculty, from books and from practice (laboratories, research papers, etc.). It is my belief that the exact technologies used to bring these four inputs together are not the crucial issue for learning. What is important, however, is to remember that one way or another all these inputs are important to effective learning at the advanced level, particularly the interaction between learners and teachers.

Since ongoing feedback between faculty and students and among students themselves will always be part of quality higher education, it probably follows that to a first approximation the costs of producing and delivering the highest-quality programs are similar across the various technologies as long as all the key inputs identified above are included. I do not believe that one can put a degree-based course on the Web and simply start collecting fees without continuing and ongoing attention to the intellectual and other needs of the students enrolled.

A new world is coming, but not all of the old is disappearing.

HAROLD T. SHAPIRO

President

Princeton University

Princeton, New Jersey


Jorge Klor de Alva is the president of a for-profit university of nearly 100,000 students. I am a teacher at a college that has fewer than 1,000 students on its two campuses (Annapolis and Santa Fe) and is eminently not for profit. Yet, oddly enough, our schools have some things in common. We both have small discussion classes and both want to do right by our students. We both see the same failings in the traditional academy, chief of which is that professors tend to suit their own professional interests rather than the good of their students. But in most things we are poles apart, though not necessarily at odds.

Yet were we a hundred times as large as we are, we would not (I hope) take the doom-inducing dominating tone toward other schools that de Alva takes toward us. I think that more and better-regulated proprietary schools are a good and necessary thing for a country in urgent need of technically trained workers. But why must education vanish so that technical training may thrive? The country also needs educated citizens, and as human beings citizens need education, for there is a real difference between training and education.

I ardently hope that many of his students have been educated before they come to learn specialized skills at de Alva’s school. Training should be just as he describes it: efficient, focused on the students’ present wishes, available anywhere, and flexible.

Education, on the other hand, is inherently inefficient, time-consuming, dependent on living and learning together, and focused on what students need rather than on what they immediately want. That is why our students submit to an all-required curriculum. It is our duty as teachers to think carefully about the fundamentals a human being ought to know; it is their part not to have information poured into them but to participate in a slow-paced, prolonged shaping of their feelings and their intellect by means of well-chosen tools of learning. The aim of education is not economic viability but human viability. Without it, what profit will there be in prosperity?

Could de Alva be persuaded to become an advocate of liberal education in the interests of his own students? Our two missions, far from being incompatible, are complementary.

There is one issue, however, in respect to which “Remaking the Academy” seems to me really dangerous: in supporting the idea that universities and colleges should go into partnership with business. I know it is already going on, but it seems to me very bad–not for proprietary schools, for which it is quite proper, but for schools like ours. Businesses cannot help but skew the mission of an educational institution; why else would they cooperate? How then will one of the missions of the academy survive–the mission to reflect on ways of life that emphasize change over stability, information over wisdom, flexibility over principle? For trends can be ridden or they can be resisted, and somewhere there ought to be a place of independence where such great issues are freely thought about.

EVA BRANN

St. John’s College

Annapolis, Maryland


Universities have a long and distinguished history that has earned them widespread respect. But as Jorge Klor de Alva points out, the problem with success is that it creates undue faith in what has worked in the past. The University of Phoenix has been an important innovator in higher education, but the real trailblazing has been taking place within corporate education. Because corporations place a premium on efficiency and are free of the blinders of educational tradition, they have become leaders in the development of new approaches to education. At the VIS Corporation, which helps companies use information technology in their training programs, we have learned a number of preliminary lessons that should be helpful to all educators interested in using technology to improve effectiveness.

The process and time to deliver most educational content can be dramatically streamlined using technology. Computer-based delivery often makes it possible to condense a full day of classroom material into one or two hours without the need for any instructor. This is possible because computers essentially provide individual tutoring, but without the cost of an individual tutor for each student. Of course, computer-only instruction will not work for all types of material, but we need to recognize that just because a given training course requires some personal interaction, say role-playing exercises, it does not follow that the entire course must be taught in a classroom with a live instructor.

Different types of content require different approaches. The traditional ways of disaggregating educational content by subject area are not helpful here, because within each subject area a variety of types of learning must take place. In trying to figure out a way to help us choose the right approach to education, we broke down learning into four broad categories that transcend subject area and developmental level: informational, conceptual, procedural, and behavioral learning.

Informational and conceptual learning are the types of content that are normally considered the traditional subject matter of education. Informational learning includes what we normally think of as information and data. This content, which changes rapidly and has many small pieces, is ideal for technology delivery and constitutes the vast majority of what is available on the Internet. Conceptual learning involves understanding models and patterns that allow us to use information effectively and modify our actions as situations change. This is a significant part of the learning delivered in higher education and by master teachers. With a computer, conceptual learning can be advanced through self-guided exploration of software models. Procedural and behavioral learning are the traditional province of corporate training. These are rarely taught in educational institutions but are critical to how effectively an individual can accomplish goals at work or in life. Interactive simulation is the best delivery mechanism for procedures, whereas behavioral learning needs a mix of video demonstration and live facilitation.

Effective technology-based learning requires that the content be focused on the learner’s needs and learning style, not on the instructor’s standard approach. In a classroom we sit and learn in the way that the teacher structures the materials. A good instructor will use a variety of techniques to reach the largest number of students, but the process is ultimately linear. Technology, on the other hand, offers the opportunity to deliver content in a wide range of forms that can be matched to the content needs and the learning style of each individual. A further consideration is that different people have dramatically different learning styles, so no single methodology is a perfect solution for everyone; technology-based delivery can present the content in the way that works best for each individual learner.

Although the process of designing content for performance-based learning that is common in corporate education is much more straightforward than for the general education provided in universities, the steps we have developed will be useful in preparing any course:

  1. Identify and categorize the knowledge and skills exhibited by superior performers.
  2. Develop an evaluation model (test, game, questionnaire, or 360-degree evaluation) that can determine exactly what each individual most needs to learn.
  3. Create the content delivery modules and a personalized navigation capability to let each individual get at the content that they need or want in the most flexible way.
  4. Develop a way to assess what the student has learned and to certify that achievement.

By following this process in numerous settings, we are likely over the next decade to learn how to use the “right” combinations of technology to make all types of education more efficient and more focused on the needs of each student, rather than on the needs of the teacher or the curriculum.

ANDY SNIDER

Managing Director

VIS Corporation

Waltham, Massachusetts


Mending Medicare

My friend Marilyn Moon has done her usual exemplary job of puncturing some of the mythology surrounding Medicare “reform” in “Building on Medicare’s Strengths” (Issues, Winter 2000), but one of her arguments already needs updating, and I would like to take issue with another.

Moon does an appropriately thorough job of demolishing the falsehoods surrounding the relative cost containment experiences of Medicare and the private sector by demonstrating that, over time, Medicare has been more effective at cost containment; but the data she uses extend only through 1997, which turns out to have been the last year of a period of catch-up by private insurance, in which the gap between Medicare and private cost growth was significantly reduced. We now have data from 1998 and lots of anecdotal information from 1999 that show that, largely as a result of the provisions of the Balanced Budget Act of 1997, Medicare cost growth has essentially stopped, at least temporarily, while private-sector costs are accelerating. In other words, Moon’s assertions about the relative superiority of Medicare’s cost containment efforts are, if anything, understated.

Moon does seem to accept the conventional wisdom that the growth in Medicare’s share of national income over the next few decades, arising from the combined effect of the aging of the baby boomers and the continued propensity of health care costs to grow more quickly than the economy as a whole, constitutes a significant problem that requires reform. She points out, for example, that under then-current projections, Medicare’s share of national income will grow from roughly 2.53 percent in 1998 to 4.43 percent in 2025 (although those numbers are likely to be modified downward in response to the more recent data described above). In doing so, I fear that like many of the rest of us she has fallen into a trap quite deliberately and systematically laid by forces that are opposed to the very concept of social insurance programs such as Medicare and Social Security, and especially to the intrinsically redistributive character of such programs.

Obviously, as a society ages, it will spend relatively more of its resources on health care and income support for older people, all other things being equal. It will also, presumably, spend relatively less of its resources on younger people. But spending 4.5 percent of gross domestic product on health care for the elderly is only a problem, let alone a crisis, if people are unwilling to do so. Perhaps more important, as the experience in public finance over the past five years should once again have reminded us, if the economy grows enough, we can double or triple or quadruple what we spend on health care for the elderly and everyone else in society will still have more real income left over.

Every other industrialized nation, except Australia, already has a higher proportion of old people in its population than does the United States, and all provide their elderly and disabled citizens with more generous publicly financed benefits than we do. None of those nations appears to be facing imminent financial disaster as a result. The “crisis” in financing Medicare and Social Security has largely been manufactured by individuals and institutions with a political agenda to shrink or abolish those programs, and we should stop letting ourselves fall victim to their propaganda.

BRUCE C. VLADECK

Director, Institute for Medicare Practice

Mount Sinai School of Medicine

New York, New York


Health care is complicated. When people have to make decisions on complicated issues, they typically limit the number of decision criteria they address and instead use a “rule of thumb” that encompasses just one criterion. That’s what federal decisionmakers have done with the Medicare program, and their single criterion has been cost. Often, the focus has been still more narrow, coming down to a matter of the prices paid to providers and managed care plans. As Marilyn Moon notes, Medicare (the biggest health care purchaser in the country) has been able to control the price it pays, within the limits of interest group politics, to keep federal cost increases in the program at levels at or below those found in the private sector. Indeed, many federal innovations in payment policy have been adopted by private insurers.

However, federal costs depend on more than federal prices; federal costs can go down while the overall health care costs of those covered by the program go up. And as Moon implies, many other criteria should be used to assess the program today and in the future. One such criterion that is of critical importance to people on Medicare is security. This is, after all, an insurance program. It was designed not only to pay for needed health services but to provide older Americans (and later people with serious disabilities) with the security inherent in knowing that they were “covered” whatever happened.

Many recent proposals to “reform” Medicare seem to reflect an unwillingness to deal with the multiple factors that affect the program’s short- and long-term costs. In particular, reforms that would have a defined “contribution” rather than a defined “benefit” seem to wrap a desire to limit the federal government’s financial “exposure” in the language of market efficiency and competition. In the process, such proposals ignore the importance of Medicare’s promise of security.

Health care markets are more feasible when the competing units are relatively flexible entities such as health plans rather than more permanent structures such as hospitals. Thus, competition has become more apparent in the Medicare managed care market. Has this led to greater efficiency, even in terms of federal cost? Multiple analyses indicate that Medicare has paid managed care plans higher prices than it should have, given the risk profile of the people they attract. Those with serious health problems are more risk averse and thus less likely to switch to a managed care plan even when they could reap significant financial advantages. The desire for the security that comes with a known system and known and trusted health care providers is a powerful incentive for people on Medicare. It is part of the reason that, nearly 15 years after risk-contract HMOs became available to people on Medicare, only 16 percent are enrolled in such plans. That proportion is likely to grow as people who are experienced with managed care (and have had a good experience) become eligible for Medicare. But there is a big difference between such gradual growth in HMO use and the proposed re-engineering of the program that would make it virtually impossible for millions of low- to moderate-income people to stay in the traditional Medicare program. The discontinuities in coverage and clinical care that are likely in a program with little financial predictability for health plans and providers on the one hand, or for members and patients on the other, may be just as costly in the long term as any gains attributable to “market efficiencies” that focus on short-term price reductions.

In short, echoing Moon’s arguments, we do not need to rush headlong into reform proposals based on the application of narrow decision heuristics. We need to face the challenge of considering multiple decision criteria if we are to shape a Medicare program that will do an even better job in the future of providing health and security to older Americans (and their families) as well as to people with disabilities, in a manner that is affordable for us all.

SHOSHANNA SOFAER

School of Public Affairs

Baruch College

New York, New York


Jeffersonian science

Issues recently published (Fall 1999) two most commendable articles on the weaknesses of our definitions of research and how those definitions affect, and even inhibit, the most efficient funding of publicly supported science.

In “A Vision of Jeffersonian Science,” Gerald Holton and Gerhard Sonnert skillfully elaborated the flaws in our rigid descriptions of and divisions between basic and applied research. The result has been the perpetuation of the myth that basic and applied research are mutually exclusive. The authors bring us to a more realistic and flexible combination of the two with their term “Jeffersonian research.”

As the authors noted, the Jeffersonian view is compatible with the late Donald Stokes’ treatise Pasteur’s Quadrant. In it, we were reminded that Pasteur founded the field of microbiology in the process of addressing both public health and commercial concerns, the point being that solving problems and discovering new knowledge are not mutually exclusive. In fact, the two are often comfortably inseparable. Pasteur was not caught in a quagmire of definitions and divisions. He was asking questions and finding answers.

At a time when most fields and disciplines of science and engineering are spilling into each other and shedding light on each other’s unanswered questions, it is somewhat mysterious that we cling to the terms basic and applied research. We can hardly complain that members of Congress line up in one camp or another if we in the science community continue this bifurcation.

Lewis M. Branscomb’s article “The False Dichotomy: Scientific Creativity and Utility” gives us a fine history of the growth and evolution of publicly funded research in the United States, with all of its attendant arguments and machinations. Branscomb concludes, “An innovative society needs more research driven by societal need but performed under the conditions of imagination, flexibility, and competition that we associate with basic science.” No one can argue with that.

Both articles make reference to the late “dean of science” in Congress, George Brown, who for many years chided the science community on the obsolescence of our descriptions of research and on the need for science to address societal concerns and problems. These two articles suggest that we are on the path to doing so.

As our society continues to move beyond the constraints of Cold War policy and practice, the debate presented in these two articles brings us to a healthy discussion of defining the role and responsibility of science and engineering for a new economy and a new century. More flexible definitions and more open discussions of research that reflect our actual work, rather than our old lexicons, are an important beginning.

RITA COLWELL

Director

National Science Foundation

Arlington, Virginia


Minority engineers

In “Support Them and They Will Come” (Issues, Winter 2000), George Campbell, Jr. makes a compelling case for a renewed national commitment to recruit and educate minority engineers. There is another group underrepresented in the engineering and technical work force that also deserves the nation’s attention: women.

Women earn more than half of all bachelor’s degrees, yet only 1.7 percent of women who earn bachelor’s degrees do so in engineering, compared with 9.4 percent of men. Men are three times more likely than women to choose computer science as a field of study and more than five times more likely to choose engineering.

As a result, women are significantly underrepresented in key segments of the technical work force. Women are least represented in engineering, where they make up only 11 percent of the work force, and only about 2 percent of the women working in technology companies are executives. Rep. Connie Morella likes to point out that there are more women in the clergy (12 percent) than in engineering. There are also more women in professional athletics, where women account for almost 24 percent of our working athletes.

Today, creating a diverse technical work force is not only necessary to ensure equality of opportunity and access to the best jobs, it is essential to maintaining our nation’s technological leadership. In my view, our dependence on temporary foreign technical workers is not in our long-term national interest. As we increasingly compete with creativity, knowledge, and innovation, a diverse work force allows us to draw on different perspectives and a richer pool of ideas to fuel technological and market advances. Our technical work force is literally shaping the future of our country, and the interests of all Americans must be represented in this incredible transformation.

We must address this challenge on many fronts and at all stages of the science and engineering pipeline. Increased funding is important, but it is not enough. Each underrepresented group faces unique challenges. For example, women leave high school as well prepared in math and science as men, but many minority students come from high schools with deficient mathematics and science curricula.

The K-12 years are critical. By the time children turn 14, many of them–particularly girls and minorities–have already decided against careers in science and technology. To counter this trend, we must improve the image of the technical professional, strengthen K-12 math and science teaching, offer mentors and role models, and provide children and parents with meaningful information about technology careers. The National Action Council for Minorities in Engineering’s “Math is Power” campaign is one outstanding program that is taking on some of these challenges.

As Campbell notes, we must build a support infrastructure for college-bound women and minorities and for those already in the technical work force. This includes expanding internships, mentoring programs, and other support networks, as well as expanding linkages among technology businesses and minority-serving institutions.

Perhaps most important, business leadership is needed at every stage of the science and engineering pipeline. After all, America’s high technology companies are important customers of the U.S. education system. With the economy booming, unemployment rates historically low, and dependence on temporary foreign workers rising, ensuring that all Americans have the ability to contribute to our innovation-driven economy is no longer just good corporate citizenship; it is a business imperative.

KELLY H. CARNES

Assistant Secretary for Technology Policy

U.S. Department of Commerce

Washington, D.C.


Even as our nation continues to enjoy a robust economy, our future is at risk, largely because the technical talent that fuels our marketplace is in dangerously short supply. There are more than 300,000 unfilled jobs in the information technology field today in the United States. Despite such enormous opportunity, enrollments in engineering among U.S. students have declined steadily for two decades. Worse, as George Campbell, Jr. points out, a deep talent pool of underrepresented minorities continues to be underdeveloped, significantly underutilized, and largely ignored.

Our public education system either is inadequate or lacks the resources to stimulate interest in technical fields and to identify the potential in minority students. That’s particularly disturbing when you consider that in less than 10 years, underrepresented minorities will account for a full third of the U.S. labor force.

As I see it, two fundamental sources can drive change, and fast. The first is U.S. industry. We need more companies to aggressively support math and science education, particularly at the K-12 level, to reverse misperceptions about technical people and technical pursuits and to highlight the urgency of technical careers in preserving our nation’s competitiveness. More businesses also should invest heavily in the development of diverse technical talent. In today’s global marketplace, businesses need people who understand different cultures, who speak different languages, who have a firsthand feel for market trends and opportunities in all communities and cultures.

The second source is the role model. Successful technical people are proud of their contributions and achievements. Such pride is easy to assimilate and can make a lasting impression on K-12 students. We must encourage our technical stars to serve as mentors to our younger people in any capacity they can. Their example, their inspiration, will go a very long way.

In 1974, IBM was one of the first companies to join the National Action Council for Minorities in Engineering (NACME). We supported it then because we understood the principles of its mission and saw great promise in the fledgling organization. We continue to be strong supporters today, along with hundreds of other corporate and academic institutions, because NACME continues to deliver on its promises. Among them: to increase, year after year, the number of minority-student college graduations in engineering. In times of economic boom or bust, during Democratic or Republican administrations, and when affirmative action is being applauded or attacked, NACME has made a significant difference.

Over the past quarter century, the issue of diversity in our work force–and particularly in our technical and engineering professions–has been transformed from a moral obligation to a strategic imperative for business, government, universities, and all institutions. It’s an imperative on which the future of our nation’s economy rests. The advantage goes to institutions that mirror the marketplace’s demographics, needs, and desires; to those who commit to keeping a stream of diverse technical talent flowing; and to those who take the issue personally.

NICHOLAS M. DONOFRIO

Senior Vice President and Group Executive

Technology & Manufacturing

IBM Corporation

Armonk, New York

The author is chairman of NACME, Inc.


National forests

“Reshaping National Forest Policy” (Issues, Fall 1999) by H. Michael Anderson is an excellent overview of the challenges facing the progressive new chief of the Forest Service, Mike Dombeck, especially his efforts to better protect National Forest roadless areas. One particular slice of that issue presents an interesting picture of what can happen when science and politics collide.

Since the article was written, President Clinton has directed the Forest Service to develop new policies that better protect roadless areas. However, the president deliberately left open the question of whether the new protections will apply to one particular forest: Alaska’s Tongass, our country’s largest national forest and home of the world’s largest remaining temperate rainforest.

Why did he do that? The only reason to treat the Tongass any differently from the rest of the nation’s forests is politics, pure and simple. Chief Dombeck and the President want to avoid angering Alaska’s three powerful members of Congress, each of whom chairs an influential committee.

Indeed, 330 scientists have written to President Clinton, urging him to include the Tongass in the roadless area protection policy. “In a 1997 speech calling for better stewardship of roadless areas,” the scientists wrote to the President, “you stated: ‘These unspoiled places must be managed through science, not politics.’ There is no scientific basis to exclude the Tongass.” Signers of the letter to the president included some of the nation’s most prominent ecologists and biologists, such as Harvard professor and noted author E. O. Wilson; Stanford professor Paul Ehrlich; Reed Noss, president of the Society for Conservation Biology; and Jane Lubchenco, past president of the American Association for the Advancement of Science.

At the Alaska Rainforest Campaign, we certainly hope that science prevails over politics and that the president decides to protect all national forest roadless areas, including the Tongass.

MATTHEW ZENCEY

Campaign Manager

Alaska Rainforest Campaign

Washington, D.C.

www.akrain.org


Traffic congestion

Your reprinting of John Berg’s and Wendell Cox’s responses to Peter Samuel’s “Traffic Congestion: A Solvable Problem” (Issues, Spring 1999), coupled with recent political events in the Denver area, led me to revisit the original article with more interest than on my first reading. Yet I still find both the original article and the Berg and Cox responses lacking in key respects.

Although there are examples of successful tollways, most have been failures. Market-based approaches work only when there are viable alternatives to allow for market choices. Talk of “pricing principles” and “externalities” is meaningless in cases such as the Pennsylvania Turnpike, where there is no alternative to the tollway. Samuel also ignores the congestion created at every toll plaza. Electronic payment systems do not work well enough to resolve this problem, particularly for drivers who are not regulars on the tollway.

I’m simply mystified by Samuel’s and Cox’s assertion of the “futility” of transit planning given Samuel’s own grudging admission of how “indispensable” transit is in certain situations. Public transit certainly could have helped Washington, D.C.’s Georgetown neighborhood, which was offered a subway stop during the planning of the Metro system. The neighborhood refused the stop and now the major streets through Georgetown are moving parking lots from 6 AM to 11 PM daily. I fail to see how any amount of “pricing discipline” or road building could have changed this outcome.

Mass transit has not always lived up to its promise in the United States, but the failures have almost always been primarily political. Portland, Oregon’s system is a disappointment because the rail routes are not extensive enough to put a significant dent in commuter traffic. Washington, D.C.’s system, although successful in many ways, suffers from its inability to expand at the same rate as the D.C. metropolitan area, a situation exacerbated by the District of Columbia’s chronic funding problems. Both are cases where the political will to build bigger transit systems early on would be paying real dividends now. Compare the D.C. Metro to the subway system in Munich, where more than triple the rail-route mileage serves a metropolitan area of about 1/3 the population of the D.C. metropolitan area. In Germany the political will and funding authority existed to make at least one mass transit system that serves its metropolitan area well.

Finally, I take issue with Cox’s assertion that transit can serve only the downtown areas of major urban centers. Those of us who travel on I-70 from Denver to nearby ski resorts know that traffic congestion can be found far from the skyscrapers. The Swiss have demonstrated that public transit works well in the Mattertal between Zermatt and Visp. Using existing railroad right-of-way between Denver and Winter Park, transit could provide a solution to mountain-bound gridlock.

The key to resolving this nation’s traffic congestion woes lies in keeping our minds open to all transportation alternatives and broadening our focus beyond transportation to include development itself. The lack of planning and growth management has led to a pattern of development that is often ill-suited to public transit. The resulting urban sprawl is now the subject of increasing voter concern nationwide. That concern, reflected in Denver voters’ recent approval of a transit initiative, gives me hope that we will be seeing a more balanced approach to these issues in the near future. Such an approach would be far preferable to relying on the gee-whiz technology of Samuel, the sweeping economic assumptions of Berg, or the antitransit dogma of Cox.

ANTHONY B. CRAMER

Fort Collins, Colorado

From the Hill – Spring 2000

Clinton again proposes big budget increases for NIH, NSF

President Clinton would boost total spending on federal research and development (R&D) by 3 percent to $85.3 billion in the fiscal year (FY) 2001 budget (see chart). Within that modest overall increase, the budget includes a $1 billion increase for biomedical research at the National Institutes of Health (NIH); a $675 million, or 17 percent, increase for the National Science Foundation (NSF), the largest dollar increase in its history; and major interagency initiatives in information technology (IT) and nanotechnology.

The administration’s R&D budget once again makes IT a priority. It would provide $2.3 billion in IT R&D, a $605 million increase from last year and a billion dollars more than in FY 1999. The IT initiative would focus on fundamental software research, ensuring the privacy and security of data, and continued advances in high-speed computing. Funding would go to seven agencies, including NSF, the Department of Energy (DOE), the Department of Defense (DOD), the National Aeronautics and Space Administration, and the Department of Health and Human Services, which houses NIH.

Congress has indicated that it supports the administration’s emphasis on IT. On February 15, the House by voice vote approved the Networking and Information Technology Research and Development Act, which would authorize $6.9 billion for FY 2000 through FY 2004 for IT-related research in seven civilian agencies. NSF would be the lead agency and the beneficiary of a $3.34-billion allocation for basic research into high-end computing, the creation of terascale computing capabilities, and education and training grants. An alternative bill has been introduced in the Senate.

A new research priority highlighted by the White House is the interagency National Nanotechnology Initiative, designed to promote basic research in the emerging fields of nanoscience and nanoengineering. The initiative involves six agencies at a total cost of $497 million, $227 million of which is new spending. The bulk of the money would go to NSF ($217 million), DOD ($110 million), and DOE ($96 million).

Clinton bars genetic discrimination against federal employees

President Clinton has signed an executive order prohibiting federal employees from being discriminated against on the basis of genetic information and has challenged Congress to pass legislation that would provide similar protection to citizens in the private sector.

The order prohibits all executive branch departments and agencies from discriminating against new applicants or firing existing employees on the basis of genetic information. It states that the federal government cannot require an employee to submit to a genetic test, and if an employee voluntarily takes a test, the information must remain confidential. An exception to the rules is allowed when a medical condition or a potential predisposition would prevent an employee or an applicant from performing his or her job.

Although some states, including California and New York, have passed laws prohibiting insurance discrimination on the basis of genetic tests, there is currently no federal law providing protection. In 1999, Rep. Louise Slaughter (D-N.Y.) introduced a bill that would prohibit health insurance companies from denying coverage and private-sector firms from using genetic information in a discriminatory fashion. Sen. Thomas Daschle (D-S.D.) introduced a similar bill in the Senate. But there has been no movement on either of the bills. Congressional leaders have made medical confidentiality a higher priority in their legislative agenda.

R&D in the FY 2001 Budget by Agency
(budget authority in millions of dollars)

  FY 1999 Actual   FY 2000 Estimate   FY 2001 Budget   FY 00-01 Change (Amount)   FY 00-01 Change (Percent)
Total R&D (Conduct and Facilities)
Defense (military) 38850 38719 38640 -79 -0.2%
S&T (6.1-6.3) 7574 8397 7543 -854 -10.2%
All Other DOD R&D 31276 30322 31097 775 2.6%
Health and Human Services 15797 18063 18998 935 5.2%
Nat’l Institutes of Health 15008 17141 18133 992 5.8%
NASA 9715 9753 10035 282 2.9%
Energy 6992 7091 7655 564 8.0%
Nat’l Science Foundation 2702 2903 3464 561 19.3%
Agriculture 1645 1773 1828 55 3.1%
Commerce 1084 1073 1152 79 7.4%
NOAA 593 591 594 3 0.5%
NIST 465 458 501 43 9.4%
Interior 500 584 590 6 1.0%
Transportation 786 585 733 148 25.3%
Environ. Protection Agency 670 648 679 31 4.8%
All Other 1601 1552 1561 9 0.6%
Total R&D 80342 82744 85335 2591 3.1%
Defense 42049 41994 42060 66 0.2%
Nondefense 38293 40750 43275 2525 6.2%
Basic Research 17468 19027 20328 1301 6.8%
Applied Research 15915 17193 18026 833 4.8%
Development 44302 44071 44323 252 0.6%
R&D Facilities and Equipment 2657 2453 2658 205 8.4%
21st Century Research Fund 37032 40038 42895 2857 7.1%

Source: AAAS, based on OMB data for R&D for FY 2001, agency budget justifications, and information from agency budget offices.

NIH, FDA pledge more oversight after death in gene therapy experiment

In the wake of the death of a young man enrolled in a gene therapy experiment, NIH and the Food and Drug Administration (FDA), both of which have authority to oversee gene therapy experiments, have launched two initiatives designed to strengthen safeguards for clinical trials while improving communication within the scientific community.

FDA will now require sponsors of gene therapy trials to routinely submit for agency review their monitoring plans, including a summary of the experience and training of their monitors. The new Gene Therapy Clinical Trial Monitoring Plan will also support the organization of conferences for investigators to discuss monitoring practices with their peers and other professionals in the field. The second initiative is a series of NIH/FDA-organized Gene Transfer Safety Symposia designed to allow investigators to share and analyze medical and scientific data resulting from gene transfer research.

After 18-year-old Jesse Gelsinger died while undergoing a gene therapy treatment at the University of Pennsylvania, NIH acknowledged that it had failed to track adverse events during gene therapy trials. Members of Congress have strongly criticized NIH’s lapses. In a hearing held by the Senate Health, Education, Labor, and Pensions subcommittee, Senator Bill Frist (R-Tenn.), the subcommittee chair and a heart transplant surgeon, said that there is “a need for vigilant oversight to ensure patient safety. If we expect patients to participate in moving science forward, then we must be assured that gene therapy clinical trials are safe.”

Much of the hearing focused on why researchers had failed to notify NIH of adverse events during gene therapy trials as required by federal guidelines. But there was also extensive discussion of whether scientists are providing patients with enough information to understand the potential risks of participating in a clinical trial.

The most riveting testimony came from Jesse Gelsinger’s father, Paul Gelsinger, who testified that he wasn’t given information regarding adverse events in prior experiments conducted by the university and in the private sector, information that would probably have influenced the family’s final decision to participate. “Looking back, I can see that I was fairly naive to have been as trusting as I was,” he said. He expressed serious concerns about the influence of the private sector, its ability to hide behind a proprietary curtain, and a “race to be first” in the field, all of which, he said, have contributed to unnecessary risk to patients. Gelsinger recommended that an independent patient advocate be present at informed consent sessions to ensure that patients are protected when risks are explained.

Gene therapy typically involves an infusion of corrective genes into the body via a virus. Use of gene therapy techniques as a potential treatment for disease began in the 1980s. Both NIH and FDA have established oversight procedures for studies. FDA has oversight over all gene therapy trials, public and private, whereas NIH monitors only those experiments that receive NIH funding.

After Jesse Gelsinger’s death, NIH asked investigators involved in gene therapy treatment to report any adverse event, defined as any expected or unexpected event (not necessarily a death) that can be related to the treatment, the disease itself, or an outside factor. Subsequently, NIH received 652 reports of serious events that previously had not been reported, compared to the 39 events that had been. NIH said that 372 clinical trials are currently registered, and more than 4,000 patients have participated in gene therapy experiments.

Amy Patterson, director of NIH’s Office of Biotechnology Activities, said NIH guidelines require individual investigators to report adverse events. Frist called the lack of reporting and inadequate oversight “inexcusable.”

Although the University of Pennsylvania reported Jesse Gelsinger’s death in a timely manner, FDA, in investigating his death, discovered potential safety violations and shut down the university’s gene therapy studies.

Richardson, Congress clash over DOE reorganization

Energy Secretary Bill Richardson and Congress are clashing over his implementation plan for a new semiautonomous agency responsible for weapons-related research within DOE. Members of Congress are particularly unhappy with Richardson’s decision to appoint several DOE officials to serve concurrently in the new agency, contending that this “dual-hats” policy violates the intent of the reorganization law. Richardson disputes this claim and has criticized the law’s limitations on his ability to exercise authority over the new agency.

The new National Nuclear Security Administration (NNSA), created by Congress in 1999 to tighten security at DOE’s nuclear weapons program, officially opened for business on March 1. Richardson has asked President Clinton to nominate Gen. John A. Gordon to be director of NNSA and undersecretary for nuclear security. Gordon is currently deputy director of the Central Intelligence Agency.

According to a report by the House Armed Services Committee’s Special Oversight Panel on Department of Energy Reorganization, Richardson’s implementation plan “overemphasizes DOE control over the NNSA, undermines the semi-autonomy of the NNSA, and would violate key provisions” of the reorganization bill. It says that dual authority “is clearly in violation” of the law and that the plan “explicitly sustains current reporting relationships . . . [that have] generated redundant and confusing lines of authority in the past.”

Testimony at a March 2 hearing by representatives of the General Accounting Office (GAO) and the Congressional Research Service (CRS) bolstered these claims. According to GAO, the implementation plan “does little to address [DOE’s] dysfunctional structure, with unclear chains of command among headquarters, field offices, and contractors.” Further, it said that dual authority is “contrary to the legislative intent behind the creation of NNSA as a separate entity within DOE.” CRS said that the plan’s “apparent disregard of the statutory provisions delineating certain limitations on the secretary’s direct authority over NNSA officers and employees could arguably be characterized as contrary to the letter and intent of the legislation.”

Rep. Duncan Hunter (R-Calif.) argued that forcing the secretary to operate through the NNSA director and keeping NNSA separate from the rest of DOE was the only way to avoid a repeat of the organizational disarray that plagued the investigation of Wen Ho Lee. Lee, a former physicist at Los Alamos National Laboratory, has been indicted on various charges of security breaches. “At the same time that you’re supporting the nomination of Mr. Gordon,” Hunter asked Richardson, “you’re taking all his power away?”

Richardson denied that he was attempting to undermine NNSA’s autonomy or Gen. Gordon’s authority and defended his implementation plan as legal and appropriate. He said his proposed changes to the law are necessary if he is to be held accountable for the new agency, and he emphasized that only 18 of NNSA’s 2,013 employees have dual roles, although those 18 include several key officials, such as the security and counterintelligence chiefs. Regarding the NNSA security structure, he said, “I want to make this efficient, and as rapidly as possible.”

After working primarily on security issues for the past year, Richardson wants to shift the focus back to the scientific research taking place at DOE weapons labs. “There is no longer a culture of lax security,” he said. “That has ended.”


“From the Hill” is prepared by the Center for Science, Technology, and Congress at the American Association for the Advancement of Science (www.aaas.org/spp) in Washington, D.C., and is based on articles from the center’s bulletin Science & Technology in Congress.

U.S. Fighter Modernization: An Alternative Approach

During the next few decades, the Air Force, Navy, and Marine Corps plan to buy three new types of fighters, some 3,700 aircraft altogether, at a cost likely to reach nearly $340 billion. These plans are almost certainly unaffordable. Worse yet, even if fully implemented, they may leave the U.S. military ill-prepared to meet the very different and, compared to today, far more serious challenges that are likely to emerge 10 to 20 years from now. Fortunately, there is a more affordable approach that would allow the United States to meet its near-term security requirements as well as to better prepare itself for meeting long-term challenges. It would combine purchases of a smaller number of next-generation fighters with continued purchases of new or upgraded current-generation aircraft and modest cuts in the number of fighter wings. It would also defer production of one of the new fighters for at least five years. Not only would this approach be more affordable, it would be wiser.

The three new fighters the services plan to buy are the F/A-18E/F, the F-22, and the Joint Strike Fighter (JSF). The Navy’s F/A-18E/F, which began to be produced in fiscal year (FY) 1997, is intended to replace earlier F/A-18 models in the fighter/ground attack role and the A-6 in the deep interdiction role. The Navy plans to buy at least 548 of these aircraft. The Air Force plans to buy 333 F-22s to replace the F-15 air superiority fighter. Congress provided funding for the first two F-22s in FY 1999. Prototypes for the JSF are currently being developed in a competition between the Boeing Corporation and the Lockheed Martin Corporation, with plans for a family of 2,852 relatively low-cost aircraft to be used by all three services. Production is scheduled to begin around FY 2005.

The three new fighters would have impressive capabilities. The F/A-18E/F is a substantially upgraded version of the F/A-18C fighter. Both are designed to carry out air-to-air combat and ground attack missions, but the F/A-18E/F has a longer fuselage, larger wings, and a more powerful engine than the C version. It will also have greater range and payload capacity.

Unlike the F/A-18E/F, the F-22 is an entirely new aircraft. Its airframe shape and materials are designed to absorb or deflect radar signals. These “stealth” technologies are intended to make the F-22 substantially less vulnerable to surface-to-air missiles and air defense artillery that depend on radar guidance. The F-22 will also be the first U.S. fighter to have a supersonic cruise capability. Existing aircraft can achieve supersonic speeds only through the use of afterburners that greatly increase fuel consumption. In addition, the F-22 will have a range of advanced avionics, including a new radar and displays that will provide the pilot with a much improved picture of the battle space, including the type, location, speed, and direction of enemy aircraft and the type and location of enemy surface-to-air missile threats. Although the F-22 is designed primarily to clear the sky of enemy fighters, the Air Force claims that it will also have a significant air-to-ground capability.

The JSF would also be an entirely new aircraft. The three planned variants of this fighter would be substantially better in terms of stealthiness, maneuverability, and avionics than the aircraft they are intended to replace. The Air Force and Navy versions of the JSF would be conventional takeoff and landing aircraft, whereas the Marine Corps version–like today’s AV-8B–would have a short takeoff and vertical landing capability.

Are they affordable?

Unfortunately, the performance improvements of the three new fighters will come at a very high price. Altogether, the services plan to buy 3,733 of these aircraft at a cost estimated at $258 billion to $337 billion. (All figures in this article are expressed in FY 2000 dollars.) The lower estimate assumes that the services can meet their cost goals for each new system, whereas the higher estimate assumes that, consistent with historical experience, the new fighters will end up costing substantially more to produce. About $39 billion has already been spent on these programs.

If the military’s unit-cost goals can be achieved, an average of $7.3 billion a year in procurement funding would be required during the next 27 years to pay for these plans. On the other hand, if the higher estimates, generated by the Congressional Budget Office (CBO), turn out to be correct, an average of $10.1 billion annually would be needed. In either case, another $800 million to $900 million a year would be needed for R&D. Altogether, completing these three programs would require average acquisition budgets (procurement plus R&D) of $8.1 billion to $11.1 billion a year during the FY 2000-2026 period.

By historical standards, these next-generation fighters are very costly. Depending on how successful cost-control efforts are, each F-22 is projected to cost $105 million to $124 million to procure. By comparison, the average F-15 cost only about $48 million. Similarly, each F/A-18E/F is likely to cost $70 million to $74 million, compared to $46 million for earlier F/A-18A-D models and $58 million for the F-14. Finally, the unit procurement cost of the JSF is projected to range from $43 million to $65 million for the Air Force variant, from $52 million to $77 million for the Marine Corps’ version, and from $53 million to $78 million for the Navy’s version. By contrast, the Air Force’s current-generation F-16 fighters cost about $25 million each, the Marine Corps’ AV-8B costs about $35 million, and, as already noted, the Navy’s earlier model F/A-18s (some of which would also be replaced by F/A-18E/Fs) cost $46 million each to produce. The R&D costs of these new systems are also much higher. For example, developing the F-22 has cost about three times as much as developing the F-15.

It is extremely doubtful that the military’s modernization plans are affordable. In the past, fighter procurement has accounted for an average of about 4.6 percent of the Air Force’s budget and 3.6 percent of the Navy’s budget. If these shares are maintained in the future and the overall Department of Defense (DOD) budget stays flat in real inflation-adjusted terms at the level currently projected for FY 2005, the two services would have an average of about $7 billion a year available for fighter procurement during the FY 2000-2026 period. This is $300 million a year less than would be needed to pay for the administration’s procurement plans (even assuming that the military’s unit cost goals can be met) and $3.1 billion a year less than needed (assuming historical rates of cost growth). In addition, the services may find it difficult to fully fund the $800 million to $900 million a year required to complete R&D for the three aircraft.
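
A rough consistency check of these figures, using only the numbers cited in this article (in FY 2000 dollars), can be done in a few lines of Python; it is a sketch of the arithmetic, not an official costing.

```python
# Back-of-the-envelope check using only figures cited in the text ($ billions, FY 2000).
years = 27                          # FY 2000-2026
already_spent = 39                  # sunk cost on the three programs
proc_low, proc_high = 7.3, 10.1     # annual procurement: service cost goals vs. CBO estimate
rd_low, rd_high = 0.8, 0.9          # annual R&D still required
available = 7.0                     # annual procurement implied by historical budget shares

total_low = already_spent + years * (proc_low + rd_low)     # ~258, matching the low program estimate
total_high = already_spent + years * (proc_high + rd_high)  # ~336, close to the high estimate
print(f"Program totals: ${total_low:.0f}B to ${total_high:.0f}B")
print(f"Annual procurement shortfall: ${proc_low - available:.1f}B to ${proc_high - available:.1f}B")
```

The remaining annual costs, spread over 27 years and added to the roughly $39 billion already spent, reproduce the $258 billion to $337 billion program totals cited above, and the annual shortfall works out to the $300 million to $3.1 billion range described in the text.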

As a result of the improved budget surplus projections made by CBO in early 2000, there is reason to believe that some additional funding will be provided for defense. However, the prospects for a major sustained increase that would include sufficient funding for the three fighter programs appear dim. The administration and Congress have placed higher priority on other policy goals. Likewise, it seems doubtful that the services would be able to shift sufficient funding from elsewhere in their budgets to pay for the three new fighters, because other major modernization projects are competing for their resources. If history is any guide, it is also likely that operation and support costs–military pay, operations and maintenance, family housing, and military construction–will grow in the future. In that case, the military may be hard pressed to provide even its historical share of funding for fighter modernization.

Reasons for constraint

There are a variety of reasons why the military’s current fighter modernization plans are unnecessary. First, the U.S. fighter force today is far superior to all of its potential adversaries, both in numbers and capabilities. Iran, Iraq, and North Korea have a combined total of only about 1,200 aircraft. Moreover, according to CBO, about three-quarters of these are first- and second-generation aircraft, most of which are based on 1950s designs. Most of China’s existing 2,500 fighters are first- and second-generation fighters. By 2005, China is expected to have only about 400 third-generation fighters and fewer than 100 fourth-generation fighters, which are based on designs from the 1970s or 1980s. By contrast, all current U.S. fighters are fourth-generation aircraft. In addition, the delivery of combat aircraft to developing countries dropped from an average of 578 per year in the 1983-1985 period to 237 per year in the 1995-1998 period, which suggests that the average age of Third World fighter fleets is increasing substantially.

Plans to buy three entirely new fighters and a total of 3,733 aircraft are almost certainly unaffordable.

Countries may also be modernizing their ground-based air defense forces less rapidly than they did during the Cold War. According to some reports, deliveries of surface-to-air missiles to developing countries fell from an average of about 5,500 per year in the mid-1980s to about 1,900 per year in the mid-1990s. Indeed, the value of all major arms deliveries to the developing world declined by about 50 percent between 1986 and 1998.

Overall, this information does not provide a complete picture of likely future threats to U.S. fighters. It is possible, for instance, that potential adversaries are making or will make significant modifications and upgrades to existing aircraft and surface-to-air missiles. Nevertheless, it appears that DOD could safely take a slower approach to fighter modernization and maintain smaller tactical air forces than are currently planned.

The ability to upgrade existing U.S. aircraft substantially at a reasonable cost also provides a compelling reason for going slower on modernization. Under current plans, the Air Force estimates that the average age of its fighter fleet will increase from about 12.5 years today to about 19.5 years in FY 2011 and then fall to about 18 years in FY 2015. Similarly, the Navy and Marine Corps estimate that the average age of their fighter forces will increase from 11.5 years today to 15 years in FY 2009 and then drop to 12.5 years in FY 2015. As aircraft age, they may develop structural and other problems, resulting in higher operation and support costs. However, it may be possible to avoid much of this cost growth if substantial funding is invested in a timely manner in modifying and upgrading existing aircraft. Indeed, many of the most important strides in fighter technology today involve improvements in avionics, which can often be retrofitted into existing aircraft.

Another factor that could allow DOD to prudently reduce its fighter forces and take a slower modernization approach is the enormous expansion of U.S. precision-guided munitions (PGM) capabilities. About 35 percent of the munitions used by NATO during the war in Kosovo were PGMs, including 90 percent of those used in the initial phase of the air campaign. During the past few decades, DOD has bought about 122,000 air-to-surface PGMs and 4,000 sea-launched Tomahawk cruise missiles. During the next few years, it plans to convert 322 existing nuclear-capable air-launched cruise missiles to conventional versions of the missile, and to convert 624 older Tomahawks to the latest configuration.

The military also plans to buy large quantities of new kinds of PGMs. These include 88,000 Joint Direct Attack Munitions, a relatively inexpensive kit that can be attached to existing “dumb” bombs, and 24,000 Joint Standoff Weapons, a more expensive glide bomb. Both weapons will rely on information from DOD’s Global Positioning System satellite network for guidance and could be delivered from almost any combat aircraft. Finally, DOD plans to buy 2,400 air-launched Joint Air-to-Surface Standoff Missiles, with a 100-mile range, and 1,353 Tactical Tomahawks, a new and less costly version of that missile.

In addition, it is critical to remember that fighters are only one element of U.S. air power, and fighter modernization is only one element affecting the capabilities of fighter forces. Other factors include the superior training received by U.S. personnel, electronic jamming provided by EA-6B electronic warfare aircraft, the refueling capability provided by a large fleet of tanker aircraft, the targeting and intelligence information provided by an unmatched network of communications and intelligence systems, a large airlift fleet, and long-range bombers. The recent war in Kosovo demonstrated that many of these capabilities, especially electronic warfare and other specialized aircraft support assets, are in short supply. Because of the high cost of the three new fighters, pursuing modernization may leave insufficient funding to adequately provide for these capabilities or to maintain high levels of training in the future.

Finally, perhaps the biggest problem with the military’s current fighter modernization plans is that they may reduce the funding necessary to research and experiment with new kinds of forces that might be needed to supplement or displace more traditional forms of tactical air power 10 to 20 years from now. It is widely believed that we are in the midst of a revolution in military affairs that will significantly change the way wars are fought in the future. The driving forces behind this revolution are advances in technology, especially information technology, combined with potential changes in military organization and operational concepts. The services claim that their current fighter plans continue to make sense in light of the changes under way. However, much evidence suggests that a substantially different mix of capabilities will be needed to effectively employ air power in the future. For example, the proliferation of increasingly accurate ballistic and cruise missiles and the growing access of many countries to satellite imagery are likely to dramatically reduce the U.S. military’s access to forward bases and increase the dangers posed to U.S. aircraft carriers operating near coastal waters.

All of the above trends suggest that, rather than focusing almost exclusively on the very costly modernization of an already large and effective fleet of tactical combat aircraft, the United States should devote greater resources to developing a range of other capabilities that might be better able to carry out some missions traditionally performed by tactical air forces. These alternatives include missile-firing ships, such as converted Trident ballistic missile submarines; long-range bombers; extended-range PGMs; and unmanned combat aerial vehicles.

An alternative approach

The most reasonable approach to modernization would involve scaling back and slowing current plans. The United States should rely more heavily on current-generation systems and do so further into the future. This could be done either by buying the latest new production versions of current-generation aircraft or by extending the lives of existing aircraft of these types. The best option would be to rely on a combination of new production aircraft and modifications and upgrades.

Buying too many new fighters may rob the military of the resources it needs to deal with emerging, longer-term security threats.

As noted earlier, current-generation U.S. fighters are far less expensive than the three proposed next-generation fighters. Assuming historical rates of cost growth, the three new fighters will cost roughly 50 to 150 percent more to produce than the current-generation fighters they are intended to replace, depending on the specific aircraft. Yet current-generation fighters remain highly effective. Moreover, in many cases, the latest production versions are far more effective than earlier versions. For example, according to the Air Force, the latest F-16C/D fighters are as much as five times more effective than the earliest versions of the F-16. Thus, simply replacing existing current-generation fighters with the latest versions of those aircraft would ultimately lead to the fielding of significantly more capable air forces.
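
As a rough illustration of where the “50 to 150 percent” range comes from, the unit costs quoted earlier in this article can be compared directly. The pairings below follow the article’s own comparisons; the resulting percentages are simple ratios, not official DOD figures.

```python
# Unit procurement costs cited in the article ($ millions, FY 2000 dollars):
# new fighter (low, high estimate) vs. the current-generation aircraft it replaces.
comparisons = {
    "F-22 vs. F-15":           ((105, 124), 48),
    "F/A-18E/F vs. F/A-18A-D": ((70, 74), 46),
    "JSF (USAF) vs. F-16":     ((43, 65), 25),
    "JSF (USMC) vs. AV-8B":    ((52, 77), 35),
    "JSF (USN) vs. F/A-18A-D": ((53, 78), 46),
}
for name, ((low, high), current) in comparisons.items():
    print(f"{name}: {100 * (low / current - 1):.0f}% to {100 * (high / current - 1):.0f}% more costly")
```

The exact premium depends on which aircraft and which cost estimate are compared, but most of the pairings fall roughly in the 50 to 150 percent range cited above.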

It should also be possible to incorporate many, though certainly not all, of the advances planned for next-generation fighters into current-generation aircraft. For example, it may be possible to make a variant of the F-15 that includes advanced, electronically scanned array radar and is modified to incorporate significant stealth characteristics.

An even more cost-effective option would be to extend the life of existing fighter aircraft. Modification and upgrade efforts can vary dramatically in terms of cost and effectiveness. Typically, efforts aimed simply at extending the lives of existing systems are relatively inexpensive, whereas those aimed at not only extending service lives but also significantly improving capabilities can be quite costly.

A relatively inexpensive example is the F-16 Mid Life Update program, now being pursued by several NATO countries. At a cost of about $5 million per aircraft, or less than 20 percent of the cost of a new F-16, existing fleets are being outfitted with new cockpits and avionics systems, including a new mission computer and radar upgrades. The program is expected to add at least 10 years to the life of the aircraft and perhaps as much as double their air-to-air combat capabilities.

An example of a more extensive and costly effort is the Marine Corps’ AV-8B remanufacturing program, in which 72 older AV-8B Harrier IIs are being outfitted with a new fuselage, engine, and avionics, including a new radar and a night-attack capability. The program is expected to add about 6,000 hours, or roughly 20 to 25 years, to the life of the aircraft. The Marine Corps says the upgraded Harriers will cost about 75 percent as much as new-production Harriers.

It should also be possible to extend the lives of F-15s, early-model F/A-18s, and other current-generation aircraft. The House Defense Appropriations Subcommittee recently reported that “service life data from the Air Force indicates that the F-15 can exceed 16,000 flying hours without major structural changes.” This equates to a service life of 50 years or more. Moreover, the subcommittee noted that F-15 combat capabilities could be “improved substantially with upgraded radars, jammers, and helmet-mounted targeting systems.” The subcommittee also concluded that for about $200,000 per aircraft, F-15s could be upgraded with a new datalink, which allows aircraft to share target information and, tests suggest, could lead to a fivefold improvement in air combat kill ratios.
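
A quick calculation shows what the 16,000-hour figure implies. The roughly 300 flying hours per aircraft per year used below is an assumed peacetime utilization rate chosen for illustration, not a number from the subcommittee report.

```python
# Illustrative only: the annual flying-hour rate is an assumption, not from the report.
airframe_life_hours = 16_000     # F-15 structural life cited by the subcommittee
assumed_hours_per_year = 300     # assumed peacetime utilization per aircraft
print(f"Implied service life: about {airframe_life_hours / assumed_hours_per_year:.0f} years")
```

At that utilization rate, the airframe limit works out to roughly 53 years, consistent with the 50-years-or-more figure above.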

Next-generation investments

Although current-generation fighters, especially when incorporating the latest advances, are likely to remain highly capable well into the 21st century, it would be prudent to purchase at least some next-generation fighters during the next decade. Buying 100 to 200 F-22s instead of the 333 currently planned would substantially improve the effectiveness of U.S. land-based tactical air forces and still yield significant savings as compared to the current plan. According to the Air Force, the 42 F-117 stealth fighters deployed to the Persian Gulf during the 1991 war proved highly effective in destroying many of the most critical and heavily defended targets in the first few days of the war and played an especially important role in the collapse of the Iraqi air defense system. Similarly, a force of 100 to 200 F-22s, supplemented with a formidable fleet of the latest F-15s, would be adequate to clear the skies of the most dangerous air-to-air threats that are likely to emerge during the next several decades. For similar reasons, the United States should buy about 200 to 300 F/A-18E/Fs instead of the 548 now planned.

For three key reasons, production of the JSF should be delayed for at least another five years. First, the United States would already have highly capable tactical air forces, especially if the military buys some F-22s and F/A-18E/Fs and procures a mix of new-production and remanufactured current-generation fighters. Second, buying any JSFs before F-22 and F/A-18E/F purchases are completed, as currently planned, would almost certainly prove unaffordable. At $153 billion to $223 billion, the JSF is by far the most costly of the planned new fighters. Third, and most important, a five-year delay would give the military time to develop and experiment with new kinds of technologies, such as unmanned combat aerial vehicles and Tomahawk-equipped submarines, as well as new force structures and operational concepts. These investments could prove critical to maintaining U.S. power projection capabilities over the long term, particularly given the increasing access of potential adversaries to ballistic and cruise missiles and satellite imagery.

In the short run, the JSF program should be turned into an extended technology development effort. If in five years or so it appears that this aircraft will be needed to counter future threats, then a decision could be made to proceed with full-scale development and eventual production beginning about 2010. On the other hand, if unmanned combat aerial vehicles and other new kinds of forces have proven themselves, or the threat posed to tactical air forces by ballistic and cruise missile proliferation has seriously eroded the value of such forces, it may make more sense to focus resources on producing and fielding these new kinds of forces.

In addition to scaling back the planned purchases of the F-22 and the F/A-18E/F and deferring the JSF for at least five years, it also makes sense to reduce the size of our tactical air forces. Even relatively modest cuts in the existing force structure of 20 Air Force fighter wings, 11 Navy carrier wings, and 4 Marine Corps air wings could yield significant savings. For example, cutting two Air Force fighter wings and one or two aircraft carriers (the ships plus their air wings) could yield more than $3 billion a year in savings over the long term.

To be sure, in the near term, force structure cuts of this magnitude would reduce the military’s ability to fight and quickly win two major theater wars nearly simultaneously–a current U.S. requirement–and to carry out a broad range of forward presence and contingency operations. However, the reduction in capabilities might be relatively modest. At worst, it would still leave the United States with a force capable of winning a single major regional war quickly and decisively, while assuming a defensive posture in the second theater, and would support a less extensive range of forward presence and contingency operations. Moreover, if these next-generation aircraft are as effective as the military claims they will be, it may eventually be possible to carry out even the current two-theater strategy with smaller air forces.

The major question facing the military’s fighter modernization program today is not whether the services should continue to invest in tactical air power. Even the approach outlined here would require spending $6 billion a year or more on modernization over the long term. Rather, the question is whether the military can develop an approach to modernization that is affordable as well as capable of providing at least a modest hedge against the possibility that warfare in 2010 or 2020 will turn out to look very different from the 1991 Gulf War or the 1999 war in Kosovo.


Advanced Fighter Technologies

Sensors and other avionics: Modern fighter aircraft rely not only on the human eye but on radar, infrared, and other sensors to find enemy aircraft and ground targets. Radars work by actively sending out radio signals and detecting the energy reflected back, whereas infrared sensors are generally passive systems that detect the heat emitted by targets. Radars allow pilots to see for relatively great distances at night and through cloud cover, rain, smoke, and haze. Infrared systems generally have shorter ranges and only a limited ability to see through clouds and rain. On the other hand, infrared sensors tend to have better resolution and, because they are passive, do not betray the user’s presence or position.

Almost all U.S. fighters are equipped with radars, and some 500 U.S. combat aircraft now carry infrared sensors for navigation and targeting. A new radar system is being developed for the F-22. In addition to having greater range and tracking capabilities, this advanced phased array radar will be integrated into a state-of-the-art avionics suite that will also include improved communication, navigation, and electronic warfare systems. Taken together, the Air Force claims these avionics will give the F-22 a “first-look, first-shot, first-kill” capability. Advances are also being made in infrared sensor technology. For example, in coming years the altitude at which the Low-Altitude Navigation and Targeting InfraRed for Night (LANTIRN) system can be effectively used is expected to increase by over 50 percent, from roughly 25,000 to 40,000 feet.

These advances promise to make U.S. combat aircraft significantly more capable in the future. Importantly, many, if not most, of these new technologies could be incorporated into current-generation aircraft, as well as next-generation fighters.

Speed: The F-22 fighter will have roughly twice the power of the F-15. Among other things, this is because improved engine materials will allow temperatures within its turbines to reach 3,400 degrees, which is 1,000 degrees higher than is possible in existing fighter engines. The F-22 will also be the first aircraft capable of cruising relatively efficiently at supersonic speeds. According to the Air Force, this capability, which neither the F/A-18E/F nor the JSF will have, is one of the main reasons the F-22 will prove preeminent in the air-to-air combat role.

Stealth: The F-22 and the JSF are being designed to dramatically reduce their detectability by radars and other sensors. Both aircraft will use special shapes and materials to absorb and deflect radar signals, as well as various kinds of paints and surface coatings that absorb solar infrared radiation and limit emissions from friction-generated heat. They are also being designed to reduce the temperature of engine exhaust. The radar cross-section of these aircraft is expected to be at least 100 times smaller than that of existing aircraft, resulting in a several-fold reduction in the range at which they can be tracked by air-defense systems. Although current-generation fighters cannot be given this same level of stealth, some substantial reduction in radar cross-section might be possible through modification of existing aircraft.
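The link between cross-section and tracking range follows from the standard radar range equation, in which detection range scales with the fourth root of cross-section. The relation below is general physics rather than a figure from the F-22 or JSF programs, and it assumes the defending radar itself is unchanged:

```latex
% Detection range R_max scales as the fourth root of radar cross-section sigma
R_{\max} \propto \sigma^{1/4}
\quad\Longrightarrow\quad
\frac{R_{\text{stealth}}}{R_{\text{conventional}}}
  = \left(\frac{\sigma_{\text{stealth}}}{\sigma_{\text{conventional}}}\right)^{1/4}
  = \left(\tfrac{1}{100}\right)^{1/4} \approx 0.32
```

A hundredfold cut in radar cross-section thus translates into roughly a threefold reduction in tracking range, consistent with the “several-fold” figure cited above.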

Can Peer Review Help Resolve Natural Resource Conflicts?

Congress, businesses, environmental organizations, and religious groups are all calling for peer review systems to resolve conflicts over the protection of this nation’s natural resources. A recent opinion poll found that 88 percent of Americans support the use of peer review in the application of the Endangered Species Act (ESA). The rising interest in peer review is the result of widespread unhappiness with natural resource policies, including ESA listing decisions and the establishment of ESA-sanctioned Habitat Conservation Plans (HCPs). Each of these interest groups believes that scientific peer review will support its particular viewpoint. The obvious problem is that they can’t all be right.

A more important problem is that peer review as traditionally applied to scientific research is inadequate for supporting decisions about managing species, lands, and other natural resources. It does not take into account the complex political, social, and economic factors that must inform natural resource decisions.

Peer review can provide a basis for improving natural resource decisions, for reconsidering past decisions, and for settling disagreements. But to function effectively, the review system needs to be much different from the one used widely in academia today. In the meantime, traditional peer review is being applied on an ad hoc basis to important endangered species and habitat conservation issues, leading to contentious outcomes. In the rush to implement a popular policy, we are setting a precedent that is only institutionalizing our confusion.

Everyone wants it

It is heartening that all sides want independent peer review; it seems that everyone acknowledges that better decisionmaking is needed. A survey by the Sustainable Ecosystems Institute found that at least 60 farming, ranching, logging, industrial, ecological, wildlife, religious, and governors’ organizations are calling for scientific review in the application of the ESA. This includes reviews of HCPs, which are agreements between government agencies and private landowners that govern the degree to which those owners can develop, log, or farm land where endangered species live.

Why are so many diverse groups eager to embrace peer review? There is widespread distrust of the regulatory agencies involved in ESA and dissatisfaction with their administration of the act. Many groups believe that agencies are making the wrong decisions. Disagreements among interested parties often end up in litigation, where judges, not scientists, make rulings on scientific merit. Most decisions to list species in the West, including those involving the northern spotted owl, marbled murrelet, and bull trout, have been made after lawsuits. Similarly, one approved HCP–the Fort Morgan Paradise Joint Venture project in Alabama, which would have affected the endangered Alabama beach mouse–was successfully challenged in court on the basis of inadequate science.

Many organizations see science as a way of reducing litigation. After all, judges are not scientists or land managers and are apt to make the wrong technical decision. Court actions are costly. Any means of reducing vulnerability to lawsuits is roundly favored.

There are striking differences in opinion as to where peer review is needed. Simply put, each group favors review of actions that it finds unpalatable. Development groups want fewer species listings and therefore demand review of listing decisions. Some professional and environmental societies oppose peer review of listings because such reviews would unnecessarily delay much-needed conservation measures. Environmental groups are concerned about habitat loss under HCPs and want them independently reviewed.

Regardless of their perspective, most groups want less litigation, less agency control, and greater objectivity. Many also see peer review as a tool for overturning wrong decisions. Regulatory agencies want to reduce vulnerability to litigation and develop greater public support. Agency staff, frequently doing a difficult task with inadequate resources, would prefer to have a strong system to rely on. It is always better to have a chance to do it right than to do it over.

The lure of hasty implementation

The move to implement some form of peer review is already under way. For example, the Magnuson-Stevens Fishery Conservation and Management Act calls for peer review in arbitrating disagreements over fisheries harvest levels. The U.S. Forest Service now calls for science consistency checks to review decisions about forest management. Unfortunately, the rush to implement peer review in haphazard forms has created many ad hoc and ill-conceived methodologies.

Enthusiasm for peer review is so high that it is now central to efforts to reform ESA. In 1997, the Senate introduced the Endangered Species Recovery Act, which would have required peer review and designated the National Academy of Sciences (NAS) to oversee the review process. But few academy members or the scientists who serve on NAS committees have made their careers in applied science or have worked in an area in which legal and regulatory decisions are paramount. The bill was shot down, but the governors of the western states have asked the Senate to reintroduce similar legislation in 2000. Whether or not legislation is taken up, it is clear that Congress wants better science behind natural resource decisions and sees peer review as the way to achieve it.

Most legislative and agency measures calling for peer review, however, do not describe how it should be structured, other than to say that it should be carried out by independent scientists. Yet an ill-conceived review process will just compound the problems. Furthermore, there is a tacit assumption that the pure academic model will be used. Although it is appealing to think that this system would work as well for management and policy decisions as it does for pure research findings, it won’t. Traditional peer review cannot be applied as some kind of quality control in a political arena. Indeed, some attempts to use peer review in this way have backfired.

What can go wrong

Development of the management plan for the Tongass National Forest, covering 17 million acres in Alaska, illustrates several problems in applying academic peer review to natural resource management. To make a more science-based decision regarding the management and protection of old-growth forests and associated wildlife species, the Forest Service set up an internal scientific review team that worked with forest managers on the plans. Because of federal laws governing the use of nonagency biologists, the service sent drafts to external reviewers, most of whom were academics. In reviewing the plan and the methodology, the service concluded that science had been effectively incorporated and that managers and scientists had worked well together. Indeed, service officials have portrayed the plan as a watershed event, bringing the service’s research and management arms together.

The conclusion of the external review committee was different. It independently issued a statement that was critical of the management proposed in the plan, concluding that, in certain aspects, none of the proposed actions in the plan reflected the reviewers’ comments. The committee insisted that “the Service must consider other alternatives that respond more directly to the consistent advice it has received from the scientific community before adopting a plan for the Tongass.” The reviewers noted that there were specific management actions that should be carried out immediately to protect critical habitat but that were not part of the plan. These included eliminating road building in certain types of forest and adjusting the ratio of high-quality and low-quality trees that would be cut in order to protect old-growth forests.

The Tongass experience holds several lessons. First, internal and independent reviewers reached opposite conclusions; decisionmakers were left to determine which set of opinions to follow. Whatever the choice, a record of dissent has been established that increases vulnerability to legal challenge and political interference. Second, the independent scientists felt ignored, which again increases the vulnerability of the decisions. Third, the independent scientists made clear management recommendations, believing that science alone should drive management decisions; most managers will disagree with this point of view. Thus, peer review in the Tongass case raised new problems. Confusion of roles and objectives was a major cause of these difficulties.

A different set of issues has arisen with the use of peer review in establishing two HCPs–one involving grasslands and butterflies in the San Bruno Mountains south of San Francisco, the other involving Pacific Lumber and old-growth forests near Redwood National Park. In both cases, scientific review panels were used from an early stage to guide interpretation of the science. The panels were advisory and scrupulously avoided management recommendations, sometimes to the frustration of decisionmakers. The panels avoided setting levels of acceptable risk and tended to use conservative scientific standards.

Another example comes from the State of Oregon Northwest Forest HCP, now being negotiated to cover 200,000 acres of second-growth forest that is home to spotted owls, murrelets, and salmon. The Oregon Department of Forestry sought reviews of its already-developed plan from 23 independent scientists representing a range of interest groups and expertise. Not surprisingly, diametrically opposed opinions were expressed on several issues. It will now be difficult to apply these reviews without further arbitration.

Hints of more endemic problems come from the Fish and Wildlife Service’s use of peer review for listing decisions. Typically, a few reviewers are selected from a group of scientists who are “involved” in the issue. But the service now reports that at best only one in six scientists contacted even replies to the request to serve as a reviewer. Those who do volunteer are often late with their responses or do not respond at all. Two problems are becoming clear: There is no professional or monetary benefit from being a reviewer, and many scientists are wary of becoming caught up in politicized review processes, which can become drawn out and expose them to attacks by interest groups.

Certain choices determine the effectiveness of a peer review process: how it is structured, who runs it, who the reviewers are, and how they are instructed and rewarded. Lack of attention to these details and blanket application of an academic model have already led to problems and will continue to do so.

Clearing the minefield

Peer review has always been a closed system, confined to the scientific community, in which the recommendations of usually anonymous reviewers determine the fate of research proposals or manuscripts. When scientific review is used outside this arena, problems arise because scientists, policymakers, managers, advocacy groups, and the public lack a common culture and language. Few scientists are trained or experienced in how policymakers or managers understand or use science. Scientists may be tempted to comment on management decisions and indeed are often encouraged to do so. However, they are rarely qualified to make such pronouncements. Natural resource managers must make decisions based on many factors, of which science is just one. Inserting academic peer review into a management context creates a minefield that leads to everything from misunderstanding to disaster.

More appropriate applications of peer review can be designed once the major differences between academic and management science are understood. They involve:

Final decisions. Scientists are trained to be critical and cautious and to make only statements that are well supported. Managers must make decisions with whatever information is available. Scientists usually send incomplete work back for further study; managers typically cannot. Managers must also weigh legal concerns, public interest, economics, and other factors that may have little basis in hard data.

“Best available” science. Managers are instructed to use the best available science. Scientists may regard such data as incomplete or inadequate. Reviewers’ statements that the evidence in hand does not meet normal scientific standards will be irrelevant to a decisionmaker who lacks alternatives and must by law make a decision.

Competing ideas. In pure science, two competing theories may be equally supported by data, and both may produce publishable work. Management needs to know which is best to apply to the issue in question.

Reviewers as advocates. In academia, it is assumed that a reviewer is impartial and sets aside any personal biases. In management situations, it is assumed that reviews solicited from environmental advocates or development interests will reflect those points of view.

Speed. Academic reviews are completed at a leisurely pace. This is not acceptable in management situations.

Anonymity and retaliation. Academic reviews are typically anonymous to encourage frankness and discourage professional retaliation. Reviews in management situations usually must be open to promote dialogue. Some scientists will be reluctant to make strong statements if they are subject to public scrutiny.

“Qualified” versus “independent.” Often the scientists best qualified to be reviewers of a natural resource issue are already involved in it. Many HCP applicants, for example, do not want “inexperienced” reviewers from the professional societies. They prefer “experienced” scientists who understand the rationale and techniques of an HCP. This sets up a tension between demonstrable independence and depth of understanding.

Language. Managers and decisionmakers may not be familiar with the language of science. Statistical issues are particularly likely to cause confusion.

Reward structure. In academic science, reviews are performed free of charge for the common good and to add to scientific discourse. Hence they are typically given a low priority. In management situations, this will not work. Rewards–financial and otherwise–are necessary for timeliness and simply to encourage reviewers’ interest in the first place.

A new model

The troublesome experiences in recent cases such as the Tongass and appreciation of the different roles of academic and management science reviewers point the way to more effective integration of peer review into resource management decisions. The following principles provide a starting point:

  • The goals of peer review in each case must be clearly stated.
  • Clear roles for reviewers must be spelled out.
  • Impartiality must be maintained to establish credibility.
  • A balance must be sought between independence and expertise of reviewers.
  • Training of reviewers may be necessary.
  • A reward structure must be specified.
  • Early involvement of scientists will give better results than will post-hoc evaluations.

Three other lessons are evident. First, because academic scientists are rarely familiar with management, the individual or organization coordinating the review needs to be experienced in both fields. The traditional sources of these “science managers”–academic institutions, professional societies, or regulatory agencies–either lack the necessary experience or are not seen as independent. We need a new system for administering peer review.

Second, a mediator or interpreter who clarifies roles and eliminates misunderstandings can be highly effective. Scientists may need pressing on some points and at other times may need to be dissuaded from trying to be managers. Conversely, managers who lack advanced training in disciplines such as statistics may need help in interpreting scientific statements on issues such as risk. The interpreter can also be a gatekeeper for scientific integrity, ensuring that reviewers do not become advocates, either voluntarily or under pressure.

Third, a panel structure gives more consistently useful results. This is probably the result of panelists discussing issues among themselves. Although panels can produce conflicting opinions, they appear more likely to give unequivocal results than would a set of individual reviews.

There is enthusiasm for science and peer review among most parties involved with ESA and general natural resource management. But there is little consensus on how to make the process succeed. Nationally, we lack the necessary infrastructure for implementing peer review as a useful tool. In each case, environmentalists, developers, and any other regulated parties should be asked to design the appropriate system, because they will then be more likely to accept its results. This means that advice on forming such groups and oversight of their progress would be needed. Peer review cannot be guided by managers alone or by scientists alone. We need independent technical groups that have the necessary diverse skills but are seen as impartial.

Whichever route is taken, a better approach to peer review must be created. The rush to impose the old academic model must stop before it creates even more problems. By taking the time to properly devise review systems, we can ensure that the scientific voice is effective, understood, and utilized.

Forging Environmental Markets

To achieve a truly sustainable environment, we must recognize that environmental improvement and economic growth can and do go hand in hand–that environmental improvement is a market just like any other. Indeed, if environmental improvement is approached as a market, then many of its presumed conflicts with economic growth evaporate. For businesses, environmental improvement can provide lower costs and growing worldwide economic opportunity. For the public, it provides trillions of dollars worth of benefits, plus significant insurance against major disasters.

Normally, business leaders and economists would welcome such a huge market–already roughly $180 billion in the United States and more than $500 billion worldwide–as a major opportunity. For comparison, the world market for semiconductors in 1999 was about $150 billion. But the terminology and rhetoric of the environmental field have so confused and polarized thinking that this fact and its implications are generally overlooked. Despite the huge economic gains shown by virtually every careful study, environmental improvement generally is referred to as a “cost” by most business executives, political figures, and policymakers. Yet, like other industries, environmental improvement responds to a valid demand, and it creates jobs, business opportunities, investment returns, tax revenues, profits, and positive benefits to citizens. In short, environmental improvement is a market rather than a cost.

The terminology used in national economic accounts and policy dialogues often is seriously misleading. National accounts neither recognize the lost values and actual costs currently incurred by pollution nor measure and offset returns from environmental improvements against the investments that created them. Further, in political discussions, attempts to force companies and users to bear the full costs of their actions are generally termed “taxes,” not “cost recovery.” Those who oppose internalization of such costs argue that environmental improvements “decrease national productivity and competitiveness” and hence reduce job opportunities and growth. But far from decreasing competitiveness, environmental improvements have greatly reduced costs for most businesses. A widely accepted Environmental Protection Agency (EPA) report, The Benefits and Costs of the Clean Air Act of 1970 to 1990, estimated that every dollar spent on “depollution” reduced health costs by $20, and that the U.S. economy experienced gains of $6.4 trillion (with a credible range of $2.3 trillion to $14.2 trillion) as a result of the initial 1970 act alone.

Terminology matters. When the media report an act of mass violence, perceptions change greatly if the actor is called a “terrorist” as opposed to a “freedom fighter.” The case is similar with the terminology of “costs” (implying losses) versus “markets” (implying gains). Ignoring the real costs of existing environmental degradation and the overwhelming contributions that improvements make to productivity skews public perceptions and focuses the environmental debate around wrong issues and data. Thinking and structuring data in terms of “environmental markets,” where all parties internalize their full costs and satisfy a real public demand, can enable a more reasoned dialogue, make environmental investments easier for the public and policymakers to comprehend, place alternative uses of resources (that is, for environmental improvement versus product manufacture or public transfer payments) on a sounder basis, and help allocate national resources more effectively.

The past two decades have proved the social and growth benefits of private markets for most societies. The true effectiveness of a market-driven economy, however, depends largely on: 1) fair pricing of various alternatives based on their real total cost, 2) transparency and information about the value and cost of alternatives, and 3) relatively equal capacity to purchase and innovate in each market. Properly developed, environmental markets stimulate all three components of market efficiency, thus improving overall economic efficiency. Unfortunately, the past practice of considering environmental conditions as “externalities” has seriously distorted resource allocations through implicit subsidies to producers and users of polluting systems.

Such subsidies encourage underpricing and overuse of the polluting industry’s products while discouraging innovation both in that industry and in competing ones. When marketlike structures and incentives are used, innovation in these and supporting industries generally creates environmental results that are better than initially expected at costs that are significantly lower than expected. Entire new industries–for advanced sensors, new fertilizers and farm technologies, large-scale modeling, lightweight materials, and high-performance engines, to name but a few–often have been stimulated, along with improved environmental outputs, most of which are not captured by national account data.

Recognizing environmental improvement as a market dramatically changes the calculus justifying environmental expenditures. Policy discussions generally have demanded that environmental expenditures be justified on the basis of “lower costs” for the society. But no analyst would demand that the automobile, fashion, entertainment, or furniture industries justify themselves in terms of “cost savings.” These are merely valid demands calling for the resources for their satisfaction, relative to other demands. Similarly, environmental improvement satisfies real demands (for clean air, water, etc.) and creates major new markets for supplier industries.

Indeed, environmental markets drive today’s demand for many new technologies. In many areas, environmental targets even have replaced traditional consumer product, industrial process, or military technological goals as drivers of scientific and entrepreneurial endeavor. And these technologies will undoubtedly create hosts of unexpected new free-standing market opportunities. Properly developed, environmental markets stimulate real economic growth.

Growth opportunities

Any opportunity, public or private, that calls forth previously uncommitted energies and resources can create growth. The only difference between environmental markets and private markets is that demand from many private individuals must be aggregated to purchase environmental amenities. Joint purchases (or “public markets”) create exactly the same growth opportunities as private markets. For example, when an individual works harder to buy an automobile, that private action stimulates growth. If he and his neighbors jointly buy the same car for a carpool, they provide an equal stimulus to the economy. If 1,000 citizens buy a vehicle as a public school bus, they create the same direct sales and jobs.

Generally, two conditions must be present for public markets to create real growth. First, there must be some underemployment of people and capital in the society–conditions that exist in the United States today, given that there are underemployed people ready to work, unused technologies that could free people for other tasks, and undertrained and poorly managed work forces that produce less than they could. Second, the demand must be valid–people must want the publicly purchased amenity more than other goods and services they could buy as individuals with the same resources. Although such preferences are hard to measure explicitly, they often can be determined within reason by marketlike choice mechanisms. Or, the benefits to citizens can be so clearly within the government’s constitutional charter that it would be remiss not to create these markets on its own. Together, public markets (aside from defense) are very large. We estimate that public markets accounted for over $2 trillion (27 percent) of the 1995 U.S. economy, including such things as health care, education, pollution abatement, law enforcement, and public transportation expenditures.

Importantly, studies repeatedly show that citizens prefer environmental improvement to many other public programs. Economists identify several classes of large environmental benefits that consumers would be willing to pay for. These include:

Consumptive benefits. People draw on a wide variety of natural “products,” such as lumber, fish, and drinking water. In addition to the raw value of these products, there are multiplier values of between about 1.4 and about 4 in translating sales of harvested outputs into use values. For example, in the fish and lumber areas, there are 40 to 300 percent more service jobs supporting these industries than there are jobs for commercial fishers and loggers. Product output can be maintained in perpetuity if the rate of biological resource growth matches that of harvesting. Unfortunately, because the asset values of the environment are neither properly recognized nor priced, they are frequently liquidated without their decline showing in national accounts.

Health benefits. It is estimated that environmentally related diseases (mostly due to water pollution) kill some 11 million children worldwide each year. Chemicals in the air are estimated to kill from 200,000 to 575,000 people per year. Many more individuals suffer poor health, which increases their health care costs, reduces job productivity, and lowers economic well-being. Conversely, studies show that many knowledge-intensive companies choose to locate in areas of the United States–such as Seattle, San Francisco, Phoenix, Denver, Boston, Minneapolis, or Raleigh-Durham–where the surrounding environment is attractive, thus creating higher property values and economic growth.

Ecosystem services. Ecosystem services are natural processes, such as forests absorbing pollutants or moderating rain runoff and erosion, that would have to be replaced artificially if natural habitats were removed or degraded. The value of these environmental amenities can be very large. For example, the market value to cities and farms of clean water running out of the Sierra Nevada mountains in California is approximately twice the value of the Sierra timber, grazing, and tourist industries combined.

Nonmarket benefits. Nonmarket benefits are amenities that are not generally sold or consumed directly but for which consumers are nevertheless willing to pay. Such benefits include, for example, the aesthetic appeal of a scenic location or of viewable wildlife. Environmental economists often classify these benefits into several categories: existence values (satisfaction in knowing that nature is protected); stewardship values (maintaining nature for future generations); option values (for example, preserving a forest because pharmaceutical substances may later be discovered in some of its species); and avoidance of risk (preserving the natural state to avoid the unpredictable effects of perceived alternatives).

Private impacts of public markets

Unfortunately, those who seem to benefit most directly from environmental improvements are often not those who must make the initial expenditures to achieve it. For example, automobile companies initially resisted 1970s air quality (and safety) regulations they thought would disastrously decrease auto markets. In retrospect, there is little evidence that the overall industry suffered large continuous losses, although brand switching toward innovators was common at first. However, there is much evidence that the regulations’ primary long-term effects were to stimulate enormous innovation in automobiles to lighten their structures, improve fuel efficiency, and add new features to the automobiles themselves. Profitable supplier industries and individual automobile companies grew to provide catalytic converters, airbags, seat belts, lighter metals and plastics, and fuel-saving features; and some companies became niche players in the “efficient” or “safe” car markets. New competitors, such as Honda and Orbital Engines, came in to exploit the need for higher engine efficiencies, driving the rest of the industry to match their performance. A further wave of innovation swept through the fuels industry to improve combustion without the use of noxious lead products.

As a result of such shifts in this “public market,” between 1970 and 1990 the United States saw a 40 percent emission reduction in sulfur oxide, a 45 percent reduction in volatile organic compounds, and a 50 percent reduction in carbon monoxide. Ozone concentrations decreased by 15 percent, airborne lead by 99 percent, and primary suspended particles by 75 percent. And largely because of other public market activities, such as the adoption of improved automobile safety features and the construction of superhighways, U.S. traffic fatalities dropped from 54,589 in 1972 to 40,115 in 1993–even though the period saw an increase of 60 million licensed drivers, collectively logging more than a trillion more miles per year. These gains, mostly unmeasured in national economic accounts, represented real benefits for individuals.

When regulations are proposed, the affected industries tend to exaggerate potential hardships. Cost estimates based on practices at the time of imposition are generally too high, as innovations almost always quickly lower costs. If regulations use flexibly designed market mechanisms and reward higher performance achievements–as opposed to specifying particular technologies or existing “best practices”–innovators often create solutions that generate both better outputs and lower costs than anyone could forecast at the time.

The town of Trenton, Michigan, presents a classic example. In the early 1950s, when the town refused to allow McLouth Steel to install Bessemer converters because they produced too much air pollution, the company began a search for alternative processes. This led to the first major U.S. installation of the so-called basic oxygen process for making steel. When diffused through the steel industry, the cost savings and value gains of this innovation alone would more than pay for the industry’s highly touted air depollution costs, forcing further innovations in competing processes as well.

Creating public markets effectively

Large-scale studies have shown that governments can promote economic growth by intervening to create parity between environmental and other markets. How to optimize the benefit in social, business, and economic growth terms is the crucial issue.

Many economists and policymakers are beginning to stress marketlike incentive mechanisms to create environmental markets. Special problems exist in this marketplace: Benefits may be highly diffuse or not accrue directly to those who must make needed expenditures; front-end costs are immediately visible and measurable, whereas benefits are often hard to quantify; and no one knows, at the outset, what potential solutions really exist and how other systems or the public may ultimately respond. Most of these (with the exception of the lack of match-up between payers and beneficiaries) are precisely the elements that markets handle best. The major problem is understanding and aggregating the frequently diffuse demands for environmental improvement in a way that minimizes actual costs while optimizing their balance with competing demands.

Economists use two starting points to establish demand potentials in a public market. The first approach is direct: Economists ask and analyze preference questions. The simplest questions are: How much money would you be willing to pay to have a defined level of cleaner water, air, etc.? What amount of money would you demand to allow someone else to decrease the quality of that resource by a specified amount? The problems of achieving accuracy in such surveys are well known, but most studies show that people would demand between two and six times more to accept a loss in current quality-of-life levels than they would be willing to pay to achieve or improve these levels. The second approach is indirect, but typically proves more precise: Economists try to create a real-world market in which interested parties buy and sell real or surrogate assets and solutions. In this way, economists can analyze how people actually perform in trade-off market situations. For example, how much do people pay to vacation in a national park or fish in a clean wilderness area rather than fish or picnic locally? Such studies can include aesthetic (noncost) factors and provide initial baselines for reasonable environmental expenditures. However, they do not measure the value nonusers might place on the resource.
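For readers who want to see the mechanics, the sketch below illustrates both approaches in miniature. All of the figures (survey answers, household counts, visit rates, travel costs) are invented for illustration, and the linear demand fit is a deliberate simplification of real contingent-valuation and travel-cost studies.

```python
# Illustrative sketch only: survey responses, visit rates, and household
# counts are invented; the linear demand fit simplifies real
# contingent-valuation and travel-cost methods.

# Direct (stated-preference) approach: aggregate survey answers to
# "How much would you pay per year for cleaner water?"
stated_wtp = [12.0, 40.0, 0.0, 25.0, 8.0, 60.0]  # dollars per household per year
households = 150_000                             # hypothetical affected population
mean_wtp = sum(stated_wtp) / len(stated_wtp)
print(f"Stated-preference estimate: ${mean_wtp * households:,.0f} per year")

# Indirect (revealed-preference) approach: infer demand from how visits to a
# clean recreation site fall as the travel cost of reaching it rises.
visits_at_cost = {10: 90, 30: 60, 50: 35, 70: 15}  # travel cost ($) -> visits per 1,000 residents
costs = list(visits_at_cost)
visits = [visits_at_cost[c] for c in costs]
n = len(costs)
mean_c, mean_v = sum(costs) / n, sum(visits) / n
slope = (sum((c - mean_c) * (v - mean_v) for c, v in zip(costs, visits))
         / sum((c - mean_c) ** 2 for c in costs))  # least-squares slope (negative)
intercept = mean_v - slope * mean_c
choke_cost = -intercept / slope                    # travel cost at which visits fall to zero
current_cost = 10
# Consumer surplus: area under the fitted demand curve between the current
# travel cost and the choke cost, per 1,000 residents per year.
surplus = 0.5 * (choke_cost - current_cost) * (intercept + slope * current_cost)
print(f"Revealed-preference surplus: ${surplus:,.0f} per 1,000 residents per year")
```

The first estimate depends entirely on what respondents say; the second rests on what visitors actually do, which is why the text notes that the indirect route typically proves more precise but misses nonuser values.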

Because of the very high potential values of the environment and the tendencies of affected players to distort their estimates of costs and effects of changes (up or down), careful baseline data studies are crucial to sound analysis and policy. Markets operate most effectively when data are abundant and transparent. As in finance and trade, governments have a critical role to play in providing a reliable and neutral framework of environmental data under which private parties can appropriately value resources, examine trade-offs, and evaluate trends.

Advances in data warehousing technology and information searching have begun to make digital archives of national environmental information available and useful to nonspecialists. But databases are only as useful as the consistency, coverage, and quality control of their contents. Unfortunately, the newness of the field and the number and fragmented nature of interested parties have delayed the development of cohesive definitions, standards, methods, and technologies for environmental data. Since no coherent international standards exist, most countries’ environmental data systems are like giant libraries without a useful catalog.

Accessible, objective data are especially important where pollutants can be invisible, valuable resources (such as fish, rare plants, or unknown chemicals) are difficult to observe or identify, monetary values are not usually explicit, and options may be wide-ranging. It often takes considerable information, education, and time to evaluate alternative choices objectively. For example, the U.S. public appears to be much more concerned about low-level radioactive wastes, such as those from hospital laboratories, than about radon. Yet the National Research Council estimates that in the United States, radon causes 2,100 to 2,900 cancer deaths per year in nonsmokers and contributes to approximately 10 times that many deaths in smokers. By contrast, there have been few if any clear demonstrations of individuals in the United States dying from environmental exposure to low-level radioactive wastes.

Some observers would argue that such skewed risk perceptions are inherent in a market system. The challenge to government and business is to develop environmental incentives and mechanisms that are based on data and that address the important sources of market failure, prevent major disasters, and promote overall market efficiency. Important in the latter is finding mechanisms to publicize successes, improve understanding of trade-offs, and avoid media distortions of fear and false claims. Education and objective data are the best hopes.

Creating marketlike mechanisms

For practical political reasons, governments have tended to introduce approaches to achieving environmental improvement in the following general order: practicing direct top-down regulation, taxing undesirable actions, creating property rights to environmental benefits, providing insurance against regulatory or legal risks, and empowering “stakeholder” negotiations. Each, to some extent, addresses various common causes of market failure–externalized costs, failure to aggregate demand, fairness, and information–but each has its own peculiar strengths and weaknesses. Only in the late 1980s and 1990s have national governments enthusiastically embraced marketlike (rather than regulatory) strategies.

Direct regulation. The National Environmental Policy Act, the Clean Water Act, the Clean Air Act, and the Endangered Species Act of the early 1970s are at their heart top-down regulatory approaches to environmental protection. Although generally resisted by industry, which had no incentive (other than avoiding penalties) to comply, direct regulation forced more rapid responses than might otherwise have occurred, eliminated many egregious acts of environmental dumping, and started a series of learning processes about how to measure effluents, estimate effects, and stimulate desired responses in industry. The government soon learned that it had to provide longer lead times, create a stable regulatory environment, and spend extensive energy to monitor and enforce its regulations. Nevertheless, the mandates forced polluters to internalize more of their pollution costs, enforced greater fairness in distribution of benefits and costs, and aggregated demand for a better environment through the political process.

But top-down regulations often proved inefficient because regulators ignored marginal costing. They routinely demanded that “the best available technology” be applied regardless of cost. The assumption behind such regulation was punitive: that producers were maliciously avoiding depollution investments and had to be punished for their recalcitrance. This approach often backfired, because it discouraged innovation that might lead to better results or lower costs. In a current example, regulations first passed in California in 1990 require that automobile companies selling in that state produce a substantial number of “zero-emission vehicles” by 2003. To ensure access to the country’s largest auto market, manufacturers have essentially been obligated to develop battery-powered electric cars. If enforced, the specification of a particular technology will result in uneconomical vehicles that will require subsidies, pose high contamination risks from battery disposal, and promote increased effluents from power stations needed to produce electricity for recharging. Resources spent on battery-powered cars become less available for potentially more promising solutions, such as new engine designs, public transit, increased use of bicycles, and better traffic control capabilities.

Despite such flaws, the genius of all four laws turned out to be their information requirements. Both agencies and industry are required to publicly state the potential environmental consequences of their actions in ways that allow the public or interveners to evaluate them and to use political or court processes to challenge actions. Although not always efficient, these processes have created much more objective awareness by all parties of environmental effects and a method to resolve issues.

Targeted taxation. A more marketlike alternative, which is particularly effective when the impact of effluents is highly diffuse either in production or consumption, is to tax undesired actions on a unit purchase (or user fee) basis to increase their cost. A unit fee such as that on fuels (set at a level where total revenue generated just offsets total externalities created) makes the total market economy more efficient as well as providing incentives for producers and consumers to make more cost-effective decisions. In addition, unit fees provide funds to monitor actions and to cover costs for innocent parties injured by pollution. Such fees are most useful when pollution moves from many diffuse sources toward many diffuse recipients, as does auto air pollution, solid wastes created by packaging, home fuels consumption, or runoff from farms. User fees clearly are not appropriate for other situations involving point sources of emissions or extremely intensive downstream concentrations of damage from emissions (such as sewage or smokestack toxins). In these cases, localized monitoring, effluent fees, or release penalties may provide much more direct market responses and match-ups of compensation versus injury.

Unit charges set at a level where total revenues just offset total externalities make the total market more efficient and provide added incentives for producer innovations and voluntary consumer choices of more cost-effective products or services. Assessing fees or taxes on those who are currently or potentially charging the society for their support (for example, energy, water, fertilizer, or gasoline consumers) makes more economic sense than do general sales or income taxes, which affect those selling services or products at full cost. Since relatively small environmental-use taxes, such as carbon or gasoline taxes, can raise very large amounts of money, they can be used to decrease the level of personal income taxes or other sales taxes, thus encouraging further growth and entrepreneurship in more socially responsible areas.
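A minimal sketch of that unit-fee arithmetic follows; the damage and sales figures are wholly invented stand-ins for the careful estimation a real fee would require.

```python
# Hedged illustration of a unit fee set so that total revenue just offsets
# total externalities; the damage and consumption figures are assumed.
total_external_cost = 4.2e9   # dollars per year of health and cleanup damages (assumed)
units_sold = 60e9             # gallons of fuel sold per year (assumed)

unit_fee = total_external_cost / units_sold
revenue = unit_fee * units_sold

print(f"Per-gallon fee that internalizes the damages: ${unit_fee:.3f}")
print(f"Revenue raised (just offsets the externalities): ${revenue:,.0f}")
```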

Creating property rights. An interesting extension of marketlike approaches is to convert some component of emissions into private property rights that can be bought or sold. Individually tradable quotas for releasing certain classes of pollutants are an example. Highly toxic pollutants must, of course, be absolutely prohibited. For other pollutants, whose danger increases with exposure, polluters receive a permit to release a fixed quantity of the pollutant depending on a combination of their current production of the effluent and an aggregate standard that, if implemented, would achieve desired health effects in a reasonable cost-benefit fashion. If the producer can reduce its emissions, it can sell the balance of its quota to another party. If it wishes to increase pollution, it must buy rights from a willing seller. New entrants must purchase quotas from existing holders.

The net result is that the amount of pollution is, at a minimum, held constant at current cost-benefit ratios. But each party has an incentive to improve performance. Those who can reduce emissions inexpensively have an incentive to do so and to sell the amount saved at a higher price to someone whose reduction costs are high. The government, aggregating the demands of consumers, can decide on the total level of pollution by adjusting quotas over time. Participants in the market decide on the value of rights and seek the least expensive way to reach goals. Although at first attacked as granting “rights to pollute,” tradable permits are gaining wide acceptance among business and environmental interests.
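The incentive can be made concrete with a two-firm sketch. The firms, their abatement costs, and the cap below are hypothetical; the point is only that trading leaves total emissions at the cap while both parties come out ahead.

```python
# Hypothetical two-firm illustration of tradable emission quotas.
CAP = 100                                      # total tons permitted by the regulator
quota = {"FirmA": 50, "FirmB": 50}             # initial allocation, tons
unabated = {"FirmA": 50, "FirmB": 70}          # tons each firm would emit with no controls
abatement_cost = {"FirmA": 200, "FirmB": 900}  # dollars to cut one ton of emissions

# FirmB is 20 tons over its quota. It can abate at $900/ton or buy permits
# from FirmA, which frees them up by abating at only $200/ton.
shortfall = unabated["FirmB"] - quota["FirmB"]
price = 550                                    # any price between $200 and $900 works

firm_a_gain = shortfall * (price - abatement_cost["FirmA"])  # sells the freed quota
firm_b_gain = shortfall * (abatement_cost["FirmB"] - price)  # buys rather than abates

emissions = (unabated["FirmA"] - shortfall) + unabated["FirmB"]  # A abates 20, B emits 70
print(f"Emissions after trading: {emissions} tons (cap = {CAP})")
print(f"FirmA gains ${firm_a_gain:,}; FirmB saves ${firm_b_gain:,}")
```

Because the cheapest reductions are made first, the same environmental outcome is reached at lower total cost than if each firm had been forced to meet its quota in isolation.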

Similar approaches are being taken to develop markets for privatized environmental amenities. Water supply, particularly in the arid U.S. West, has long relied on a system in which individuals own the rights to use certain amounts of water from rivers and reservoirs. Until recently, however, it was difficult for an “owner” to sell low-cost water rights to other users who wanted it for more valuable purposes. For example, farmers who sold such rights might lose them, since the sale would show that they did not need all they were allocated. But in the past several years, the U.S. Department of the Interior has begun to promote public water right purchases as a solution for developing adequate water supplies for cities and for wildlife, including in the San Francisco estuary, where the department assured farmers that the government would purchase water in an effort to help preserve some threatened fish species. Farmers responded by planting crops, such as wine grapes, that had higher value and required less water and by installing efficient drip irrigation systems. Together, such steps have led to the release of more than a million acre-feet of water for sale to urban and environmental users.

There also are other ways to assign property rights, each of which has substantial economic consequences. A well-known example is the assignment of ownership rights for potential pharmaceutical products discovered in nature but developed privately. But under current U.S. law, naturally occurring chemicals cannot be patented, meaning that if a cure for cancer were discovered in a rainforest, it could be freely copied, making it difficult for those protecting the rainforest to profit from any new products made available through that protection. The Convention on Biodiversity calls for countries of origin to own rights to natural products found within their borders, which assists conservation but removes the incentives for drug companies to undertake the expensive task of isolating potential drugs and bringing them to market. In one effort to bypass this problem, Costa Rica has agreed to facilitate bioprospecting by the U.S. drug company Merck in exchange for royalties from any marketable drugs developed. However, it is unclear what protection either will have for natural products developed in the partnership.

Insurancelike mechanisms. A relatively new class of insurancelike mechanisms offers other opportunities, particularly for protecting endangered species. Rather than limiting activities on all lands containing endangered species, agencies now can encourage landowners and neighbors to develop Habitat Conservation Plans (HCPs) to voluntarily set aside enough land to plausibly protect the species in the long run. In some cases, these are multispecies, multihabitat plans that cover not only the species now enumerated in the law but also species that might be protected in the future. In exchange for agreeing not to use some of their land commercially, the landowners obtain long-term contracts (termed “no-surprises agreements”) with the Interior Department agreeing not to impose new regulations on them, typically for 100 years. The use of HCPs has exploded, from 14 created from 1982 to 1992, mostly on small urban tracts, to more than 300 HCPs today.

But improvements in this area are needed, and proper controls for these plans are evolving. A recent National Science Foundation study, which involved 119 scientists from eight universities, examined 208 HCPs. The researchers found that the plans typically lacked enough scientific data to determine whether they provided adequate protections, and that few if any of the plans had mechanisms for monitoring their success or modifying them if they failed. A longer-term risk of this approach is the possibility of collusion between government administrators and industry–not unknown in the past management of U.S. forests.

Ideally, private parties, not the government, would provide environmental insurance. For example, company A could anticipate external costs (increased pollution or local species losses) that might be caused by its planned expansions and prepurchase offsetting reductions from company B. Company A then has an incentive to invest in environmental benefits early, before they become of regulatory concern and the price rises. Company B, which has set aside the land, has an interest in stronger regulation, which would increase the value of its environmental assets. Government’s role in such transactions would be in agreeing that the “insurance” meets regulatory requirements in preventing collusion or misrepresentation by the companies and in ensuring that adequate sanctions exist in the event of either party’s failure, perhaps in the form of reinsurance by third parties.
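A stylized sketch of that incentive, with invented prices and quantities, shows why such prepurchases could be attractive to both sides.

```python
# Hypothetical illustration of prepurchasing environmental offsets before
# regulation (and prices) tighten; all figures are invented.
planned_new_emissions = 500      # tons per year expected from Company A's expansion
price_today = 120                # dollars per ton for offsets Company B can supply now
price_after_regulation = 400     # dollars per ton expected once the pollutant is regulated

cost_now = planned_new_emissions * price_today
cost_later = planned_new_emissions * price_after_regulation

print(f"Buy offsets now:   ${cost_now:,}")
print(f"Buy offsets later: ${cost_later:,}")
print(f"Company A's incentive to act early: ${cost_later - cost_now:,}")
# Company B, having set the reductions aside, gains from any rule that raises
# the later price, which is the alignment of interests noted in the text.
```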

Empowering stakeholders. Because of the distractions and costs of having governments directly stimulate and enforce the terms of environmental markets, there is a growing political consensus that bypassing centralized regulatory approaches and fostering more direct conflict resolution among stakeholders (such as landowners, local businesses, local governments, and environmentalists) at ecologically relevant scales (such as watersheds) is the wave of the future. Of course, some regulatory involvement will remain critical: The incentive for warring local interest groups to negotiate is often that if they cannot reach a consensus, a distant bureaucrat will impose rules on them. For example, the 1995 San Francisco Bay-Delta Accord launched the 30-year, multibillion-dollar “CalFed” effort to restore water quality and fisheries in the San Francisco estuary. The agreement was signed by often-warring water interests only hours before a deadline set by the federal EPA, after which EPA was prepared to impose its own water-quality plan to protect threatened fish under the auspices of the Clean Water Act.

Unfortunately, in most localities there often is insufficient objective information or technical expertise to balance in a comprehensive way the costs and benefits of land use, water quality, rare species, population growth, chemical pollution, risks of flood and fire, and so on. To the degree that this information exists at all, it is likely to be held at multiple levels and points within different government units and in different formats. Recognizing that stakeholder negotiations typically get nowhere until industry, environmentalists, and government agree on common facts, the Clinton administration has been active in developing shared standards for environmental data sets and in requiring agencies to make their data available over the Internet. For example, the Federal Geographic Data Committee, representing all environmental research and land management agencies, has set mandatory standards for federal remote sensing and mapped data. In addition, access to biological data is being centralized under the National Biological Information Infrastructure, and most agencies have extensive Web sites permitting the public to browse their data holdings. Unfortunately, data collection has not kept pace with improvements in access. Over the past decade, the federal government has substantially curtailed field monitoring by major environmental data collection agencies such as EPA and the U.S. Geological Survey.

Taken together, then, some important pieces are in place to move the nation away from its past punitive approaches to environmental improvement, which often have led to high costs in terms of both pollution and cures. To foster the growth of environmental markets, the research, standard-setting, and regulatory functions of government will be critical, just as they have been in the development of many other markets, such as pharmaceuticals, communications, transportation, and the food industries. Developing environmental markets further requires improved science and data capabilities, sophisticated monitoring systems, new marketlike incentives, and strong constituencies to maintain the intended balances between economic and public environmental benefits.

But at the very heart of the matter, parties on all sides must work to create a new intellectual framework and terminology: that environmental improvement is a valid market whose demands and satisfaction ought to compete fairly with all other consumer and commercial markets. Properly managed, that market can create great economic growth opportunities for the future.

Biodiversity and population growth

How important is population growth to current global biodiversity loss? Although there is no credible numerical answer to that question, the bulk of the evidence suggests that population growth is and has been an important underlying cause of biodiversity loss. Perhaps most worrisome is that some of the most rapid human population growth is occurring in the vicinity of some of the world’s biologically richest yet most vulnerable habitats.

We recently examined rates of population growth (including migration) and density in 25 “biodiversity hotspots,” areas identified by Conservation International as especially rich in endemic species but which have experienced dramatic reductions in the amount of original vegetation remaining within their boundaries. Nearly one-fifth of humanity (more than 1.1 billion people) lives within the hotspot boundaries, despite the fact that they enclose only one-eighth of the planet’s habitable land area, according to 1995 population data. In all, 16 of the 25 hotspots are more densely populated than the world as a whole, and 19 have population growth rates faster than the world average. In addition, more than 75 million people, or 1.3 percent of the world’s population, now live within the three major tropical wilderness areas (Upper Amazonia and Guyana Shield in South America, the Congo River Basin of central Africa, and New Guinea and adjacent Melanesia).

The 25 Global Hotspots

  Hotspot | Area (thousands of sq. km) | Human Population, 1995 (thousands) | Population Density, 1995 (per sq. km) | Population Growth Rate, 1995-2000 (percent per year) | Original Extent of Vegetation (thousands of sq. km) | Original Extent Remaining Intact | Original Extent Protected
1 Tropical Andes 1415 57,920 40 2.8 1258 25% 6.3%
2 Mesoamerica 1099 61,060 56 2.2 1155 20% 12.0%
3 Caribbean 264 38,780 136 1.2 264 11% 15.5%
4 Atlantic Forest Region 824 65,050 79 1.7 1228 8% 2.7%
5 Chocó-Darién-Western Ecuador 134 5,930 44 3.2 261 24% 6.3%
6 Brazilian Cerrado 2160 14,370 7 2.4 1783 20% 1.2%
7 Central Chile 320 9,710 29 1.4 300 30% 3.0%
8 California Floristic Province 236 25,360 108 1.2 324 25% 9.7%
9 Madagascar and Indian Ocean Islands 587 15,450 26 2.7 594 10% 1.9%
10 Eastern Arc Mts. & Coastal Forests 142 7,070 50 2.2 30 7% 17.0%
11 Guinean Forests of West Africa 660 68,290 104 2.7 1265 10% 5.6%
12 Cape Floristic Province 82 3,480 42 2.0 74 24% 19.0%
13 Succulent Karoo 193 460 3 1.9 112 27% 2.1%
14 Mediterranean 1556 174,460 111 1.3 2362 5% 1.8%
15 Caucasus 184 13,940 76 -0 3 500 10% 2.8%
16 Sundaland 1500 180,490 121 2.1 1600 8% 5.6%
17 Wallacea 341 18,260 54 1.9 347 15% 5.9%
18 Philippines 293 61,790 198 2.1 301 8% 1.3%
19 Indo-Burma 2313 224,920 98 1.5 2060 5% 5.2%
20 Mountains of South-Central China 469 12,830 25 1.5 800 8% 2.1%
21 Western Ghats and Sri Lanka 136 46,810 341 1.4 183 7% 10.4%
22 Southwest Australia 107 1,440 13 1.7 310 11% 10.8%
23 New Caledonia 16 140 8 2.1 19 28% 2.8%
24 New Zealand 260 2,740 11 1.0 271 22% 19.2%
25 Polynesia / Micronesia 46 2,900 58 1.3 46 22% 10.7%

By 1995, population density in the biodiversity hotspots was almost twice that of the world as a whole, and 16 of the 25 hotspots were more densely populated than the world average of 42 people per square kilometer. As of 1995, the three major tropical wilderness areas were still populated at relatively low densities.

Source: Population Action International; data from NCGIA/CIESIN, 1998.

The total population of the 25 hotspots is growing 1.8 percent annually, compared to 1.6 percent for developing countries and 1.3 percent for the world overall. The combined population within the three forested wilderness areas is growing at a rate of 3.1 percent annually–more than twice the world’s average population growth.

Source: Population Action International.
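
For readers who want to connect these summary figures to the table, the short Python sketch below (not part of the original analysis) recomputes density from the table’s area and population columns and counts how many hotspots exceed the world averages cited above. Recomputed densities can differ slightly from the table’s rounded density column, since the published figures were presumably derived from finer-grained data.

```python
# Recompute hotspot population density (people per sq. km.) from the table's
# area (thousands of sq. km.) and 1995 population (thousands) columns, then
# count how many hotspots exceed the world density of 42 people per sq. km.
# and the world growth rate of 1.3 percent per year cited in the text.

hotspots = [
    # (name, area_1000_km2, pop_1995_1000s, growth_pct_per_yr)
    ("Tropical Andes", 1415, 57920, 2.8),
    ("Mesoamerica", 1099, 61060, 2.2),
    ("Caribbean", 264, 38780, 1.2),
    ("Atlantic Forest Region", 824, 65050, 1.7),
    ("Chocó-Darién-Western Ecuador", 134, 5930, 3.2),
    ("Brazilian Cerrado", 2160, 14370, 2.4),
    ("Central Chile", 320, 9710, 1.4),
    ("California Floristic Province", 236, 25360, 1.2),
    ("Madagascar and Indian Ocean Islands", 587, 15450, 2.7),
    ("Eastern Arc Mts. & Coastal Forests", 142, 7070, 2.2),
    ("Guinean Forests of West Africa", 660, 68290, 2.7),
    ("Cape Floristic Province", 82, 3480, 2.0),
    ("Succulent Karoo", 193, 460, 1.9),
    ("Mediterranean", 1556, 174460, 1.3),
    ("Caucasus", 184, 13940, -0.3),
    ("Sundaland", 1500, 180490, 2.1),
    ("Wallacea", 341, 18260, 1.9),
    ("Philippines", 293, 61790, 2.1),
    ("Indo-Burma", 2313, 224920, 1.5),
    ("Mountains of South-Central China", 469, 12830, 1.5),
    ("Western Ghats and Sri Lanka", 136, 46810, 1.4),
    ("Southwest Australia", 107, 1440, 1.7),
    ("New Caledonia", 16, 140, 2.1),
    ("New Zealand", 260, 2740, 1.0),
    ("Polynesia / Micronesia", 46, 2900, 1.3),
]

WORLD_DENSITY = 42.0  # people per sq. km. (from the caption)
WORLD_GROWTH = 1.3    # percent per year (from the text)

# Thousands of people divided by thousands of sq. km. gives people per sq. km.
denser = sum(1 for _, area, pop, _ in hotspots if pop / area > WORLD_DENSITY)
faster = sum(1 for _, _, _, growth in hotspots if growth > WORLD_GROWTH)

print(f"Hotspots denser than the world average: {denser} of {len(hotspots)}")
print(f"Hotspots growing faster than the world: {faster} of {len(hotspots)}")
```

Computed this way, the counts reproduce the 16 and 19 reported in the text.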

Ecosystem Data to Guide Hard Choices

Although the native fish of Lake Victoria in Africa have long supported a productive local fishery, several other species of fish were introduced during the 1900s in an attempt to increase production. Those introductions succeeded beyond expectations: The harvest grew dramatically and one of the nonnative species, Nile perch, now accounts for 80 percent of the catch. But was this really a success? Taking into account the side effects of these introductions on other features of the ecosystem, the unforeseen costs may outweigh the benefits. To begin with, the introduced species contributed to the devastation of the native fauna. More than half of the lake’s 350 species of cichlid fish (80 percent of which are found nowhere else in the world) are now either extinct or have been reduced to populations that are only a fraction of their original size. In addition, pressure on the limited forest resources around the lake grew because fuelwood was needed to dry the oily Nile perch for transport and sale. That forest loss, combined with other land use changes, increased water pollution and lake siltation. This in turn led to an increased frequency and extent of eutrophic and anoxic conditions, which placed still more pressure on native species and may ultimately threaten the long-term productivity of the fishery.

Examples abound of vast and uncontrolled ecosystem “experiments” such as this, where people have altered ecosystems to meet one need, only to encounter an array of unforeseen side effects. The expansion of agricultural land into natural habitats increased food production but changed the quantity and quality of freshwater runoff as well. Fertilization increased crop yield but also caused eutrophication of nearby rivers and estuaries and is responsible for anoxic “dead zones” found in coastal areas near major agricultural river basins. Timber harvest and the transformation of forest land to agriculture helped to meet needs for food and fiber but also released carbon into the atmosphere and changed Earth’s albedo (surface reflectivity), contributing to the risk of global climate change.

The case of Lake Victoria thus provides a dramatic example of a more widespread problem: Historically, when we have modified ecosystems through an act as simple as adding a new species or through more sweeping changes in land cover or resource use, we have not had the ability to understand or forecast the complex ways in which our actions might affect other ecosystem goods and services. As a consequence, our management of ecosystems has not produced the net benefits that could have been achieved and in all too many cases has needlessly degraded or destroyed valuable resources. Our growing scientific capabilities and the growing human needs for these goods and services suggest that it is time to confront this problem.

Our historical approach to managing ecosystems can be characterized as sectoral and reactive. Sectoral approaches–focused on single objectives such as food production or timber supply–made sense when tradeoffs among goods and services were modest or unimportant. They are insufficient today, when ecosystem management must meet conflicting goals and take into account the linkages among environmental problems. Reactive management was inevitable when ecological knowledge was insufficient to allow more reliable predictions. Today, given the pace of global change, the escalating demand for resources associated with growth in population and consumption, and the significant social and economic costs associated with unwise resource management decisions, human welfare is utterly dependent on forward-looking and integrated management decisions.

The challenge of effectively managing Earth’s ecosystems and the consequences of failure will increase significantly during the 21st century. A “step change” is needed both in the amount of information on ecosystem goods and services available to meet decisionmakers’ needs and in the technical capacity for effective ecosystem management. To help meet these needs, a partnership of scientists, United Nations (UN) agencies, international conventions, governments, the private sector, and nongovernmental organizations is attempting to launch an unprecedented worldwide initiative to mobilize scientific knowledge pertaining to ecosystems. This initiative, known as the Millennium Ecosystem Assessment, is similar in some respects to the Intergovernmental Panel on Climate Change (IPCC), but with a focus on biological systems rather than climate systems. It would capitalize on a unique convergence of data availability, scientific advances, and policymaker demand and, if successful, could dramatically accelerate the pace at which integrated and forward-looking ecosystem management approaches are adopted around the world.

Ecosystem tradeoffs

Humans have profoundly changed the world’s ecosystems. Some 40 to 50 percent of land has been transformed (through change in land cover) or degraded by human actions; more than 60 percent of the world’s major fisheries are in urgent need of actions to restore overfished stocks or to protect stocks from overfishing; natural forests continue to disappear at a rate of about 14 million hectares each year; and other ecosystems such as wetlands, mangroves, and coral reefs have been substantially reduced or degraded.

These changes in ecosystems have had significant effects on the goods and services they provide. Some of the impacts of these changes have been intended, such as the tremendous growth in crop production around the world, and many have been inadvertent, such as the degradation of water sources and the loss of biological diversity. Human development relies on ecosystem goods such as food, timber, genetic resources, and medicines, and on services such as water purification, flood control, carbon sequestration, biodiversity conservation, disease regulation, and the provision of aesthetic and cultural benefits. These goods and services are in turn dependent on various essential ecosystem processes such as pollination, seed dispersal, and soil formation. The loss and degradation of ecosystem goods and services hinder national development and take the most serious toll on the poor, who often depend directly on forests, fisheries, and agriculture for their livelihoods and who tend to be most vulnerable to problems resulting from ecosystem degradation such as floods and crop failures.

The sheer magnitude of the human impact on Earth’s ecosystems, combined with growing human population and consumption, means that the challenge of meeting human demands for these goods and services will grow. Models based on the UN’s intermediate population projection suggest that an additional one-third of global land cover will be transformed over the next 100 years, with the greatest changes occurring over the next three decades. By 2020, world demand for rice, wheat, and maize is projected to increase by some 40 percent and livestock production by more than 60 percent. Humans currently appropriate 54 percent of accessible freshwater runoff, and by 2025 demand is projected to increase to an equivalent of more than 70 percent of runoff. Demand for wood is projected to double over the next half century.

These growing demands for ecosystem goods and services can no longer be met by tapping unexploited resources. The magnitude of human demands on ecosystems is now so great that tradeoffs among goods and services have become the rule. A nation can increase food supply by converting a forest to agriculture, but in so doing decreases the supply of goods that may be of equal or greater importance, such as clean water, timber, biodiversity, or flood control. It can increase timber harvest, but only at the cost of decreased revenues from downstream hydro facilities and an increased risk of landslides.

In order to make sound decisions about the management of the world’s ecosystems and to adequately weigh the tradeoffs among various goods and services that are inherent in those decisions, a dramatic increase is needed in the information brought to bear on resource management decisions. More specifically, effective management of the goods and services produced by ecosystems requires an integrated multisectoral approach, and it requires significantly greater use of ecological forecasting techniques.

The challenge of effectively managing Earth’s ecosystems and the consequences of failure will increase significantly during the 21st century.

The technical capacity to support integrated and forward-looking management decisions, particularly on a regional scale, is vastly greater today than even a decade ago. Three advances in particular have made this possible. First, the coverage and resolution of the new generation of remote sensing instruments, combined with long-term data sets pertaining to ecosystem conditions obtained through various national and international monitoring systems, provide scientists with the basic global and regional data sets needed to monitor ecosystem changes. Second, significant advances have been made in techniques and models that can be used for ecological forecasting. For example, watershed models enable relatively accurate predictions of the consequences of various changes in land use and land cover patterns on downstream water quantity and quality. Nutrient flow models enable predictions of the likelihood of eutrophication in watersheds subjected to increasing nitrogen or phosphorous inputs. And new combined climate/ecosystem models enable improved forecasting of the likely ecosystem effects of climate change.

Finally, considerable advances have been made in the field of resource economics. Using the tools and approaches now available for the valuation of nonmarketed ecosystem services, decisionmakers are better able to weigh economic tradeoffs among management choices. For example, within the United States more than 60 million people in 3,400 communities rely on National Forest lands for their drinking water, a service estimated to be worth $3.7 billion per year–an amount greater than the annual value of timber production from these lands. Only with this type of information at hand can a manager (or citizens) hope to make sound decisions balancing the tradeoffs and benefits obtained from various ecosystem goods and services.

A number of examples of the application of integrated ecosystem management now exist, such as New York City’s 1996 decision to invest in watershed protection to meet its clean water needs. In order to meet federal water quality standards, the city faced the choice of either filtering its water supply from the Catskill Mountains at a cost of $4 billion to $6 billion or protecting its water quality through watershed management. After examining the economics of the alternatives and after several years of negotiation with the local, state, and federal governments, the city opted for the watershed management approach at a cost less than half that of the filtration option.

The planned restoration of the Skjern watershed in Denmark provides another example of the application of integrated approaches to ecosystem management (as well as a cautionary tale about the problems that sectoral approaches can create). The course of the Skjern River has been modified several times since the 18th century, with the greatest change taking place in the 1960s when the lower 20 kilometers of the river were straightened and confined within embankments. River channelization had its intended effects of increasing the area of farmland and reducing the frequency of floods, but a series of unforeseen or unappreciated impacts more than offset these benefits: The frequency of rare but catastrophic floods increased, salmon populations plummeted, the new farmland subsided, salt intrusion and waterlogging reduced the agricultural productivity, and the increased agricultural use led to severe eutrophication of the river. Today, after careful modeling and cost-benefit analysis, Denmark is attempting to return the Skjern to something like its original state by eliminating embankments and recreating wetlands.

These examples illustrate the benefits of an integrated and forward-looking approach to ecosystem management that examines the consequences of various management alternatives for the full range of goods and services provided by the ecosystem involved. But significant obstacles prevent the more widespread adoption of this approach. Perhaps the greatest obstacle is not a technical one, but rather the current mindset that guides environmental management and is embodied in resource management institutions. It is the exception rather than the rule for a land manager to be asked to balance multiple objectives in his or her land use decisions. Farmers grow crops and foresters grow trees; ministries of agriculture support crop production, ministries of forestry support tree production. It is a nontrivial task for these institutions and managers to begin viewing their responsibility to be one of managing a bundle of goods and services and for policymakers to provide the incentives necessary to achieve this end.

And even where the desirability of considering the impact of management decisions on the full array of ecosystem goods and services is recognized, decisionmakers are often constrained by a lack of basic technical capacity, modeling tools, and necessary data. For example, the watershed models mentioned above can be used to forecast the effect of various land use changes in a particular watershed on the timing and quantity of river discharge and on sediment levels in the river, but these models require extensive site-specific information (about slope, soil type, land cover, timing of rainfall, etc.) that many regions do not have available.
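
To make the data requirement concrete, here is a minimal sketch of one widely used empirical storm-runoff relationship, the Soil Conservation Service (SCS) curve-number method. The curve numbers shown are illustrative values for broad land-cover and soil classes, not calibrated parameters for any real watershed; assigning them correctly is exactly the kind of site-specific information such forecasts depend on.

```python
# Illustrative only: storm runoff from the SCS curve-number relationship.
# Q = (P - Ia)^2 / (P - Ia + S), with Ia = 0.2 * S and S = 1000/CN - 10,
# where P is storm rainfall (inches) and CN depends on soil and land cover.

def scs_runoff(rainfall_in, curve_number):
    """Direct runoff (inches) from a single storm, SCS curve-number method."""
    s = 1000.0 / curve_number - 10.0  # potential maximum retention (inches)
    ia = 0.2 * s                      # initial abstraction before runoff begins
    if rainfall_in <= ia:
        return 0.0
    return (rainfall_in - ia) ** 2 / (rainfall_in - ia + s)

storm = 3.0  # inches of rain in one storm (assumed)

# Same storm, different (assumed) land cover and soil: very different runoff.
for cover, cn in [("forest on well-drained soil", 55),
                  ("pasture on heavy clay soil", 80),
                  ("row crops on heavy clay soil", 88),
                  ("dense urban development", 95)]:
    q = scs_runoff(storm, cn)
    print(f"{cover:30s} CN={cn:2d}  runoff = {q:.2f} in ({100 * q / storm:.0f}% of rainfall)")
```

Even this single-equation model shows how strongly predicted runoff hinges on local soil and land-cover inputs; the watershed models discussed above require many more such site-specific parameters, which is why data-poor regions struggle to apply them.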

Finally, an important barrier to the application of such approaches at a local scale stems from the dependence of local outcomes on regional or even global ecological processes. For example, for managers in a coastal region to effectively forecast trends in nearshore fisheries, they would need better information on trends in agricultural development in the watersheds draining into the coastal zone, since this will have a major impact on the likelihood of eutrophication in coastal zones. They would also need to know the potential impact of global phenomena such as climate change on temperature and precipitation in the region.

Millennium ecosystem assessment

There have been several instances in recent decades where a well-timed scientific or public policy initiative has capitalized on a foundation laid in basic scientific research to help bring emerging science into the mainstream of commerce or decisionmaking. In the late 1960s, institutions such as the World Bank, the UN Food and Agriculture Organization (FAO), the Rockefeller Foundation, and the Ford Foundation saw an opportunity to build on advances in crop breeding to spread the technologies and benefits of international agricultural research worldwide. In 1971, these institutions, together with governments and other foundations, created the Consultative Group on International Agricultural Research, which went on to support advances that dramatically transformed the path of agricultural development in less developed countries. The 1990 establishment of the Human Genome Project, with a budget of more than $300 million per year, was a similar attempt to galvanize the findings of basic science and accelerate the application and use of those findings. The IPCC, established in 1988, also served to mobilize the findings from a growing body of work on climate science and effectively bring it to bear on the needs of public policy decisions related to climate change.

An analogous situation exists today in the case of ecosystem management. The data and tools needed for better management exist, but the obstacles noted above (awareness, capacity, and scale) now prevent the widespread application of these approaches. With these issues in mind, in 1998 the World Resources Institute, UN Environment Programme (UNEP), UN Development Programme (UNDP), and World Bank established a steering committee to explore whether a process could be developed to bring better scientific information on ecosystem goods and services to bear on public policy and management decisions. The Millennium Ecosystem Assessment Steering Committee is composed of leading ecological and social scientists from around the world and of representatives of many of the international bodies that might be either sources of information or users of information. The committee has proposed the following design for the initiative.

The proposed Millennium Ecosystem Assessment (MA) would be a four-year initiative to (1) use the findings of leading-edge natural and social science research on ecosystem goods and services to help make regional and global policy and management decisions, and (2) build capacity at all levels to undertake similar assessments and act on their findings. The MA would focus on the capacity of ecosystems to provide goods and services that are important to human development, including consideration of the underlying ecosystem processes on which those goods and services depend. Like the IPCC, the MA would be repeated at 5- or 10-year intervals to meet the changing needs of decisionmakers and to periodically update the state of the science regarding key policy choices.

The MA would address the following:

(1) Current ecosystem extents, trends, pressures, conditions, and value. The MA would provide baseline information for the year 2000 on the geographic extent of different ecosystems (including terrestrial, freshwater, and marine environments) and the land or resource use patterns associated with them. Building on the findings of other national, sectoral, and global assessments as well as on newer remote sensing data, it would present information on trends in ecosystem goods and services, their condition and value, their contribution to human development, and the pressures affecting them.

The parties to the major ecosystem-related conventions should authorize those conventions to engage as partners in this joint ecosystem assessment.

(2) Ecosystem scenarios and tradeoffs. The MA would present a range of plausible scenarios showing how the quantity and quality of ecosystem goods and services may change in coming decades in different regions of the world. For example, just as scientists within the IPCC tackled the question of how rising CO2 concentrations would affect climate, the MA would examine such questions as: How will the expected 50 percent increase in flows of fixed nitrogen over the next 30 years affect water quality and fisheries productivity in different regions of the world? Given projected trends in land use change over the next 30 years, what will be the likely effect on the availability and timing of freshwater supplies in different regions? Given the expected continued growth in species introductions worldwide, what will be the likely impact on biodiversity and on various ecosystem goods and services?

(3) Response options. The MA would identify policy, institutional, or technological changes that could improve the management of ecosystems, thereby increasing their contributions to development and maintaining their long-term sustainability.

The proposed assessment would examine conditions, scenarios, and response options at a global and national scale and would also include a small number of assessments at smaller scales to help to catalyze more widespread use of integrated assessments and to develop the methodologies and modeling tools needed by those assessments.

There is now considerable experience that can aid in the design of such a process. Experts who have examined successful and unsuccessful initiatives to better link scientific information to public policy actions point to three prerequisites for success: saliency, credibility, and legitimacy. Scientific information is salient if it is perceived to be relevant or of value to particular groups who might use it to change management approaches, behavior, or policy decisions. It is credible if peers within the scientific community perceive the scientific and technical information and conclusions to be authoritative and believable. It is legitimate if the process of assembling the information is perceived to be fair and open to input from key political constituencies, such as the private sector, governments, and civil society.

The IPCC successfully meets these criteria. Considerable effort has been made to ensure that the MA will also meet these criteria. In order to ensure that the findings are salient and legitimate, the key users need to participate in designing the focus and content of the process. More specifically, the intended users must be more than a hypothetical audience; they must be actively requesting such an assessment, or else there is a risk that the findings will not be used. However, unlike the IPCC, which has a single “audience” in the form of the Framework Convention on Climate Change, it was apparent from the outset that the set of potential users of the findings of the MA is quite diverse. At the international level, a number of different ecosystem-related conventions, such as the Convention on Biological Diversity, the Convention to Combat Desertification (CCD), and the Ramsar Wetlands Convention, need the type of information that the MA would produce. In addition, national governments, regional institutions, national ministries, and the private sector are also important users, because they are most directly engaged in the specific management actions that can benefit from improved understanding of the potential impacts of various management decisions on ecosystem goods and services.

For this reason, the steering committee has proposed that a board of users drawn from this array of institutions be established to govern the MA, with particular representation from the ecosystem-related conventions. This board, in consultation with the scientists that will undertake the assessment, will identify the questions that the MA will seek to answer, thereby ensuring that the scientific findings address the issues of relevance to key users. The board also will ensure the legitimacy of the process by involving key stakeholders and setting the policies for such issues as peer review. In this way, the MA will become a joint assessment undertaken in partnership by several ecosystem-related conventions and other key users to meet the specific needs of the decisionmakers represented by those institutions.

Equal effort has gone into designing the process so that it will be highly credible within the scientific community and with various users. Again, the IPCC provides useful lessons. Part of the IPCC’s success can be traced to the fact that its assessments are not a particular institution’s interpretation of the findings of climate science but rather the direct conclusions of the experts themselves. This arrangement will be emulated in the MA, which will be conducted through a set of working groups, each chaired by leading scientists in the fields in question. The chairs of each working group will make up the ecosystem assessment panel, which in turn will be co-chaired by leading natural and social scientists. The assessment panel will interact closely with the MA board in identifying the questions that should be answered by the MA, but the scientific assessment will then follow an independent peer-reviewed process.

A number of different institutions will facilitate the assessment. A small core secretariat will be established at the institution housing one of the chairs of the process. Other institutions will house coordinators for the various working groups or perform administrative, logistical, or outreach functions. For example, the working group focused on scenario development is likely to be organized through the Scientific Committee on Problems of the Environment (SCOPE), a program of the International Council for Science. Within the UN system, the MA will be conducted through a partnership arrangement among UNEP, UNDP, FAO, and the UN Educational, Scientific, and Cultural Organization. Finally, the assessment will be closely linked to a number of processes such as the International Geosphere Biosphere Program, the IPCC, the Global International Waters Assessment, and the UNEP Global Environmental Outlook process.

From rhetoric to reality

The MA has already built substantial momentum, but its creation is by no means ensured. The successful launch and completion of the MA will require political buy-in, financial support, and scientific engagement. From the political standpoint, governments and other users, in particular the international environmental conventions, must “own” the assessment. It will not succeed if the various users view it as an external process, conducted by scientists, that may or may not generate useful information. Instead, it must be a joint initiative of the various users that is designed to meet their needs.

Considerable progress has already been made in establishing this level of ownership among the various users. In May 1999, the Ramsar Convention noted “the scope of the proposed Millennium Assessment of the World’s Ecosystems, currently under development, to deliver valuable related information of relevance to the application of the Convention.” In September, ministers of environment or their representatives from Australia, Canada, Cote D’Ivoire, the Czech Republic, Denmark, Finland, Germany, Ghana, Japan, Kenya, Mozambique, the Netherlands, Nigeria, Norway, South Africa, Sweden, Togo, the United Kingdom, the United States, and Zimbabwe stated that: “The concept of a global ecosystem assessment . . . should be supported as a means of helping decision makers in assessing the impact of their various actions on their national as well as on the global ecosystem.” At the November 1999 Conference of Parties to the CCD, Senegal introduced a statement recommending support of the MA, and this was supported by Brazil, Norway, China, Kenya, and the United States. And, at the January 2000 meeting of the scientific body of the Convention on Biological Diversity (CBD), parties requested that the convention’s executive secretary explore ways and means of collaborating in the MA with other relevant conventions and organizations.

The United States could make an invaluable contribution by giving a global coverage of Landsat 7 data for the year 2000 to the assessment process.

But an important additional step is needed: The parties to the ecosystem-related conventions should now authorize those conventions to engage as partners in this joint ecosystem assessment. This does not mean that the MA would be the only mechanism available to the conventions to meet their science assessment needs; indeed, each convention is likely to design other more targeted assessment processes. However, in areas related to ecosystem goods and services where the scientific information needs of these and other related conventions overlap extensively, it will be important that these conventions directly communicate their information needs to the MA and, in turn, have a formal channel for receiving the findings of the assessment process.

On the financial front, the Global Environment Facility and the UN Foundation have indicated strong interest in the MA, subject to their council and board approval, and these sources of funding could cover about half of the $20 million budget. Funding is not yet secured, however, for many of the most essential components of the assessment, including the work to develop scenarios for ecosystem change and the catalytic local, national, and regional assessments. Just as important, a process such as the MA depends heavily on in-kind contributions by experts. (By way of comparison, the budget of an IPCC assessment is comparable to the MA budget, and in-kind contributions of time to the IPCC process are estimated to be equal to its budget.)

In the case of the MA, one of the most valuable in-kind contributions could be in the form of data. A major goal of the MA will be to improve baseline information related to ecosystem goods and services for the year 2000. Although remote sensing cannot provide all of the information needed for a baseline assessment, it is nonetheless one of the most important sources of new data that could be synthesized and disseminated through the MA process. The United States, for example, could make an invaluable contribution to the MA and to the capacity of other countries and institutions to effectively manage their ecosystems by contributing a global coverage of Landsat 7 data (worth some $10 million) for the year 2000 to the MA process. Although those images would require further processing by individual nations and researchers, they would nonetheless provide an extraordinarily useful common baseline data set for measuring and monitoring changes in ecosystems. These data, combined with the satellite information on land cover and ocean characteristics that will be available within the next few years from such instruments as the Moderate-Resolution Imaging Spectroradiometer aboard the recently launched Terra satellite, would represent a quantum leap in the amount of information countries have on hand for making wise decisions regarding the use of their ecosystems.

On the scientific front, leading scientists from around the world have already been engaged in designing the MA, as members of either the steering committee or the advisory group, and an article published by the steering committee in Science has helped to generate awareness within the scientific community. But broader engagement of social and natural scientists is essential. This requires the commitment of a few leading scientists to play key roles in chairing the various components of the assessment process, and it will require convincing the scientific community that the time they devote to the process will be time well spent. A convincing case can be made only if it is clear that decisionmakers will use the findings.

Scientists in the United States were some of the early proponents of creating an IPCC-like ecosystem assessment process. Furthermore, the United States provides models of integrated assessment approaches, such as the Heinz Center Report on the State of the Nation’s Ecosystems, and examples demonstrating the utility of these approaches for ecosystem management. However, on the political front, the proposed MA is somewhat unusual among international scientific initiatives in that the United States has not been a key force behind the idea. Instead, a wide range of European countries and developing countries in Africa, Latin America, and Asia have thus far provided the leadership. Greater engagement of the United States with its scientific and data resources could greatly strengthen the process.

The linkages among political buy-in, financial support, and scientific engagement are direct. Buy-in requires that the users be convinced that the process will meet their needs. If convinced of the utility, the various users are likely to provide the financial support needed by the process. If the buy-in and financial support exist to show that the work will be used, then the scientific community is likely to make the commitment needed to undertake the assessment. And assurance that the scientific community will be effectively engaged and that the results will be of the highest credibility is key to obtaining the political buy-in.

The proposed MA is a novel institutional arrangement and process being built by an equally novel alliance of governmental, intergovernmental, scientific, and nongovernmental institutions to meet a very real set of issues with profound impacts on human lives. Decisions taken by local communities, national governments, and the private sector over the next several decades will determine how much biodiversity will survive for future generations and whether the supply of food, clean water, timber, and aesthetic and cultural benefits provided by ecosystems will enhance or diminish human prospects. The scientific community must mobilize its knowledge of these biological systems in a manner that can heighten awareness, provide information, build local and national capacity, and inform policy changes that will help communities, businesses, nations, and international institutions better manage Earth’s living systems. The MA could help dramatically speed the adoption and use of leading-edge ecosystem science, management approaches, and tools, but its success–and even existence–now require much broader engagement by institutions and agencies in the United States as well as other countries.

Spring 2000 Update

Richardson acts to save DOE’s research parks

In “Preserving DOE’s Research Parks” (Issues, Winter 1998), we argued that some of the nation’s most irreplaceable outdoor laboratories for scientific research and education are at risk of being disposed of by the Department of Energy (DOE). We are pleased that Secretary of Energy Bill Richardson has recently acted to protect the unique values of DOE property, but we believe that more steps should be taken.

Since June 1999, Richardson has set aside lands in five of the seven DOE research parks for wildlife preservation, research, education, and recreation. Management plans have been or are being established for 1,000 acres at the Los Alamos National Laboratory in New Mexico, 57,000 acres at the Hanford Nuclear Reserve in Washington, 10,000 acres at the Savannah River Site in South Carolina, 74,000 acres at the Idaho National Engineering and Environmental Laboratory, and 3,000 acres at the Oak Ridge Reservation in Tennessee. These sites are to be managed as biological and wildlife preserves, allowing opportunities for research, education, and, for most of them, recreation. “In places of rare environmental resources,” Richardson said, “we have a special responsibility to the states and communities that have supported and hosted America’s long effort to win the Cold War . . . and we owe it to future generations to protect these precious places so that they can enjoy nature’s plenty just as we do.”

The preserves are home to several rare wildlife species, including bald eagles and loggerhead shrikes, as well as numerous other animal and plant species. The only population of one rare plant, the White Bluffs bladderpod, occurs at the Hanford site. Under Richardson’s plan, traditional Native American cultural uses of these sites will continue. The preserves will also continue to provide a safety buffer for DOE facilities.

Despite these promising moves, the long-term viability of the management arrangements that have been established varies across the sites. For example, because of various constraints, the DOE agreement with the Tennessee Wildlife Resources Agency for management of the Three Bend Scenic and Wildlife Refuge on the Oak Ridge Reservation is for only five years, compared to the 25-year agreement with the U.S. Fish and Wildlife Service at Hanford. Further, some Oak Ridge city leaders have opposed establishing the refuge, because they want the land to be used for housing and industrial development.

Pressure to develop these unique lands is likely to continue to mount. Although DOE is required to identify surplus property according to the terms of Executive Order 12512, we have asked that this process occur without compromising long-term research, conservation, and education opportunities, including possible new facilities. To date, we feel that these values have not been given adequate weight and have not been integrated into national environmental goals.

We also believe that retaining the research parks is a cost-effective means of bolstering President Clinton’s Lands Legacy Initiative. Research park lands near communities can serve as buffers against sprawl as well as offer nearby urban residents diverse educational and recreational opportunities, such as hiking, biking, hunting, and nature walks.

We further recommend that DOE develop a long-term management plan for protecting opportunities for energy-related research, conservation, and education in the DOE research parks. This plan should include an outreach program specifying ways for the community, educators, and scientists to take advantage of the user facilities of the parks. For example, local science camps could be expanded to become national opportunities for students and educators to learn about energy use, conservation, and the environment. We envision that DOE’s “EcoCamps” could be just as popular as NASA’s Space Camps.

VIRGINIA H. DALE

PATRICIA D. PARR


States take lead in utility reform

In “Unleashing Innovation in Electricity Generation” (Issues, Spring 1998), I argued that removing the barriers to competition would disseminate state-of-the-art electric systems, foster technological innovations, double the U.S. electric system’s efficiency, cut the generation of pollutants and greenhouse gases, enhance productivity and economic development, spawn a multibillion-dollar export industry, and reduce consumer costs. The United States, in fact, is on the verge of a revolution in power plant innovation.

I also noted, however, that this revolution will occur only if lawmakers eliminate the numerous policy barriers, based on a decades-old system of regulated monopolies, that retard the deployment of these innovative technologies. Needed, for instance, are national interconnection standards to ensure that utility monopolies don’t set gold-plated requirements designed to restrict the access of competitors to the grid. Needed also are new environmental regulations that don’t ignore the fact that innovative technologies will displace polluting power plants. Needed, moreover, are common tax treatments, so that advanced turbines used to generate electricity do not face lengthy and prohibitive depreciation schedules.

Twenty-four states have moved to restructure their electricity industry, believing that competition rather than monopoly regulation will lead to lower prices and better service. Rather than focus on ways to advance innovation, however, most of the policy debate in those states concentrated on how much existing utilities could recover of the stranded costs associated with their power plants and other assets judged to be uneconomic in a competitive marketplace. In most states, in fact, utility lobbyists won lucrative stranded-cost judgments that are making it difficult for entrepreneurs to compete.

Progress on federal legislation has been slow. Moreover, H.R. 2944, which was approved in 1999 by a House subcommittee, was quite favorable toward utilities, which wanted to ensure that the states, where utility lobbyists hold substantial sway, would maintain control of most restructuring issues.

Several new initiatives, however, point to a growing interest in policies that advance innovation in the electricity market. Illinois, for instance, exempted consumers who use small, efficient generators from high stranded-cost charges. Pennsylvania also curtailed stranded costs and opened the door for scores of competitors willing to finance and build innovative power stations. The Clinton administration’s legislative proposal included tax incentives for combined heat and power systems, and H.R. 2944 advanced national interconnection standards to ensure that utility monopolies could not block competitors providing distributed generation.

Yet there also have been new barriers that retard competition and innovation. Utility lobbyists in late 1999, for instance, convinced the New York State Public Service Commission to permit the charging of significantly higher fees to any generator using backup power for anything other than an emergency outage. Many utilities also have begun to impose “uplift charges,” or additional fees, for use of their transmission and distribution systems.

No doubt restructuring this critical industry, with its whopping $400-plus billion in annual revenue, is complex. The policy debate, however, was sparked initially by technological advancements, particularly in turbines. The challenge remains for policymakers to reform an outmoded regulatory system that blocks the United States from enjoying the full benefits of these innovations.

RICHARD MUNSON

Conservation in a Human-Dominated World

Forging a tangible connection among environment, development, and welfare is a formidable challenge, given the complex global interactions and slow response times involved. The task is made all the harder by quickening change, including new ideas about conservation and how it can best be done. Present policies and practices, vested in government and rooted in a philosophy that regards humanity and nature as largely separate realms, do little to encourage public participation or to reinforce conservation through individual incentives and civil responsibility. The challenge will be to make conservation into a household want and duty. This will mean moving the focus of conservation away from central regulation and enforcement and toward greater emphasis on local collaboration based on fairness, opportunity, and responsibility. Given encouragement, such initiatives will help reduce extinction levels and the isolation of parks by expanding biodiversity conservation in human-dominated landscapes.

The problems that beset current conservation efforts are daunting. Three factors in particular threaten steady economic and social progress as well as conservation: poverty, lack of access rights linked to conservation responsibilities, and environmental deterioration. Poverty and lack of access rights, especially in Africa, will keep populations growing and will fuel Rwandan-like emigration and political unrest. With short-term survival as its creed, poverty accelerates environmental degradation and habitat fragmentation. The peasant lacking fuel and food will clear the forest to plant crops or will poach an elephant if there is no alternative. So, for example, tropical forests–home to half the world’s species–are being lumbered, burned, grazed, and settled. Forest destruction precipitates local wrangles between indigenous and immigrant communities over land and squabbles between North and South over carbon sinks and global warming.

We cannot rely on the trickle-down effect of economic development and liberalism to eradicate poverty, solve access problems, or curb environmental losses–at least not soon. It was, after all, unfettered consumerism in the West that killed off countless animal species, stripped the forests, and polluted the air and water. And the same consumer behavior and commercial excesses are still evident, depleting old-growth forests and fighting pollution legislation every step of the way.

The policies, practices, and institutions needed to embed conservation in society should therefore aim to change the perception of conservation from a cost of development imposed by outsiders to an individual and public good central to human advancement and welfare. To succeed, conservation must be as widely understood as hygiene and as voluntarily practiced as bathing.

The rise of modern conservation

The modern global conservation movement began in 19th-century Europe, triggered by the impact of population growth and industrialization on the environment. Growing affluence, education, mobility, and democracy saw popularly elected governments whittle down the aristocratic monopoly on natural resources, including forests, game, and fish. By mid-century, Germany had set aside national forest plantations to maximize timber yields and regulate hunting. By the turn of the century, jurisdiction over natural resources had passed largely into government hands throughout the Western world. In the United States, the first national parks had been set aside to save grand natural monuments such as Yellowstone and Yosemite.

Coinciding as they did with mass migration from farm to city, the new conservation laws placed wildlife not only in government hands, but also effectively in those of the urban majority. Having retreated from nature, the urban populace began to see nature itself as a retreat from the grime and ugliness of industrial cities. Western science reinforced the growing distinction between humanity and nature. Nature became a balanced and self-regulating system in the eyes of scientists–the analog of technology’s greatest industrial achievement, the steam engine. By the 1930s, the emerging field of ecology contributed the principles on which modern conservation practices were founded: maximum sustained yield and protected areas.

The conservation movement was disseminated worldwide largely through colonialism. However, under colonialism, transfer of traditional ownership patterns of natural resources to government meant foreign domination, not conservation by and for the citizenry. The implications for conservation were dire. East Africa, where I grew up and saw the backlash emerge in the period leading up to independence, is typical.

At the turn of the century, the colonial powers in East Africa established hunting quotas in an effort to save some of the major forms of wildlife. Although the new laws saved the elephant and other species from the excesses that decimated the bison and exterminated the quagga, they denied indigenous hunters their traditional rights. The sentiments of one Samburu elder were typical of what I heard time and again but that game departments steadfastly ignored: “The government has placed value on these animals, but they are of no value to us any more.” Wildlife, once considered “second cattle” that saw pastoralists through droughts, became a European privilege and an African burden.

Later, rising populations and competition for space saw colonial officers push through the establishment of scores of national parks in advance of independence. But, like all parks until the past few decades, East Africa’s were set aside to protect natural wonders, not biodiversity. The massed wildebeest herds of Serengeti and Amboseli were Africa’s natural monuments, and parks to “protect” the herds were hastily set aside without the benefit of ecological surveys or knowledge about the migration patterns. Consequently, the parks neither spanned habitat diversity nor covered migratory ranges. They nonetheless won support in the Western world and among many African leaders. The response locally was altogether different. “First they took our animals, then our land,” was the common view. In Tanzania, national parks were pejoratively called “Shamba la Bibi,” meaning the queen’s garden.

Expectations ran high that independence would restore rights to use wildlife. Instead, economically strapped governments, under pressure from the Western world and in need of hard currency, set aside more parks to generate tourism. Local resentment deepened further.

Having lived through the transition, studied wildlife, and directed Kenya’s wildlife agency, I think that had it not been for action taken by colonial and national governments, East Africa’s wildlife would be a shadow of what it is today. That said, the animosity toward conservation, compounded by a host of new challenges, is fast eroding past gains. There is a renewed urgency to reformulate conservation policies and practices to address the new challenges.

Rights, complexity, and change

The reasons for conservation have expanded steadily from its ancestral roots in food security to encompass recreation, esthetics, education, science, welfare, existence rights, wilderness, biodiversity, ecological services, and other values. Outwardly, it might seem that the more reasons to conserve, the better. In reality, pluralism is itself a threat, precisely because it is linked to rights. Rights weaken the central authority currently driving the conservation movement. The strength of central authority is that, with minimum negotiation, it cuts through the messy world of competing interests in the name of public good. On the downside, self-interests blossom when government enforcement and arbitration wither, feeding factionalism and confrontation.

Rights and justice are particularly sore points in Africa when it comes to natural resources. In response to the 1960s environmental movement that brought home the concept of the fragility of our planet, governments in Africa instituted tighter surveillance and control of resources. But all too often they used the name of environmentalism to acquire control and exploit national resources. Greed and corruption hastened resource depletion and deepened poverty, raising the anti-conservation tempo.

To succeed, conservation must be as widely understood as hygiene and as voluntarily practiced as bathing.

The rising clamor for democracy in Africa, leading to newfound freedoms, poses another threat to current conservation efforts by airing indigenous views of nature that had been ignored, disparaged, or suppressed by colonial and independent governments alike. In general, most non-Western cultures see nature and humanity holistically rather than as separate entities, and in utilitarian rather than sentimental terms. It was, for example, these divergent views and the growing non-Western voices at international conventions that saw the 1989 global ban on ivory sales partially reversed in 1997 in favor of sustainable use.

The real danger of ignoring the root causes of resistance to conservation, whether in the mind or through the pocket, lies in what I call the “transitional vortex.” By this I mean the social destabilization caused when traditional values, access rules, and social sanctions are forgotten or disregarded; central authority and capacity wane; and new ideologies, institutions, and practices have yet to replace them and win public support.

Science, too, has a direct bearing on conservation philosophy, policy, and practice. Two examples make the point. First, island biogeography has in recent decades demonstrated the importance of area to species diversity. Consequently, parks, once considered an adequate measure to save species, are now seen as insufficient to forestall the extinctions caused by habitat fragmentation. Second, by the 1980s, the equilibrium theories of populations and ecosystems on which modern conservation was founded gave way to nonequilibrium models rooted in chaos theory and nonlinear dynamics. The shift from stability and predictability to flux and uncertainty has profound implications. It means, for example, that maximum sustainable yields of fish stocks cannot be readily calculated. The large degree of uncertainty introduced by stochastic factors, such as climate fluctuations and complex interactions between predators and prey, calls for conservative catch limits as a hedge against overharvesting and for variable rather than fixed annual quotas as a way to reflect fluctuating conditions.
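
The case for conservative quotas can be made concrete with a toy simulation. The sketch below uses a hypothetical logistic surplus-production model with noisy growth; its growth rate, noise level, and collapse threshold are illustrative assumptions rather than estimates for any real fishery. It compares the risk of stock collapse when harvesting at the deterministic maximum sustainable yield (MSY) with the risk under a quota set well below it.

```python
# Toy illustration: with noisy growth, a fixed quota at the deterministic MSY
# risks collapse far more often than a deliberately conservative quota.
import random

def collapse_probability(quota, r=0.4, K=1.0, noise=0.35, years=100,
                         trials=2000, collapse_level=0.1, seed=1):
    """Fraction of simulated runs in which the stock falls below collapse_level * K."""
    rng = random.Random(seed)
    collapses = 0
    for _ in range(trials):
        n = K  # start each run at carrying capacity
        for _ in range(years):
            # Logistic surplus production with multiplicative environmental noise.
            growth = r * n * (1 - n / K) * (1 + rng.gauss(0, noise))
            n = max(n + growth - quota, 0.0)
            if n < collapse_level * K:
                collapses += 1
                break
    return collapses / trials

msy = 0.4 * 1.0 / 4  # deterministic MSY of this logistic model: r * K / 4
for label, quota in [("fixed quota at deterministic MSY", msy),
                     ("conservative quota (60% of MSY)", 0.6 * msy)]:
    print(f"{label}: estimated collapse probability {collapse_probability(quota):.2f}")
```

In this toy setup the MSY-level quota should show a markedly higher collapse probability than the conservative one, which is the intuition behind precautionary catch limits and variable quotas.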

The upshot of such changing views in science is that ecologists are struggling to come to grips with the realization that ecosystems and habitats are both inherently unpredictable and increasingly dominated by human activity. To make matters more complicated, species are likely to shift individually in response to climate change, rather than migrate as tightly knit associations. This means that the 5 percent of land currently set aside for protection is no longer enough to preserve biodiversity. Indeed, some ecologists wonder whether anything less than habitats connected across entire continents will do. Such complexity and uncertainty are shifting the emphasis from conservation prescriptions based on deterministic models to experimentation, monitoring, and adaptive management aimed at multiple benefits.

Finally, the Western perception of nature is itself seen by anthropologists as a social and changeable construct. Conservationists take umbrage at such insinuations, but Yellowstone proves that, if anything, our views are more fickle than those of the !Kung hunter-gatherers of the Kalahari or the cattle-keeping Maasai of Kenya and Tanzania. Over the past century, the predominant views of this venerable park have changed from national monument, to vignette of precolonial America, to wildlife refuge, and, recently, to a biodiversity center within a larger ecosystem.

Ironically, this shift in scientific paradigm from reductionism, predictability, and the separation of humanity and nature to a holistic and dynamic view resonates more with traditional cosmologies than with the scientific theories on which the modern conservation movement was founded. This augurs well for pluralist and local solutions.

Pluralism and decentralization

Although still serviceable, conservation ideology, policy, and practice must either adapt to new knowledge and circumstances or wither. Fortunately, there are a few examples that not only respond to the realities of democracy, pluralism, and weakening central government, but actually grow out of them. These examples serve to illustrate the strengths and weaknesses of devolved conservation and point up ideas for a policy framework that unifies action within larger, longer-term societal goals.

Starting in the late 1960s, I was instrumental in brokering a deal between the Kenya Wildlife Department and Maasai pastoralists that allowed migratory animals from Amboseli National Park to use Maasai land in return for a grazing fee. Facing land loss, an ailing livestock economy, and a disappearing culture, the Maasai saw tourist concessions, employment, and social services emanating from the park as a timely opportunity to diversify their economy and offset hard times. Soon after the scheme began in 1977, the Maasai, who had steadfastly resisted the creation of a national park and had speared the black rhino to near extinction in protest, stated that wildlife had become their “second cattle” once again.

The outcome of the new agreement was dramatic. Poachers had cut Amboseli’s elephant population from 1,000 in 1970 to fewer than 500 in 1977, mirroring the impact of the ivory trade on herds throughout Kenya and most of Africa. Once the Amboseli agreement went into effect, protection by the Maasai curbed poaching and saw the herd fully recover by 1998. In contrast, numbers continued to plunge in adjacent Tsavo National Park–a mere 50 miles away–from 44,000 elephants in 1970 to 6,000 in 1989, when the ivory ban went into effect.

Today, Amboseli National Park–along with the Amboseli ecosystem, some 10 times larger than the park itself–remains open to migrating wildlife. Among other successes, zebra numbers increased from 4,000 in 1977 to more than 12,000 in 1998, and wildebeest from 6,000 to 13,000 over the same period. Tourism, once confined to the park, has spread to adjacent Maasai ranches.
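
As a small arithmetic aside (not in the original), the recovery figures above imply average annual growth rates of a few percent, which can be checked directly:

```python
# Compound annual growth rates implied by the Amboseli counts cited above.
def annual_growth_rate(start, end, years):
    """Average annual growth rate implied by start and end counts."""
    return (end / start) ** (1.0 / years) - 1.0

for species, start, end in [("zebra", 4000, 12000), ("wildebeest", 6000, 13000)]:
    rate = annual_growth_rate(start, end, years=1998 - 1977)
    print(f"{species}: roughly {100 * rate:.1f}% per year over 1977-1998")
```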

If Amboseli showed the potential for engaging communities in conservation, it also showed the weaknesses inherent in local participation. Despite enabling policy and legislation introduced in the mid-1970s, the community was not then self-organizing or skilled enough to build on the initiatives. Progress depended too heavily on outside conservation organizations and financial backing. The Maasai lacked the institutions and the modern business skills that went beyond their diffuse mode of governance. Also, the difficulties of achieving broad participation and understanding in a diffuse and mobile population limited how deeply conservation was adopted at the household level. Despite these shortcomings, however, the involvement of the Maasai led to the implementation of conservation practices that far surpassed the Wildlife Department’s ability to conserve wildlife.

In 1989, the Kenyan government, recognizing the need for a stronger national agency, created the semiautonomous Kenya Wildlife Service (KWS) to replace the Wildlife Department. With full control over its own revenues and run along less bureaucratic lines, KWS quickly gained backing from donor groups and nongovernmental organizations (NGOs). The emphasis was on building local capacity and self-organizing associations. In Amboseli, a newly formed landowners’ wildlife association, emboldened by multiparty democracy and the emergence of individual rights, retained its own scouts trained by KWS to monitor elephants and other species. By 1997, the association had set aside three wildlife sanctuaries of its own, encouraged by KWS’s “parks beyond parks” initiative.

Indeed, such locally based conservation efforts have become widespread around the world over the past two decades. Zimbabwe’s Campfire Program covers extensive tracts of community land outside parks and largely accounts for the steady increase in the country’s elephant population. In the United States, a group called the Greater Yellowstone Coalition, modeled on the Amboseli approach, is trying to win space for wildlife by forging a conservation alliance aimed at balancing biodiversity, forestry, mining, recreation, and other interests in a highly fragmented landscape. The coalition’s goal is to better integrate the 2.2-million-acre park with the surrounding 13-million-acre ecosystem. If the coalition is successful in winning back migration access between the park and the ecosystem, the bison, elk, grizzly, and the recently reintroduced gray wolf will increase in numbers and improve their survival prospects over those of populations confined to the park.

Minimally, such new local initiatives can help buffer protected areas from ecological isolation, thus serving as a way to shore up their deficiencies by adding usable habitat for wildlife. But the real potential of local conservation efforts is far greater. If fostered, local action can open up much of the vast rural landscape and insert conservation into development plans. Of course, this implies more altered, less pristine wildlands, but for animals and plants, coexistence offers an evolutionarily better bet by far than does confinement in tiny fragmented parks.

Conservation ideology, policy, and practice must either adapt to new knowledge and circumstances or wither.

A final example shows the potential far removed from parks. In a large area straddling the Arizona-New Mexico border, ranchers have joined together as the Malpai Borderlands Group in an effort to reverse a century of environmental degradation. The threats they face are familiar to the Maasai halfway around the world: land subdivision, a shrinking livestock economy, and loss of culture. The group’s stated objective: “To restore and maintain the natural processes that create and protect a healthy, unfragmented landscape to support a diverse, flourishing community of human, plant, and animal life.” To this end, the ranchers have linked up with the Animas Foundation, the Nature Conservancy, university scientists, and government agencies, including the U.S. Forest Service and the Fish and Wildlife Service, to establish grass banks and conservation easements aimed at restoring the land and curbing subdivision. On the commercial front, plans are under way to sell “conservation beef” at premium prices by featuring the ranchers’ use of ecologically sound land practices.

In 1992, the Liz Claiborne and Art Ortenberg Foundation sponsored an international meeting, held at Airlie House in Virginia, to review and promote community-based conservation. The meeting brought together 70 participants, including donors and representatives of local communities, government agencies, NGOs, and specialized disciplines. As participants examined case study after case study, it became apparent that the conditions enabling local participation came down to democracy, rights, justice, trust, equity, opportunity, incentives, skills, and new forms of institutions–a lexicon that now has grown familiar among international conservation bodies and donor agencies. Interestingly, the communities represented at Airlie House didn’t abandon government. What they called for was better governance to facilitate such initiatives and to provide the larger checks and balances not achievable locally.

Policy implications

As a conservation researcher and manager, I believe that the fundamental lesson that has emerged is that the modern conservation movement is faltering not because it is off track but because its ideology and practice rest on knowledge, prescriptions, and expectations that communities in conservation “target zones” either don’t understand, don’t agree with, or don’t have the skills or latitude to do anything about. The process rather than the goal is flawed. Past practices have been too rooted in simplistic prescriptions to be widely accepted. Furthermore, in bypassing the political process governing other concerns of society, command-and-control conservation has placed itself at odds with the very communities it ultimately depends on for success.

The foundations of new policy must be based on a deeper scientific understanding of complex interacting processes and on more effective principles for conservation in human-dominated ecosystems. Public education will be required to ensure that large-scale and long-term systems interactions and change–as well as ultimate global limits–are widely appreciated and understood. The enormous uncertainties in our understanding of ecological and geophysical processes call for wide safety margins in assessments of the tolerance of populations, ecosystems, and planetary properties, as well as for the development of techniques that take into account the full environmental costs of development. This particularly applies in the case of biodiversity, a nonrenewable resource. A Maasai saying, “He who has traveled far, sees far,” speaks to wisdom we cannot acquire from sectional thinking and simplistic models.

To create a better atmosphere for conservation, policy must be founded on basic universal rights and must come to grips with the reality of pluralistic values, cultural diversity, and conflicting interests. Policy also must address the mistrust, inequity in costs and benefits, and asymmetry in knowledge and power that militate against poor and marginalized communities in rural areas. Finally, conservation must encourage environmental and resource capacity and responsibility in institutional and social systems that are self-organizing and self-reinforcing.

A number of governments, donor groups, and conservation bodies are taking their cue from successful communities and reshaping policies and practices to achieve broad participation. The distinction between directing and responding is narrowing as dialogue, negotiation, and collaboration replace command-and-control methods. In the United States, the high transaction costs of regulation and enforcement associated with wildlife conservation are bringing adversaries into new collaborative arrangements. An array of new incentives, including conservation trust funds, easements, and market incentives, can only broaden the scope for more efficient conservation partnerships.

I want to stress that we should not throw out the proverbial baby with the bathwater. The modern conservation movement is not so much unserviceable as insufficient. Many principles for better policies already are recognized, whereas others are emerging and evolving into a diverse and adaptive conservation creed. For example, the World Conservation Union’s Strategy on Conservation for Sustainable Development, the World Commission on Environment and Development, the Convention on Biological Diversity, and the lessons from Airlie House all point to common ground. The common goals and principles include large-scale, long-term maintenance of ecological processes; broad public participation; clear and equitable rights and responsibilities for resource use; environmental impact assessment; adequate safety margins to ensure sustainability; and adaptive management strategies based on continuous monitoring.

In moving forward, then, success will depend on creating a demand-pull rather than a command-drive. It will mean shifting the locus of discussion, decision, capability, and action from the national and international levels to the local level in a way that allows flexibility and experimentation to match culture and circumstance. Although there is no rigid formula for success, several factors can help get the process going:

  • Participation and collaborative partnerships. Involvement of the ultimate traditional or legal landowners, communities, or lessees is the starting point of engagement. Communication with other interest groups, such as government agencies, NGOs, and businesses, aims at building trust, negotiating interests, and allocating roles, rights, and responsibilities. This often entails lengthy effort devoted to agreeing on procedures, breaking down asymmetries in knowledge, developing an operational language, and perhaps calling on new independently gathered information or turning to outside facilitation and arbitration.
  • Scale-relevance. Linkages are needed among land authorities and other bodies to address the scale of resources or biodiversity. This calls for spatial connections between landowners as well as institutional linkages with national agencies and conservation bodies.
  • Local self-organizing and self-regulating institutions. These institutions should have delegated roles and responsibilities based on codes, regulations, internal enforcement, and accountability.
  • Multiple goals, integration, and adaptive management. All parties should work together to determine how conservation goals and interests can be incorporated into other development objectives and, if necessary, to create the incentives to do so.

Creating a demand-pull locally means opening up a cycle of exchange among conservation stakeholders into which relevant and timely information, expertise, and management can be drawn. It is this cycle of exchange that best distinguishes the process from the “delivered” science, policy, legislation, and plans of past conservation practice.

Integrating conservation initiatives

Although the involvement of local institutions is critical, the role of other institutions is no less important. National NGOs play a pivotal part as watchdogs over government and resource users. Together with international NGOs and foundations, they can help develop conservation principles, priorities, and strategies, as well as lobby for their implementation and raise seed funds. Moreover, together with experienced facilitators from the corporate world, they have a new and central role in helping develop participatory approaches and the skills and institutional capacity needed to advance local initiatives. Increasingly, businesses and universities should help develop and deliver techniques, technology, and marketing strategies for sustainable use, forming interacting institutions, as in Malpai. In an interesting twist, the Airlie House meeting declared the death of “donors” and the birth of “resource brokers.” That perhaps better defines the constructive role that aid agencies can play in creating demand-pull and public accountability.

Despite its limitations, the role of government is critical in creating an arena for the formulation of conservation goals.

To an extent, defining roles for institutions itself risks perpetuating the stereotyped roles of donor, recipient, watchdog, and so on. What we should be encouraging is multiple pathways and feedback linkages for the exchange of knowledge, values, and skills among individuals and reciprocating groups, as well as the formation of flexible institutions that facilitate the process.

Finally, as the Airlie House participants recognized, governments are central to the process. Indeed, given the foregoing tenets, principles, and processes, government has a crucial role to play in aligning local efforts with broader, long-term societal goals. Despite the difficulties, a number of governments have made a start. An example from Kenya shows how the making of a policy can itself illustrate the importance of government in starting the political process that feeds the cycle of exchange.

In 1996, I had the opportunity as director of the Kenya Wildlife Service to help review and revise the country’s 20-year-old wildlife policy. The starting point was public engagement with all stakeholders to create a dialogue on wildlife issues. KWS commissioned an independent five-person review group to gather opinions and recommendations from a cross-section of society on how to minimize conflict between people and wildlife. The debates and findings were covered widely in the media. In addition, a number of technical reviews on biodiversity, land use, tourism, and legal instruments were funded by donors and partly directed by NGOs. Each technical review had public input. KWS established a coordinating group to draw up the policy recommendations and, again, to present them for public discussion.

The policy framework specifically took into account the social and political trends in Kenya, the national development aspirations, and the multiple jurisdictions over land. Ultimately, the policy recognized the need to thoroughly restructure KWS in line with a new biodiversity mission. Rather than trying to do everything everywhere, KWS would concentrate on a few principal functions: creating partnerships for conservation; overseeing the transfer of rights linked to capacity and responsibilities; providing a unifying framework for biodiversity conservation; and taking ultimate responsibility for oversight, arbitration, monitoring, and enforcement.

Among the new policy’s specifics, a compromise was struck between state and individual land ownership, aimed at minimizing common property ownership problems and the fragmentation of land due to fencing when wildlife is privately owned like livestock. The aim was to create an enabling atmosphere for transferring rights progressively to landowning authorities. This entailed mobilizing landowner associations at the scale of ecosystems, giving them legal standing, creating awareness of the conservation opportunities, and, finally, linking rights to responsibilities. The policy also dealt with the weaknesses of traditional rural communities by encouraging partnerships with NGOs and private-sector groups. The aim was to build the planning and management capacity to undertake conservation enterprises integrated with other forms of land use.

Once the restructuring and mobilization were well under way, KWS called on key partners to define a Minimum Conservation Area (MCA). The MCA included parks, reserves, and nonprotected regions constituting a national framework for conserving biodiversity in the long term, regardless of jurisdiction. The MCA (which is to be progressively refined by the same process as better technical data become available) also set priorities for action by KWS, donors, and NGOs. In collaboration with the adjoining states of Uganda and Tanzania, steps were taken to establish a regional MCA linking conservation strategies in all three states under the East African Cooperation.

Unfortunately, there have been setbacks. Recent political events in Kenya have resulted in two changes in the KWS directorship in less than a year, and the nation’s tourist industry collapsed in 1997, cutting KWS’s revenues in half. As a result, the organization’s ability and willingness to conserve wildlife countrywide have shrunk. This experience provides some important lessons for governments generally as the role of wildlife agencies contracts in the face of growing environmental problems, budgetary limitations, and stronger civil society. The role of government is nonetheless critical in creating an arena for the formulation of larger conservation goals and plans within each country and internationally. No less important is the need for policy to create the process of participation from the outset, rather than to treat participation as a product of governmental deliberation and legislation.

There are early indications that creating such a process has developed its own momentum in Kenya, with landowners, NGOs, and donor groups working collaboratively on programs and funding. Biodiversity trust funds, conservation enterprise funds, and ecotourism funds in excess of $50 million to promote local initiatives have been established. A newly formed National Landowners Wildlife Forum and NGOs have played a central role in these developments. There also are early signs of success in terms of biodiversity conservation. By 1998, the area slated for local sanctuaries by landowners exceeded 1,000 square miles, an area far larger than the total set aside as national parks in the past 40 years. In January 1998, an independent study commissioned by KWS showed that, on balance, populations of wildlife on lands involving community-based conservation were stable or increasing, whereas populations elsewhere were in decline.

The message, then, is that throughout all societies, we will need to adopt flexible and adaptive conservation strategies that supersede top-down practices and a unitary environmental ethic. Environmental ethics should instead flow from and reinforce agreements based on an open and accountable democratic process. Ideology, principles, policy, legislation, and action will emerge out of this process and be sanctioned by cultural expectations and norms, rather than be imposed and flounder because of resistance, indifference, and noncompliance.

Recommended reading

D. B. Botkin, Discordant Harmonies: A New Ecology for the Twenty-first Century. New York: Oxford University Press, 1990.

B. Furze, T. De Lacy, and J. Birckhead, Culture, Conservation and Biodiversity. New York: John Wiley & Sons, 1996.

J. C. Scott, Seeing Like a State: How Certain Schemes to Improve the Human Condition Have Failed. New Haven, Conn.: Yale University Press, 1998.

E. P. Weber, Pluralism by the Rules: Conflict and Cooperation in Environmental Regulation. Washington, D.C.: Georgetown University Press, 1998.

D. Western, R. M. Wright, and S. C. Strum (eds.), Natural Connections: Perspectives in Community-Based Conservation. Washington, D.C.: Island Press, 1994.

The World Commission on Environment and Development, Our Common Future. New York: Oxford University Press, 1987.

Wider Education

On December 10, 1999, the day I began to write this column, I came across two press reports that reinforced my belief that the time was right for Issues to publish this special issue on the future of higher education. That day’s Washington Post reported that the U.S. Army is preparing to offer college-level courses via the Internet for enlisted soldiers stationed all across the globe. On the same day, the Chronicle of Higher Education reported that Indiana University and the University of Missouri at Columbia were each granted accreditation to operate online high schools. As Jorge Klor de Alva points out in this issue, and as the University of Phoenix has demonstrated, traditional institutions do not have a monopoly on education. We not only cannot be sure how education will be provided, we cannot know for certain who will provide it.

No one doubts that education will play an increasingly important role in the coming century. Not only will all of us have to update our job skills repeatedly, but as citizens we will have to make momentous decisions about how to use our rapidly expanding scientific and technological capabilities. And although there is widespread agreement about the social value of education, we can expect a strong reaction to its rising cost. The productivity of the education system is frozen. Each teacher is training roughly the same number of students as a century ago. Indeed, there is strong public support for reducing class size, which can only increase the per capita cost. With productivity increasing in most areas of the economy and more people needing more education, the relative share of the economy devoted to education will rise steadily. In the competition for resources, education will not get all it wants. The political battles over who has access to magnet high schools, elite universities, and the top professional schools will fill the headlines and distract us from the more important task of improving education of all types for all people.

Our most critical need is not for more desks in the classrooms at the Stanford Law School or the Harvard Business School. The country is not suffering from a shortage of corporate lawyers and business consultants. Nor do we need pale imitations of educational programs designed to train people for the thin layer of jobs at the top. We need vibrant and innovative education programs to prepare people for the vast array of new jobs and opportunities that will be emerging. We need to enhance the quality and cost-effectiveness of public high schools, community colleges, job-based training, and continuing education. The elites cannot drag the nation into the future. A truly New Economy will be one in which creativity and initiative are applied at all levels. George Campbell, Jr., reminds us that the nation grows more ethnically and racially diverse each year. It is in no one’s interest to deny anyone the opportunity to contribute to society to the greatest extent possible.

Higher education will mean something different in the next century. Robert M. Rosenzweig is right that we must continue to do well many of the things that we have done well in the past. The goal is not to overhaul what exists but to expand and enrich what we do. In his call for a new social contract between the universities and society, James J. Duderstadt warns that the question is not whether higher education will change dramatically, it is who will lead and direct that change. He believes that the universities can and should be at the forefront because they can take a broad humanistic view of what the society needs. But if they choose instead to become shrines to their own past greatness, the strong hand of the market will lead the way. Market forces should play a role in the direction of higher education, but so should the forces of openness, independent inquiry, academic freedom, intellectual rigor, and high ethical purpose that are part of the tradition of higher education.

Forum – Winter 2000

Marine havens

In “Creating Havens for Marine Life” (Issues, Fall 1999), Tundi Agardy’s call for a comprehensive national policy for the protection of marine resources is right on the mark and is exactly the need I have addressed in H.R. 2425, the Oceans Act of 1999. As Agardy writes, it is time for the government to respond to the public outcry, media attention, and increased advocacy for ocean conservation. The Clinton administration responded this year with the biggest request for funding of ocean conservation, exploration, and research programs in U.S. history. However, this funding request was largely ignored, especially in the House of Representatives. I encourage your readers to contact their representative and tell him or her how they feel about this issue. The oceans and marine resources are simply too precious to be subjected to a national policy of neglect.

It is time for us to extend our land ethic of conservation and environmental awareness to the sea. We also need a structural review and reform of the governmental bodies that create and implement ocean policy. We have not reviewed our ocean policy for 33 years, since the Stratton Commission was given the task in 1966 of examining the nation’s stake in the development and preservation of the marine environment and of formulating a comprehensive, long-term national program for marine affairs. Since that review, the U.S. population has grown from 196.5 million to about 270 million people, over 50 percent of whom live within 50 miles of our shores. By the year 2010, this figure will increase to at least 75 percent, with all of the attendant potential environmental consequences of having so many people concentrated in areas with diverse and fragile ecosystems. Meanwhile, wetlands and other marine habitats are threatened by pollution and human activities. A study of extinction rates of aquatic animals, published in October 1999 in Conservation Biology, reports that aquatic species are going extinct at a rate five times faster than land species. In addition, ocean and coastal resources once thought to be inexhaustible are now seriously depleted. In its annual report to Congress, the National Marine Fisheries Service (NMFS) states that about half of the U.S. fish stocks whose status is known are overfished; even more important, the overfishing status of the majority (65 percent) of stocks assessed by NMFS is unknown.

Fish crises are expensive. This Congress just appropriated a total of $30 million to meet the needs of fishers out of work in Alaska and New England, and I am struggling to find funds for disaster relief for the fishers in my district who, on October 1, 1999, were given 24 hours’ notice that their allowable take was being cut by 75 percent. Fishing crises such as these could be avoided with better data and cooperation among federal, state, and local governments and the private sector (including the fishing industry).

The Oceans Act of 1999 contains provisions similar to those of the 1966 act. It calls for the creation of a Stratton-type commission, called the Commission on Ocean Policy, to examine ocean and coastal activities and to report its recommendations for a national policy. In developing the report, the commission would assess federal programs and funding priorities, laws and their effects on ocean policy, infrastructure needs, conflicts among marine users, and the integration of ocean and coastal activities and technological opportunities. In an era of frugality when it comes to the environment, we need to maximize the effectiveness of our agencies and programs and improve communication among all the bodies involved with ocean and coastal activities.

Not only does half of the U.S. population live near the coast, but 293 (67 percent) of the members of the House are from districts with coastline or within the coastal zone. We are all critically dependent on the oceans and the resources we derive from them. Commercial and recreational fishing provides 1.5 million jobs and an estimated $111 billion annually to the nation’s economy, and more than 30 percent of U.S. gross national product is produced in coastal counties. Our oceans and beaches are our leading tourist destination, with 85 percent of tourist revenues being spent in coastal states. In 1993, more than 180 million Americans visited coastal waters nationwide; and in California alone the revenue generated by tourism is approximately $38 billion annually, much of it attributable to coastal visits. The beautiful coasts and ocean in my own district are key to the area’s $1.5 billion travel and tourism industry.

We need to make a commitment to these places of inestimable wealth and breathtaking mystery by reassessing our national ocean policy. In that spirit, I would like to leave you with some words from Sylvia Earle’s recent book Wild Ocean: “We carry the sea with us. We weep and sweat and bleed salt, and if we go far enough back down our family tree to the trunk we can understand why there is a feeling of kinship. To me all life in the sea is family.”

REP. SAM FARR

Democrat of California


Tundi Agardy has gifted us with a robust and compelling call for a new network of marine reserves. Sensibly, however, she also notes that marine reserves, although necessary, are not the only solution. Because jurisdiction begins at the shore, reserves alone will do nothing to reduce the toxic brew of pollutants that typically drains from the land to the sea.

It is thus time not only to build marine reserves but also to blur the artificial divisions between land and sea that hamper proper coastal protection. U.S. governmental agencies have failed us and marine biodiversity badly, and it is folly to think that these agencies now suddenly have the political will to take the steps that are needed. But with public education can come impetus for change.

The time has come to adopt the “precautionary principle” in environmental matters, so that pollutants that would have been created on land are instead avoided by source reduction, waste minimization, and pollution prevention. By halting overfishing and habitat destruction while also preventing pollution from land-based sources, we can truly begin to act with the sort of wisdom that Agardy provides.

ROB WILDER

Director of Education

Pacific Whale Foundation

Maui, Hawaii


Tundi Agardy’s article deserves to be read, and read again, by everyone who deals with ocean policy in North America. She explains that widespread injury to marine environments goes far beyond catching too many of the kinds of fish that we eat, and that systems of undisturbed marine protected areas, comparable with protected wilderness areas on land, are desperately needed. I agree with her completely.

Most people see only the surface of the ocean; the bottom is mysterious. On the North Atlantic coastal shelves, this mysterious bottom is scraped and scoured by mobile fishing gear, such as otter trawls and scallop dredges, more than once a year; yet how many people know what happens to sponges, cerianthid worms, and other bottom-dwelling animals, as the heavy gear, like a bulldozer, crashes by?

An excellent example of the impact of ignorance of the ocean environment is the widespread destruction of northern deep-sea corals. All along the outer edges of the continent’s coastal shelves, as far north as the Arctic, horny corals used to grow on rocky outcrops, boulders, and shell deposits, forming “forests” of “trees” several meters in height and covering large areas. In addition, stony corals formed huge mounds, known as bioherms. Yet most people don’t know that these colonial animals existed, let alone that they have been destroyed.

Fishing for bottom-dwelling fish in the past 40 or so years has tended to concentrate on the edges of the continental shelves where productivity is highest. Most of this effort has used damaging mobile fishing gear. As a result, the coral mounds and forests have been extensively cleared away, both accidentally during fishing and also purposefully because they get in the way. Studies in Europe have shown that northern deep-sea corals constitute habitat for numerous other species, but almost no work has been done on these ecosystems in North America. No effort has been made to address questions about the significance of the loss of deep-sea corals for fisheries declines, nor do we know anything about the potential loss of marine biodiversity as a result of this destruction.

Just as North Americans want to protect the few remaining patches of old-growth forest on land, so we need to do this on the bottom of the ocean. The damage may be at least as extensive in the ocean as on land, where over 95 percent of the original forests and grasslands have been cleared or transformed.

In Canada, not only is advocacy work on protection of deep-sea corals being left to small groups of environmental activists, but even the scientific research is being led by them. The Ecology Action Centre in Halifax, Nova Scotia, published the first attempt to assess the status of deep-sea corals in the northwest Atlantic and is organizing a conference of world experts to be held in August 2000.

Fortunately, some remnant patches of deep-sea horny and stony corals remain. Because these colonial formations grow extremely slowly, taking thousands of years to create large structures, speedy restoration of their previous extent is impossible, but it is clear that the remnant patches deserve special protection. Adequately enforced marine protected areas are clearly required, and Tundi Agardy’s call to action to protect not just these special organisms, but also examples of all the various ocean-bottom types, needs to be widely heard.

MARTIN WILLISON

School for Resource and Environmental Studies

Dalhousie University

Halifax, Nova Scotia

Canada


Technology and development

F. M. Scherer’s “Global Growth Through Third World Technological Progress” (Issues, Fall 1999) is a welcome sign of the recent long-overdue revival of interest in science, technology, and development. As he correctly points out, energy research, development, and diffusion is a subject worthy of a major international effort. It links global warming, an area of critical global concern, with energy, a key input in economic development, and is thus an attractive area for international collaboration, where the interests of developing and developed countries coincide.

More generally, the time is ripe for a major increase in support for research, development, and diffusion of technology for the benefit of developing countries, and for helping them to participate in today’s global information-based economy, both through training in entrepreneurship and technology management, as Scherer proposes, and through assisting them to develop public policies and programs that encourage technological innovation. There is a special need for technology that will benefit poor and otherwise disadvantaged people within those countries.

This task calls for broad collaboration between the public and private sectors in developing and advanced countries, including countries such as Korea that have just joined the ranks of the advanced countries. It requires flexible institutional arrangements that can take advantage of fast-moving advances in information and biotechnology but can also tackle humble but essential problems in fields such as traffic safety, nutrition, and low-cost sanitation.

These characteristics will require institutional innovations that will draw on lessons learned by many institutions in different parts of the world and will take advantage of the capabilities of the Internet to make possible a network of collaborating institutions in developing and developed countries. In addition to the international agricultural research institutions mentioned by Scherer, I would call attention to such institutional models as Fundación Chile, a marketing-led organization whose function is to use technology as a vehicle for launching new industrial sectors; the binational R&D foundations that encourage collaboration between U.S. business and firms in Israel and other countries; the International Service for the Acquisition of Agri-Biotech Applications, which facilitates the transfer of privately owned technologies to developing countries; extension services that assist local industry to conserve energy in Brazil and other developing countries; and energy-oriented venture capital companies in India and elsewhere.

Such programs require not only funding but the willingness of governments and firms in advanced countries to devote major resources to them. They also require a substantial change in attitudes and policies that inhibit technological innovation in many developing countries. The alternative is a worldwide shakeout in which developing countries that do not succeed in managing technology will fall farther and farther behind, contributing to economic decline and political disorder, mass migrations, environmental disasters, and possible nuclear incidents.

CHARLES WEISS

Distinguished Professor and Director

Program in Science, Technology, and International Affairs

Georgetown University School of Foreign Service

The author is former Science and Technology Adviser to the World Bank


Space imaging

Ann M. Florini and Yahya Dehqanzada provide a broad and thoughtful look at the role of commercial satellite imagery in increasing global transparency (“Commercial Satellite Imagery Comes of Age,” Issues, Fall 1999). By transparency, they mean the ability of nongovernmental organizations and individuals to possess information that has traditionally been concealed by geography, distance, and the actions of governments. In this regard, commercial remote sensing is part of a broader trend toward global transparency that is resulting from technologies such as air travel, telecommunications, satellite broadcasts, and the Internet. The power of these technologies has also been reinforced by political and economic trends such as democratization and trade liberalization.

The article recognizes the potential security and privacy problems that can result from satellite imagery but goes on to show that governmental efforts to control or even restrict commercial imagery will be self-defeating. In particular, such efforts are likely to be counterproductive to U.S. economic and political interests in promoting global transparency. This makes sense in that the United States is an open society, and we have had long experience with transparency, even while having occasional mixed feelings about the resulting political accountability. And the technologies of commercial satellite imagery are also those of information technology in general–a leading area of U.S. economic strength.

The key policy question for the United States and other open societies is whether to embrace or resist the opportunities presented by commercial imagery. This is not an easy choice, as many societies, and typically governments, resist change and seek to preserve the status quo. Commercial satellite imagery is a force for change that will create dynamic business opportunities; empower nongovernmental organizations in areas such as the environment, international security, and human rights; and cause confusion as future media analysts argue over the interpretation of particular images.

But as the article says, “The only practical choice is to embrace emerging transparency, take advantage of its positive effects, and learn to manage its negative consequences.” A corollary might be that the most destructive choice would be to ignore or deny the spread of satellite imagery, to fail to learn to use it, and to fail to learn to operate in a more transparent world. Those who learn to use and analyze imagery, not just how to take it and possess it, will have the competitive advantage in this new environment.

SCOTT PACE

The Rand Corporation

Washington, D.C.


Perhaps the most illuminating aspect of the article by Ann M. Florini and Yahya Dehqanzada is its illustration of the long history of debate over commercial remote sensing policy. Resolving that debate has involved incremental steps over many years. More will be required now that the successful launch of Ikonos-2 has ushered in the age of commercially available 1-meter satellite imagery.

In technologies ranging from encryption to communications satellites, the difficult reality faced by the U.S. government is that it has lost much of the control it once had over access to dual-use products. The end of the Cold War and the growing challenge of foreign competitors limit what the United States can do to prevent access to high-resolution commercial imagery from space.

The most critical aspect of the availability of such data is the skill with which it is analyzed. The “bloopers” referred to by the authors should not be dismissed with a smile or a wince. What may be an embarrassment to a media outlet could become a disastrous national security situation. The U.S. government has focused so far on its legitimate national security concerns about the widespread availability of high-resolution commercial imagery and when it can implement shutter control. Equally critical, however, is the interpretation of that imagery. It may be prudent now to focus on motivating satellite imagery customers to train imagery interpreters properly. The National Aeronautics and Space Administration and the United Nations have long histories in training people around the world to use satellite remote sensing data for civilian purposes. Broadening and enhancing such programs for both government and private customers could have a payoff far in excess of their costs by avoiding potentially catastrophic misunderstandings.

Imagery companies have a vested interest in the correct interpretation of their products as well, and could work with governments to ensure an adequate supply of skilled interpreters. Those companies also may decide not to sell imagery to any and all customers, instead recognizing the virtue of self-regulation discovered by so many other U.S. industries hoping to stave off government rules.

The advent of high-resolution commercial imagery has also focused more attention on “space control.” The antisatellite debate of the 1970s and 1980s is now more broadly focused on various methods to ensure that the United States can use its satellites during a crisis while denying enemies the use of their own space assets. The authors argue that the United States would find it self-defeating to “violate the long-held international norm of noninterference with satellite operations.” However, bearing in mind the authors’ explanation that some countermeasures (such as spoofing and jamming) leave no evidence of tampering, it may be naive to conclude that such a long-held norm of noninterference exists. The brute force approach of antisatellite interceptors may be replaced by more subtle techniques, but the objective of limiting the ability of enemies to use their satellites–commercial or military–against the United States and its allies still appears valid.

Many other issues will arise as the transparency discussed by the authors evolves. Marrying such imagery with the precise navigational data from the Global Positioning System could have profound national security and societal consequences. Not only will it raise issues about potentially enhancing terrorist activities, for example, but also about privacy and the extent to which such information can be used in civil and criminal court cases. Much more work awaits the policy community in addressing this new era of commercial space imagery.

MARCIA SMITH

Washington, D.C.


Curriculum reform

As one of the architects of the Third International Mathematics and Science Study (TIMSS) and an author of several books reporting on the TIMSS findings, I am particularly pleased to see the point of view expressed by Eamon M. Kelley, Bob H. Suzuki, and Mary K. Gaillard in “Education Reform for a Mobile Population” (Issues, Summer 1999) and in the National Science Board (NSB) report the article refers to.

Since the TIMSS study, we have done additional curriculum analyses on current state standards and assessments (for an ACHIEVE project) and on the most frequently used standardized tests (for a state of California project). We found the “mile-wide inch-deep” curriculum to be alive and well. The new generation of standards currently used by the states has not changed in any appreciable way from Kelley et al.’s description of the U.S. curriculum as lacking coherence, depth, and continuity. So the NSB recommendations still need to be heralded, as they have yet to permeate state policy.

A second aspect of our analyses examined the content profiles associated with the most frequently used standardized mathematics tests in this country and with the state mathematics and science assessments of almost half of the states. We found that these tests in the eighth grade do not test the content typically found in the curriculum of the top-achieving countries (nor, for the most part, do they test the more challenging content that states include in their standards). We also found these tests to lack focus and coherence.

The other disturbing element we encountered in the analysis of these data is that the content of such tests does not line up very well with the states’ own standards. For example, the major emphasis of mathematics tests in the eighth grade continues to center on arithmetic and computation, in spite of state standards that include concepts of functions, slope, congruence, similarity, and proportionality. These results again illustrate the fragmentation of the U.S. curriculum.

These new data make even more compelling the authors’ call for making more explicit the linkages between K-12 content standards and college admissions. The country would do well to take heed of the recommendations made in the NSB report and articulated by Kelley et al. in their article.

WILLIAM SCHMIDT

Michigan State University

East Lansing, Michigan


Fixing the Forest Service

H. Michael Anderson’s call to fund the U.S. Forest Service entirely through congressional appropriations is a blueprint for disaster (“Reshaping National Forest Policy,” Issues, Fall 1999). As he says himself, funding that depends on annual appropriations is “subject to the vagaries of congressional priorities and whims.” Although the current system of Forest Service funding provides perverse incentives to land managers, increasing political ties will not provide the ecological stewardship we would all like to see on our public lands.

Congress already appropriates hundreds of millions of dollars each year for federal land stewardship. Yet at least 39 million acres of federal forest land are at extreme risk from catastrophic wildfire. An additional 26 million acres are highly susceptible to disease and insect infestation. Few of our forests resemble those of 100 years ago. This is not the result of timber harvest, forest roads, or recreation use. Nearly a century of fire suppression has literally changed the structure of many forest lands. Wagon trains could once roll through open savannas of ponderosa pine forests in the intermountain West. Fires kept brush down and prevented competition from shade-tolerant fir. Today, however, our forests are loaded with debris and are 82 percent denser than in 1928. These are not healthy forests by anyone’s definition.

How did our forests get this way? Relying on congressional budgets, federal land managers must play politics that serve powerful constituencies. Although the role of fire as a natural forest process has long been known, politics and regulations make it almost impossible for forest managers to use fire as a tool.

In many areas, even the use of fire must be preceded by some type of logging to reduce density and fuel loads and thereby avoid uncontrollable fires. But regulations, from the Clean Air Act to the National Environmental Policy Act, make this almost impossible, and the public input process often prevents timber harvest where it is desperately needed.

A preferred method would be to allow our forest professionals, rather than Washington politicians, to manage our forest lands. If we want our lands to be managed for their ecological integrity, we must get the incentives right. Cut the ties to Washington funding. Allow federal land managers to use the resources of the land to provide for economic and ecological sustainability.

HOLLY LIPPKE FRETWELL

Research Associate

Political Economy Research Center

Bozeman, Montana


H. Michael Anderson provides an excellent summary of the problems that plague the Forest Service and the challenges that Chief Mike Dombeck faces in his attempts to turn the agency around. One challenge Anderson identifies is the need to develop a long-term policy concerning roadless areas in national forests. On October 13, 1999, President Clinton directed the Forest Service to prepare an environmental impact statement (EIS) for a nationwide roadless area management policy to be adopted through administrative regulation. This is the approach Anderson recommends for dealing with the problem. The agency has begun the process of seeking public input on the rulemaking process. A draft EIS is to be available for review and comment in the spring of 2000, with final regulations due before the end of 2000.

The current road system in the national forests includes 380,000 miles of road, enough to circle the globe more than 15 times. Estimates of the amount of roadless area contained in parcels larger than 5,000 acres are around 40 million acres. A roadless area policy is necessitated by the difficulties the agency has experienced in maintaining the existing road system and by growing scientific evidence about the detrimental environmental impacts associated with the construction of expensive new forest roads. Recent polls have also shown that 60 to 70 percent of the U.S. public want public lands to be protected from developments (such as oil drilling, logging, and mining) that imperil the benefits of clean water, biological diversity, and wildlife habitats provided by roadless areas. So except for the timber industry, its professional and congressional allies, and motorized recreational groups, response to Clinton’s announcement has been generally favorable.

The roadless area EIS and resulting regulations will test Dombeck’s ability to unify his agency around a vision of ecosystem protection and restoration and to bring the agency out of its dark period. Timing could be everything. If the 2000 election brings in a Republican President and a Republican Congress, the entire initiative as well as other needed reforms could be scuttled. Dombeck’s immediate task will be to ensure that the process does not become sidetracked or mired during the writing of the EIS.

Replacing the agency’s utilitarian focus with a land ethic emphasizing ecological integrity and long-term sustainability will require bold policy action. By directly confronting the damaging effects of roads, the Clinton administration and Chief Dombeck can leave a valuable legacy. How the issue plays out politically in the next year merits continued monitoring.

HANNA J. CORTNER

School of Renewable Natural Resources

University of Arizona

Tucson, Arizona


H. Michael Anderson presents an accurate description of the radical changes Chief Mike Dombeck has initiated at the U.S. Forest Service. Although Anderson clearly articulates the changes Dombeck had instituted before publication of his article, the extent to which the Wilderness Society’s recommendations have been implemented since the article’s publication is simply astounding. Either Anderson is clairvoyant, or Chief Dombeck is catering to the whims of Anderson’s organization.

Unfortunately, Chief Dombeck is steering the Forest Service on a collision course with Congress and national forest user groups. By implementing policy changes that are clearly contrary to existing laws and the statutorily defined purposes of the national forests, Dombeck has effectively usurped the authority of Congress to establish and oversee natural resource policies on federal lands. If dramatic change in the purpose of the national forests is warranted, it is the responsibility of the legislative branch of our government to make that change. Dombeck has unilaterally set a course to deprive the public of the benefits of multiple forest use for all Americans and instead to give preeminence to the preservation of biological preserves for the benefit of the few.

Consider the interests of the forest products community. Chief Dombeck has testified before Congress that the national forests grow some 22 billion board feet of timber every year while allowing over 7 billion feet to die. In 1999, under Dombeck’s leadership, the Forest Service sold just 2 billion feet for commercial purposes. This was less than a third of the annual mortality and less than 10 percent of net growth. This is not a recipe for sustainable management; it is a recipe for ecological disaster. If we continue to grow over 10 times more than is being removed, the forests will become more overcrowded and susceptible to disease and insect infestations, and will eventually succumb to catastrophic wildfire. Doesn’t a modest timber sale program designed to keep growth and removal in balance make better ecological sense?

President Clinton is fond of observing that the national forests contribute only 5 percent of our nation’s wood product needs; thus, going to zero percent is inconsequential. What the president doesn’t admit is that one-half of this country’s softwood sawtimber is growing in the national forests. If half of the wood inventory is only contributing 5 percent of our needs, something is very much out of balance. This policy becomes more ludicrous when we recognize that 40 percent of U.S. wood needs are being imported from other countries. This makes no economic sense, no ecological sense, no moral sense. How can we, in good conscience, sit on such a huge reservoir of renewable resources and let other countries meet our wood product needs?

Chief Dombeck is not reshaping national forest policy. He is rewriting it to the detriment of all the multiple users of the national forests. Perhaps more disturbing than the degree to which the chief’s policies depart from tradition is the extent to which they are scripted by the Wilderness Society.

JIM GEISINGER

President

Northwest Forestry Association

Portland, Oregon


In response to H. Michael Anderson’s article on the Forest Service, I hope readers will note that having vanquished the “bigs” in the timber industry in terms of harvests from the public lands, the environmental movement can now be expected to turn its attention to motorized recreation as the next great adversary. One can only hope that these new wars can be fought without the flaming rhetoric that spawned the term “timber beasts.” Environmentalists and recreationists want many of the same things from the public lands, beginning with wildness and the spiritual renewal that draws people of many stripes. Perhaps these commonalities will provide enough common ground to allow advocates to seek negotiated settlements rather than another generation of managing these lands in the courts.

JAMES W. GILTMIER

Senior Fellow

Pinchot Institute for Conservation

Washington, D.C.

From the Hill – Winter 2000

Big increase for NIH boosts R&D spending in FY 2000 budget

Bolstered by the largest-ever dollar increase in R&D spending for the National Institutes of Health (NIH), total federal support for R&D as well as basic research will increase substantially in the fiscal year (FY) 2000 budget.

President Clinton signed an omnibus bill incorporating the remaining unsigned appropriations bills into law on November 30. He did so only after extensive negotiations with a Congress that used various budgetary tricks to keep spending technically below self-imposed budget caps and to make it seem that Social Security money was not being used. One consequence of this maneuvering, however, is that $3 billion of NIH’s $17.1 billion budget will not be appropriated until September 29, 2000, a day before the end of FY 2000, in order to shift spending to FY 2001. Upshot: NIH will effectively have to operate for nearly all of FY 2000 on less than its FY 1999 budget.

Total federal support for R&D in FY 2000 will increase to $83.3 billion, which is $4 billion or 5 percent more than in FY 1999. NIH received the biggest chunk, a nearly $2.2 billion or 14.4 percent increase. The Department of Defense’s (DOD’s) R&D spending will climb by $1.1 billion, a 3 percent increase, to $39.1 billion. Other agencies received only modest or slight increases; several suffered cuts.

Nondefense R&D spending will total $40.9 billion, up 7.1 percent or $2.7 billion, with nearly all of this increase accounted for by the NIH spending boost. Excluding NIH, nondefense R&D will rise only 2.4 percent, or $555 million, to $23.7 billion, barely ahead of the expected inflation rate of 2 percent.

Basic research will increase to $19.1 billion in FY 2000, an increase of $1.8 billion or 10.6 percent. But again, the increases will go mostly to NIH-funded life sciences and medical research. However, the National Science Foundation (NSF), the second largest supporter of basic research and the largest supporter of most non-life science research, received a basic research increase of 6 percent to $2.5 billion. The National Aeronautics and Space Administration’s (NASA’s) basic research will increase by 18 percent to $2.5 billion, mostly because of a reclassification of existing work from applied to basic research. The Department of Defense (DOD), the primary supporter of basic research in engineering, mathematics, and computer sciences, will see its basic research (the 6.1 account) rise by 5.4 percent to $1.2 billion.

The Clinton administration, which made information technology (IT) research a high priority, received a large part of what it had requested. The administration proposed $366 million for a new six-agency Information Technology for the 21st Century (IT2) initiative to support long-term fundamental IT research. Though Congress did not label the program as such, it appropriated $235 million in new research money, including $126 million for NSF and $60 million for DOD.

Here is more information on how the major R&D agencies fared:

Congressional Action on R&D in the FY 2000 Budget (Final)
Total R&D by Agency (budget authority in millions of dollars)

Columns: FY99 Estimate | FY00 Request | FY00 FINAL | Change from Request (Amount, Percent) | Change from FY99 (Amount, Percent)
Defense (military) 37,975 35,065 39,109 4,044 11.5% 1,134 3.0%
     (S&T 6.1,6.2,6.3 + Medical) 7,791 7,386 8,652 1,265 17.1% 861 11.0%
     (All Other DOD R&D) 30,184 27,679 30,457 2,778 10.0% 274 0.9%
National Aeronautics & Space Admin. 9,715 9,770 9,778 8 0.1% 63 0.6%
Energy 6,974 7,467 7,232 -235 -3.1% 258 3.7%
Health and Human Services 15,750 16,047 18,094 2,047 12.8% 2,344 14.9%
     (National Institutes of Health) 14,971 15,289 17,125 1,835 12.0% 2,153 14.4%
National Science Foundation 2,714 2,890 2,854 -36 -1.2% 140 5.2%
Agriculture 1,638 1,850 1,693 -156 -8.5% 56 3.4%
Interior 567 584 562 -22 -3.8% -5 -0.9%
Transportation 603 836 643 -193 -23.1% 40 6.7%
Environmental Protection Agency 669 645 645 1 0.1% -23 -3.5%
Commerce 1,075 1,172 1,096 -76 -6.5% 21 2.0%
     (NOAA) 600 600 617 17 2.8% 17 2.8%
     (NIST) 468 565 473 -92 -16.3% 5 1.0%
Education 224 276 246 -30 -10.7% 22 10.0%
Agency for Int’l Development 143 94 143 49 51.9% 0 -0.1%
Department of Veterans Affairs 674 663 665 2 0.4% -9 -1.3%
Nuclear Regulatory Commission 49 47 47 0 -0.5% -2 -4.5%
Smithsonian 138 146 143 -3 -2.2% 5 3.5%
All Other 443 353 395 42 11.9% -48 -10.8%
Total R&D 79,350 77,904 83,346 5,442 7.0% 3,996 5.0%
Defense R&D 41,208 38,483 42,497 4,014 10.4% 1,288 3.1%
Nondefense R&D 38,142 39,422 40,850 1,428 3.6% 2,708 7.1%
     Nondefense R&D minus NIH 23,171 24,133 23,725 -407 -1.7% 555 2.4%
Basic Research 17,276 18,101 19,112 1,011 5.6% 1,836 10.6%
Applied Research 16,640 16,642 17,534 892 5.4% 894 5.4%

AAAS estimates. Includes conduct of R&D and R&D facilities. Includes rescissions and emergency appropriations.

All figures are rounded to the nearest million. Changes calculated from unrounded figures.
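As a check on how the table’s percentages are derived (an illustration reconstructed from the rounded figures shown here, so the results differ trivially from the unrounded calculations used for the table), NIH’s change from FY 1999 works out to

\[
\frac{17{,}125 - 14{,}971}{14{,}971} \approx 0.144 = 14.4\%,
\]

and its change from the president’s request to

\[
\frac{17{,}125 - 15{,}289}{15{,}289} \approx 0.120 = 12.0\%.
\]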

DOD. In addition to a 5.4 percent basic research increase, DOD received a 7.5 percent boost in applied research (the 6.2 account) to $3.4 billion. DOD science and technology programs will increase by 11 percent to $8.7 billion. However, the Defense Advanced Research Projects Agency (DARPA) budget was cut by $82 million, or 4.2 percent, to $1.8 billion.

NIH. The agency received a big increase for the second year in a row, keeping it on course toward doubling its budget in five years. Spending will be up at every institute by more than 12 percent, and five institutes will have more than 20 percent more to spend. NIH will also receive $20 million to fund cooperative R&D with the biotechnology, pharmaceutical, and medical device industries.

NASA. NASA’s total budget will be $13.6 billion in FY 2000, 0.5 percent less than in FY 1999. Total NASA R&D, which excludes the Space Shuttle and its mission support costs, will increase slightly by 0.6 percent to $9.8 billion. Of this, the Science, Aeronautics, and Technology account will receive $5.6 billion, down 1.2 percent but $161 million more than the president’s request. Although Space Science spending will increase by 2.7 percent to $2.2 billion, less funding was provided for future Discovery and Explorer missions, which could result in fewer spacecraft launches than NASA had planned over the next few years. Life and Microgravity Sciences and Applications will receive $275 million, an increase of 4.3 percent. Much of this increase is targeted for a dedicated shuttle science mission by 2001. NASA spending also will include $2.3 billion for continued development and construction of the International Space Station, which is $70 million or 3.1 percent more than in FY 1999 but $161 million less than the request.

Department of Energy (DOE). In the wake of congressional anger over allegations of security breaches and mismanagement at DOE laboratories, Congress recently moved the weapons-related activities to a new semiautonomous agency within DOE called the National Nuclear Security Administration. DOE’s R&D budget will be $7.2 billion, up $258 million or 3.7 percent. Its Science account will total $2.6 billion for R&D, down 0.3 percent. Fusion Energy Sciences will receive an 11.2 percent boost to $246 million, and Nuclear Physics will increase by 3.9 percent to $347 million. Congress declined to fund the Scientific Simulation Initiative, part of the proposed IT2 initiative. Funding for the Spallation Neutron Source was reduced to $117 million. DOE’s investments in energy R&D will all receive substantial increases: nuclear energy ($91 million, up 19.3 percent), fossil energy ($330 million, up 11.9 percent), and energy conservation ($440 million, up 10 percent). In defense R&D, the Stockpile Stewardship program was funded at $2.2 billion, which is $126 million or 5.9 percent more than last year.

NSF. NSF’s total budget will rise by 5 percent to $3.9 billion. R&D funding, which excludes its education and training activities and overhead costs, will total $2.9 billion, up 5.2 percent.

Department of Commerce. Total Commerce R&D will be $1.1 billion, up 2 percent. The National Institute of Standards and Technology (NIST) will receive a 1 percent increase, or $5 million, to $473 million. NIST’s Advanced Technology Program (ATP) was cut by 27 percent to $130 million. The budget for the mostly intramural Construction of Research Facilities program was nearly doubled to $108 million. The National Oceanic and Atmospheric Administration’s programs for natural resources and environmental R&D will increase by $17 million, or 2.8 percent, to $617 million.

U.S. Department of Agriculture (USDA). USDA’s R&D budget will rise to $1.7 billion, up 3.4 percent. The final legislation blocked a nonappropriated competitive agricultural research grants program from spending a planned $120 million in FY 2000. The existing competitive grants program, the National Research Initiative, received $119 million, the same as last year but far less than the request of $200 million. The Agricultural Research Service received $903 million, up 4.2 percent.

Department of the Interior. Interior’s R&D budget will decline by 0.9 percent to $562 million. The U.S. Geological Survey (USGS) received $496 million, 0.2 percent less than in FY 1999, partially because of a major restructuring of USGS activities.

Environmental Protection Agency (EPA). EPA’s R&D budget was cut by 3.5 percent to $645 million, although that amount is what the president requested. In order to make room for congressionally designated projects, Congress trimmed the request for R&D related to the Climate Change Technology Initiative and other R&D programs.

Department of Transportation (DOT). DOT received $643 million, up 6.7 percent. Because of a multiyear reauthorization of transportation programs in May 1998 that significantly boosted funding for highways, the total DOT budget will climb $2.1 billion to $50.1 billion. DOT R&D will share in these gains.

White House revamps export control policy on encryption products

Bowing to congressional pressure, the Clinton administration has drastically revamped its export control policy on encryption products. Despite the reversal in course, proponents of more liberalized export controls still are not completely happy with the administration’s position.

The proposed changes conform much more closely to the goals of House bill H.R. 850, the Security and Freedom through Encryption (SAFE) Act, which has strong support in Congress. They include easing export restrictions on retail products, allowing a higher bit standard for encryption exports, and supporting decryption technologies for law enforcement agencies.

The proposal would decontrol the sale of 64-bit encryption products (the existing standard is 56 bits), which would bring U.S. policy into compliance with the Wassenaar Agreement, a multinational treaty on export controls. In addition, 64-bit encryption products could now be exported under a license exception after only a one-time technical review, a proposal that is also included in the SAFE Act. However, encryption products could not be exported to the so-called “Terrorist 7” nations that support terrorism. Finally, foreign nationals would no longer need an export license to do encryption work for U.S. firms. The administration announced its changes in September 1999 and said it would unveil draft export control regulations by mid-December.
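To put the shift from a 56-bit to a 64-bit standard in perspective (a back-of-the-envelope illustration, not part of the policy announcement), each additional key bit doubles the number of possible keys, so

\[
\frac{2^{64}}{2^{56}} = 2^{8} = 256;
\]

that is, a 64-bit key space (roughly \(1.8 \times 10^{19}\) keys) is 256 times larger than a 56-bit key space (roughly \(7.2 \times 10^{16}\) keys), making exhaustive key search correspondingly harder.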

In addition to the changes in export control policy, the administration proposed legislation aimed at dealing with the concerns raised by law enforcement and national security officials about the liberalization of the sale of encryption products. The Cyberspace Electronic Security Act would create a mechanism by which government officials could obtain access to data protected by encryption products while providing greater protection of the rights and privacy of data holders. The bill would require officials to obtain a court order before they could gain access to the key to an encryption system. Currently, keys can be accessed with a simple grand jury subpoena. The bill would also provide protection for the techniques used by law enforcement officials for decoding encrypted material.

Reaction to the administration’s new policy was cautious at best. Proponents of the SAFE Act say it is a significant step towards achieving their goals, but they are waiting to see if truly substantive changes are implemented. “This announcement is long on potential but short on detail, and Congress will be watching carefully to make sure that the regulations issued in December match the policy announced today,” said Rep. Bob Goodlatte (R-Va.).

Thomas J. Donohue, president of the U.S. Chamber of Commerce, said in a letter to Goodlatte that the SAFE Act addresses several issues that the new policy does not, including codifying the policy, providing a time frame for technical review, preventing the government from mandating the use of certain types of encryption products, and prohibiting mandatory key escrow accounts. “We are concerned that loosening export controls on encryption products through the regulatory process without the legislative safeguards contained in H.R. 850 could be detrimental to the long-term interest of the business community,” he said.

The administration counters that the new policy properly balances the interests of all parties involved by trying to meet the privacy and security concerns of the public while allowing law enforcement officials to do their jobs. Attorney General Janet Reno called the new policy “a balanced approach which will encourage the use of encryption but protect national security and public safety.” One of the SAFE Act’s most ardent supporters, Americans for Computer Privacy, seems to concur with the attorney general. “This development is the new policy America needs to maintain its technological leadership, strengthen the government’s abilities to protect our critical infrastructure, and fight crime in the Information Age,” said the group in a statement.

At least one member of Congress, however, questioned whether the administration was caving in to the demands of the high-tech industry and sacrificing national security needs. Rep. Curt Weldon (R-Penn.) said he believed that the United States may be giving up its edge in information security. “I’m not convinced that what we’re doing here is necessary and logical,” Weldon said. “I want to be absolutely certain that we maintain our information superiority.”


“From the Hill” is prepared by the Center for Science, Technology, and Congress at the American Association for the Advancement of Science in Washington, D.C., and is based on articles from the center’s bulletin Science & Technology in Congress.

Remaking the Academy in the Age of Information

Higher education around the world must undergo a dramatic makeover if it expects to educate a workforce in profound transformation. In 1950, only one in five U.S. workers was categorized as skilled by the Bureau of Labor Statistics. By 1991, the percentage had risen to 45 percent, and it will reach 65 percent in 2000. This dramatic upheaval in the labor force and in its educational and training needs reflects the fact that a great shift has taken place in the corporate world from an overwhelming reliance on physical capital, fueled by financial capital, to an unprecedented focus on human capital as the primary productive asset. This development, combined with the aging of the baby boomers, has been altering the course of higher education at a pace and with a significance undreamed of even five years ago. Likewise, this rate of change in the workforce and its educational needs has been the context for the success of the new for-profit, postsecondary institutions–making it possible for the University of Phoenix (UOP), with nearly 70,000 full-time students and more than 26,000 continuing education students, to become the largest accredited private university in the United States.

In a world where technology expenditures dominate capital spending and the skills that accompany it have half-lives measured in months, not years; where knowledge is accumulating at an exponential rate; where information technology has come to affect nearly every aspect of one’s life; where the acquisition, management, and deployment of information are the key competitive advantages; where electronic commerce already accounts for more than 2.3 million jobs and nearly $500 billion in revenue; education can no longer be seen as a discrete phenomenon, an option exercised only at a particular stage in life or a process following a linear course. Education is progressively becoming for the social body what health care has been to the physical and psychic one: It is the sine qua non of survival, maintenance, and vigorous growth.

Not surprisingly, a new education model, which UOP has anticipated since its founding in 1976, has been quickly molding itself to fit the needs of our progressively more knowledge-based economy. Briefly, the education required today and into the future assumes that learners will need to be reskilled numerous times in their working lives if they wish to remain employed. Access to lifelong learning will therefore become progressively more critical for employees as well as their employers, who will find themselves pressured to provide or subsidize that access if they wish to retain their workforce and remain competitive. This new model is also based on the need to provide learning experiences everywhere and at any time and to use the most sophisticated information and telecommunications technologies. It is also characterized by a desire to provide educational products tailored to the learner; and in order to be competitive in the marketplace, it emphasizes branding and convenience.

It is not difficult to imagine why what were once innovations championed by UOP have become common practice in the corporate and political worlds. A quick survey of the contrasts between the “old” and “new” economies helps to elucidate their necessity. A knowledge-based economy must depend on networks and teamwork with distributed responsibilities; its reliance on technology makes it inherently risky and extremely competitive; and the opportunities created by new and continually evolving jobs place the emphasis on ownership through entrepreneurship and options, rather than on wages and job preservation. With technology and the Internet have also come globalization and e-commerce, making a virtue of speed, change, customization, and choice, and a vice of the maintenance of the status quo, standardization, and top-down hierarchical organization. This is a dynamic setting where win-win solutions are emphasized and public-private partnerships are widely prized. In such a vibrant milieu as this, many of the risk-averse, traditional rules of higher education are beginning to appear not merely quaint but irrelevant or, to the less charitable, downright absurd.

What society needs

The contemporary disconnect between what traditional higher education provides, especially in research institutions and four-year colleges, and what society wants can be gleaned in part through a 1998 poll of the 50 state governors. The aptly titled inquest “Transforming Postsecondary Education for the 21st Century” reveals that the governors’ four priorities were (1) to encourage lifelong learning (97 percent), (2) to allow students to obtain education at any time and in any place via technology (83 percent), (3) to require postsecondary institutions to collaborate with business and industry in curriculum and program development (77 percent), and (4) to integrate applied or on-the-job experience into academic programs (66 percent). In contrast–and most tellingly–the bottom four items were: (1) maintain faculty authority for curriculum content, quality, and degree requirements (44 percent); (2) maintain the present balance of faculty research, teaching load, and community service (32 percent); (3) ensure a campus-based experience for the majority of students (21 percent); and (4) in last place–enjoying the support of only one of the governors responding–maintain traditional faculty roles and tenure (3 percent).

But politicians and business leaders are not the only ones having second thoughts about the structure and rules undergirding higher education today. In a recent poll primarily of university presidents, administrators, and faculty by one of the six official accrediting bodies [the North Central Association (NCA) of Colleges and Schools], the respondents identified the following trends as likely to have the greatest impact on NCA activities: increasing demands for accountability (80 percent), expanding use of distance education (78 percent), increasing attention to teaching and learning (72 percent), and expanding use of the Internet (71 percent).

Perhaps more than any other institution, UOP has contributed to the recognition that education today must be ubiquitous, continuous, consumer-driven, quality-assured, and outcomes-oriented. In effect, UOP has truly shattered the myth for many that youth is the predominant age for schooling, that learning is a top-down localized activity, and that credentialing should depend on time spent on task rather than measurable competence. From its inception, UOP has addressed itself to working adults; and given what it has done in this niche, it has become the country’s first truly national university. In doing so, it has helped to prove that the age of learning is always, the place of learning is everywhere, and the goal of learning for most people is best reached when treated as tactical (with clear, immediate aims), as opposed to strategic (with broad aims and distant goals).

By restricting itself to working adults (all students must be at least 23 years old and employed), UOP contributes to U.S. society in a straightforward fashion: In educating a sector previously neglected or underserved, it helps to increase the productivity of individuals, companies, and regions. A 1998 survey of UOP’s alumni–with a 41 percent response rate–eloquently expresses my point: 63 percent of the respondents stated that UOP was their only choice, and 48 percent said they could not have completed their degree if it were not for UOP. The assessments of quality were also gratifying: 93 percent of alumni reported that UOP’s preparation for graduate school was “good to excellent”; 80 percent agreed that compared with coworkers who went to other colleges and universities, the knowledge and skills they gained from their major prepared them better for today’s job market; and 76 percent agreed that compared with coworkers who went to other colleges and universities, their overall education at UOP gave them a better career preparation.

That said, how UOP or any other institution of higher education is likely to contribute to human well-being in the coming century is not obvious. UOP must continually balance the inevitable need to invest in its transformation with the need to fulfill its present promises to its students, their employers, its regulators and shareholders, and to its own past. But maintaining this balance is a difficult task, because the road leading to the new millennium has been made bumpy by the uncertainty that has accompanied the rapid technological and economic changes.

Shifting sands

To begin with, the New Economy can be characterized by unprecedented employment churn, which is making a potential student out of every worker. Labor Department officials claim that an estimated 50 million workers, or about 40 percent of the workforce, change employers or jobs within any one year. Most of this churn comes from increases in productivity made possible, in part, by companies reducing their labor force in unprofitable or underperforming sectors and expanding their head count in more profitable areas. In addition, a significant part of the churn results from shifts in the ways companies are managed and organized. Today’s companies, facing more varied competition than in the past, must be more flexible than ever before. To accomplish this, they need management and a workforce that have been reeducated and retrained to be cross-functional, cross-skilled, self-managed, able to communicate and work in teams, and able to change on a moment’s notice. In this far more demanding workplace, managers and others who do not meet the criteria are usually the first to be dropped, but the more fortunate are retrained or reeducated.

The model of higher education, as represented by, say, Harvard, is an ideal that not even today’s Harvard seeks to implement.

In an environment with this level of churn and organizational and managerial transformation, where the median age is in the mid-30s and where adults represent nearly 50 percent of college students, a growing number of learners are demanding a professional, businesslike relationship with their campus that is characterized by convenience, cost- and time-effective services and education, predictable and consistent quality, seriousness of purpose, and high customer service geared to their needs, not those of faculty members, administrators, or staff. Put another way, students who want to be players in the New Economy are unlikely to tolerate a just-in-case education that is not practical, up-to-date, or career-focused.

This is not to imply, as some zealots of the new believe, that traditional institutions, especially research-driven ones, are going to disappear. What I mean instead is that the model of higher education, as represented by, say, Harvard, is an ideal that not even today’s Harvard seeks to implement. For instance, Harvard Provost Harvey Fineberg, reflecting on the future of his institution, recently spoke about the UOP model by making reference to Intel founder Andy Grove’s anxious observation that the U.S. domestic steel industry is moribund today because it chose not to produce rebar (the steel used to reinforce concrete) and thereby permitted the Japanese to gain market share in the country. Nervous about the future of his venerable institution and other traditional centers of higher education, he asked during an interview published in the Boston Globe, “Is the University of Phoenix our rebar?” And fearful of being left behind by the future that UOP is helping to create, Fineberg concluded with the observation, “I know that Harvard has to change. No institution remains at the forefront of its field if it does the same things in 20 years that it does today.”

Indeed, no institution of higher education in today’s economy can afford to resist change. Ironically, some of the most jarring characteristics of today’s innovative institutions–their for-profit status, their lack of permanent buildings and faculties, and their need to be customer service-oriented–were actually common among the ancestral universities of the West. What these old institutions had in common with their traditional descendants, however, is that both were and continue to be geographically centered; committed to the pedagogical importance of memorization (rather than information management); and, perhaps even more important, synchronous in their demand that all students meet at regular intervals at specific times and places to hear masters preach to passive subjects.

But the needs of the New Economy challenge higher education to provide something different. Web-based education, an inherently locationless medium, is likely to push to the margins of history a substantial number of those institutions and regulatory bodies that seek to remain geographically centered. Meanwhile, the Internet, along with the database management systems that make the information it transports useful, can provide time-constrained consumers with just-in-time information and learning that, because it can be accessed asynchronously, places the pedagogical focus on arriving at syntheses and developing critical thinking while making localized learning and mere memorization secondary. And with asynchronicity and high electronic interactivity, socialization can be refocused on the educational process, a phenomenon that is reinforced by a commitment to results-oriented learning based on actual performance of specified and testable outcomes, rather than, as in the traditional situation, relying primarily on predetermined inputs and subjective criteria to maintain and assess quality.

All this represents a huge challenge for higher education and technology. A brief comparison of traditional and online university settings may help here. To begin with, there is the issue of content and its delivery. The predominance of the lecturing faculty member, the bored or passive student, and the one-size-fits-all textbook is subject to much condemnation, yet the alternatives are also problematic. Discussion-oriented education, which characterizes e-education, is not easily undertaken successfully. It requires the right structure to make everyone contribute actively to his or her own education, it calls for unlimited access to unlimited resources, and it is best unconstrained by locations in “brick and mortar” classrooms and libraries. Likewise, it calls for a guidance, maturity, and discipline that are often well beyond the reach of indifferent faculty members and unmotivated students, and it is helpless in the face of a disorganized or illogical curriculum. In short, the online education world needed by the New Economy is a daunting one, with no place for jaded teachers or faulty pedagogy.

With these challenges in mind, who can step forward within the world of traditional higher education to force a changing of the rules so as to transform the institutions of the past into those that can serve the needs of the knowledge-based economy of today and tomorrow?

Principles and practices

Making front- and back-office functions convenient and accessible 24 hours a day, 7 days a week, is today primarily a matter of will, patience, and money. But creating access to nearly “24/7” academic programs able to meet the needs of the New Economy is a totally different matter. This also calls for rethinking the rules that guide higher education today. To drive home the point that this is not a simple matter and to answer the question I just posed, I must remark on the catechism that articulates our faith at UOP. We believe that the needs of working adult students can be distilled into six basic propositions, which are easy to state but difficult to practice, particularly for traditional institutions:

  • First, these students want to complete their education while working full-time. In effect, they want all necessary classes to be available in the sequence they need and at times that do not conflict with their work hours. But for this to become a reality, the rule that permits faculty to decide what they will teach and when must be modified, and that is not an easy matter, especially when it comes to tenured faculty.
  • Second, they want a curriculum and faculty that are relevant to the workplace. They want the course content to contribute to their success at work and in their career, and they want a faculty member who knows more than they do about the subject and who knows it as the subject is currently understood and as it is being practiced in fact, not merely in theory. To make this desideratum a reality, the rule that would have to be revamped is the one that decrees faculty will decide on their own what the content of their courses will be. In addition, faculty would have to stay abreast of the most recent knowledge and most up-to-date practices in their field. Here the dominant version of the meaning of academic freedom would have to be reconsidered, for otherwise there would be no force that could compel a tenured professor either to be up to date or to teach a particular content in a particular way.
  • Third, they want a time-efficient education. They want to learn what they need to learn, not what the professor may desire to teach that day; they want it in the structure that will maximize their learning; and they want to complete their degree in a timely fashion.
  • Fourth, they want their education to be cost-effective. They do not want to subsidize what they do not consume (dorms, student unions, stadiums), and they do not want to pay much overhead for the education they seek.
  • Fifth, and this should be no surprise, they expect a high level of customer service. They want their needs to be anticipated, immediately addressed, and courteously handled. They do not want to wait, stand in line, deal with indifferent bureaucrats, or be treated like petitioning intruders as opposed to valued customers.
  • Last, they want convenience: campuses that are nearby, safe, with well-lit parking lots, and with all administrative and student services provided where the teaching takes place.

The UOP model has been addressing these needs for more than a quarter of a century by focusing on an education that has been designed specifically for working adults. This means an education with concentrated programs that are offered all year round during the evening and where students take their courses sequentially, one at a time. All classes are seminar-based, with an average of 14 students in each class (9 in the online courses), and these are facilitated by academically qualified practitioner faculty members, all of whom hold doctorates or master’s degrees, all of whom have been trained by UOP to teach after undergoing an extensive selection review process, and all of whom must work full-time in the field in which they are specifically certified to teach. In turn, all of the curriculum is outcomes-oriented and centrally developed by subject matter experts, within and outside the faculty, supported by the continuous input and oversight provided by UOP’s over 6,500 practitioner faculty members who, although spread across the entire country and overseas, are each individually integrated into the university’s faculty governance structure. This curriculum integrates theory and practice, while emphasizing workplace competencies along with teamwork and communication skills–skills that are well developed in the study groups that are an integral part of each course. Last, every aspect of the academic and administrative process is continually measured and assessed, and the results are integrated into the quality-improvement mechanisms responsible for the institution’s quality assurance.

The rule that permits faculty to decide what they will teach and when must be modified, and that is not an easy matter.

Still, my tone of confidence, and indeed pride, should not lead us away from the question that follows the critical observation made of his own institution by Harvard’s provost: In the face of the challenges the new millennium portends, how durable is the UOP model, or the many others it has inspired, likely to be? For instance, although content is quickly becoming king, its sheer volume is placing a premium on Web portals, online enablers, marketing channels, and information-organizing schemes. In turn, these initiatives–demanded by the knowledge-based economy–have the capacity to transform higher education institutions into totally unrecognizable entities. Online enablers, the outsourcers who create virtual campuses within brick and mortar colleges, can provide potentially unlimited access to seemingly unlimited content sources. And the channels they establish for marketing education can easily be used to market other products to that very important consumer group.

Online information portals can provide remote proprietary and nonproprietary educational content, and more important, they can integrate themselves into the traditional institutions. Traditional institutions that begin with outsourcing educational functions to the portals could eventually find it cost-effective to outsource other academic, administrative, financial, and student services to the technologically savvy portals.

The importance of the role portals and online enablers will play in the transformation of the traditional academy cannot be overestimated. Quite apart from the Amazon.com-like possibilities they open for some higher education institutions, another way to appreciate their effect is to think of them in terms of the parallel represented by the shift of retail banking out of the branch to the ATM and then onto the desktop. Just as bank customers can use the ATMs of many banks, students may find it possible to replace or supplement their alma mater’s courses with courses or learning experiences derived from any other accredited institution, corporate university, or relevant database. Fear of this possibility has spurred traditional institutions to undermine innovations such as the ill-fated California Virtual University, to slow the efforts of Western Governors University, and to create problems for the United Kingdom’s Open University in its ambitious plans for the United States. The power of the entrenched faculty will make it difficult for traditional institutions to take advantage of new technology and adapt to the evolving needs of students.

Winners and losers

What institutions, then, are likely to be the winners in the future? Because staying ahead is critical to UOP, let me return to it once more as a source for speculation. In the light of the dramatic shifts taking place, it may be that UOP can better serve the adult learners of the future by transforming a significant part of itself so as to function as a platform or hub that emphasizes its role as a search engine (an identifier and provider of content), as a portal (a gateway to databases and links to learning experiences), as a rubric-meister (a skilled organizer of complex data), and as an assessor (a recognized evaluator of content, process, and effectiveness whose assessments can help take the guesswork out of shopping for education and training). This is a legitimate proposal for any university that has prided itself on its capacity to innovate and to transform itself. It is as legitimate, at least, as the one the railroads should have posed to themselves when confronted with the question, “Are you in the business of trains, tracks, and warehouses or of transportation?” And it is worth remembering the fate they suffered for their unanimous adherence to the former position. In effect, if, as any university that wants to survive into the next millennium must believe, UOP is primarily in the business of education rather than of brick and mortar classrooms and self-created curriculum, its transformations in the future should be and no doubt will be dictated primarily by what learners need, not by what it has traditionally done.

Consolidation in distance learning can be expected, but the behemoths lie unformed and, I suspect, unimagined.

But before the openness of future possibilities seduces us into forging untimely configurations, a simple warning is in order. A proposal such as the one I have laconically described is not easily implemented even in an innovative university such as mine. After all, UOP is fully aware that to serve its markets well in the future it must provide a variety of delivery modes and educational products, but it is not easy to identify what information technology and telecommunication products are worth investing in. For instance, although UOP pioneered interactive distance learning as early as 1989; although it has the world’s largest completely online, full-time, degree-seeking student enrollment (more than 10,000 students and growing at over 50 percent per year); and although it rightly prides itself on the effectiveness of its online degree programs, we recognize that all our experience and our new Web-enabled platform, which we developed at a substantial cost, cannot in themselves guarantee that we have a solid grasp on the future of interactive distance learning.

First of all, the evolution of distance education has not yet reached its Jurassic Age. Consolidation can be expected, but the behemoths lie unformed and, I suspect, unimagined. An acquisition that does not entail a soon-to-be-extinct technology is hard to spot when technology is changing at warp speed. And opportunities to integrate the next hot model are easy to pass up. Only deep pockets and steel nerves are likely to survive the seismic technological displacements to come.

That said, to serve its markets and thrive, UOP, like any other higher education provider that seeks to survive the next few decades, will need to keep its focus as distance education begins to blur with the edutainment and database products born of the large media companies and the entertainment and publishing giants. That focus, always oxymoronically tempered by flexibility, is most likely to be on the use of any medium–PC, television, Internet appliance, etc.–that permits the level of interaction that leads to effective education and that can command accreditation (if such is still around), a premium price, and customers whose sense of satisfaction transforms them into effective advocates.

Still, although it is a widespread mantra among futurists of higher education that colleges and universities will undergo a profound transformation primarily as a consequence of the quickly evolving information and communication technologies, this does not necessarily imply the demise of site-specific educational venues. To survive deep into the next century, UOP, like any other innovative institution, will need to reaggregate some major parts of itself to form a centralized content-producing and widely based distribution network, but it is unlikely to be able to do this without some forms of campus-based delivery. Having already advanced further than any other institution in unbundling faculty roles (that is, in separating teaching from content, development, and assessment), UOP, without abandoning its physical presence of multiple sites distributed globally, is likely to shape itself more along the lines of a media company and educational production unit than to continue solely as a brick and mortar university with a massive online campus. With media specialists as guides and content experts on retainer, UOP will probably emerge as a mega-educational system with widely distributed campuses, multiple sites in cyberspace, and possibly with a capacity for self-regulated expansion.

As education moves more toward the certification of competence with a focus on demonstrated skills and knowledge–on “what you know” rather than “what you have taken” in school–more associations and organizations that can prove themselves worthy to the U.S. Department of Education will be able to gain accreditation. Increased competition from corporate universities, training companies, course-content aggregators, and publisher-media conglomerates will put a premium on the ability of institutions not only to provide quality education but to do so in a way that meets consumers’ expectations. In short, as education becomes more a continuous process of certification of new skills, institutional success for any higher education enterprise will depend more on successful marketing, solid quality assurance and control systems, and effective use of the new media and not solely on the production and communication of knowledge. This is a shift that I believe UOP is well positioned to undertake, but I am less confident that many non-elite traditional academic institutions, especially private ones, will manage to survive.

That glum conclusion leads me to a final observation: Societies everywhere expect from higher education the provision of an education that can permit them to flourish in the changing global economic landscape. Institutions that can continually change to keep up with the needs of the transforming economy they serve will survive. Those that cannot or will not change will become irrelevant, will condemn misled masses to second-class economic status or poverty, and will ultimately die, probably at the hands of those they chose to delude by serving up an education for a nonexistent world.

Building on Medicare’s Strengths

The aging of the U.S. population will generate many challenges in the years ahead, but none more dramatic than the costs of providing health care services for older Americans. Largely because of advances in medicine and technology, spending on both the old and the young has grown at a rate faster than spending on other goods and services. Combining a population that will increasingly be over the age of 65 with health care costs that will probably continue to rise over time is certain to mean an increasing share of national resources devoted to this group. In order to meet this challenge, the nation must plan how to share that burden and adapt Medicare to meet new demands.

Projections from the 1999 Medicare Trustees Report indicate that Medicare’s share of the gross domestic product (GDP) will reach 4.43 percent in 2025, up from 2.53 percent in 1998. Although this is a substantial increase, it is actually smaller than what was being projected a few years ago. This slowdown in growth does not eliminate the need to act, but it does allow some time for study and deliberation before we do act.
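To put those projections in perspective (a rough calculation from the two percentages cited, not a figure from the Trustees Report itself), growing from 2.53 percent to 4.43 percent of GDP over the 27 years from 1998 to 2025 implies that Medicare’s share of the economy rises by roughly

\[
\left(\frac{4.43}{2.53}\right)^{1/27} - 1 \approx 0.021,
\]

or about 2 percent per year, compounded, on top of whatever growth the economy itself achieves.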

Projected increases in Medicare’s spending arise because of the high costs of health care and growing numbers of people eligible for the program. But most of the debate over Medicare reform centers on restructuring the program. This restructuring would rely on contracting with private insurance plans, which would compete for enrollees. The federal government would subsidize a share of the costs of an average plan, leaving beneficiaries to pay the remainder. More expensive plans would require beneficiaries to pay higher premiums. The goal of such an approach is to make both plans and beneficiaries sensitive to the costs of care, leading to greater efficiency. But this is likely to address only part of the reason for higher costs of care over time. Claims for savings from options that shift Medicare more toward a system of private insurance usually rest on two basic arguments: first, that the private sector is more efficient than Medicare, and second, that competition among plans will generate more price sensitivity on the part of beneficiaries and plans alike. Although seemingly credible, these claims do not hold up under close examination.

Medicare efficiency

Over the period from 1970 to 1997, Medicare’s cost containment performance was better than that of private insurance. In the 1970s, per capita spending in Medicare and in private insurance plans grew very much in tandem, showing few discernible differences (see Chart 1). By the 1980s, per capita spending had more than doubled in both sectors. But Medicare became more cost-conscious than private health insurance in the 1980s, and its cost containment efforts, particularly hospital payment reforms, began to pay off. From about 1984 through 1988, Medicare’s per capita costs grew much more slowly than those in the private sector.

This gap in overall growth in Medicare’s favor stayed relatively constant until the early 1990s, when private insurers began to take the rising costs of health insurance seriously. At that time, growth in the cost of private insurance moderated in a fashion similar to Medicare’s slower growth in the 1980s. Thus, it can be argued that the private sector was playing catch-up to Medicare in achieving cost containment. Private insurance thus narrowed the difference with Medicare in the 1990s, but as of 1997 there was still a considerable way for the private sector to go before its cost growth would match Medicare’s achievement of lower overall growth.


It should not be surprising that per capita growth rates over time are similar for Medicare and private sector spending, because all health care spending shares technological change and improvement as a major driver of expenditure growth. To date, most of the cost savings generated by payers has come from slowing growth in the prices paid for services, with only preliminary inroads made in reducing the use of services or addressing the issue of technology. Reining in the use of services will be a major challenge for private insurance as well as Medicare in the future, and it is not clear whether the public or private sector is better equipped to do this. Further, Medicare’s experience with private plans has been distinctly mixed.

Reform options such as the premium support approach seek savings by allowing the premiums paid by beneficiaries to vary so that those choosing higher-cost plans pay substantially higher premiums. The theory is that beneficiaries will become more price conscious and choose lower-cost plans. This in turn will reward private insurers that are able to hold down costs. And there is some evidence from the federal employee system and the CalPERS system in California that this has disciplined the insurance market to some degree. Studies that have focused on retirees, however, show much less sensitivity to price differences. Older people may be less willing to change doctors and learn new insurance rules in order to save a few dollars each month. Thus, what is not known is how well this will work for Medicare beneficiaries.

For example, for a premium support model to work, at least some beneficiaries must be willing to shift plans each year (and to change providers and learn new rules) in order to reward the more efficient plans. Without that shifting, savings will not occur. (If only new enrollees go into the more efficient plans each year, some savings will be achieved, but these are the least costly beneficiaries, and their concentration may lead to further problems, as discussed below.) In addition, there is the question of how private insurers will respond. Will they seek to improve service or instead focus on marketing and other techniques to attract a desirable, healthy patient base? It simply isn’t known whether competition will really do what it is supposed to do.

A concerted effort to expand benefits is necessary if Medicare is to be an efficient and effective program.

In addition, new approaches to the delivery of health care under Medicare may generate a whole new set of problems, including problems in areas where Medicare is now working well. For example, shifting across plans is not necessarily good for patients; it is not only disruptive, it can raise the costs of care. Some studies have shown that having one physician over a long period of time reduces the costs of care. And if only the healthier beneficiaries choose to switch plans, the sickest and most vulnerable beneficiaries may end up being concentrated in plans that become increasingly expensive over time. The case of retirees left in the federal employee high-option Blue Cross plan, as well as a study of retirees in California, suggests that even when plans become very expensive, beneficiaries may be fearful of switching and end up substantially disadvantaged. Further, private plans by design are interested in satisfying their own customers and generating profits for stockholders. They cannot be expected to meet larger social goals such as making sure that the sickest beneficiaries get high-quality care; and to the extent that such goals remain important, reforms in Medicare will have to incorporate additional protections to balance these concerns as described below.

Core principles

The reason to save Medicare is to retain for future generations the qualities of the program that are valued by Americans and that have served them well over the past 33 years. This means that any reform proposal ought to be judged on principles that go well beyond the savings that they might generate for the federal government.

I stress three crucial principles that are integrally related to Medicare’s role as a social insurance program:

  • The universal nature of the program and its consequent redistributive function.
  • The pooling of risks that Medicare has achieved to share the burdens across sick and healthy enrollees.
  • The role of government in protecting the rights of beneficiaries–often referred to as its entitlement nature.

Although there are clearly other goals and contributions of Medicare, these three are part of its essential core. Traditional Medicare, designed as a social insurance program, has done well in meeting these goals. What about options relying more on the private sector?

Universality and redistribution. An essential characteristic of social insurance that Americans have long accepted is the sense that once the eligibility criterion of having contributed to the program has been met, benefits will be available to all beneficiaries. One of Medicare’s great strengths has been providing much-improved access to health care. Before Medicare’s passage, many elderly people could not afford insurance, and others were denied coverage as poor risks. That changed in 1966 and had a profound impact on the lives of millions of seniors. The desegregation of many hospitals occurred on Medicare’s watch. And although there is substantial variation in the ability of beneficiaries to supplement Medicare’s basic benefits, basic care is available to all who carry a Medicare card. Hospitals, physicians, and other providers largely accept the card without question.

Once on Medicare, enrollees no longer have to fear that illness or high medical expenses could lead to the loss of coverage–a problem that still happens too often in the private sector. This assurance is an extremely important benefit to many older Americans and persons with disabilities. Developing a major health problem is not grounds for losing the card; in fact, in the case of the disabled, it is grounds for coverage. This is vastly different from the philosophy of the private sector toward health coverage. Even though many private insurers are willing and able to care for Medicare patients, the easiest way to stay in business as an insurer is to seek out the healthy and avoid the sick.

Will reforms that lead to a greater reliance on the market still retain the emphasis on equal access to care and plans? For example, differential premiums could undermine some of the redistributive nature of the program that assures even low-income beneficiaries access to high-quality care and responsive providers.

The pooling of risks. One of Medicare’s important features is the achievement of a pooling of risks among the healthy and sick covered by the program. Even among the oldest of the beneficiaries, there is a broad continuum across individuals’ needs for care. Although some of this distribution is totally unpredictable (because even people who have historically had few health problems can be stricken with catastrophic health expenses), a large portion of seniors and disabled people have chronic problems that are known to be costly to treat. If these individuals can be identified and segregated, the costs of their care can expand beyond the ability of even well-off individuals to pay over time.

A major impetus for Medicare was the need to protect the most vulnerable. That’s why the program focused exclusively on the old in 1965 and then added the disabled in 1972. About one in every three Medicare beneficiaries has severe mental or physical health problems. In contrast, the healthy and relatively well-off (with incomes over $32,000 per year for singles and $40,000 per year for couples) make up less than 10 percent of the Medicare population. Consequently, anything that puts the sickest at greater risk relative to the healthy is out of sync with this basic tenet of Medicare. A key test of any reform should be whom it best serves.

If the advantages of one large risk pool (such as the traditional Medicare program) are eliminated, other means will have to be found to make sure that insurers cannot find ways to serve only the healthy population. Although this very difficult challenge has been studied extensively, as yet no satisfactory risk adjuster has been developed. What has been developed to a finer degree, however, are marketing tools and mechanisms to select risks. High-quality plans that attract people with extensive health care needs are likely to be more expensive than plans that focus on serving the relatively healthy. If risk adjusters are never powerful enough to eliminate these distinctions and level the playing field, then those with health problems, who also disproportionately have lower incomes, would have to pay the highest prices under many reform schemes.

The role of government. Related to the two principles above is the role that government has played in protecting beneficiaries. In traditional Medicare, this has meant having rules that apply consistently to individuals and ensure that everyone in the program has access to care. It has sometimes fallen short in terms of the variations that occur around the country in benefits, in part because of interpretation of coverage decisions but also because of differences in the practice of medicine. For example, rates of hospitalization, frequency of operations such as hysterectomies, and access to new tests and procedures vary widely by region, race, and other characteristics. But in general Medicare has to meet substantial standards of accountability that protect its beneficiaries.

If the day-to-day provision of care is left to the oversight of private insurers, what will be the impact on beneficiaries? It is not clear whether the government will be able to provide sufficient oversight to protect beneficiaries and ensure them of access to high-quality care. If an independent board–which is part of many restructuring proposals–is established to negotiate with plans and oversee their performance, to whom will it be accountable? Further, what provisions will be in place to step in when plans fail to meet requirements or leave an area abruptly? What recourse will patients have when they are denied care?

One of the advantages touted for private plans is their ability to be flexible and even arbitrary in making decisions. This allows private insurers to respond more quickly than a large government program and to intervene where they believe too much care is being delivered. But what look like cost-effectiveness activities from an insurer’s perspective may be seen by a beneficiary as the loss of potentially essential care. Which is more alarming: too much care or care denied that cannot be corrected later? Some of the “inefficiencies” in the health care system may be viewed as a reasonable response to uncertainty when the costs of doing too little can be very high indeed.

Preserving what works

Much of the debate over how to reform the Medicare program has focused on broad restructuring proposals. However, it is useful to think about reform in terms of a continuum of options that vary in their reliance on private insurance. Few advocate a fully private approach with little oversight; similarly, few advocate moving back to 1965 Medicare with its unfettered fee-for-service and absence of any private plan options. In between are many possible options and variations. And although the differences may seem technical or obscure, many of these “details” matter a great deal in terms of how the program will change over time and how well beneficiaries will be protected. Perhaps the most crucial issue is how the traditional Medicare program is treated. Under the current Medicare+Choice arrangement, beneficiaries are automatically enrolled in traditional Medicare unless they choose to go into a private plan. Alternatively, traditional Medicare could become just one of many plans that beneficiaries choose among–though beneficiaries would probably pay a substantially higher premium if they chose it.

What are the tradeoffs from increasingly relying on private plans to serve Medicare beneficiaries? The modest gains in lower costs that are likely to come from some increased competition and from the flexibility that the private sector enjoys could be more than offset by the loss of social insurance protection. The effort necessary to create in a private plan environment all the protections needed to compensate for moving away from traditional Medicare seems too great and too uncertain. And on a practical note, many of the provisions in the Balanced Budget Act of 1997 that would be essential in any further moves to emphasize private insurance–generating new ways of paying private plans, improving risk adjustment, and developing information for beneficiaries, for example–still need a lot of work.

In addition, it is not clear that there is a full appreciation by policymakers or the public at large of all the consequences of a competitive market. Choice among competing plans and the discipline that such competition can bring to prices and innovation are often stressed as potential advantages of relying on private plans for serving the Medicare population. But if there is to be choice and competition, some plans will not do well in a particular market, and as a result they will leave. In a market system, withdrawals should be expected; indeed, they are a natural part of the process by which uncompetitive plans that cannot attract enough enrollees leave particular markets. If HMOs have a hard time working with doctors, hospitals, and other providers in an area, they may decide that this is not a good market. And if they cannot attract enough enrollees to justify their overhead and administrative expenses, they will also leave an area. The whole idea of competition is that some plans will do well and in the process drive others out of those areas. In fact, if no plans ever left, that would be a sign that competition was not working well.

But plan withdrawals will result in disruptions and complaints by beneficiaries, much like those now occurring in response to the recently announced withdrawals from Medicare-plus-Choice. For various reasons, private plans can choose each year not to accept Medicare patients. In each of the past two years, about 100 plans around the country have decided to end their Medicare businesses in some or all of the counties they serve. In those cases, beneficiaries must find another private plan or return to traditional Medicare. They may have to choose new doctors and learn new rules. This situation has led to politically charged discussions about payment levels in the program, even though that is only one of many factors that may cause plans to withdraw. Thus, not only will beneficiaries be unhappy, but there may be strong political pressure to keep federal payments higher than a well-functioning market would require.

What I would prefer to see is an emphasis on improvements in the private plan options and the traditional Medicare program, basically retaining the current structure in which traditional Medicare is the primary option. Rather than focusing on restructuring Medicare to emphasize private insurance, I would place the emphasis on innovations necessary for improvements in health care delivery regardless of setting.

That is, better norms and standards of care are needed if we are to provide quality-of-care protections to all Americans. Investment in outcomes research, disease management, and other techniques that could lead to improvements in treatment of patients will require a substantial public commitment. This cannot be done as well in a proprietary for-profit environment where new ways of coordinating care may not be shared. Private plans can play an important role and may develop some innovations on their own, but in much the same way as we view basic research on medicine as requiring a public component, innovations in health care delivery also need such support. Further, innovations in treatment and coordination of care should focus on those with substantial health problems–exactly the population that many private plans seek to avoid. Some private plans might be willing to specialize in individuals with specific needs, but this is not going to happen if the environment is one that emphasizes price competition and has barely adequate risk adjusters. Innovative plans would be likely to suffer in that environment.

Finally, the default plan–for those who do not or cannot choose or who find a hostile environment in the world of competition–must, at least for the time being, be traditional Medicare. Thus, there needs to be a strong commitment to maintaining a traditional Medicare program while seeking to define the appropriate role for alternative options. But for the time being, there cannot and should not be a level playing field between traditional Medicare and private plans. Indeed, if Medicare truly used its market power as do other dominant firms in an industry, it could set its prices in markets in order to drive out competitors, or it could sign exclusive contracts with providers, squeezing out private plans. When private plans suggest that Medicare should compete on a level playing field, it is unlikely that they mean this to be taken literally.

Other reform issues

Although most of the attention given to reform focuses on structural questions, there are other key issues that must also be addressed, including the adequacy of benefits, provisions that pass costs on to beneficiaries, and the need for more general financing. Even after accounting for changes that may improve the efficiency of the Medicare program through either structural or incremental reforms, the costs of health care for this population group will still probably grow as a share of GDP. That will mean that the important issue of who will pay for this health care–beneficiaries, taxpayers, or a combination of the two–must ultimately be addressed to resolve Medicare’s future.

Improved benefits. It is hard to imagine a reformed Medicare program that does not address two key areas of coverage: prescription drugs and a limit on the out-of-pocket costs that any individual beneficiary must pay in a year. Critics of Medicare rightly point out that the inadequacy of its benefit package has led to the development of a variety of supplemental insurance arrangements, which in turn create an inefficient system in which most beneficiaries rely on two sources of insurance to meet their needs. Further, without a comprehensive benefit package that includes those elements of care that are likely to naturally attract sicker patients, viable competition without risk selection will be difficult to attain.

It is sometimes argued that improvements in coverage can occur only in combination with structural reform. And some advocates of a private approach to insurance go further, suggesting that the structural reform itself will naturally produce such benefit improvements. This implicitly holds the debate on improved benefits hostage to accepting other unrelated changes. And to suggest that a change in structure, without any further financial contributions to support expanded benefits, will yield large expansions in benefits is wishful thinking. A system designed to foster price competition is unlikely to stimulate expansion of benefits.

Expanding benefits is a separable issue from how the structure of the program evolves over time. However, it is not separable from the issue of the cost of new benefits. This is quite simply a financing issue, and it would require new revenues, probably from a combination of beneficiary and taxpayer dollars. A voluntary approach to providing such benefits through private insurance, such as we have at present, is seriously flawed. For example, prescription drug benefits generate risk selection problems; already the costs charged by many private supplemental plans for prescription drugs equal or outweigh their total possible benefits because such coverage attracts a sicker-than-average set of enrollees. A concerted effort to expand benefits is necessary if Medicare is to be an efficient and effective program.

Disability beneficiaries. A number of special problems face the under-65 disabled population on Medicare. The two-year waiting period before a Social Security disability recipient becomes eligible for coverage creates severe hardships for some beneficiaries, who must pay enormous costs out of pocket or delay treatments that could improve their conditions if they do not have access to other insurance. In addition, a disproportionate share of the disability population has mental health needs, and Medicare’s benefits in this area are seriously lacking. Special attention to the needs of this population should not get lost in the broader debate.


Beneficiaries’ contributions. Some piece of a long-term solution probably will (and should) include further increases in contributions from beneficiaries beyond what is already scheduled to go into place. The question is how to do so fairly. Options for passing more costs of the program on to beneficiaries, either directly through new premiums or cost sharing or indirectly through options that place them at risk for health care costs over time, need to be carefully balanced against beneficiaries’ ability to absorb these changes. Just as Medicare’s costs will rise to unprecedented levels in the future, so will the burdens on beneficiaries and their families. Even under current law, Medicare beneficiaries will be paying a larger share of the overall costs of the program and spending more of their incomes in meeting these health care expenses (see Chart 2).

In addition, options to increase beneficiary contributions to the cost of Medicare further increase the need to provide protections for low-income beneficiaries. The current programs to provide protections to low-income beneficiaries are inadequate, particularly if new premium or cost-sharing requirements are added to the program. Participation in these programs is low, probably in part because they are housed in the Medicaid program and are thus tainted by association with a “welfare” program. Further, states, which pay part of the costs, tend to be unenthusiastic about the programs and probably also discourage participation.

Financing. Last but not least, Medicare’s financing must be part of any discussion about the future. We simply cannot expect as a society to provide care to the most needy of our citizens for services that are likely to rise in costs and to absorb a rapid increase in the number of individuals becoming eligible for Medicare without facing the financing issue head on. Medicare now serves one in every eight Americans; by 2030 it will serve nearly one in every four. And these people will need to get care somewhere. If not through Medicare, then where?

Confronting the Paradox in Plutonium Policies

The world’s huge stocks of separated, weapons-usable military and civil plutonium are at present the subject of profoundly contradictory, paradoxical policies. These policies fail to squarely confront the serious risks to the nuclear nonproliferation regime posed by civil plutonium as a fissile material that can be used by rogue states and terrorist groups to make nuclear weapons.

About 100 metric tons of weapons plutonium has been declared surplus to military needs by the United States and Russia and will be converted to a proliferation-resistant form (ultimately to be followed by geologic disposal) if present policy commitments are realized. But no comparable national or international policy applies to the civil plutonium stocks, although these are already more than 50 percent greater than the stocks of military plutonium arising from the dismantling of bombs and warheads.

Most of the separated civil plutonium has been created at commercial fuel reprocessing plants in Britain and France, to which various other countries, especially Germany and Japan, have been sending some of the spent fuel from their commercial uranium-fueled reactors, expecting to eventually use the returned plutonium as reactor fuel. But plans for recycling plutonium as reactor fuel have been slow in maturing, so large inventories of civil plutonium have accumulated.

Risks of separated plutonium

The greatest concentration of civil plutonium stocks is at the nuclear fuel reprocessing centers in France at La Hague, on the English Channel, and in Britain at Sellafield, on the Irish Sea. The total for all French and British civil stocks, at the reprocessing centers and elsewhere, is over 132 tons, part of it held for utilities in Germany, Japan, and other countries. Russia’s stock of about 30 tons is the third largest, nearly all of it stored at the reprocessing plant at Chelyabinsk, in the Urals. Smaller but significant stocks of plutonium are present in Germany, Japan, Belgium, and Switzerland, converted in considerable part to a mixed plutonium-uranium oxide fuel called MOx, now waiting to be used in designated light water reactors. Recycling plutonium in breeder reactors is a quite different technology from that used in light water reactors, and commercial development of breeders has been beset by repeated economic and technical reverses that have left the future of breeders very much in doubt. In the United States, early ventures in fuel reprocessing, plutonium recycling, and commercial breeder development all came to an end by the late 1970s and early 1980s. But several tons of separated civil plutonium remain in this country from the early reprocessing effort.

According to estimates by the International Atomic Energy Agency (IAEA), total global civil stocks of separated plutonium may exceed 250 tons by the year 2010. The stakes will continue to be enormous for keeping the separated civil and military plutonium secure and well guarded against intrusions by outsiders and malevolent designs by insiders. Granted, there are national and IAEA safeguards for closely accounting for and protecting all separated plutonium and fresh MOx. We believe that these safeguards reduce the risk of plutonium diversions, thefts, and forcible seizures to a low probability. But in our view the risk is still too great in light of the horrendous consequences of failure.

Separated plutonium could become a target for theft or diversion by a subnational terrorist group, possibly one assisted by a rogue state such as Iraq or North Korea. Less than 10 kilograms of plutonium might suffice for a crude bomb small enough to be put in a delivery van and powerful enough to explode with a force thousands of times greater than that of the bomb that destroyed the federal building in Oklahoma City.

The risk posed by separated plutonium has partly to do with the possibility of diversions for weapons use by a nation whose utilities own the plutonium. Indeed, the potential for such a diversion by a state that has a store of plutonium could increase over time. Even a respected nonnuclear-weapons state, such as Japan, might at some future time feel coerced by new and threatening circumstances to break with the nonproliferation regime and exploit its civil plutonium to make nuclear weapons.

But the greater and more immediate problem is the risk of theft or diversion by terrorists, and that risk lies chiefly in the circulation of plutonium within the nuclear fuel cycle. The many different fuel cycle operations, such as shipping, blending with uranium, fabrication into fresh MOx, storage, and further shipping, all provide opportunities for diversion. A plutonium disposition program will therefore be less than half a loaf unless accompanied by a commitment to end all further separation of plutonium.

An industrial disposition campaign

In less than two decades from start-up of the campaign, the nuclear industries of France and Britain could convert virtually the entire global inventory of separated civil plutonium and half the surplus military plutonium to a proliferation-resistant form. But this would mean ending all civil fuel reprocessing and, with the completion of the plutonium conversion campaign, ending all plutonium recycling as well.

For the French and British nuclear industries to embrace so profound a change would mean a marked shift in policy by government as well as industry, for in France the nuclear industry is wholly government-owned and in the United Kingdom British Nuclear Fuels is a national company. The change in government policy in those two countries could be achieved only with the cooperation of key governments abroad–especially the governments of the United States, Russia, Germany, and Japan–and of foreign utilities that own significant amounts of plutonium.

In addition, there would have to be a growing demand for safe plutonium disposition around the world: by political leaders and their parties, by environmental and safe-energy groups, and by the peace groups, policy research groups, and international bodies that together make up the nuclear nonproliferation community. Essential to all the foregoing will be a keen awareness of the proliferation risks associated with separated plutonium and of the possibilities for safely disposing of that plutonium.

We believe that a plutonium disposition campaign relying mostly on nuclear facilities already existing in Britain and France could be carried out far more quickly than would be possible for campaigns elsewhere requiring the construction or modification of whole suites of industrial plants and reactors. Indeed, disposition of Russia’s surplus military plutonium alone is expected to depend on construction of a new MOx fuel plant that the major industrial countries will almost certainly be called upon to pay for.

The United States, where much development work has been done on plutonium disposition, will most likely continue with its own program for disposition of surplus U.S. military plutonium. But the French and British should be encouraged to assume a major, indeed dominant, role in the disposition of Russia’s surplus military plutonium as well as in the disposition of the world’s stocks of separated civil plutonium.

The French-British campaign could be expected to squarely meet what the United States and Russia have decided on (with approval by the major industrial countries, or “G-8”) as the standard appropriate for safe disposition of surplus U.S. and Russian military plutonium. The standard agreed to was first adopted by the U.S. Department of Energy (DOE) on the recommendation of the National Academy of Sciences’ (NAS’s) Committee on International Security and Arms Control (CISAC). Known as the “spent fuel standard,” it represents a rough measure of the proliferation resistance afforded by the obstacles that spent fuel presents to plutonium recovery, namely its intense radioactivity, its great mass and weight, and its requirement for remote handling. The obstacles referred to are very real, especially for any party lacking the resources of a nation-state.

The job ahead

In meeting the spent fuel standard, the United States plans to have its surplus military plutonium disposition program proceed along two tracks. On one track, plutonium will be converted to MOx to be used in certain designated reactors and thereby rendered spent. On the other track, plutonium will be immobilized by incorporating it in massive, highly radioactive glass logs. In DOE parlance, the two tracks are the “MOx option” and the “immobilization option.” Ultimately, after repositories become available, the spent MOx and the radioactive glass logs would be placed in deep geologic disposal.

The MOx and immobilization options are clearly within the capabilities of the nuclear industry in France and Britain. The MOx option offers the more immediate promise. MOx fuel manufacturing capacity in France and Britain (together with some in Belgium) will soon rise to approximately 350 tons of MOx production a year, which is more than enough for an intensive and expeditious plutonium disposition campaign. The MOx fuel plants are either operating already or are built and awaiting licensing to receive civil plutonium.

Further, Electricité de France has designated 28 of its reactors to operate with MOx as 30 percent of their fuel cores, and of these, 17 already are licensed to accept MOx. With all 28 reactors in use, half of the world inventory of 300 tons of separated civil plutonium and surplus non-U.S. military plutonium expected by the year 2010 could be converted to spent MOx in about 17 years (we exclude here the 50 or so tons of U.S. military plutonium that the United States will dispose of itself). The fresh civil MOx going into the reactors (we assume a plutonium content of 6.6 percent) would be easily handled because it emits relatively little external radiation; but the spent MOx coming out of the reactors would be intensely radioactive and present a significant barrier to plutonium diversion. The spent MOx would not be reprocessed but rather marked for eventual geologic disposal.
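
As a rough back-of-envelope check on these figures, the short Python sketch below reproduces the arithmetic using only the numbers quoted above (the 6.6 percent plutonium content, the 150-ton disposition target, the 17-year campaign, and the 28 designated reactors); the derived values are approximations, not program commitments.

# Back-of-envelope check of the MOx disposition arithmetic, using only figures quoted above.
plutonium_to_dispose_t = 150       # half of the ~300-ton inventory expected by 2010
campaign_years = 17                # campaign length cited in the text
reactors = 28                      # EdF reactors designated to run with 30 percent MOx cores
pu_fraction_in_mox = 0.066         # assumed plutonium content of fresh civil MOx

mox_needed_t = plutonium_to_dispose_t / pu_fraction_in_mox     # ~2,270 t of fresh MOx in all
mox_per_year_t = mox_needed_t / campaign_years                 # ~134 t of MOx a year
pu_per_reactor_year_t = plutonium_to_dispose_t / campaign_years / reactors   # ~0.3 t

print(f"Fresh MOx needed per year: {mox_per_year_t:.0f} t "
      f"(vs. roughly 350 t/year of fabrication capacity)")
print(f"Plutonium converted per designated reactor per year: {pu_per_reactor_year_t:.2f} t")

On these assumptions, the campaign would claim well under half of the roughly 350 tons a year of MOx fabrication capacity noted above, consistent with the statement that the capacity is more than enough for an expeditious campaign.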

The immobilization option, as thus far developed in the United States, is not as well defined as the MOx option. Now favored by DOE is a “can-in-canister” concept that is still under technical review. The plutonium would be embedded in ceramic pucks that would be placed in cans arrayed in a latticework inside large disposal canisters. Molten borosilicate glass containing highly radioactive fission products would be poured into these canisters.

At DOE’s request, an NAS panel is currently reviewing the can-in-canister design to judge whether it does in fact meet the spent fuel standard. Experts from the national laboratories have, for instance, offered conflicting views about whether terrorists using shaped explosive charges might quickly separate the cans of plutonium pucks from the radioactive glass. The NAS review panel awaits further studies, including actual physical tests, to either approve the present design or arrive at a better one.

Yet despite the uncertainties, immobilization remains an important option, to be carried out in parallel with the MOx option or, as some advocate, to be chosen in place of the MOx option. Immobilization does not entail the security problems that come from having to transport plutonium from place to place. In the MOx option, by contrast, there is a risk in transporting plutonium from reprocessing centers to MOx factories and in transporting fresh MOx to reactors. We see this risk as acceptable only because the MOx program would be completed in less than two decades and then be shut down.

If DOE can arrive at an acceptable immobilization design, the French and British could no doubt come up with an acceptable design of their own, either a variant of the can-in-canister concept or, perhaps better, a design for a homogeneous mixture of plutonium, glass, and fission products. Cogema, the French nuclear fuel cycle company, has at La Hague two industrial-scale high-level waste vitrification lines now operating and another on standby. British Nuclear Fuels Limited (BNFL) has a similar line of French design at Sellafield. Earlier we noted that with the MOx option, half of the 300-ton plutonium inventory expected by the year 2010 could be disposed of by the French and British in about 17 years; disposing of the other half by immobilization could also take about 17 years. There are not yet sufficient data to compare the costs of the MOx and immobilization options.

The nuclear industry’s future

For the nuclear industry in France and Britain, a commitment to such a plutonium disposition campaign and to ending fuel reprocessing and plutonium recycling would be truly revolutionary. It would mark a sea change in industry thinking about plutonium and proliferation risks–not just in these two countries, but far more widely.

With development of an economic breeder program proving stubbornly elusive, plutonium simply cannot compete as a nuclear fuel on even terms with abundant, relatively inexpensive, low-enriched uranium. In hindsight it seems clear that use of plutonium fuel abroad has depended more on government policy and subsidy than on economics. And politically, plutonium has been only a burden, at times a heavy one. In Germany in the early to mid 1980s, protesters came out by the thousands to confront police in riot gear at sites proposed for fuel reprocessing centers (which as things turned out were never built). For the nuclear industry worldwide, and even in France and Britain, it is vastly more important to find solutions to the problems of long-term storage and ultimate disposal of spent fuel than to sustain a politically harassed, artificially propped-up, fuel reprocessing and plutonium recycling program.

Worldwide there are about 130,000 metric tons of spent fuel, about 90,000 tons of it stored at 236 widely scattered nuclear stations in 36 different countries, the rest stored principally at spent fuel reprocessing centers and at special national spent fuel storage facilities such as those in Germany and Sweden. Of the approximately 200,000 tons of spent fuel generated since use of civil nuclear energy began, only 70,000 tons have been reprocessed. This gap promises to continue, because although about 10,000 tons of spent fuel are now being generated annually, the world’s total civil reprocessing capacity is only about 3,200 tons a year.
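
The size of that gap is easy to make concrete. The following minimal Python sketch uses only the tonnages quoted in the preceding paragraph; the result is an approximation, since both generation and reprocessing rates vary from year to year.

# Net growth of the unreprocessed spent fuel inventory, from the tonnages in the text.
generated_per_year_t = 10_000       # spent fuel generated worldwide each year
reprocessing_capacity_t = 3_200     # total civil reprocessing capacity, per year
current_inventory_t = 130_000       # unreprocessed spent fuel on hand today

net_growth_t = generated_per_year_t - reprocessing_capacity_t   # about 6,800 t a year
print(f"Even at full reprocessing capacity, the inventory grows by about "
      f"{net_growth_t:,} t a year, roughly {net_growth_t / current_inventory_t:.0%} "
      f"of today's stock.")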

Spent fuel has a curious dual personality with respect to proliferation risks. On the one hand, as made explicit by the formally ordained spent fuel standard, spent fuel is inherently resistant to proliferation because of its intense radioactivity and other characteristics. But uranium spent fuel contains about 10 kilograms of plutonium per ton, and the approximately 1,100 tons of recoverable plutonium in the present global inventory of spent fuel is about four times the amount that was in the arsenals of the United States and the Soviet Union at the peak of the nuclear arms race.

As CISAC has recognized, meeting the spent fuel standard will not be the final answer to the plutonium problem, because recovery of plutonium from spent fuel for use in nuclear explosives is possible for a rogue state such as Iraq or North Korea and even for a terrorist group acting with state sponsorship. Accordingly, the nuclear nonproliferation regime cannot be complete and truly robust until storage of nearly all spent fuel is consolidated at a relatively few global centers, the principal exception being fuel recently discharged from reactors and undergoing initial cooling in pools at the nuclear power stations. But what is particularly to the point here is that the nuclear industry will itself be incomplete until a global system for spent fuel storage and disposal exists, or at least is confidently begun. Without such a system, the nuclear industry will be in a poor position to long continue at even its present level of development, much less aspire to a larger share in electricity generation over the next century.

A lack of government urgency

Not the slightest beginning has been made in establishing the needed global network of centers for long-term storage and ultimate disposal of spent fuel. No country is close to opening a deep geologic repository even for its own spent fuel or high-level waste, quite aside from opening one that would accept such materials from other countries. A common and politically convenient attitude on the part of many governments has been to delay the siting and building of repositories until decades into the future. Under the IAEA, an international convention for radioactive waste management has been adopted; but although this may result in greater uniformity among nations with respect to standards of radiation protection for future people, the convention does not mention, even as a distant goal, establishing a global network of storage and disposal centers available to all nations.

The United States has the most advanced repository program, yet it is a prime case in point with respect to a lack of urgency and priority. Yucca Mountain, about 100 miles northwest of Las Vegas, Nevada, has long been under investigation as a repository site. But Congress lets this program poke along underfunded. This past fiscal year, more than $700 million went into the Nuclear Waste Fund from the user fee on nuclear electricity, yet rather than see all this money go to support the nuclear waste program, Congress chose to have about half of it go to federal budget reduction. The Yucca Mountain project received $282.4 million.

The Yucca Mountain repository is scheduled to be licensed, built, and receiving its first spent fuel by the year 2010, but as matters stand this will not happen. Even the promulgation of radiation standards for the project has languished from year to year. A delay in opening the repository would not itself be troubling if the government would adopt a policy of consolidating surface storage of spent fuel near the Yucca Mountain site. In fact, we have repeatedly urged adoption of such a policy, one benefit being that it would allow all the time needed for exploration of Yucca Mountain and for development of a repository design that meets highly demanding standards of containment.

But little progress has been made on this front either, and spent fuel continues to accumulate at the more than 70 U.S. nuclear power stations, threatening some of them with closure. The state of Minnesota, for instance, limits the amount of storage onsite at Northern States Power’s Prairie Island station. Also, the wrong example is being set from the standpoint of the nuclear nonproliferation regime. In our view, consolidated storage at a limited number of internationally sanctioned sites, with greater central control over spent fuel shipments and inventories, should be the universal rule.

One might think that opponents of nuclear energy, especially among the activists who make it their business to probe nuclear programs for weaknesses, would be deploring the lack of consolidated spent fuel storage. But neither the activists here in the United States nor those in Europe are doing so. Indeed, as part of their strategy for stopping work at Yucca Mountain, the U.S. activists insist that all spent fuel remain at the nuclear stations, for the next half century if need be. For them, the unresolved problem of long-term storage and ultimate disposal of nuclear waste should be left hanging around the neck of the nuclear enterprise in order to hasten its demise. Activists acknowledge that sooner or later safe disposal of such waste will be necessary, but in their perspective the radiation hazards are for the ages and what is urgent is to shut down nuclear power. The nonproliferation regime and the need to strengthen it don’t enter into these calculations. But the plutonium in spent fuel poses risks not just for the ages but right now. Rogue states and terrorists are here with us today.

What is needed is to have the safe disposition of plutonium become a central and widely understood rationale for the storage and disposal of spent fuel and high-level waste. In disposition of separated civil and military plutonium the final step would be geologic disposal of the spent MOx and canisters of radioactive glass. This would occur along with disposal of spent uranium fuel containing the vastly larger amount of plutonium in that fuel. The 47 kilograms of civil plutonium contained in every ton of spent MOx is nearly five times the 10 kilograms contained in a ton of spent uranium fuel, but even the latter is enough for one or two nuclear weapons. Accordingly, geologic disposal of spent fuel would be needed for a robust nonproliferation regime even if no plutonium had ever been separated.

Creating a global network of internationally sanctioned centers for the storage and disposal of spent fuel and high-level waste has a powerful rationale on these grounds alone, and it is a rationale that needs to be clearly recognized.

An opportunity for industry

The nuclear industry in the United States, France, Britain, and around the world should be working determinedly to make policymakers, editorial writers, and society at large understand what is at stake. This is the most effective thing the industry can do to promote a political sea change with respect to acceptance of plans for spent fuel storage and disposal that are vital to nuclear power’s survival. But proclaiming a concern for strengthening the nonproliferation regime will ring hollow if, as in France, the further separation and recycling of plutonium are to continue and indeed expand. The MOx cycle now planned by the French would keep a working inventory of about 23 tons of plutonium circulating through the system, either in separated form or as fresh MOx.

The nuclear industry, especially in Europe, Russia, and Japan, must rethink its old assumptions and demonstrate in dramatic fashion its concern to ensure a technology that is far less susceptible to abuse by weapons proliferators. We see an attractive deal waiting to be struck: The nuclear industry gives up civil fuel reprocessing and plutonium separation and volunteers to assume a central role in the safe disposition of all separated plutonium, civil and military alike. In return, the governments of all nations that are able to help (not least the United States) would commit themselves to creating the global network of centers needed for storage and disposal of spent fuel and high-level waste. Underlying such a deal must be a wide societal and political understanding that to let things continue indefinitely as they are will present an unacceptable risk of eventual catastrophes.

Leaders of the nuclear enterprise, after sorting out their thinking among themselves, might propose an international conference of high officials from government, the nuclear industry, and the nonproliferation regime. This conference, addressing the realities of plutonium disposition and spent fuel storage and disposal, would try to agree on goals, the preparation of an action plan, and an appropriate division of responsibilities. Such a conference, if successful, could create a new day for nuclear energy.

One might, for instance, see a new urgency and priority on the part of the U.S. Congress and White House with respect to providing both consolidated national storage of spent fuel and a geologic repository capable of protecting future people from dangerous radiation and from recovery of plutonium for use as nuclear explosives. The United States might agree even to accept at least limited amounts of foreign spent fuel when this would achieve a significant nonproliferation objective. A similar response to the new international mandate could be expected from other countries.

Time to end reprocessing

In the 1970s, two U.S. presidents, Gerald Ford (a Republican) and Jimmy Carter (a Democrat), moved to withdraw government support for commercial reprocessing and plutonium recycling because of the proliferation risks. President Carter urged other countries to follow the U.S. lead and go to a “once-through” uranium fuel cycle, with direct geologic disposal of spent fuel. But the French and British reprocessors, unmoved by the U.S. initiative, continued on their own way, and many foreign utilities (especially in Germany and Japan) were eager to enter into contracts, for the national laws or policies under which they operated either favored reprocessing or insisted upon it.

But circumstances today are quite different. Some individuals of stature within the reprocessing nations themselves are showing a new attitude. In February 1998, the Royal Society, the United Kingdom’s academy of science, in its report Management of Separated Plutonium, found “the present lack of strategic direction for dealing with civil plutonium [to be] disturbing.” The working group that prepared the report included several prominent figures from Britain’s nuclear establishment, including the then chairman of the British Nuclear Industry Forum and a former deputy chairman of the United Kingdom Atomic Energy Authority. Although cautious and tentative in thrust, the report suggested, among other possibilities, cutting back on reprocessing.

Economically, too, the time may be propitious for stopping or rapidly phasing out reprocessing. Under the original 10-year baseload contracts for the reprocessing to be done at the new plants at Sellafield and La Hague, all the work was paid for up front, leaving these plants fully amortized from the start. With fulfillment of the baseload contracts now only a few years off, BNFL and Cogema are a long way from having their order books filled with a second round of contracts. In an article on December 10, 1998, Le Monde reported that if German utilities, under the dictates of government policy, withdraw from their post-baseload contracts, Cogema would either have to shut down UP-3 (the plant built to reprocess foreign fuel) or operate it at reduced capacity and unprofitable tariffs.

On the other hand, if the French and British nuclear industries were to undertake an intensive campaign for safe disposition of plutonium, they would surely receive fees and subsidies ensuring an attractive return on their investment in MOx fuel plants and high-level-waste vitrification lines.

Another reason why reprocessing nations should reexamine their belief in plutonium recycling is that past claims for waste management benefits from such recycling are, on close examination, overstated or wrong. For instance, the National Research Council’s 1995 report Separations Technology and Transmutation Systems points out that in a geologic repository the long-term hazards from contaminated groundwater will arise mainly from fission products, such as technetium-99, and not from plutonium. Discharged MOx fuel will contain no less technetium than spent uranium fuel and will contain more iodine-129. Recycling fission products, along with plutonium and other transuranics, could theoretically benefit waste management, but only after centuries of operation and at the expense of more complicated and costly reprocessing.

As a possible longer-term option for plutonium disposition, France has described a MOx system that would also include a suite of 12 fast reactors deployed as plutonium burners. In this scenario, which assumes that the formidable costs of fast reactors and their reprocessing facilities are overcome, all spent fuel would be reprocessed and its plutonium recycled. But the substantial inventory of plutonium would be daunting. About 10 tons of plutonium would be needed to start up each fast reactor, or 120 tons altogether. Most of that would remain as inventory in the system. Further, the two-year working inventory of separated plutonium and fresh plutonium fuel needed by the 12 reactors would be about 50 tons. The potential here for thefts, diversions, and forcible seizures of plutonium is undeniable.
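
A minimal Python sketch of the inventory arithmetic behind this scenario, using only the figures given above; the per-reactor pipeline figure is simply a back-calculation from those numbers and is not stated in the French proposal itself.

# Plutonium tied up in the proposed 12-fast-reactor scenario, from the figures in the text.
reactors = 12
startup_pu_per_reactor_t = 10        # plutonium needed to start up each fast reactor
working_inventory_t = 50             # two-year working inventory for the whole fleet

startup_total_t = reactors * startup_pu_per_reactor_t               # 120 t locked in cores
pipeline_per_reactor_year_t = working_inventory_t / (reactors * 2)  # ~2 t per reactor-year
print(f"Startup inventory: {startup_total_t} t of plutonium")
print(f"Separated plutonium and fresh fuel in the pipeline: about "
      f"{pipeline_per_reactor_year_t:.1f} t per reactor per year")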

Creating the global network of centers

Under the best of circumstances and with the strongest leadership, creating a global network of storage and disposal centers for spent fuel and high-level waste will still be an extraordinary challenge. But the job is not undoable provided certain critical conditions are met.

Of overriding importance is that one of the major nuclear countries establish a geologic repository at home, inside its own boundaries. Unless this is done, the very concept of international centers falls under the suspicion that what’s afoot is an attempt by the nuclear countries to dupe countries with no nuclear industry into taking their waste. And if the proposed recipient nation should be a poor country desperate for hard currency, the whole thing looks like a cynical and egregious bribe. What this all points up is that the United States should proceed with all deliberate speed to establish a center for storage and disposal in Nevada. No other country is in a position to take the lead in this.

Once this condition is satisfied, then to offer strong economic incentives to potential host countries should become not only acceptable but expected, because the service proposed is one that should demand high compensation. The Russian Duma, for instance, might look more favorably on current proposals for storage of limited amounts of foreign spent fuel in Russia, especially knowing that part of the revenue therefrom can go toward establishing Russia’s own permanent geologic repository.

Let’s take Australia as another example. An advanced democratic society in the Western tradition, Australia is a major producer of uranium but has no nuclear power industry of its own. Beyond its well-populated eastern littoral is a vast desert interior, from which in the main the nation gets only very limited economic benefits. Pangea Resources, a Seattle-based spinoff of Golder Associates of Toronto, has been circulating a plan for a repository that would be built somewhere in the West Australian desert.

In this venture, Pangea has received substantial financial backing from British Nuclear Fuels and the Swiss nuclear waste agency. Until now, Australia’s attitude has been thumbs down, but that attitude might change if the United States should create in Nevada a repository that could be a prototype for repositories on desert terrain around the world, and if at the same time the Australians knew they would be doing their part toward strengthening the nuclear nonproliferation regime.

It’s not often that a single commercial enterprise is presented with the chance to bring about, on a global scale, an enormous improvement in its own fortunes and at the same time strengthen a regime vital to the protection of society. But just such a chance is now at hand for the civil nuclear industry. If it fails to take it, the consequences may be the industry’s gradual decline, perhaps even its ruin, and the continuation of a grave danger to us all.

Airline Deregulation: Time to Complete the Job

Deregulation of the airline industry, now more than two decades old, has been a resounding success for consumers. Since 1978, when legislation was passed ending the government’s role in setting prices and capacity in the industry, average fares are down more than 50 percent when adjusted for inflation, daily departures have more than doubled, and the number of people flying has more than tripled.

Yet even as the economy booms and people fly in record numbers, travelers are increasingly heard complaining about widely varying fares, complex booking restrictions, and crammed planes and airports. Among longtime business travelers, these complaints are often followed by fond but fuzzy recollections of the days before deregulation, when airline workers were supposedly more attentive, seating spacious, and flights usually on time and direct. Even leisure travelers, who have been paying record low fares, can be heard grousing about harried service, crowded flights, and missed connections.

High fares in some markets and a growing gap in the prices charged for restricted and unrestricted tickets have not only raised the ire of some travelers but also prompted concern about the overall state of airline industry competition. Although reregulating the airlines remains anathema to most industry analysts and policymakers, there is no shortage of proposals to fine-tune the competitive process in ways that would influence the fare, schedule, and service offerings of airlines.

Unfortunately, the history of aviation policy suggests that attempts by government to orchestrate airline pricing and capacity decisions, however well intended and narrowly applied, run a real risk of an unhealthy drift backward down a regulatory path that has stifled airline efficiency, innovation, and competition. Today, numerous detrimental policies and practices remain in place, even though they have long since outlived their original and often more narrow purposes. These enduring policies and practices–particularly those designed to control airport and airway congestion–deserve priority attention by policymakers seeking to preserve and expand consumer gains from deregulation.

Troubling legacies

The airline industry was originally regulated out of concern that carriers, left to their own devices, would compete so intensely that they would set fares too low to generate the profits needed to reinvest in new equipment and other capital. It was feared that this self-destructive behavior would, in turn, lead to the degradation of safety and service, ultimately leading to either an erosion of service in some markets or dominance by one or two surviving carriers.

Regulators on the now-defunct Civil Aeronautics Board (CAB) took seriously their mission to avert such duplicative and destructive competition. No new trunk airlines were certified after CAB was formed in 1938, and vigorous competition among the regulated carriers was expressly prohibited. Airlines were assigned specific routes and service areas and given formulas governing the fares they could charge and the profits they could earn. They were even subject to rules prescribing the kinds of aircraft they could fly and their seating configurations.

Established when the propeller-driven DC-3 was king and when air travel was almost exclusively the domain of the affluent and business travelers, CAB was slow to react to the effects of new technology and the changing demands for air travel. The widespread introduction of jet airliners during the 1960s greatly increased travel speed, aircraft seating capacity, and overall operating efficiencies. By flying the faster and more reliable jets, the airlines were able to schedule more flights and use their equipment and labor more intensively. As travel comfort and convenience increased, passenger demand escalated.

Constrained by regulation, the airlines could respond only awkwardly to changing market demands. Meanwhile, the nation’s aviation infrastructure, consisting of the federal air traffic management system and hundreds of local airports, was barely able to keep pace with the changes. Airports in many large cities desperately needed new gates and terminals to handle the larger jets and increased passenger volumes. The air traffic control system, designed and managed by the Federal Aviation Administration (FAA) for a much smaller and less demanding propeller-based industry, suddenly had to handle many more flights by faster jets operating on much tighter schedules.

A fundamental shortcoming, which remains to this day, is that neither the local airports nor the air traffic control system was properly priced: that is, paid for by users in a way that reflects the cost of this use and the value of expanding airport and airway capacity. The air traffic control system has long been financed by revenues generated from federal ticket taxes and levies on jet fuel. Unfortunately, there is little correlation between the size and incidence of these taxes and the cost and benefits of air traffic control services. Likewise, airport landing fees rarely do more than cover the wear and tear on runways. Among other omissions, they do not reflect the costs that users impose on others by taking up valuable runway space during peak periods. Both airport and airway capacity are allocated to users on a first-come, first-served basis, a simple queuing approach that provides little incentive for low-value users, such as small private aircraft, to shift some of their activity to less congested airports and off-peak travel times. Not only has this approach been accompanied by air traffic congestion and delays, but it has prompted a series of often arbitrary administrative and physical controls on airline and airport operations that have had anticompetitive side effects.

In the regulated airline industry of the 1960s and early 1970s, many shortcomings in the public provision of aviation infrastructure could be addressed by the relevant parties acting cooperatively. For instance, when seeking to curb mounting air traffic congestion, the FAA imposed hourly quotas on commercial operations at several of the nation’s busiest airports, including Washington’s National, New York’s LaGuardia, and Chicago’s O’Hare airports. As a practical matter, this quota system (as opposed to the queuing used elsewhere) could be smoothly implemented only because a small number of airlines were permitted by CAB to operate from these airports and could thus decide among themselves who would use the scarce take-off and landing slots.

Other airport access controls were agreed on by the airlines, the federal government, and the local authorities. Most notably, nonstop flights exceeding prescribed distances were precluded from flying into or out of National and LaGuardia airports. Similarly, aircraft headed to or from points outside of Texas (and later bordering states) were excluded from Dallas’s Love Field. The purpose of these so-called “perimeter” limits was to promote the use of the newer and more spacious Dulles, JFK, and Dallas-Fort Worth airports for long-haul travel. In a highly regulated environment–in which airline prices and service areas could be adjusted by regulators to compensate for the effects of these restrictions–the airlines had little incentive to object vigorously to these proscriptions, many of which were later codified in federal law and rulemakings.

Airlines and airports in the regulated era also cooperated in the funding of airport expansion. Concerned that airport authorities would exploit their local monopoly positions by sharply raising fees on airport users and spending the revenue on lavish facilities, the federal government placed stringent restrictions on the use of federal aid to airports. Most funds could be used only for runway and other airside improvements and were accompanied by regulations limiting the recipient’s ability to raise landing fees. Hence, when it became necessary to modernize and expand gates and other passenger facilities, particularly after the introduction of jets, many large airport authorities turned to their major airline tenants for financing help. In return, the airlines signed long-term leases with airports that often gave them control over a large share of gates and the authority to approve future expansions. The possible anticompetitive effects of these leases generated little, if any, serious attention.

Learning new tricks

Largely unforeseen 20 years ago was the extent to which major carriers, once deregulated, would shift to hub-and-spoke operations. By consolidating passenger traffic and flights from scores of “spoke” cities into hub airports, the major carriers were quickly able to gain a foothold in hundreds of additional city-pair markets. This network capability was especially valuable for attracting business travelers interested in frequent departures to a wide array of destinations. The airlines soon discovered that time-sensitive business travelers would pay more for such convenience.

The introduction of frequent flier programs made hub-and-spoke networks even more effective in attracting business fliers. By regularly using the same airline, travelers were rewarded with free upgrades to first class, preferential boarding, access to privileged airport lounges, and free trips.

Hub-and-spoke systems coupled with the frequent flier programs put the startup airlines at a competitive disadvantage. In the newly competitive environment, the wholly voluntary process for distributing take-off and landing slots among incumbents became impossible, and, unfortunately for the new airlines, the FAA grandfathered most of these slot assignments to the large incumbents, allowing them to sell or lease the slots as they saw fit. Without access to the slot-controlled airports, the new airlines faced a handicap in competing for the highly lucrative business market.

New entrants were further hindered in their efforts to build desirable route systems by the persistence of perimeter rules at several key airports. Though strongly supported by residents living near these airports as a way to curtail airport traffic and noise, these limits on long-distance flights are a highly arbitrary means of regulating airport access. The switchover to hub-and-spoke systems by the incumbents made it much easier for them to operate within the perimeter limits, because a high proportion of their passengers travel on short- and medium-haul flights connecting from hubs located within the perimeter. For new entrants without well-situated hubs–or the ability to effect changes in the perimeter rules, such as the extension of the limit for Washington National to Dallas-Fort Worth, a main hub for both American and Delta Airlines–these limits created another competitive disadvantage.

Many of the incumbents operated hubs from the very same airports where they also held exclusive-use gate leases and long-term facility and service contracts. The new entrants pointed to these arrangements as significant obstacles to gaining access to gates and other airport services essential for effective competition. By the end of the 1980s, these entry barriers, coupled with the business failure of many new entrants and mounting evidence of high fares in hub markets, prompted growing concern about the sufficiency of airline competition.

Predatory pricing?

During the Gulf War and the national economic recession of the early 1990s, the airlines experienced a sharp drop-off in demand and subsequent operating losses. As the industry began to recover, the excess equipment and labor shed by major carriers created conditions that were favorable for a new wave of startup airlines and further expansion of some existing niche carriers. The former Texas intrastate operator Southwest Airlines began flying in most regions of the United States. By the mid-1990s, one in five travelers was flying on Southwest and other smaller, startup airlines.

For the most part, these new entrants sought profitability through the intense use of labor and equipment and high load factors achieved by offering low fares in city-pair markets with high traffic densities or the potential to achieve such densities through lower fares. By challenging incumbent airlines at their hubs, the new carriers hoped to tap into pent-up demand from leisure travelers and even to attract a fair amount of business traffic. Almost uniquely, Southwest chose to focus its growth at secondary airports in or near major metropolitan areas, thus avoiding congested hubs and minimizing head-to-head competition with major carriers. To many observers, this new wave of entry represented a healthy and overdue development that would counter the tendency of major airlines to exploit market concentration in major hub cities such as Atlanta, Denver, and Chicago.

It was therefore a matter of concern when the new entrants complained that they encountered sharp price cutting by major incumbent carriers, particularly when entering concentrated hub markets. The Department of Transportation (DOT) questioned whether incumbents were setting fares well below cost in an effort to divert customers away from the new challengers, seeking their demise in order to raise fares back to much higher, pre-entry levels. There were also reports of incumbents using their long-term leases and other airport contractual arrangements to exclude challengers; for instance, by refusing to sublease idle gates.

Concerned about possible predatory practices in the airline industry, and recognizing the uncertainty and expense involved in trying to prove such conduct through the courts under traditional antitrust law, DOT offered its own criteria for detecting predatory pricing. It proposed an administrative enforcement process to police unfair competition in the airline industry. Sharp price cutting and large increases in seating capacity in a city-pair market by a major airline in response to the entry of a lower-priced competitor would trigger an investigation and possible enforcement proceedings.

DOT’s proposal, made in April 1998, prompted strong reactions. It was lauded by some, including many startup airlines, as a necessary supplement to traditional antitrust enforcement, giving new entrants the opportunity to compete on the merits of their product. Others, including most major airlines, criticized it as a perilous first step toward reregulation of passenger fares and service and as incompatible with traditional antitrust enforcement. Meanwhile, in May 1999, the Department of Justice (DOJ) filed a civil antitrust action against American Airlines, claiming that it engaged in predatory tactics.

Spurring more competition

Whether pursued by DOJ or DOT, the development and application of an empirical test for predatory pricing that would not inhibit legitimate pricing responses poses significant challenges. As a practical matter, it would require information, gathered retrospectively, about an airline’s cost structure and the array of options available to it for using resources and capacity more profitably. More important, it would do little to remove underlying impediments to entry and competition. After all, for predatory pricing to be a profitable strategy, it must be accompanied by other competitive barriers that allow the airline to gain and sustain market power. Competition is critical to making deregulation work. Accordingly, aviation policies aimed at benefiting consumers should first and foremost center on those areas where government practices are hindering competition.

A good place to start would be to correct the many longstanding inefficiencies and inequities in the provision of aviation infrastructure. Aircraft operators should be charged the cost of using and supplying airport and airway capacity. Neither the use nor the supply of airport runways and air traffic control services is determined on the basis of their highest-value uses. A commercial jet with hundreds of passengers, paying thousands of dollars in ticket and jet fuel taxes, is given no more priority in departing and landing than a small private aircraft. Access determined by first-come, first-served queuing is a guarantee that demand and supply will be chronically mismatched and congestion and delays will ensue, with air travelers suffering as a result. For low-cost airlines that must make intensive use of their aircraft and labor, recurrent congestion and delays are especially troublesome impediments to market entry, and ones that are only likely to get worse as demand for air travel escalates.

Airports still subject to outmoded slot and perimeter controls would make ideal candidates for experimentation with congestion-based landing fees and other market-based methods for financing the supply of airport and airway capacity. Not only would such cost-based pricing offer a way to control airport externalities such as noise and delay, it would do so with far fewer anticompetitive side effects. In addition, it is past time to reassess the competitive effects and incentives of federal aid rules that limit the ability of airports to raise revenues through higher landing fees.

The laggard performance of the public sector in providing and allocating the use of critical aviation infrastructure is a serious deficiency that will become more troublesome as air travel continues to grow. However, crowded airports, flight delays, and discontent over passenger fares and services should not be viewed as shortcomings of deregulation itself, but as clarion calls to complete the deregulation process, instilling more market incentives wherever sensible and feasible.

Making the Internet Fit for Commerce

The laws of commerce, which were established in a marketplace where sellers and buyers met face to face, cannot be expected to meet the needs of electronic commerce, the rapidly expanding use of computer and communications technology in the commercial exchange of products, services, and information. E-commerce sales, which exceeded $30 billion in 1998, are expected to double annually, reaching $250 billion in 2001 and $1.3 trillion in 2003. In addition, by 2003 the Internet will compete with radio to be the third largest medium for advertising, surpassing magazines and cable television. Online banking and brokerage are becoming the norm. In early 1998, 22 percent of securities trades were made online, and this figure is rising rapidly. Now is the time to review and update the laws of commerce for the digital marketplace.

In any commercial transaction, there are multiple interests to protect. Buyers and sellers desire protection from transactions that go wrong due to fraud, a defective product, a buyer that refuses to pay, or other reasons. Buyers and sellers may also want privacy, limiting how others obtain or use information about them or the transaction. Governments need effective and efficient tax collection. This includes sales or value-added taxes imposed on a transaction as well as profit or income taxes imposed on a vendor. Finally, society as a whole has an interest in restricting sales that are considered harmful, such as the sale of guns to criminals.

The legal, financial, and regulatory environment that has developed to protect buyers, sellers, and society as a whole is inconsistent with emerging technology. When purchases are made over a telecommunications network rather than in person, there is inherent uncertainty about the identity of each party to the transaction and about the purchased item. Furthermore, it is difficult for either party to demonstrate that transaction records are accurate and complete. This results in uncertainty and potential conflict in four critical areas: taxation, privacy protection, restricted sales such as weapons to criminals and pornography to minors, and fraud protection.

Telephone and mail order businesses face similar problems, but e-commerce is different. With mail order, buyer and seller know each other’s address, so tax jurisdictions are clear, and perpetrators of fraud and sellers of illegal goods can be traced. This is not true with e-commerce. Mail order revenues are a negligible fraction of the economy, so the fact that sales taxes are rarely collected for mail order is tolerable. E-commerce revenues will be significant. Current law is particularly inapplicable to e-commerce of information products such as videos, software, music, and text, which can be delivered directly over the Internet. These sales produce no physical evidence, such as shipping receipts or inventory records. As a result, auditors cannot enforce tax law, and postal workers cannot check identification when making a delivery. And if either party claims fraud, it may be impossible to retrieve the transmitted item, prove that the item was ever transmitted, or locate the other party.

Two schools of thought have emerged about how to deal with e-commerce conflicts. One is that the infant industry needs protection from regulation. Lack of government interference has helped e-commerce grow, and heavy-handed regulation could cripple its burgeoning infrastructure and deny citizens its benefits. This philosophy underlies the position that all e-commerce should be tax-exempt, that all Internet content should be unregulated, and that consumers are sufficiently served by whatever privacy and fraud protections develop naturally from technological innovation and market forces. Proponents call this industry self-regulation.

Others argue that policies governing traditional commerce evolved for good reasons and that those reasons apply to e-commerce. They warn of the dangers of having different rules for different forms of commerce. If digitized music purchased online is tax-free and compact disks purchased in stores are taxed, then e-commerce is favored, and consumers who cannot afford Internet access from home suffer. Moreover, if a particular sale is illegal in stores but is legal online, then e-commerce undermines society’s ability to restrict some purchases.

The problem is that rules developed for traditional commerce may not be applicable or enforceable for e-commerce. To meet old objectives, proponents push additional laws, sometimes with significant side effects. For example, the state of Washington considered legislation to impose criminal penalties on adults who make it possible for minors to access pornography on the Internet. Because there is no perfect pornography filter, this could effectively ban Internet use in schools and prohibit a mother from giving her 17-year-old son unsupervised Internet access from home. Australia prohibited Australian Web sites from displaying material inappropriate for minors, thereby denying material to adults as well. Similarly, laws have been proposed to ensure that sales taxes are always collected, except when transactions are provably tax-exempt. Some proposals include unachievable standards of proof, forcing vendors to tax all sales. Worse, laws could make tax collection so expensive that e-commerce could not survive.

Policymakers are often forced to choose between conflicting societal goals–for example, between collecting taxes and promoting valuable new services–because policies and institutions are not equipped to meet both objectives. This need not be the case here. The United States can devise a system that protects against misuse of e-commerce without stifling its growth.

Pornography, cryptography, and other restrictions. The most prominent e-commerce controversy is the easy availability of pornography on the Internet. The draconian solutions are to censor material intended for adults or deny minors Internet access. In the 1996 Communications Decency Act, Congress penalized those who provide indecent material to minors. The U.S. Supreme Court found the law unconstitutional because it would interfere with communications permitted between adults. The fundamental problem is the inability of vendors to ascertain a customer’s age.

Congress passed a less restrictive version in 1998 that affects only commercial Web sites. It allows pornography vendors to assume that customers are adults if they have credit cards. This protects the financial interests of pornographers, but it allows minors with access to credit cards to obtain pornography without impediment and prevents adults with poor credit from doing so. This also undermines the privacy of adults who do not want pornography purchases on their credit card records.

Other restrictions have been proposed in Congress to protect children, including bans on Internet gambling and liquor sales. Such restrictions might protect children, but they would deprive adults of these services and reduce revenues for the respective industries. If these services do remain legal, some customers may insist on anonymity to participate, further complicating the need to check customers’ ages. In addition, sales may be restricted in some jurisdictions and not others, which is problematic on the global Internet. For example, a New York court found that an online casino in the Caribbean violated New York law because New Yorkers could lie about their location and gamble there. By that reasoning, online casinos worldwide could be shut down if they cannot determine whether their customers are in New York.

Current law is particularly inapplicable to products such as music and software, which can be delivered directly over the Internet.

The desire to maintain security in online transactions has led to a debate over the use of encryption. Law-abiding individuals use encryption to promote security, but criminals can use it to evade law enforcement. The United States does not regulate domestic sale of encryption software but tightly restricts its export. This is difficult to enforce, because popular products such as Web browsers often incorporate encryption capability. Besides, a vendor who sells and distributes software over the Internet must determine a buyer’s nationality from an Internet address, which is an unreliable indicator. The upshot is that legal sales could be hampered, whereas savvy foreign buyers can readily circumvent the rules.

Security issues also arise in other contexts. For example, legislation has been proposed to ban gun sales via the Internet, because online gun vendors cannot check customer identification to prevent sales to criminals. This blanket prohibition would deny law-abiding citizens this convenience.

The alternative to broad restrictions is a system in which vendors can obtain customer credentials and reasonably rely on them; such credentials might indicate whether a customer has a criminal record or is a minor or a U.S. citizen. Policymakers should penalize those who ignore credentials in cases where they could be available, and only in those cases. A final point about sales restrictions: U.S. laws affect only U.S. vendors. If other nations do not impose and enforce similar laws, U.S. restrictions may achieve little or nothing.

Fraud and other failed transactions. Two problems must be addressed in order to provide protection against fraud. First, a transaction must create an incorruptible record. In traditional commerce, this can be accomplished with a paper receipt that is hard to forge. In e-commerce, one might reveal all information about the transaction to a third party. This is not always effective, because the resulting record may not be trustworthy or available when needed. Moreover, this reduces the privacy of buyers and sellers.

Second, it must be possible to check the credentials of other parties. Credentials could include a buyer’s identity or just a credit rating. The chief technical officer of Internet software vendor CyberSource Corporation told Congress that in its early years, 30 percent of the company’s sales were fraudulent; many buyers were thieves using stolen credit card numbers. CyberSource could not collect because the buyer could not be identified or located, and the item could not be retrieved. Buyers also need to check sellers’ credentials for protection. For example, does that online pharmacy really have licensed pharmacists on staff?

Fraud would be more difficult if a unique identifier were embedded in each computer. Intel provided this feature in its latest processor, and Microsoft did the same in software. But the public immediately and loudly expressed its opposition, because such identifiers could undermine privacy. For example, Web sites could use identifiers to track the viewing habits of individuals in tremendous detail, or an identifier could reveal the authorship of documents created or distributed anonymously.

Another way to identify parties is through electronic signatures. Some commercial “certificate authorities” already provide such services. When a customer establishes an account, the certificate authority validates the customer’s identity. The company then assigns the customer an electronic “secret key.” Encryption techniques allow a customer to demonstrate that he knows this secret key by applying an electronic signature.

Unfortunately, there is no guarantee that certificate authorities operate honestly. Anyone can offer this service, and there is no government oversight. Consequently, it is not clear that their assurances should be legally credible. Moreover, today’s commercial services often undermine privacy by presenting all information about a given customer, rather than just the minimal credentials needed for a particular transaction. They may do so because providing all the information makes it harder for a dishonest certificate authority to remain undetected, which is important given the lack of oversight.

Tax collection. A total of 46 states tax e-commerce, but taxes are collected on only 1 percent of e-commerce sales. This tax is simply unenforceable. As a result, e-commerce vendors have an unfair advantage, and state revenues are decreased. Many states depend heavily on sales tax revenues, so they want enforcement even if it damages e-commerce. Taxation of e-commerce has all the practical difficulties posed by restricted sales and fraud protection, and more. Sometimes, vendors must know about their customers to determine whether a given tax applies. For example, taxes may not be collected from customers in some locations or from licensed wholesalers. Such customers must supply trustworthy credentials, but this raises corresponding privacy concerns.

Neither sales tax on a transaction nor revenue tax on a vendor can be enforced without auditable records that are trustworthy. Traditional commerce generates paper trails of cash register logs, signed bills of sale, and shipping records that are difficult to alter or forge. E-commerce often produces only electronic records that are easily changed, especially when the transaction takes place entirely over a network. Without exchanging physical currency or touching pen to paper, people can buy stocks and airline tickets; transfer funds to creditors; “sign” contracts; and download magazines, music, videos, and software. The enormous increase in speed and decrease in costs in these transactions will make commerce without exchange of physical objects increasingly common.

Such transactions create two problems for tax auditors. First, transactions leave no physical evidence behind. Second, unlike a physical product, information can be sold many times. Thus, revenue figures cannot be corroborated by examining inventory. Auditors must depend entirely on transaction records. If transaction records can be changed without risk of detection, any policy that requires such records for enforcement is doomed.

Many policies neither support taxation nor protect privacy. Vendors in the state of Washington, for example, are expected to ask customers for their names and addresses, and collect taxes when customers give a Washington address or no address. Thus, anonymous out-of-state sales are taxed when they should not be. More important, name and address need not be verified or even verifiable, so customers within the state can establish false out-of-state accounts and easily evade taxes.

The 1998 Internet Tax Freedom Act prohibited new taxes on e-commerce for three years, although it does not affect existing taxes applicable to e-commerce, many of which predate computers. The act established a commission to advise Congress by April 2000 on policies to enact before this three-year moratorium ends. The first year was spent arguing about who should be on the commission, and the commission never met. It is unclear whether this group will develop any policies or, if it does, whether its recommendations will be followed.

Privacy. There are already calls for legislation to further regulate the way today’s credit card companies, banks, stores, and others use and share personal information. Online vendors can capture extensive information about their customers; for example, they know what products customers look at, not just what they buy. Privacy protection creates a particularly thorny dilemma because it works against fraud protection, restricted sales, and taxation. These other objectives could be easier to achieve if transaction details were public.

On the other hand, some capabilities required for these other objectives, such as the ability to retrieve trustworthy credentials, are also essential when applying traditional privacy policies to e-commerce. For example, people are legally entitled to view their personal credit records and correct any errors. Applying this policy to e-commerce would fail unless a vendor can verify the identity of the person requesting access to this information.

Similar problems arise when different privacy policies apply to different users. For example, the 1998 Children’s Online Privacy Protection Act prohibited vendors from collecting personal information from children without parental permission. Consequently, vendors must be able to distinguish minors from adults and to identify a minor’s parent, which should require trustworthy credentials. (Today, a minor can lie about age without detection.)

Missing links

The most controversial issues of e-commerce have common underlying causes. Because buyers and sellers lack trustworthy information about each other during the transaction and auditors lack trustworthy records after the transaction, it has been necessary to compromise important policy objectives such as privacy and fair taxation. Rather than fight over which sacrifice to make, we should create an environment in which these objectives are compatible. We must supply the missing elements.

Records must be generated for each transaction. Any attempt to forge, destroy, or retroactively alter records must face a significant risk of detection. Records stored electronically can be changed without detection. If a vendor and customer agree to such a change, or if the customer’s records will be unavailable, then vendors can alter records with impunity. A third party is necessary if transaction records are to be trustworthy. This might be a credit card company. But how do you know the third party’s records are correct and complete? Today it is impossible, making problems inevitable.

Moreover, transaction records must go to third parties without undermining privacy. Today, many e-commerce customers and merchants entirely surrender their privacy to a credit card company and often to each other. It is no surprise that Internet users routinely cite privacy concerns as their primary reason for not engaging in more e-commerce. Parties to a transaction should not be forced to reveal anything beyond the credentials necessary for that particular transaction, which need not include identity. Even that information should be unavailable to everyone outside the transaction, except for authorized auditors. It should even be impossible to determine whether a particular person has engaged in any transactions at all.

Government should use commercial services whenever practical, rather than developing its own.

I want to propose a system that solves many of these problems. Conceptually, it works as follows: All parties create a record containing the specifics of a transaction. All parties sign it. A party that is subject to audits then has its copy notarized. To enable a true audit, outside entities must be involved in recording the transaction. This system therefore includes verifiers, notaries, and auditors. Verifiers check the identity of all parties and vouch for credentials. Notaries oversee every transaction record, establishing a time and date and ensuring that any subsequent modifications are detectable. Auditors review and confirm the accuracy of records.

Separating the verifier and notary functions is crucial. A verifier knows the true identity of some of its customers but does not see their transactions. A notary knows that one of that verifier’s customers, identified only by an account number, is engaged in transactions, and perhaps some information about those transactions, but not who that customer is. If a single organization (such as a credit card company) served as both verifier and notary, it could link a named person to specific transactions, thereby undermining that person’s privacy.

Technically, the system is based on public-key encryption. Each entity E gets a public key, which is available to everyone, plus a secret key, which only E knows. A message encoded using E’s public key can only be decoded with E’s secret key, so only E can decode it. E can “sign” a record by encoding it with E’s secret key. If a signed record can be decoded with E’s public key, then E must have signed the record. Public-key encryption operations are executed transparently by software.
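To make this mechanism concrete, the toy sketch below uses textbook RSA with tiny primes; it is illustrative only, and a real system would rely on a vetted cryptographic library with far larger keys. It shows why a record that decodes correctly with E’s public key must have been signed with E’s secret key, and why any alteration of the record breaks the check.

```python
# Toy public-key signing, for illustration only (insecure key size).
p, q = 61, 53                 # small primes
n = p * q                     # modulus, part of the public key
phi = (p - 1) * (q - 1)
e = 17                        # public exponent: public key is (n, e)
d = pow(e, -1, phi)           # secret exponent: secret key is (n, d)

def sign(record_digest: int) -> int:
    """Encode a record digest with the secret key."""
    return pow(record_digest, d, n)

def verify(signature: int, record_digest: int) -> bool:
    """Decode with the public key; a match proves the key holder signed."""
    return pow(signature, e, n) == record_digest

digest = 1234                 # stand-in for a hash of the transaction record
sig = sign(digest)
assert verify(sig, digest)            # genuine signature checks out
assert not verify(sig, digest + 1)    # an altered record is detected
```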

Any person (or company) who wants an audit trail must first register with one or more verifiers. To register, this person tells the verifier her public key but not her secret key. She has the option of providing additional information, which she may designate as either public or private. Public information can be used as credentials during transactions. Private information may be accessed later by authorized auditors. The verifier is responsible for checking the veracity of all customer information, public and private.

For example, one individual might provide her name and social security number as private information and her U.S. citizenship as public information. Her nationality, public key, and account number are publicly displayed on the verifier’s Web site. Auditors can check her identity if necessary. Vendors know only her verifier account number and citizenship, allowing her to anonymously purchase U.S. encryption software that is subject to export restrictions.

This individual might also register with a second verifier. This time, she declares as public information that she is a software retailer, so she can avoid certain sales taxes. She keeps her nationality confidential. Because she has two verifier accounts, no one can determine that she is both a software retailer and a U.S. citizen. This would enable her, for example, to purchase stock with both accounts without revealing that there is only one buyer.
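The registration records themselves could be quite simple. The sketch below is one hypothetical layout; the field names and values are assumptions chosen for illustration, not a specification. It shows how one person’s two verifier accounts expose different public credentials and remain unlinkable to outsiders.

```python
from dataclasses import dataclass, field

@dataclass
class VerifierAccount:
    account_number: str
    public_key: int                                           # published by the verifier
    public_credentials: dict = field(default_factory=dict)    # usable during transactions
    private_credentials: dict = field(default_factory=dict)   # visible only to authorized auditors

# One person holds two unlinked accounts with different verifiers.
account_a = VerifierAccount(
    account_number="V1-0042",
    public_key=3233,
    public_credentials={"citizenship": "US"},
    private_credentials={"name": "A. Buyer", "ssn": "(on file)"},
)
account_b = VerifierAccount(
    account_number="V2-0099",
    public_key=2773,
    public_credentials={"occupation": "software retailer"},
    private_credentials={"citizenship": "US"},
)
# Vendors see only the public credentials of whichever account is used,
# so no outside party can connect "U.S. citizen" with "software retailer".
```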

For each verifier account, a relationship is established with one or more notaries. The auditor must be informed of all verifier accounts and relationships with notaries. Then, e-commerce transactions can begin. In a transaction, all parties create a description of the relevant details using a standardized format. For a software purchase, the description might include the software title, warranty, price, date, time, and the locations of buyer and seller. A transaction record would consist of this description, plus the electronic signature and verifier account of each party. The record would be equivalent to a signed bill of sale and would prove that all parties agreed.
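As one illustration of what such a standardized description and signed record might look like, consider the hypothetical layout below; every field name is an assumption chosen for clarity rather than a prescribed format, and the signatures are placeholders.

```python
# Hypothetical transaction description in a standardized format.
description = {
    "item": "WidgetWriter 2.0",       # assumed software title
    "warranty": "90 days",
    "price": 49.95,
    "date": "1999-11-05",
    "time": "14:32:07",
    "buyer_location": "WA, US",
    "seller_location": "CA, US",
}

# The full record adds each party's verifier account and electronic signature,
# making it the equivalent of a signed bill of sale.
record = {
    "description": description,
    "parties": [
        {"verifier_account": "V1-0042", "signature": 0x1A2B},  # placeholder values
        {"verifier_account": "V2-0099", "signature": 0x3C4D},
    ],
}
```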

Each party in a transaction that requires an audit trail would submit its copy of the transaction record to an associated notary. This makes it possible to later audit one party without viewing the records of the others. It is possible to “hash” the record so that the notary cannot read all parts of the record, which protects the privacy of all parties. A party submitting a record must also provide verifiable proof of identity, probably using a verifier account number and an electronic signature. This allows the notary to later assemble all records submitted by a given vendor, so auditors can catch a vendor that fails to report some transactions. The notary adds a time stamp and processes the record. Once a record is processed, subsequent changes are detectable by an auditor, even if all parties to the transaction and the notary cooperate in the falsification. The notary also creates a receipt. Anyone with a notarized record and the associated receipt can verify who had the record notarized and when, and can determine that no information has subsequently been altered.
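The following sketch, again using assumed names and a plain hash standing in for the notary’s electronic signature, illustrates the essential steps: the submitter’s record is reduced to a digest that the notary can seal without reading its contents, the notary binds that digest to an account number and a time stamp, and an auditor can later detect any alteration by re-hashing the record and comparing it with the receipt.

```python
import hashlib
import json
from datetime import datetime, timezone

def digest(data: dict) -> str:
    """Hash a record; the notary need only see this digest, not the contents."""
    return hashlib.sha256(json.dumps(data, sort_keys=True).encode()).hexdigest()

def notarize(record_digest: str, submitter_account: str) -> dict:
    """The notary binds the digest to a submitter and a time stamp."""
    stamp = datetime.now(timezone.utc).isoformat()
    receipt_body = f"{record_digest}|{submitter_account}|{stamp}"
    return {
        "digest": record_digest,
        "submitter": submitter_account,
        "time": stamp,
        # In practice this would be the notary's electronic signature over
        # receipt_body; a plain hash stands in for it in this sketch.
        "notary_seal": hashlib.sha256(receipt_body.encode()).hexdigest(),
    }

record = {"item": "WidgetWriter 2.0", "price": 49.95, "buyer": "V1-0042"}
receipt = notarize(digest(record), submitter_account="V1-0042")

# An auditor re-hashes the record held by the vendor; any alteration changes
# the digest so that it no longer matches the notarized receipt.
assert digest(record) == receipt["digest"]
record["price"] = 0.95
assert digest(record) != receipt["digest"]     # tampering is detectable
```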

Entities that are not subject to audits, including most consumers, would be largely unaffected by this system. Most could register over the Internet with the click of a mouse button. A customer who makes restricted purchases such as guns or encryption software might be required to register once in person. Software executes other functions transparently.

Several companies currently provide some necessary verifier and notary functions, but not all. For example, there are notaries that establish the date of a transaction, but none can produce a list of all transactions notarized for a given vendor, which is essential. There is little incentive for entrepreneurs to offer such services, given that a notary’s output is rarely called for, or recognized, under today’s laws.

If government leads, industry will build

Trustworthy commercial verifiers and notaries are needed. A government agency or government contractor could provide the services, but private companies would be more efficient at adapting to rapid changes in technology and business conditions. Commercial competition would also protect privacy, because it allows customers to spread records of their transactions among multiple independent entities. Like notaries public, banks, and bail bondsmen, e-commerce verifiers and notaries would be private commercial entities that play a crucial role in the nation’s financial infrastructure and its law enforcement.

How would anyone know that services provided by a private company are trustworthy? The federal government should support voluntary accreditation of verifiers and notaries. Only accredited firms would be used when generating records to comply with federal laws or to interact with federal agencies. Others are likely to have confidence in a firm accredited by the government, which could further bolster e-commerce, but a state or private company would be free to use unaccredited firms.

To obtain accreditation, a verifier or notary would demonstrate that its technology has certain critical features. A notary, for example, would show that any attempt by the notary or its customers to alter or delete a notarized record would be detectable. The system must also be secure and dependable, so that the chances of lost data are remote. The specific underlying technology used to achieve this is irrelevant. Accredited firms must also be financially secure and well insured against error or bankruptcy. The insurer guarantees that even if a notary or verifier business fails, the information it holds will be maintained for a certain number of years. The insurer therefore has incentive to provide effective oversight.

A new corps of government auditors is needed to make this system work. Auditors must randomly check records from notaries and verifiers to ensure that nothing has been altered. Auditors would also keep track of the verifier and notary accounts held by each electronic vendor and by any other parties subject to audit.

Many current laws and regulations require written records, written signatures, or written time stamps. Federal and state legislation should allow electronic versions to be accepted as equivalent when the technology is adequate. Contracts should not be unenforceable simply because they are entirely electronic. It should be possible to legally establish the date and time of a document with an electronic notary. A notice sent electronically should be legally equivalent to a notice sent in writing, when technology is adequate. For example, a bank could send a foreclosure notice electronically, provided that accredited verifiers and notaries produce credible evidence that the foreclosure was received in time.

Similarly, electronic records of commercial transactions should carry legal weight when technology is adequate. For example, commercial vendors must show records to stockholders and tax auditors. The Securities and Exchange Commission, the Internal Revenue Service, and state tax authorities should establish standards for trustworthy electronic records using accredited verifiers and notaries. Government could improve its own efficiency by using these systems. Congress took the first small step in 1998 by directing the federal government to develop a strategy for accepting electronic signatures. Government should use commercial services whenever practical, rather than developing its own.

The new approaches used to identify parties in e-commerce raise novel policy issues regarding identity. Who is responsible if a verifier incorrectly asserts that an online doctor is licensed? Certificate authorities already want to limit their liability, but such limits discourage the use of appropriate technology and sound management. It should also be illegal for customers to provide inaccurate information to a verifier. If this inaccurate information is used to commit another crime such as obtaining a gun for a criminal, that is an additional offense. Other identity issues are related to electronic signatures. It should be illegal to steal someone’s secret code, which would enable forgery of electronic signatures. It should even be illegal to deliberately reveal one’s secret code to a friend. Forgery and fraud are illegal, but these acts that enable forgery and fraud in e-commerce may currently be legal because they are new.

Accredited and unaccredited verifiers and notaries should be required to notify customers about privacy policies, so that consumers can make informed decisions. Vendors could be allowed to sell restricted products such as pornography, encryption software, guns, and liquor if and only if they check credentials and keep verifiable records where appropriate. For dangerous physical goods such as guns, double checking is justified. Online credentials would include name, mailing address, and criminal status; the name would be verified again when the guns were delivered.

One of the most difficult outstanding issues is to determine when a vendor must collect sales tax and to which government entity it should be paid. At present, taxes are collected only when buyer and seller are in the same state. Policies should change so that collection does not depend on the location of the seller, because too many e-commerce businesses can easily be moved to avoid taxes. This forces vendors to collect taxes for customers in all jurisdictions. There are 30,000 tax authorities in the United States with potentially different policies; monitoring them all is a costly burden. Tax rates and policies should be harmonized throughout larger regions with one organization collecting taxes. For example, there might be a single tax policy throughout each state. Because cities often have higher taxes than rural areas, cities may oppose harmonization.

A vendor still cannot tell a customer’s location (and vice versa). A verifier could provide trustworthy static information, such as billing address or tax home, but not actual location at the time of purchase. Taxing based on static information is more practical, although a few people may manipulate this information to evade taxes. At minimum, each party should state its location in a notarized transaction record, so that retroactive changes are detectable.

Now is the time to devise policies that are both technically and economically appropriate for e-commerce, before today’s practices are completely entrenched. This can only be accomplished by addressing fundamental deficiencies in the e-commerce system rather than by debating individual controversies in isolation. This includes the creation of commercial intermediaries. Verifiers can provide trustworthy credentials, and notaries can ensure that transaction records are complete and unaltered. Dividing responsibilities for these functions among competing notaries and verifiers will capture enough information for tax auditors and law enforcement agents to pursue illegal activities without sacrificing the privacy of the law-abiding.

To make this happen, government should develop accreditation procedures for verifiers and notaries. It should update laws and regulations to allow electronic records to replace written records when and only when the technology is adequate. Government should also use these new services. It should develop new policies on taxation and restricted sales that are consistent with e-commerce. And for those who try to exploit the new technology illegally, criminal codes should provide appropriate punishments.

Changing paths, changing demographics for academics

The decade of the 1990s has seen considerable change in the career patterns for new doctorates in science and engineering. It was once common for new doctorates to move directly from their graduate studies into tenure-track appointments in academic institutions. Now it is more likely that they will find employment in other sectors or have nonfaculty research positions. This change has created a great deal of uncertainty in career plans and may be the reason for recent decreases in the number of doctorates awarded in many science and engineering fields.

Another change is that the scientific and engineering workforce is growing more diverse in gender, race, and ethnicity. Throughout the 1970s and 1980s, men dominated the science and engineering workplace, but substantial increases in the number of female doctorates in the 1990s have changed the proportions. Underrepresented minorities have also increased their participation, but not to the same extent as have female scientists and engineers.

The narrowing tenure track

The most dramatic growth across all the employment categories has been in nonfaculty research positions. The accompanying graph documents the growth in such positions between 1987 and 1997. The 1987 data reflect the percentage of academic employees who earned doctorates between 1977 and 1987 and held nontenured positions in 1987; the 1997 data reflect the corresponding percentages for those who earned doctorates between 1987 and 1997. In many fields the percentage of such appointments nearly doubled between 1987 and 1997.

Source: National Science Foundation, 1987 and 1997 Survey of Doctorate Recipients.

A rapidly growing role for women

Between 1987 and 1997, the number of women in the academic workforce increased substantially in the fields in which they had the highest representation–biological sciences, medical sciences, and the social and behavioral sciences. The rate of increase for women was even faster in the fields in which they were least represented–agricultural sciences, engineering, mathematics, and physical sciences. Still, women are underrepresented in almost all scientific and technical fields.

Source: National Science Foundation, 1987 and 1997 Survey of Doctorate Recipients.

Slow growth in minority participation

Minority participation also expanded during the period, but at a slower rate than for women. African Americans, Hispanics, and Native Americans make up about 15 percent of the working population but only about 5 percent of the scientists and engineers working in universities. The data also show substantial increases in the proportion of underrepresented minorities, but they are still not represented at a rate commensurate with their share of the population.

Source: National Science Foundation, 1987 and 1997 Survey of Doctorate Recipients.

Universities Change, Core Values Should Not

Half a dozen years ago, as I was looking at the research university landscape, the shape of the future looked so clear that only a fool could have failed to see what was coming, because it was already present. It was obvious that a shaky national economy, strong foreign competition, large and escalating federal budget deficits, declining federal appropriations for research and state appropriations for education, diminished tax incentives for private giving, public and political resistance to rising tuition, and growing misgivings about the state of scientific ethics did not bode well for the future.

Furthermore, there was no obvious reason to expect significant change in the near future. It was plain to see that expansion was no longer the easy answer to every institutional or systemic problem in university life. At best, U.S. universities could look forward to a period of low or no growth; at worst, contraction lay ahead. The new question that needed to be answered, one with which university people had very little experience, was whether the great U.S. universities had the moral courage and the governance structures that would enable them to discipline the appetites of their internal constituencies and capture a conception of a common institutional interest that would overcome the fragmentation of the previous 40 years. Some would, but past experience suggested that they would probably be in the minority.

So that is what I wrote, and I did not make it up. For the purposes of a book I was writing at the time, I had met with more than 20 past and present university presidents for extensive discussions of their years in office and their view of the future. The picture I have just described formed at least a part of every individual’s version of the challenge ahead. Some were more optimistic than others; some were downright gloomy. But to one degree or another, all saw the need for and the difficulty of generating the kind of discipline required to set priorities in a process that was likely to produce more losers than winners. As predictions go, this one seemed safe enough.

Well, a funny thing happened on the way to the world of limits: The need to set limits apparently disappeared. Ironically, it was an act of fiscal self-restraint on the part of a normally unrestrained presidency and Congress that helped remove institutional self-restraint from the agenda of most universities. The serious commitment to a balanced federal budget in 1994, made real by increased taxes and a reasonably enforceable set of spending restraints, triggered the longest economic expansion in the nation’s history. That result was seen in three dramatic effects on university finances: Increased federal revenues eased the pressure on research funding; increased state revenues eased the appropriation pressure on state universities; and the incredible rise in the stock market generated capital assets in private hands that benefited public and private university fundraising campaigns. The first billion-dollar campaign ever undertaken by a university was successfully completed by Stanford in 1992. Precisely because of the difficult national and institutional economic conditions at the time, it was viewed as an audacious undertaking. However, within a few years billion-dollar campaigns were commonplace for public and private universities alike.

To be fair, the bad days of the early 1990s did force many institutions into various kinds of cost-reduction programs. In general, these focused on administrative downsizing, the outsourcing of activities formerly conducted by in-house staff, and something called “responsibility-centered management.” It could be argued, and was, that cutting administrative costs was both necessary and appropriate, because administrations had grown faster than faculties and therefore needed to take the first reductions. That proposition received no argument from faculties. The problem of how to deal with reductions in academic programs was considerably more difficult. I believed that the right way to approach the problem was to start with the question, “How can we arrange to do less of what we don’t do quite so well in order to maintain and improve the quality of what we know we can do well?” Answering that question was sure to be a very difficult exercise, but I believed it would be a necessary one if a university were to survive the hard times ahead and be ready to take advantage of the opportunities that would surely arise when the economic tide changed.

As it happened, the most common solution to the need to lower academic costs was to offer financial inducements for early retirement of senior faculty and either reduce the size of the faculty or replace the more expensive senior people with less expensive junior appointments or with part-time, non-tenure-track teachers. These efforts were variously effective in producing at least short-term budget savings.

All in all, it was a humbling lesson, and I have learned it. I am out of the prediction business. Well, almost out. To be precise, I now believe that swings in economic fortune–and present appearances to the contrary notwithstanding, there will surely be bad times as well as good ones ahead–are not the factors that will determine the health and vitality of our universities in the years to come. One way or another, there will always be enough money available to keep the enterprise afloat, although never enough to satisfy all academic needs, much less appetites. Instead, the determining factors will be how those responsible for these institutions (trustees, administrations, and faculties) respond to issues of academic values and institutional purpose, some of which are on today’s agenda, and others of which undoubtedly lie ahead. The question for the future is not survival, or even prosperity, but the character of what survives.

Three issues stand out as indicators of the kind of universities we will have in the next century: the renewal of university faculties as collective entities committed to agreed-on institutional purposes; the terms on which the growing corporate funding of university research is incorporated into university policy and practice; and the future of the system of allocating research funding that rests on an independent review of the merits of the research and the ability of the researcher. All three are up for grabs.

Faculty and their institutions

Without in any way romanticizing the past, which neither needs nor deserves it, it is fair to say that before World War II the lives of most university faculty were closely connected to their employing institutions. Teaching of undergraduates was the primary activity. Research funding was scarce, opportunities for travel were limited, and very few had any professional reason to spend time thinking about or going to Washington, D.C. This arrangement had advantages and disadvantages. I think the latter outweighed the former in the prewar academic world, but however one weighs the balance, there can be no dispute that what followed the war was radically different. The postwar story has been told many times. The stimulus of the GI Bill created a boom in undergraduate enrollment, and government funding of research in science and technology turned faculty and administrations toward Washington as the major source of good things. The launching of Sputnik persuaded Congress and the Eisenhower administration, encouraged by educators and their representatives in Washington, that there was a science and education gap between the Soviet Union and the United States. There was, but it was actually the United States that held the advantage. Nevertheless, a major expansion of research funding and support for Ph.D. education followed.

At the same time, university professors were developing a completely different view of their role. What had once been a fairly parochial profession was becoming one of the most cosmopolitan ones. Professors’ vital allegiances were no longer local. Now in competition with traditional institutional identifications were connections with program officers in federal agencies, with members of Congress who supported those agencies, and with disciplinary colleagues around the world.

Swings in economic fortune are not the factors that will determine the health and vitality of our universities in the years to come.

The change in faculty perspectives has had the effect of greatly attenuating institutional ties. Early signs of the change could be seen in the inability of instruments of faculty governance to operate effectively when challenged by students in the 1960s, even when those challenges were as fundamental as threats to peace and order on campus. Two decades after the student antiwar demonstrators had become respectable lawyers, doctors, and college professors, Harvard University Dean Henry Rosovsky captured the longer-term consequences of the changed relationship of faculty to their employing universities. In his 1990-91 report to the Harvard Faculty of Arts and Sciences, Rosovsky noted the absence of faculty from their offices during the important reading and exam periods and the apparent belief of many Harvard faculty that if they teach their classes they have fulfilled their obligations to students and colleagues. He said of his colleagues, “. . . the Faculty of Arts and Sciences has become a society largely without rules, or to put it slightly differently, the tenured members of the faculty–frequently as individuals–make their own rules . . . [a]s a social organism, we operate without a written constitution and with very little common law. This is a poor combination, especially when there is no strong consensus concerning duties and standards of behavior.”

What Rosovsky described at Harvard can be found at every research university, and it marks a major shift in the nature of the university. The question of great consequence for the future is whether faculties, deans, presidents, and trustees will be satisfied with a university that is as much a holding company for independent entrepreneurs as it is an institution with a collective sense of what it is about and what behaviors are appropriate to that understanding. I have no idea where on the continuum between those two points universities will lie 20 or 50 years from now. I am reasonably confident, however, that the question and the answer are both important.

Universities and industry

In March 1982, the presidents of five leading research universities met at Pajaro Dunes, California. Each president was accompanied by a senior administrator involved in research policy, several faculty members whose research involved relations with industry, and one or two businessmen close to their universities. The purpose of the meeting was to examine the issues raised by the new connections between universities and the emerging biotechnology industry. So rapidly and dramatically have universities and industry come together in a variety of fields since then that reading some of the specifics in the report of the meeting is like coming across a computer manual with a chapter on how to feed Hollerith cards into a counter-sorter. But even more striking than those details is the continuity of the issues raised by these new relationships. Most of them are as fresh today as they were nearly two decades ago, a fact that testifies both to their difficulty and their importance.

These enduring issues have to do with the ability of universities to protect the qualities that make them distinctive in the society and important to it. In the words of the report, “Agreements (with corporations) should be constructed . . . in ways that do not promote a secrecy that will harm the progress of science, impair the education of students, interfere with the choice of faculty members of the scientific questions or lines of inquiry they pursue, or divert the energies of faculty members from their primary obligations to teaching and research.” In addition, the report spoke to issues of conflict of interest and what later came to be called “conflict of commitment,” to the problems of institutional investment in the commercial activities of its faculty, the pressures on graduate students, and issues arising out of patent and licensing practices.

All of those issues, in their infancy when they were addressed at Pajaro Dunes, have grown into rambunctious adolescents on today’s university campuses. They can be brought together in a single proposition: When university administrators and faculty are deciding how much to charge for the sale of their research efforts to business, they must also decide how much they are willing to pay in return. For there will surely be a price, as there is in any patronage relationship. It was government patronage, after all, that led universities to accept the imposition of secrecy and other restrictions that were wholly incompatible with commonly accepted academic values. There is nothing uniquely corrupting about money from industry. It simply brings with it a set of questions that universities must answer. By their answers, they will define, yet again, what kind of institutions they are to be. Here are three questions that will arise with greater frequency as connections between business and university-based research grow:

  • Will short-term research with clearly identified applications be allowed to drive out long-term research of unpredictable practical value, in a scientific variation of Gresham’s Law?
  • Can faculty in search of research funding and administrators who share that interest on behalf of their institution be counted on to assert the university’s commitment to the openness of research processes and the free and timely communication of research results?
  • Will faculty whose research has potential commercial value be given favored treatment over their colleagues whose research does not?

I have chosen these three among many other possible questions because we already have a body of experience with them. It is not altogether reassuring. Some institutions have been scrupulous in attempting to protect institutional values. Others have been considerably less so. The recent large increases in funding for the biomedical sciences have relieved some of the desperation over funding pressures that dominated those fields in the late 1980s and early 1990s, but there is no guarantee that the federal government’s openhandedness will continue indefinitely. If it does not, then the competition for industrial money will intensify, and the abstractions of institutional values may find it hard going when pitted against the realities of the research marketplace.

Even in good times, the going can be hard. A Stanford University official (not a member of the academic administration, I hasten to add) commented approvingly on a very large agreement reached between an entire department at the University of California at Berkeley and the Novartis Corporation: “There’s been a culture for many years at Stanford that you do research for the sake of doing research, for pure intellectual thought. This is outdated. Research has to be useful, even if many years down the line, to be worthwhile.” I have no doubt that most people at Stanford would be surprised to learn that their culture is outdated. I am equally certain, however, that the test of usefulness as a principal criterion for supporting research is more widely accepted now than in the past, as is the corollary belief that it is possible to know in advance what research is most likely to be useful. Since both of those beliefs turn the historic basis of the university on its head, and since both are raised in their starkest form by industry-supported research, it is fair to say that the extent to which those beliefs prevail will shape the future course of research universities, as well as their future value.

Preserving research quality

Among the foundation stones underlying the success of the U.S. academic research enterprise has been the following set of propositions: In supporting research, betting on the best is far more likely to produce a quality result than is settling for the next best. Although judgments are not perfect, it is possible to identify with a fair degree of confidence a well-conceived research program, to assess the ability of the proposer to carry it out, and to discriminate in those respects among competing proposers. Those judgments are most likely to be made well by people who are themselves skilled in the fields under review. Finally, although other sets of criteria or methods of review will lead to the support of some good research, the overall level of quality will be lower because considerations other than quality will be weighed more heavily in funding decisions.

It is remarkable how powerful those propositions have been and, until recently, how widely they were accepted by decisionmakers and their political masters. To see that, it is only necessary to contrast research funding practices with those in other areas of government patronage, where the decimal points in complicated formulas for distributing money in a politically balanced manner are fought over with fierce determination. Reliance on the system of peer review (for which the politically correct term is now “merit review”) has enabled universities to bring together aggregations of top talent with reasonable confidence that research funding for them will be forthcoming because it will not be undercut by allocations based on some other criteria.

The future of research universities will continue to be determined by the extent to which they are faithful to the values that have always lain at their core.

Notwithstanding the manifest success of the principle that research funding should be based on research quality, the system has always been vulnerable to what might be called the “Lake Wobegon Effect”: the belief that all U.S. universities and their faculty are above average, or that given a fair chance would become so. That understandable, and in some respects even admirable, belief has always led to pressures to distribute research support more broadly on a geographic (or more accurately, political-constituency) basis. These pressures have tended to be accommodated at the margins of the system, leaving the core practice largely untouched.

Since it remains true that the quality of the proposal and the record and promise of the proposer are the best predictors of prospective scientific value, there is reason to be concerned that university administrators, faculty, and members of Congress are increasingly departing from practices based on that proposition. The basis for that concern lies in the extent to which universities have leaped into the appropriations pork barrel, seeking funds for research and research facilities based not on an evaluation of the comparative merits of the projects for which they seek support but on the ability of their congressional representatives to manipulate the appropriations process on their behalf. In little more than a decade, the practice of earmarking appropriations has grown from a marginal activity conducted around the fringes of the university world to an important source of funds. A record $787 million was appropriated in that way in fiscal year 1999. In the past decade, a total of $5.8 billion was given out directly by Congress with no evaluation more rigorous than the testimony of institutional lobbyists. Most of this largesse was directed to research and research-related projects. Even in Washington, those numbers approach real money.

More important than the money, though, is what this development says about how pressures to get in or stay in the research game have changed the way in which faculty and administrators view the nature of that game. The change can be seen in the behavior of members of the Association of American Universities (AAU), which includes the 61 major research universities. In 1983, when two AAU members won earmarked appropriations, the association voted overwhelmingly to oppose the practice and urged universities and members of Congress not to engage in it. If a vote were taken today to reaffirm that policy, it is not clear that it would gain support from a majority of the members. Since 1983, an increasing number of AAU members have benefited from earmarks, and for that reason it is unlikely that the issue will be raised again in AAU councils.

Even in some of the best and most successful universities there is a sense of being engaged in a fierce and desperate competition. The pressure to compete may come from a need for institutional or personal aggrandizement, from demands that the institution produce the economic benefits that research is supposed to bring to the local area, or from a combination of those and other reasons. The result, whatever the reasons, has been a growing conclusion that however nice the old ways may have been, new circumstances have produced the need for a new set of rules.

At the present moment, we are still at an early stage in a movement toward the academic equivalent of the tragedy of the commons. It is still possible for each institution that seeks to evade the peer review process to believe that its cow can graze on the commons without harm to the general good. As the practice becomes more widespread, the commons will lose its value to all. Although the current signs are not hopeful, the worst outcome is not inevitable. The behavior of faculty and their administrations in supporting or undermining a research allocation system based on informed judgments of quality will determine the outcome and will shape the nature of our universities in the decades ahead.

There are other ways of looking at the future of our universities besides the three I have emphasized here. Much has been written, for example, about the effects of the Internet and of distance education on the future of the physical university. Much of this speculation seems to me to be overheated: more hype than hypothesis. No doubt universities will change in order to adapt to new technologies, as they have changed in the past, but it seems to me unlikely that a virtual Harvard will replace the real thing, however devoutly its competitors might wish it so. The future of U.S. universities, the payoff that makes them worth their enormous cost, will continue to be determined by the extent to which they are faithful to the values that have always lain at their core. At the moment, and in the years immediately ahead, those values will be most severely tested by the three matters most urgently on today’s agenda.

Support Them and They Will Come

On May 6, 1973, the National Academy of Engineering convened an historic conference in Washington, D.C., to address a national issue of crisis proportions. The Symposium on Increasing Minority Participation in Engineering attracted prominent leaders from all sectors of the R&D enterprise. Former Vice President Hubert H. Humphrey, in his opening address to the group, underscored the severity of the problem: “Of 1.1 million engineers in 1971, 98 percent were white males.” African Americans, Puerto Ricans, Mexican-Americans, and American Indians made up scarcely one percent. Other minorities and women made up the remaining one percent.

Symposium deliberations led to the creation of the National Action Council for Minorities in Engineering (NACME), Inc. Its mission was to lead a national initiative aimed at increasing minority participation in engineering. Corporate, government, academic, and civil rights leaders were eager to lend their enthusiastic support. In the ensuing quarter century, NACME invested more than $100 million in its mission, spawned more than 40 independent precollege programs, pioneered and funded the development of minority engineering outreach and support functions at universities across the country, and inspired major policy initiatives in both the public and private sectors. It also built the largest private scholarship fund for minority students pursuing engineering degrees, supporting 10 percent of all minority engineering graduates from 1980 to the present.

By some measures, progress has been no less than astounding. The annual number of minority B.S. graduates in engineering grew by an order of magnitude, from several hundred at the beginning of the 1970s to 6,446 in 1998. By other measures, though, we have fallen far short of the mark. Underrepresented minorities today make up about a quarter of the nation’s total work force, 30 percent of the college-age population, and a third of births, but less than 6 percent of employed engineers, only 3 percent of the doctorates awarded annually, and just 10 percent of the bachelor’s degrees earned in engineering. Even more disturbing, in the face of rapidly growing demand for engineers over the past several years, freshman enrollment of minorities has been declining precipitously. Of particular concern is the devastating 17 percent drop in freshman enrollment of African Americans from 1992 to 1997. Advanced degree programs also have declining minority enrollments. First-year graduate enrollment in engineering dropped a staggering 21.8 percent for African Americans and 19.3 percent for Latinos in a single year, between 1996 and 1997. In short, not only has progress come to an abrupt end, but the gains achieved over the past 25 years are in jeopardy.

Why we failed

One reason why the progress has been slower than hoped is that financial resources never met expectations. After the 1973 symposium, the Alfred P. Sloan Foundation commissioned the Task Force on Minority Participation in Engineering to develop a plan and budget for achieving parity (representation equal to the percentage of minorities in the population cohort) in engineering enrollment by 1987. The task force called for a minimum of $36.1 million (1987 dollars) a year, but actual funding came to about 40 percent of that. And as it happened, minorities achieved about 40 percent of parity in freshman enrollment.

Leaping forward to the present, we find that minority freshman enrollment in the 1997-98 academic year had reached only 52 percent of parity. Again, the disappointing statistics and receding milestones should not come as a surprise. In recent years, corporate support for education, especially higher education, has declined. Commitments to minority engineering programs have dwindled. Newer companies entering the Fortune 500 list have not yet embraced the issue of minority underrepresentation. Indeed, although individual entrepreneurs in the thriving computer and information technology industry have become generous contributors to charity, the new advanced-technology corporate sector has not yet taken on the mantle of philanthropy or the commitment to equity that were both deeply ingrained in the culture of the older U.S. companies it displaced.

The failure to attract freshman engineering majors is compounded by the fact that only 36 percent of these freshmen eventually receive engineering degrees, and a disproportionately small percentage of these go on to earn doctorates. This might have been anticipated. Along with the influx of significant numbers of minority students came the full range of issues that plague disenfranchised groups: enormous financial need that has never been adequately met; poor K-12 schools; a hostile engineering school environment; ethnic isolation and consequent lack of peer alliances; social and cultural segregation; prejudices that run the gamut from overt to subtle to subconscious; and deficient relationships with faculty members, resulting in the absence of good academic mentors. These factors drove minority attrition to twice the nonminority rate.

It should be obvious that the fastest and most economical way to increase the number of minority engineers is to make it possible for a higher percentage of those freshman engineering students to earn their degrees. And that’s exactly what we have begun to do. Over the past seven years, NACME developed a major program to identify new talent and expand the pipeline, while providing a support infrastructure that ensures the success of selected students. In the Engineering Vanguard Program, we select inner-city high-school students–many with nonstandard academic backgrounds–using a nontraditional, rigorous assessment process developed at NACME. Through a series of performance-based evaluations, we examine a set of student attributes that are highly correlated with success in engineering, including creativity, problem-solving skill, motivation, and commitment.

Because the inner-city high schools targeted by the program, on average, have deficient mathematics and science curricula, few certified teachers, and poor resources, NACME requires selected students to complete an intense academic preparation program, after which they receive scholarships to attend engineering school. Although many of these students do not meet standard admissions criteria for the institutions they attend, they have done exceedingly well. Students with combined SAT scores 600 points below the average of their peers are graduating with honors from top-tier engineering schools. Attrition has been virtually nonexistent (about 2 percent over the past six years). Given the profile of de facto segregated high schools in predominantly minority communities (and the vast majority of minority students attend such schools), Vanguard-like academic preparation will be essential if we’re going to significantly increase enrollment and, at the same time, ensure high retention rates in engineering.

Using this model, we at NACME believe that it is possible to implement a program that, by raising the retention rate to 80 percent, could within six years result in minority parity in engineering B.S. degrees. That is, we could raise the number of minority graduates from its current annual level of 6,500 to 24,000. Based on our extensive experience with supporting minority engineering students and with the Vanguard program, we estimate that the cost of this effort will be $370 million. That’s a big number–just over one percent of the U.S. Department of Education budget and more than 10 percent of the National Science Foundation budget. However, a simple cost-benefit analysis suggests that it’s a very small price for our society to pay. The investment would add almost 50,000 new engineering students to the nation’s total engineering enrollment and produce about 17,500 new engineering graduates annually, serving a critical and growing work force need. This would reduce, though certainly not eliminate, our reliance on immigrants trained as engineers.
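The arithmetic behind these figures can be checked with a rough back-of-envelope sketch, written here in Python purely for illustration; the 30,000-freshman cohort size is inferred from the stated 80 percent retention target rather than quoted, and the variable names are ours.

# Back-of-envelope check of the parity arithmetic cited above.
# All inputs are figures stated in the article; the required freshman
# cohort is derived from them rather than quoted.

current_grads = 6_500       # current annual minority B.S. graduates in engineering
parity_grads = 24_000       # annual graduates needed to reach parity
retention_target = 0.80     # retention rate the program aims to achieve

# Freshman cohort needed each year to yield 24,000 graduates at 80 percent retention
required_freshmen = round(parity_grads / retention_target)   # 30,000

# Additional graduates produced each year once parity is reached
additional_grads = parity_grads - current_grads               # 17,500

# Consistency check against the closing paragraph: 5 percent of the
# 600,000 minority high-school seniors equals the required cohort.
seniors = 600_000
assert required_freshmen == round(0.05 * seniors)

print(required_freshmen, additional_grads)                    # 30000 17500

The sketch only confirms that the enrollment and graduation figures quoted here are internally consistent; it says nothing about the $370 million cost estimate, which rests on NACME’s program experience rather than on this arithmetic.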

Crudely benchmarked, the $367.5 million cost is equivalent to the budget of a typical, moderate-sized polytechnic university with an undergraduate enrollment of less than 10,000. Many universities have budgets that exceed a billion dollars, and none of them produces 17,500 graduates annually. The cost, too, is modest when contrasted with the cost of not solving the underrepresentation problem. For example, Joint Ventures, a Silicon Valley research group, estimates that the work force shortage costs Silicon Valley high-technology companies an additional $3 billion to $4 billion annually because of side effects such as productivity losses, higher turnover rates, and premium salaries. At the same time, minorities, who make up almost half of California’s college-age population, constitute less than 8 percent of the professional employees in Silicon Valley companies. Adding the social costs of an undereducated, underutilized talent pool to the costs associated with the labor shortage, it’s clear that investment in producing more engineers from underrepresented populations would pay enormous dividends.

Given the role of engineering and technological innovation in today’s economy and given the demographic fact that “minorities” will soon make up a majority of the U.S. population, the urgency today is arguably even greater than it was in 1973. The barriers are higher. The challenges are more exacting. The threats are more ominous. At the same time, we have a considerably more powerful knowledge base. We know that engineering is the most effective path to upward mobility, with multigenerational implications. We know what it takes to solve the problem. We have a stronger infrastructure of support for minority students. We know that the necessary investment yields an enormous return. We know, too, that if we fail to make the investment, there will be a huge price to pay in dollars and in lost human capital. The U.S. economy will not operate at its full potential. Our technological competitiveness will be challenged. Income gaps among ethnic groups will continue to widen.

We should also remember that this is not simply about social justice for minorities. The United States needs engineers. Many other nations are increasing their supply of engineers at a faster rate. In recent years, the United States has been able to meet the demand for technically trained workers only by allowing more immigration. That strategy may no longer be tenable in a world where the demand for engineers is growing in many countries. Besides, it’s not necessary.

This coming fall, 600,000 minority students in the United States will enter their senior year of high school. We need to enroll only 5 percent of them in engineering to achieve the goal of enrollment parity. If we invest appropriately in academic programs and the necessary support infrastructure, we can achieve graduation parity as well. If we grasp just how important it is for us to accomplish this task, if we develop the collective will to do it, we can do it. Enthusiasm and rhetoric, however, cannot solve the problem as long as the effort to deliver a solution remains substantially underfunded. Borrowing from the vernacular, we’ve been there and done that.