Scrutinizing the Inscrutable

Business and government leaders around the world are pondering developments in China and India. Everyone can see that the future of more than a third of the world’s population is of paramount importance, and all are eager to reach a clear understanding of what these countries hope to achieve and how successful they will be. Few, however, seem to appreciate how difficult it is to understand what is happening in two ancient civilizations that include more than 2 billion people.

Journalist Edward Luce provides an enlightening overview of the many forces at play in modern India, with some reflections on how India differs from China, in his terrific new book In Spite of the Gods: The Strange Rise of Modern India. Luce takes the reader on a kaleidoscopic tour through the legacy of Gandhi and Nehru, the enduring presence of the caste system, the rise of Hindu nationalism, the remnants of the British-derived civil service system, and the uneven emergence of a modern educated class of tech-savvy workers. His thumbnail review of the recent past makes it clear why outsiders might have trouble keeping up with Indian developments:

“In the last thirty years, India has been through a nineteen-month spell of autocracy; it has lost two leaders of the Nehru-Gandhi family to assassination; it has faced separatist movements in Punjab, Kashmir, Assam, and elsewhere; and it has switched from a closed economic regime to an open(ish) economy. It has moved from secular government to Hindu nationalist government and back again; it has gone from single-party rule to twenty-four party rule, from anti-nuclear to nuclear, from undeclared border wars with Pakistan to lengthy peace process. It has also moved from virtual bankruptcy to a lengthy boom.”

Luce notes that what is most remarkable about this chaotic period is that since the 1991 decision to open the economy, India has made steady progress in many critical social indicators. In fact, there has been roughly 1% annual improvement in the national poverty rate, literacy, life expectancy, and UN-calculated human development index. Luce cites former U.S. ambassador to India John Kenneth Galbraith’s characterization of the country as a “functioning anarchy.”

A review of China’s recent history is every bit as mind-boggling: the rise of Mao Tse-tung, the alliance with the Soviet Union, the split from the Soviet Union, the complete restructuring of the economy, the upheaval of the social order in the Cultural Revolution, a second restructuring of the economy, the birth and repression of a democracy movement, and the high-wire strategy of maintaining an authoritarian political system with a free-market economy. Yet through all this China has achieved roughly 2% annual improvement in economic output, trade, and education.

Luce’s primary point is that there is no simple way to understand what is happening in these countries. U.S. policy-makers must prepare themselves for a long and challenging effort to see beyond the headlines and aggregate statistics. This issue is a first step in that direction. The authors dig deeper into the education data to try to evaluate the quality of recent graduates. They consider whether recent economic trends will continue in the future, and they examine current policies and political currents, teasing out important discrepancies between stated policy and actual practice. We hope that this will make an important contribution to the understanding of China and India, but we realize that this is only a beginning.

Luce’s survey of India touches on innumerable questions that will have to be addressed if India is to continue on its path to economic growth: increasing the participation of untouchables and women, improving the productivity of agriculture, resisting Hindu nationalism, reaching stable relationships with its neighbors, improving its infrastructure, expanding educational opportunities for all, and confronting the rising threat of HIV/AIDS. China shares some of these concerns and has others of its own. The road to the future is far from clear for either country.

Although Luce does not discuss China in any detail, he does raise one difference between China and India that could be the critical factor in their futures. Luce points out that China started its economic revival first and that its authoritarian political system has made it possible to implement changes more quickly. This, he suggests, is why China has been a 2% society while India has been a 1% society. But in this difference, Luce finds India’s hidden strength, the quality that could enable it to eventually surpass China. The ability to resolve conflict and to progress through the messy democratic process without sacrificing diversity means that the new India is being built on a strong and broad foundation. India will be in a better position to adapt to new conditions and meet new challenges. It knows how to deal with internal tensions. China, on the other hand, has taken a top-down approach that allows for greater efficiency but does not allow social pressures to be vented. Its economy is growing tall, but its foundation is narrow. No one can predict how, over the long term, China will work out the inherent tension between an open economy and a closed political system.

For better and worse, India still encompasses much of the British influence of its colonial period. One British inheritance that might serve it well in the future is the insight of Winston Churchill that “Democracy is the worst form of government, except for all those others that have been tried from time to time.”

Forum – Spring 2007

Communications security

Jon M. Peha’s description of our nation’s emergency response interoperability clearly highlights the challenges we faced in the multiple attacks of September 11, 2001, as well as many other incidents that haven’t made national headlines (“Improving Public Safety Communications,” Issues, Winter 2007).

We in the Department of Homeland Security S&T Directorate (DHS S&T) charged with addressing these issues are as excited as Peha by the release of public safety spectrum in the 700-megahertz (MHz) range. As Peha correctly notes, the physical properties involved make this valuable “real estate.” Devices that use 700 MHz have reasonable range (though typically less than that of the lower-frequency bands) and good building penetration (though not as good as 400 MHz). These two attributes may help us address the communications problems commonly found in responding to terrorist attacks or natural disasters.

I also believe that this frequency allocation will spur broadband applications such as data and video, as well as promote uses not yet conceived. But most important, it will relieve pressure on public safety channels that are currently overloaded, even during routine operations.

I further agree that “fundamental changes in technology and public policy are needed” and many in government and the private sector are working to that end. However, I do not agree that only a nationwide broadband network can meet these needs. The sheer scale and complexity of the current national infrastructure require that we use what the DHS SAFECOM program describes as a system-of-systems approach.

This approach reflects the emergency response community’s vision for interoperable communications and its regard for the actual operational requirements articulated by the responders who use the systems and equipment. Where Peha argues that today’s flexibility has come “at the expense of standardization,” SAFECOM has shown that standards can increase competition, compatibility, and innovation—without federal mandates. SAFECOM emphasizes a bottom-up approach that focuses on user needs and capitalizes on the inherent robustness of a system of systems. Because local organizations own and operate more than 90% of the emergency response wireless infrastructure, any successful effort to improve emergency response interoperability must be driven by the local emergency responders themselves and take into account the fiscal realities they face daily.

DHS realizes that local governments alone cannot unite a patchwork of systems without a consistent overarching vision. That is why SAFECOM has developed tools such as the Statement of Requirements (SoR), a living document based on communications standards, written by and for practitioners. The SoR, along with coordinated Grant Guidance and other resources, is encouraging the design, manufacture, and procurement of equipment that will interoperate. Interoperability means greater consistency and leads to increased flexibility. The result is the implementation, where appropriate, of various information processing and other digital solutions to produce compatible systems now and without tying agencies to specific manufacturers.

But technology and standards are not the only answers. SAFECOM’s Interoperability Continuum identifies five factors critical for interoperability success: governance, standard operating procedures (SOPs), training, exercises, and usage. With practitioner guidance and input, SAFECOM is working with emergency responders to address each category. For example, our recently published template for the development of SOPs helps local governments address the critical human elements of the interoperability problem.

I agree that there is no single silver-bullet solution, but I applaud Peha’s 700-MHz proposal as well as his analysis of the barriers to it. His notion is a welcome addition to the debate on the interoperability issue. A successful approach to interoperability puts emergency responders in the lead, serves rural as well as urban areas, recognizes the existing investments across the nation, and meets the reliability requirements of our emergency responders. In service of this critical national mission, we can do no less.

DAVID G. BOYD

Director, Command, Control and Interoperability Division

Science and Technology Directorate

Department of Homeland Security

Washington, DC


Climate policy

I agree with much of what Ambassador Richard E. Benedick has to say in his thoughtful piece on “Avoiding Gridlock on Climate Change” (Issues, Winter 2007). Although I am not yet convinced of an “impending” global catastrophe, nonetheless I believe that the principal measures called for in his paper can be justified from a risk management perspective. Rather than waste time quibbling over likelihoods, I feel that the probability of untoward events is sufficiently large to justify a variety of risk reduction measures.

I also agree that the challenge is to transform the United Nations (UN) Convention on Climate Change “into a forum for dissemination of new ideas and practical results, rather than an instrument for illusory consensus, rhetoric, and delay.” And I agree that to do this requires breaking from the UN’s traditional approach of consensus-seeking among nearly 190 nations. With the majority of emissions produced by a much smaller number of countries, Benedick asks somewhat rhetorically “Is it necessary to have everybody at the table?” The answer is a resounding no! His suggestion for a parallel structure of smaller, focused negotiation to achieve partial solutions to specific pieces of the problem seems eminently sensible.

The piece that Benedick chooses to focus on is technology development. He notes that “it is ironic that governments were negotiating emission reduction targets while simultaneously reducing their budgets for energy technology R&D.” He rightly notes that “technology development is the missing guest at the Kyoto feast.” The disconnect between targets and timetables and the costs required to achieve them has been a recurring theme in the climate debate. Nowhere is this more evident than in the assumption that inexpensive low-emitting technologies will somehow just materialize.

This is where Benedick’s call for focused negotiation among a smaller subset of players could be especially valuable. What is needed is an international consortium of developed-country investors (both public- and private-sector) to provide the technological wherewithal for the transition to a less carbon-intensive economy. These technologies are not only needed by developed countries to fulfill their emission reduction obligations but even more urgently by developing countries to fuel economic expansion in an environmentally compatible manner.

But despite the critical role of technology, certain market failures must first be addressed. There is a need for a sustained commitment of resources on the part of developed-country governments to provide the basic research needed to lay the foundations for technology development, and the private sector of these countries must be given incentives to bring these technologies to the marketplace. Benedick and I may disagree over the role of market mechanisms versus command and control, but we are in complete agreement about the pivotal role of technology in meeting our climate goals.

Finally, I would like to thank Benedick for taking the time to reflect on an institution of which he has been a keen and thoughtful observer since its inception—the UN Framework Convention on Climate Change—but even more important, rather than being content to sit back and criticize, for offering practical suggestions for “putting the world back on the right path.” I commend his lucid and provocative article to those who missed it.

RICHARD G. RICHELS

Senior Technical Executive

Electric Power Research Institute

Palo Alto, California


The aging psyche

“Growing Old or Living Long: Take Your Pick” (Issues, Winter 2007) by Laura L. Carstensen is a succinct summary of a growing body of work of great importance to our understanding of the psychological and emotional processes of aging, in particular motivation. The work focuses on the real-life conditions of older people. What was once the privilege of the few has become the common destiny of the many, and the work Carstensen and her colleagues pursue directly benefits those who are living longer. She continues to explore what she calls socioemotional selectivity theory, a life-span theory of motivation.

According to Carstensen, when conditions create a sense of the fragility of life, both younger and older people prefer to pursue emotionally meaningful experiences and goals. The prospect of death contracts the sense of time and may be particularly relevant to the role of motivation in autobiographical memory.

It is reminiscent of an early paper by Eduardo Krapf, who wrote of the atrophy of a sense of the future, or “Torschlusspanik”: “panic at the closing of the gate” [“On Ageing,” Proceedings of the Royal Society of Medicine 41 (1953): 957–963].

A critical question is “how could it be that aging, given inherent losses in critical capabilities, is associated with an improved sense of well-being?” One of her studies with an order of Catholic nuns supports “the idea that people remember their personal past positively over time,” in contrast to the experience itself. One possible interpretation of this finding is that older people find it difficult and even unacceptable to consider themselves to have lived unhappy lives.

The Carstensen studies were conducted on healthy older people. It would be interesting to undertake comparable studies of older people who report depression or are diagnosed as depressed.

ROBERT N. BUTLER

President and Chief Executive Officer

International Longevity Center

New York, New York

www.ilcusa.org


Transportation security

R. William Johnstone’s “Not Safe Enough: Fixing Transportation Security” (Issues, Winter 2007) hits the nail precisely on the head. His refreshingly honest and realistic assessment of the current state of affairs not only avoids the D.C. bureaucratic two-step but shows his unmitigated nerve by daring to offer appropriate and effective changes to the transportation security system and its unfortunate failing policies. Few people have the credibility, critical thinking, and integrity to write such an article.

I am the former Transportation Security Administration Acting Director for Industry Training, responsible for the initial design of the Federal Flight Deck Officer (FFDO) program (the voluntary arming of pilots) and the Aviation Crew Member Self-Defense (CMSD) training program, which was made mandatory by the Homeland Security Act of 2002. Unfortunately, the CMSD program was later declared voluntary under industry pressure on both Congress and the administration. I also had a hand in rewriting the sensitive security document known as the Common Strategy. Hence, I can tell you that Johnstone’s concerns regarding the effectiveness of all elements of the new system are well founded.

I particularly share his concerns regarding the deficiencies in each of the onboard security measures. It is my sincere professional and personal opinion that the pilots and flight attendants have not received the appropriate security training needed to counter possible future terrorist attacks on commercial aviation. Additionally, knowing the level of Johnstone’s knowledge and documented research, I am equally confident that a lack of funding and of comprehensive, realistic training development and delivery plagues the entire transportation industry.

Johnstone’s recommendations are well founded and supported. Congress must officially declare transportation security a matter of national security so that the administration will be free to treat it as such. This will go a long way toward alleviating the impact of dual mandates that cause transportation industry representatives to fight against sound security measures because of funding and control concerns. From here, maybe we can get back to working together to fight our common enemy and implement the other well-defined suggestions presented in Johnstone’s article.

DENNY L. DILLARD

President and Chief Executive Officer

FAST Training and Operations Group

Westminster, Colorado


Competitiveness

The semiconductor industry epitomizes many of the points made by Robert D. Atkinson in “Deep Competitiveness” (Issues, Winter 2007). Atkinson’s call for a sense of urgency is underscored by recent trends in where semiconductors are made and sold. In 2002, 30% of semiconductor manufacturing equipment was sold in the United States, but this had fallen to only 19% by 2006. In 2000, more semiconductors were consumed in the United States than in any other region, but by 2004 there were twice as many semiconductors sold in the Asia/Pacific region than in the United States, and the gap has grown since.

The semiconductor industry’s successful comeback against Japanese competition in the 1980s also supports Atkinson’s rebuttal of those pundits who dismiss the seriousness of the current competitive challenge on the grounds that we met past challenges. As Atkinson notes, we overcame past challenges precisely because we took them seriously. The United States had fallen behind Japan in worldwide semiconductor market share in 1986, and many executives were determined that the chip industry not share the fate of the U.S. television industry, a sector that had suffered an irreversible decline that is readily apparent when one browses the flat-panel TV aisles in a store today. The United States responded with economic sanctions to enforce a trade agreement to open the Japan market and end Japanese dumping and with the formation of the SEMATECH industry/government research partnership. These actions were unprecedented at the time and were critical to the United States retaking semiconductor market share leadership in 1993.

Atkinson’s focus on an enhanced R&D credit and new industry/government/university research partnerships is echoed in the semiconductor industry’s current activities and policy recommendations. The value of the U.S. R&D credit is far below those of our trading partners, and other nations are quick to point out their tax advantages when companies are deciding where to expand their R&D activities.

The Semiconductor Industry Association has launched a Nanoelectronics Research Initiative (NRI) that pulls together the semiconductor companies, 23 universities in 12 states, state governments, and the National Science Foundation. The purpose of the NRI is to identify the next new logic switch, perhaps based on a particle’s spin or a molecule’s shape, to replace today’s transistor. The country whose companies are first to market will probably lead the coming nanoelectronics era in the way that the United States has led for half a century in microelectronics.

DARYL HATANO

Vice President, Public Policy

Semiconductor Industry Association

San Jose, California


Robert D. Atkinson writes about competitiveness but has a more fundamental purpose in mind. He intends to influence the way we think about the economic past, present, and future.

Atkinson begins his argument by effectively debunking the myth that the 1980s semiconductor challenge from Japan was simply a mirage. He illustrates the way in which the federal government and the states responded with policies to support innovation, speed the movement of ideas from the laboratory to the living room, and forge pragmatic partnerships with a business world that was adapting Toyota’s high-performance workplace to U.S. conditions.

Atkinson is equally persuasive in taking on the idea that nations never compete.

He favors positive-sum efforts to compete through investment, innovation, and education but warns about national strategies based on what he calls “market mercantilism,” where countries seek an economic edge through currency manipulation and intellectual property theft.

Finally, he argues for a growth economics defined by our ability to innovate and to translate those innovations into investment and jobs in the United States.

I differ with Atkinson only in calling for even bolder action. His call for a world based on markets, not mercantilism, is sound but not equal to the current challenge. Record trade and current-account deficits weigh heavily on the U.S. manufacturing base and threaten the country’s long-term capacity for innovation. We should be moving now to multilateral negotiations with the world’s leading economies to restore trade balance. When the war on terror puts geopolitics ahead of economic priorities, we need to prevent the loss of key industries by compensating them in a way that will strengthen their innovative capacity. After all, the U.S. edge in national security has often depended on the strength of our economy and our capacity for innovation.

In addition to a comprehensive 21st-century competitiveness strategy, we need a periodic presidential report on the state of the innovation system that will drive policy to build on strengths and correct weaknesses. A national commitment to seek new sources of energy can, like the space race of an earlier era, attract young Americans to careers in science and engineering, drive innovation, improve the environment, and add flexibility to foreign policy.

Atkinson makes a very important contribution by pointing to new ways of thinking about the economy and making several imaginative proposals in the policy arena. I can only second his call for a sense of urgency that will lead to action now.

KENT H. HUGHES

Woodrow Wilson Center

Washington, DC


Robert D. Atkinson’s article on competitiveness talks of various incentives and subsidies to industry to encourage progress. But any psychologist will tell you that two motives are better than one, and the lure of profits needs to be supplemented by the fear of competitors.

It took foreign competition to force our automakers to improve. This is not a matter of private enterprise versus government. The reason tractor manufacture in Russia was so bad was that there was only one factory making tractors. Why not look at the massive U.S. mergers and acquisitions that promise to reduce competition and the various other ways in which we protect profits from any erosion? The very fact that economists assume there must be unemployment to prevent inflation implies some lack of real competition. And do we really do the rest of the world a favor by exporting subsidized farm products?

JAMES N. MORGAN

Professor emeritus of economics

University of Michigan

Ann Arbor, Michigan


Robert D. Atkinson begins his article with a call for action to meet the challenges facing the United States in a global economy, and he has a point. The factual analysis presented in the recent Council on Competitiveness report Competitiveness Index: Where America Stands recognizes that despite America’s considerable economic achievements over the past two decades, there are serious warning signs that need to be heeded.

Identifying fundamental research and a skilled workforce as critical to competitiveness also puts him in good company. The National Science Board, the President’s Council of Advisors on Science and Technology, and the National Academy of Engineering are among those who make the same case. The need for a genuinely level international economic playing field has been articulated by the Council on Competitiveness.

Atkinson’s approach complements what the Council on Competitiveness has called the “innovation ecosystem.” Innovation requires the interaction of a number of factors in geographic proximity. It calls for research universities to interact with industry and provide networking for the start-up companies they spin off. It requires the presence of a skilled workforce and of venture capital. It requires tangible infrastructure such as broadband access as well as the intangible infrastructure of innovation-friendly policies. These ingredients must balance and complement each other.

I welcome Atkinson’s appreciation for the importance of university/industry interaction, which can have a profound effect on economic development. Twenty-five percent of Georgia Tech’s research is conducted with industry, compared to a national average of 8%, and we have operated a successful technology incubator for over 25 years. In the past decade, we have spun off 75 companies. Such activities benefit universities and their students by teaching about the issues facing industry and their customers.

I also commend Atkinson’s support for expanding the federal R&D portfolio, which would enable a better balance of the physical sciences and engineering with health and medicine. For too long, patterns of funding have left physical sciences and engineering behind, which makes no sense given that the interdisciplinary nature of innovation requires all disciplines to move forward together. Congress and the president have signaled an intention to address this imbalance, but progress has been slow.

My concern about Atkinson’s proposals is his advocacy of a $2 billion fund to support industry/university research alliances. First, funds of this magnitude are scarce these days, and $2 billion would go a long way toward doubling the budget of the National Science Foundation (NSF), directly addressing the rebalancing of the federal R&D portfolio. Second, past attempts at funding programs for industry/university alliances have caused controversy relative to who gets the funding, why a major industry that already supports research should receive federal funding, and how to balance shares between industry and universities.

Agency-based industry/university/government programs that are carefully structured can and do work. For example, NSF’s Science and Technology Centers require industry involvement and support of the research. However, creating a large federal entity to drive industry/university/government collaboration is likely to be unworkable and create more problems than it solves.

G. WAYNE CLOUGH

President

Georgia Institute of Technology

Atlanta, Georgia

Vice-Chair

Council on Competitiveness

Washington, DC

G. Wayne Clough is a member of the National Science Board and of the President’s Council of Advisors on Science and Technology.


Space wars

The article by Joan Johnson-Freese in the Winter 2007 Issues (“The New U.S. Space Policy: A Turn Toward Militancy?”) is a welcome respite from the spin forthcoming from official State and Defense Department quarters. In light of the recent Chinese test of a destructive antisatellite (ASAT) weapon, Johnson-Freese’s article is also eerily prescient, noting “If the United States proceeds with development of these technologies, at staggering cost, others can and will do the same, only in a cheaper, easier, defensive mode.” From news leaking about long-term planning for the test, it seems Beijing had already made up its mind about the U.S. pathway in space and decided to do something to counter it.

Granted, the motivation behind the Chinese test is highly unclear, in part because of the long-term U.S. policy against engagement with China regarding the future of military space. There are several possible interpretations: (1) The Chinese had long ago decided that they needed an offensive and asymmetric strategy of holding U.S. space assets at risk in any conflict over Taiwan, and Beijing’s diplomatic offensive against space weaponization was a cover to buy time to achieve that capability. (2) The Chinese ASAT test was conceived largely as a deterrent to U.S. space-based missile defenses, which China views as a threat to its nuclear deterrent, rather than as an offensive program. (3) The test is an effort to bring the United States to the negotiating table over space-based missile defense and space weapons: a classic Cold War “two-track” tactic using a display of hard power to jolt the other side into discussions and to ensure possession of a bargaining chip.

Whatever the motivation, the test was reckless and irresponsible in that it resulted in large quantities of space debris that will remain in orbit for decades and pose a threat to all satellites in nearby regions. It therefore highlights in a very dramatic way the need for responsible nations to come together to establish rules of the road for behavior in space, both in peacetime and in wartime (those cases may be different, but that needs to be discussed) and methods to enforce those rules.

It also lays bare the fallacy of the long-held U.S. position against engaging with others regarding the future military uses of space, as well as the emptiness of the Bush administration’s harsh rhetoric. This approach has failed to “dissuade or deter” a potential adversary from “developing capabilities” to impede U.S. freedom of action in space. It has also undercut established norms against testing ASAT weapons by insisting on keeping open the U.S. option to do so. Further, it has provided political cover for a potential adversary to argue that it is only reacting to “threats” from the United States. It is time to rethink this head-in-the-sand approach and for the U.S. government to take the lead in discussions aimed at establishing clear guidelines regarding what is acceptable and unacceptable behavior in space for all players. As the Chinese have demonstrated, the United States may still hold primacy in space, but it doesn’t own it. Insisting on complete freedom of action regarding military space will not reduce future threats to U.S. space assets but will instead create a Wild West environment where everyone is less secure.

THERESA HITCHENS

Director

Center for Defense Information

Washington, DC


While Joan Johnson-Freese had her ear to the ground listening for “the words ‘space weapons’” from the new U.S. space policy that she analyzes (or for “the echo of such sentiments,” as she later writes), a soundless bolt from the blue (perhaps in space it should be called a bolt from the black) was preparing to strike: the Chinese antisatellite (ASAT) missile test of January 12, 2007. The “incoherence and disingenuousness—and militancy” of U.S. policy that she diagnosed (with some justification) have now been dwarfed by even greater incoherence and disingenuousness from China.

She argues that the new U.S. space policy “perpetuat[es] the false belief that space assets can be defended; in reality, it is impractical if not impossible from a technical perspective to defend space assets.” This is an important conclusion, but it is not well supported in this article; it is only assumed to be true. It leads to a policy recommendation: “The only way to protect assets is to outlaw attacks and the technologies that enable attacks” and to “try” (she might more accurately have written “hope”) to develop a legal regime to verify any such attacks once they occur.

Two problems with this approach are not adequately addressed in the article. First, the concept of outlawing anything requires a detection and enforcement mechanism that is strikingly absent (and perhaps congenitally impossible) in this arena. Second, the technologies that enable such attacks are inherently impossible to exhaustively define, as she indicates in her own words: “Given the dual-use nature of space technology, almost anything shot out of the atmosphere might qualify.”

The Chinese ASAT missile was clearly a space weapon, for example, but what about the small free-flier spacecraft just announced that will be deployed from Shenzhou-8 next year to televise the planned spacewalk? That same piece of hardware, delivered on a small automated rendezvous vehicle (of the type being developed by a dozen different programs around the world), would be a very effective weapon against a spacecraft considered a military target. Resolving this definitional dilemma remains a critical challenge for developing treaties, laws, and codes of conduct that will have any reliable utility.

Johnson-Freese perceptively draws attention to widespread suspicions about U.S. military intentions in space, which she lays at the door of the writers of that policy but which perhaps have been sparked by other agencies that she also describes. She cites an October 19 article in the Times of London that denounces what it sees as the “comically proprietary tone about the U.S.’s right to control access to the rest of the solar system,” and she correctly observes that the wording of the actual policy makes it “difficult to characterize its purpose so bluntly.”

But then she adds that “this negative perception certainly exists,” as if it were the fault of the policy itself rather than the fault of journalists who inaccurately reported that the policy said that the U.S. would deny access to space to anyone it chose, and of other journalists who copied the error without taking the trouble to read the policy. Perhaps they, like Johnson-Freese by her own admission, were “hearing” things in the policy that weren’t really there and unfairly disregarded subsequent U.S. clarifications. Her now-ironic rationalization of such misplaced but inflammatory suspicions against the United States is expressed in the words “actions sometimes speak louder.”

On January 12, 2007, they did indeed speak louder and may have drowned out the heart of Johnson-Freese’s argument.

JAMES OBERG

Dickinson, Texas

James Oberg is the author of the U.S. Space Command’s book Space Power Theory (1999).


Nuclear deterrence

In his thoughtful comments (in Forum, Issues, Winter 2007) on Thomas C. Schelling’s article on nuclear deterrence (“Nuclear Deterrence for the Future,” Issues, Fall 2006), Richard Garwin says that “Deterrence of terrorist use of nuclear weapons by threat of retaliation against the terrorist group has little weight, when we are doing everything possible to detect and kill the terrorists even when they don’t have nuclear weapons.” Schelling, in his article, stresses the difficulty that terrorists might have in constructing a weapon and suggests that if they do acquire one, they may use it strategically rather than in the punitive way of the 9/11 attacks. He notes that a main problem facing them if they want to construct a bomb would be the acquisition of nuclear materials. Garwin outlines steps to make it difficult or impossible for the terrorists to acquire such materials or to acquire a complete nuclear weapon.

Although I agree wholeheartedly with the need for and advisability of carrying out the steps Garwin recommends, I am dubious about our ability to ensure that they can be 100% successful. Moises Naim, in his book Illicit (New York: Doubleday, 2005), describes a vast global smuggling network that has demonstrated the ability to bypass all means put in place to stop such activity. There is no reason to believe that the security of nuclear weapons or materials, however tight, will be totally leak-proof under the assault of such a network. There must therefore be a way to make deterrence work in this context, along with the preventive measures Garwin recommends.

It is true that transnational terrorists by definition don’t present the kind of target that permits the threat of retaliation against a country, the threat that has characterized nuclear deterrence in the past. The essence of deterrence, however, is not necessarily contained within national boundaries. Deterrence works when the deterree perceives that a retaliatory attack in response to an aggressive act will cost the deterree so much that any potential gains from the initial attack are vastly outweighed by the losses that would result from the inevitable counterstrike. There are a number of potential targets that the terrorists who today most threaten the United States hold so dear that, if those targets were destroyed in a counterstrike, the terrorists would be deprived of their raison d’etre. Those targets may be distributed in countries other than those from which the terrorists may be operating; a retaliatory attack against them would not be designed to kill terrorists per se, as Garwin suggests, but rather to remove from them the very thing they are fighting for. In the process, the roots of their civilization could be threatened or even eliminated.

Thus, a warning to the effect that a terrorist nuclear attack on the United States would put at risk all that the terrorists hold most dear could have the desired deterrent effect. Indeed, if made with the correct wording, it should stimulate the countries within which terrorists may be hiding and plotting to exert greater efforts than they appear to have been doing to put the terrorists out of business—in their own defense if not out of sympathy for the United States. The precise targets need not be specified; the terrorists and the countries harboring them will know what they are.

It may be argued that the United States would not strike back in such a way, precisely because the strike could hit countries and peoples that are far removed from the terrorists who made the attack on the United States and that may not even be harboring such terrorists at the time of the attack. However, the world should learn from history not to underestimate the potential of the United States to react in violent and unexpected ways when it finds itself or its vital interests under extreme stress. Pearl Harbor, Korea, and Kuwait are the most outstanding of many such examples from 20th-century history. If a nuclear weapon of terrorist origin explodes in a major U.S. city such as New York or Washington, leaving several hundred thousand people dead and a large part of the city devastated, the terrorists cannot know what the United States might do as it strikes back in anger. Respect for country boundaries and innocent peoples could look very different after such an event, especially if it is a decapitating strike and lower levels of the U.S. government and U.S. military are the ones making the decisions about retaliation.

In the mid-1990s, the Naval Studies Board of the National Academy of Sciences carried out a study, under U.S. Navy sponsorship, of post–Cold War conflict deterrence. The study was chaired by the late Gen. Andrew Goodpaster, formerly Supreme Allied Commander of NATO forces. Garwin, Schelling, and I, among many others, were participants. The study report [Post–Cold War Conflict Deterrence (Washington, DC: National Academy Press, 1997)] did not reach any firm conclusions about the use of nuclear weapons for deterrence, but it noted the body of opinion that in the emerging post–Cold War environment of international relationships, our nuclear weapons would be useful only to deter the use of nuclear weapons by others. The report emphasized that the conditions of deterrence remained to be worked out. It also emphasized the importance of communicating the resulting policies, with appropriate allocations of both clarity and ambiguity, to potential adversaries.

The recommended work to devise new deterrence policies to fit modern conditions has yet to be undertaken seriously. We should not leave one of the most dangerous and unpredictable threats to U.S. and world security out of such consideration.

SY DEITCHMAN

Bethesda, Maryland


Mental health

The article “A Healthy Mind for a Healthy Population” (Issues, Summer 2006) asserts that health care for mental problems and substance use conditions “requires fundamental redesign.” I could not agree more. As the newly confirmed Administrator of the Substance Abuse and Mental Health Services Administration (SAMHSA), within the Department of Health and Human Services, I share the author’s concerns regarding quality, access, and outcomes of behavioral health services in the United States.

I am pleased to report that SAMHSA has already taken steps to “transform” mental health care in America and has implemented a new, innovative financing approach for substance abuse treatment and recovery support services. The President’s New Freedom Initiative and Access to Recovery program have cemented recovery as the new framework for public policy development in mental health and substance abuse services in this country. However, rather than provide a litany of SAMHSA’s many important activities already under way to achieve improved access and higher-quality services, we must remain focused on how much is yet to be accomplished.

New data point to the alarming and unacceptable rates of early mortality and morbidity for people with serious mental illnesses, a loss of 25 years of life. Sadly, these years lost to early death and disability are primarily attributable to a lack of attention to consumers’ physical healthcare needs. And after years of debate, we have established that individuals with co-occurring mental and substance use disorders should be the expectation, not the exception, in our treatment systems. While we have made notable strides in this area over the past decade, there is yet much to be achieved when it comes to ensuring that access to services exists, evidence-based practices are applied, and financing structures promote, not hinder, integrated care. Our own 2005 data tell us that, among the 5.2 million adults with both serious mental and substance use problems, fewer than half (47%) received mental health treatment or substance use treatment at a specialty facility. Only 8.5% received treatment for both.

To continue our quest for quality improvement, we must sustain efforts to reduce the significant lag time between the generation of new scientific knowledge and its application at the community level by prevention and treatment programs and providers. These issues demand our attention and our action. Through SAMHSA’s partnership with the states to adopt a few carefully chosen national and state-level outcome measures, we are building accountability and effectiveness measures into every grant dollar and agency program, rewarding performance with proven results at the levels of the individual, family, community, and service system, as well as measuring our own effectiveness as an agency.

SAMHSA welcomes the support provided in the article for the work we are doing. We hope this will help open further doors when collaborating with consumers, family members, clinicians, academicians, policymakers, and administrators in behavioral health. Together, through concerted efforts, we can effectively make recovery the outcome for those persons with mental and substance abuse disorders.

TERRY L. CLINE

Administrator

Substance Abuse and Mental Health Services Administration

Rockville, MD

www.samhsa.gov

China’s Drive Toward Innovation

China’s president, Hu Jintao, has said that his country must give priority to independent innovation in science and technology to enable China to be at the forefront of scientific and technological development. This statement is not remarkable for the leader of a major trading nation. U.S. President George W. Bush featured the same objective in his 2006 State of the Union message, stating his belief that government must work to help create in the United States a new generation of innovation and an atmosphere in which innovation thrives. In his first policy statement to the Japanese Diet, Prime Minister Shinzo Abe called for similar steps. The European Commission has proposed a 10-point program for immediate actions to make the business environment in its member states more innovation-friendly.

What makes Chinese government statements any more or less remarkable or credible than those of other governments’ highest officials? Answering that question requires a look at what China is planning as follow-up on its stated objective and an evaluation of the likelihood of its achieving success.

In the Chinese system of governance, statements by the leadership shape national and local policies to a degree not seen in other major trading nations. There is a singularity of purpose in China rarely found in Western governments. The pronouncements of China’s top leaders have been accompanied by an amazing array of detailed policy measures at all levels of government. China already is well into a process of industrializing. What Beijing has decided to do is “to move China from an imitation to an innovative stage of production …from ‘made in China’ to ‘made by China.’ ”

China’s leadership sees innovation as essential for the country to continue its economic growth, maintain political stability, support advanced military capabilities, and retain its global trade and geopolitical power. Ma Kai, minister of China’s National Development and Reform Commission, recently gave a compelling rationale for this policy. “China’s economic growth largely relies on material inputs and its competitive edge is to a great extent based on cheap labor, cheap water and land resources, and expensive environmental pollution,” he said. “Such a competitive edge will be weakened …with the rising price of raw materials and the enhancement of environmental protection. Therefore, we should enhance [our] independent innovation capability . . . and increase the contribution of science and technology advancement to [our] economic growth.”

In short, for China, innovation is a policy of nearly unrivaled importance.

Architecture of innovation

The overarching document of China’s innovation planning and strategy is the State Council’s Medium- and Long-Term Program on Science and Technology Development (2006-2020), issued in January 2006. To achieve the plan’s objectives, China is using a variety of policy tools to promote, favor, and reward indigenous innovative technologies. An overall goal is to increase R&D spending to 2.5% of gross domestic product by 2020, a doubling of the current rate. This target is comparable to the current rate of spending by the United States, 0.6% less than Japan’s, and 0.6% more than the European Union’s. The expected doubling of China’s spending is to be accompanied by the implementation of key state projects launched to generate important strategic products. The breadth and scale of these projects are huge. In the United States, comparisons might reasonably be found in investments during the period 1945-1991 in telecommunications, space exploration, communication, aeronautics, and energy.

The Chinese plan identifies 16 key state projects covering a number of priority sectors. These sectors include core electronic components, high-end general chips, basic software, technology for manufacturing extremely large integrated circuits, new-generation broadband wireless mobile telecommunications, high-end numerically controlled machine tools and basic manufacturing technology, development of large oil and gas fields, large nuclear power plants with advanced pressurized water reactors or high-temperature gas-cooled reactors, control and treatment of water pollution, development of genetically modified biological species, development of important new drugs, control and treatment of AIDS and other major contagious diseases, production of large aircraft, high-resolution Earth-observing systems, and manned space flight and lunar exploration.

By any measure, this is an ambitious agenda. But these initiatives do not stand alone. The central government has committed to releasing this year 99 plans to implement specific policy goals of its strategic plan. Among other efforts, these plans are expected to call for accelerating creation of independent “well-known” Chinese brands, supporting the technology innovation of small- and medium-sized enterprises, issuing corporate bonds for qualified high-technology enterprises, regulating the management of start-up investment funds and the debt-financing ability of start-ups, suggesting ways to establish and improve regional intellectual property, standardizing foreign acquisition of key Chinese enterprises in the equipment manufacturing industry, building research-oriented universities, promoting state-supported high technology and new technology industry development zones, establishing guidelines and funding for venture capital investment, creating tax policies supporting the development of start-ups, and establishing “green channels” to help bring talented individuals who have studied abroad back to China.

Blueprint of actions

China is attempting to achieve in a short period nothing less than what more developed market economies have achieved with a head start of decades and, in some cases, more than a century. It is putting into place the economic, educational, and legal infrastructure needed for accelerated future growth based substantially on innovation.

Even within the China context, the new innovation policy is remarkably different from prior Chinese government initiatives. It is different in depth, because it involves long-term technology planning beyond single-ministry technology development programs. And it is different in breadth, because its execution is broadly dispersed among a half-dozen ministries at the central-government level and a large number of agencies at the provincial and local levels, and because it is strengthened by powerful policy tools.

To reach its considerable goals, China will draw on an array of direct and indirect policies and actions:

Policy tools. Given China’s political history, it should come as no surprise that its policies to promote innovation tend to be more interventionist than those of many other countries. China’s first policy steps were intended to seek foreign direct investment in high-end manufacturing and innovation, both areas in which China lagged other economies. In this, China followed a very different path than did Japan or Korea. This acceptance of foreign capital and technology was not seen as contradicting ideological precepts. According to China’s Ministry of Commerce, foreign direct investment “is an important element of China’s fundamental principle of opening up to the outside world” and “one of the great practices of building up [a] socialist economy with Chinese characteristics.” To attract foreign investment, China grants substantial tax incentives. A key aim is to foster technology transfer, and specific tax measures make those transfers tax-advantaged transactions.

China also recognized that it needed to substantially increase its protection of intellectual property if foreign investors were to introduce proprietary technology. As a result, China developed a National Intellectual Property Rights Strategy. Lu Wei, deputy director general of the Technical Economic Department, Development Research Center of the State Council, described the policy, in part, in the following terms: “To adapt the strategy to China’s development situation…we shall not only encourage self-innovation, but also encourage absorption, consumption, and innovation of introduced technologies.”

In addition, China has invested substantially in human resources in order to foster foreign direct investment, technology transfer from foreign firms, and indigenous innovation capability. In some news accounts, China is estimated to have produced 600,000 college and technical school graduates in science and engineering in 2004, whereas the United States produced an estimated 70,000 graduates the same year. Although China’s number is overstated, because it counts undergraduate degrees and technical school training as well as internationally competitive graduate engineers, the numbers are large and growing, and therefore so is the talent pool. In addition, China is facilitating the return to China of Chinese engineers working in other countries. If the experience of Silicon Valley and similar locations is an indicator, the greatest amount of technology transfer takes place through the process of diffusion as engineers change jobs.

In policy actions that lie somewhere between broad support and industry-specific intervention, China is developing high-technology incubation parks on a scale and with a determination not seen in any other country. According to Chinese officials, in 2004, high-tech parks had 38,565 participating companies, representing $226.4 billion in production and $19.7 billion in investments in infrastructure. The government says it plans to establish 30 more parks by 2010. Particular emphasis is given to attracting foreign R&D facilities. Fifteen Korean companies have R&D centers in China, 14 of which have been established since 2000. Samsung and LG Electronics have three each, which concentrate on developing technology and product models for the Chinese market.

Typical of such parks is the 16-square-kilometer Shanghai Zhangjiang Hi-Tech Park, which aspires to become both China’s Silicon Valley and its Pharmaceutical Valley. In the pharmaceutical area, the park has attracted $10.6 billion in foreign capital from 42 companies, including Roche, GlaxoSmithKline, and Medtronic, and has established 31 R&D institutes and a hospital for clinical trials. In the field of electronics, the park has attracted 70 “fabless” computer chip companies (which design, develop, and market their products but do not manufacture them), three foundries, two photomask producers, 12 packaging and test companies, 34 equipment vendors, and numerous systems application companies.

Need for standards. A tool various Chinese government officials say they intend to use to promote indigenous innovation—and to restrain foreign competition—is the process of setting standards. The recently promulgated Shanghai Municipal Government Intellectual Property Strategy demonstrates the possibilities. This strategy calls for the government to “actively promote the formulation and implementation of technical standards with self-owned intellectual property rights and translate that technological advantage into a marketplace advantage to maximize the benefits of intellectual property rights.” The National Medium- and Long-Term Program on Scientific and Technological Development takes this one step further, calling for the government to “actively take part in the formulation of international standards and drive the transferring of domestic technological standards to international standards.”

The highly contested case of China’s recent proposal to impose its own domestic “WAPI” wireless security standard, rather than an international standard, on products such as pagers and laptop computers is an example of this policy in practice. Although China was within its rights to impose a standard for domestic consumption, the plan would have required foreign suppliers of wireless devices to share their proprietary technology with a Chinese partner company. In turn, the Chinese partner would have supplied an essential encryption algorithm needed for serving the Chinese market. After receiving strong presentations from very high-level U.S. officials, the Chinese government put the plan on hold indefinitely. That was not the end of WAPI, however, as China has attempted to have its WAPI domestic standard recognized and accepted at the World Intellectual Property Organization. As yet, China has made little progress in attaining that objective.

Government procurement. Many countries have used government procurement to provide preferences and protection to domestic industries. The World Trade Organization’s Government Procurement Agreement (GPA) is designed to ensure that member governments use open, transparent, competitive, unbiased, merit-based, and technologically neutral procurement procedures. In 2001, China committed to initiate negotiations for membership in the GPA “as soon as possible,” and at the April 2006 meeting of the U.S.–China Joint Commission on Commerce and Trade, China declared that “negotiations on China’s entry to the GPA will be launched no later than the end of December 2007.” However, recent Chinese policy documents indicate that China intends for its state institutions to go against the basic tenets of the GPA. According to one document, for example, “Finance departments at the provincial level shall work with the science and technology departments at the same level to establish implementation plans for developing indigenous innovation government procurement policies for their provinces.”

Given the prominent role that China’s centralized governmental structure plays in the nation’s economy, discriminatory government purchasing policies at the central, provincial, and local levels can provide a significant amount of protection to foster indigenous innovation and have a very powerful negative effect on trade. China’s State Council has decreed: “The government shall set a priority procurement policy on important high-tech equipment and products developed by domestic enterprises with independent intellectual property. [We shall] provide policy support to enterprises purchasing domestic high-tech equipment, and support the formulation of technological standards through government procurement.” The government procurement policy with respect to software has been the subject of consultations between China and its trading partners.

Import policies. Under international rules, governments can much more easily manage foreign investments than imports. Although it is not yet clear how China will proceed, there is some evidence of the country’s current thinking on imports. In a statement relating to the equipment manufacturing industry, the State Council said that imports of key equipment using foreign capital will be subject to “strict examination and study.” Given World Trade Organization commitments, there is little that can be done legitimately through direct controls to slow the inflow of undesired imports. But whenever governments can confer or deny a benefit, there are possibilities to influence the kind and quantity of imports brought in by a particular investor. The State Council’s stated goal is to establish by 2010 competitive Chinese equipment manufacturing companies with their own intellectual property to meet China’s needs in the energy, transportation, raw materials, and defense sectors.

Competition policies. China has proposed, but not yet implemented, an Anti-Monopoly Law intended to foster innovation. Where they exist, competition laws have served various purposes. In the United States, Canada, the United Kingdom, and Germany, the primary stated goal of such laws is to protect consumers. In other places, economic development, industrial policy, or social objectives may be the primary policy drivers. In China, top-level policymakers are clearly concerned that the imbalance between the intellectual property portfolios of China’s indigenous companies and those of its trading partners’ companies is highly problematic. Wang Xiaoye, a professor at the Chinese Academy of Social Sciences, noted that multinational companies possess capital and technological advantages that enable them to quickly dominate markets. She concluded that the “adoption of an Anti-Monopoly Law will serve as an important tool for China to check the influence of multinationals.”

In a similar vein, the Fair Trade Bureau of the State Administration for Industry and Commerce in 2004 released a report claiming that certain multinational companies used their technological advantages and intellectual property rights to dominate sectors of the Chinese market. The report specifically names Kodak, Tetra Pak, and Microsoft, among others, as potential targets of the forthcoming Anti-Monopoly Law. Additional news reports indicate that other advanced technology and innovative companies, such as Intel, could also be a target of the legislation.

A key concern about the Anti-Monopoly Law, a draft of which was circulated in January 2006, relates to how “monopolistic conduct,” which includes “abuse of dominant market position,” is defined. The draft states that entities within a “relevant market” are considered to hold a dominant market position if the individual market share of one entity in a market accounts for more than half, the joint market share of two entities accounts for more than two-thirds, or the joint market share of three entities accounts for more than three-fourths. Of course, much will depend on defining the relevant market. In China, a foreign company with a strong patent portfolio might easily command a large portion of a product market. As China’s strategic plan states, “[We shall] prevent the abuse of intellectual property that unfairly restricts the market mechanism for fair competition and may prevent scientific-technological innovation and the expansion and application of scientific-technological achievements.” Foreign companies have already been put on notice that administrators (not yet chosen) of the new Anti-Monopoly Law might find tempting targets in Western companies that have the strongest intellectual property positions.
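
The draft’s numerical presumptions are mechanical enough to be expressed directly. The minimal Python sketch below (the function name and interface are illustrative only, not drawn from the law or any official source) shows how the three thresholds quoted above would operate once a relevant market has been defined:

# Illustrative sketch of the draft Anti-Monopoly Law's market-share
# presumptions for a "dominant market position"; not an official test.
def presumed_dominant(shares):
    """shares: market shares (as fractions) of the firms in the relevant market."""
    s = sorted(shares, reverse=True)
    if len(s) >= 1 and s[0] > 1/2:
        return True          # one firm holds more than half the market
    if len(s) >= 2 and s[0] + s[1] > 2/3:
        return True          # two firms jointly hold more than two-thirds
    if len(s) >= 3 and s[0] + s[1] + s[2] > 3/4:
        return True          # three firms jointly hold more than three-fourths
    return False

print(presumed_dominant([0.55, 0.20, 0.10]))  # True: a single firm holds more than half
print(presumed_dominant([0.40, 0.30, 0.10]))  # True: the top two firms exceed two-thirds
print(presumed_dominant([0.30, 0.25, 0.15]))  # False under these thresholds

As the second example illustrates, firms can be presumed dominant jointly even when no single firm holds half the market, which is one more reason the definition of the relevant market matters so much.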

Direct funding. Direct government financial support will be an important part of Chinese government innovation policy, just as it is for all countries seeking to promote innovation. Thus, for the most part, China is not out of step with its competitors. But when the funding focuses on one or more particular sectors, investment and trade distortions are likely to occur. An example of what China may decide to do can be seen in the measures that the government has taken to promote the development of a domestic equipment-manufacturing industry. These measures include preferential taxation for the sector, incentives for the purchase of Chinese-made machinery, value-added-tax rebates on imported parts and materials, allocation of special funds for technologically advanced products, and giving enterprises relief from certain social responsibilities. Although the government has not fully defined what relieving enterprises of social responsibilities means, executives of U.S. automobile and steel companies can testify that if this means relieving them of “legacy costs,” including the health and pension costs of retired workers, such steps can mean the difference between profitability and stunning losses, and sometimes even bankruptcy.

By far the highest-profile recent instance of government funding is the announced agreement by two Chinese municipalities to attract semiconductor fabrication facilities by funding all of the capital costs of a private company. The beneficiary is the Semiconductor Manufacturing International Corporation, a major Shanghai-based semiconductor foundry. The company has announced that it will receive the benefit of the construction of two new chip fabrication facilities, funded by local government agencies in Chengdu and in Wuhan (Hubei province). This amounts to a grant of billions of U.S. dollars. According to news reports, the company also will receive a “management fee” and will have the option to buy the plants in the future. The company will retain all profits… Chongqing has announced very recently that it will also offer similar support to create an indigenous semiconductor industry.

Drivers of innovation

A variety of factors are driving the prospects for homegrown innovation in China. In the broad picture, China is experiencing an impressive rate of growth in its gross domestic product. Although such growth can be something of a double-edged sword, in that resources can be scarce in a rapidly growing economy, the general rule is that individuals who seek rewards for innovation are more likely to find them in a buoyant economy.

China also has a huge talent pool. Enormous resources are being poured into graduating engineers and scientists, and given the immense population base, this is an effort that in sheer numbers can equal the combined output of many of China’s foreign competitors.

China’s large domestic market is both an incentive to indigenous production and a magnet for foreign direct investment. Indeed, it is a market that a global company cannot afford to ignore. China hopes to learn much from these foreign companies, enabling it to leapfrog the painful earlier steps in innovation that were required of the technology-donor companies. The fact that the Chinese market is growing increasingly sophisticated also helps attract higher-end foreign investment.

China is increasing its protection of intellectual property rights, but much more remains to be done. Formal intellectual property protection, poorly developed to nonexistent in much of China until relatively recently, is making strides, particularly in Beijing and Shanghai. A number of cases have resulted in satisfactory outcomes for foreign and domestic holders of intellectual property rights. This movement is being bolstered by the incentives for indigenous patenting that create domestic stakeholders in a functioning intellectual property–rights system.

On top of these advantages is a government—or more accurately, a series of governments, at the central, provincial, and municipal levels—pledged to full economic mobilization to support innovation. At the provincial and local levels, there is even something of a rivalry to achieve often grand objectives. The setting of priorities may be particularly effective where the objectives are specific, such as those related to integrated circuits as spelled out in the current Five-Year Plan of the Ministry of Information Industries. The plan calls for China to “significantly increase the self-sufficiency ratio to over 70% for integrated circuits used for information and national defense security, and to over 30% for integrated circuits used in communications and digital household appliances…. We should basically achieve self-sufficiency in key products supply.”

These various factors have enabled China to achieve a variety of concrete measures of success. High-tech exports have been growing at an annual rate of more than 40% over the past five years. China ranks third among nations in R&D expenditures. Another crude measure of success is the increasing number of domestic patents being filed with the State Intellectual Property Office. In 2005, a total of 171,619 patents were filed, up from 99,278 patents in 2001—a 73% increase. Whether this represents true innovation can be assessed only with hindsight, as the high-tech exports are probably predominantly the products of foreign multinational corporations and their Chinese joint ventures, and the products exported may often consist of DVD players and similar electronics items, which some would describe as tech but not high-tech products.
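
As a quick check of the arithmetic behind that figure:

\[
\frac{171{,}619 - 99{,}278}{99{,}278} \approx 0.73 = 73\%.
\]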

Hurdles to innovation

Embedded in some of the very factors underlying China’s successes, however, are a number of factors that can and do inhibit innovation. Clearly, there is a major overhang of the vestiges of a command economy. The people who make local and national plans may not always be those closest to the cutting edge of innovation and may be unlikely to fully understand its needs. Nonmarket factors tend to skew economic activity. This same pattern played out in Japan. When the cheering died down a bit, observers of the “Japanese miracle” began to note that Japan suffered deeply from problems of crony capitalism that saddled its banking system with nonperforming loans and contributed to depressing its economic growth for over a decade. In China, some two-thirds of the economy is accounted for by state-invested companies and state-owned companies. No one considers these companies as a group to be on the leading edge of innovation. Although these companies tend to be favored for government support, they are on the whole unlikely to become hotbeds of innovation.

The influence of the state can be too pervasive. Complicating the positive story of the dominance of market forces today are stories of the recent resurgence of the Communist Party’s involvement in business. It is too early to determine what impact this will have, but it is likely to reinforce a relationship-based pattern of transactions that often may run counter to the dictates of the market.

THE ONLY THING THAT IS SURE ABOUT CHINA IS THAT WHATEVER IS TRUE TODAY WILL BE DIFFERENT IN A YEAR, AND PERHAPS UNRECOGNIZABLE IN FIVE YEARS.

China also faces a number of pressing workforce issues. In the rest of the world—particularly in the United States, Europe, and Japan, but also in India and other nations in competition with China—the huge number of Chinese students receiving engineering degrees annually is causing concern. But a number of recent studies have found comparisons between China and, in particular, the United States, to be misleading. For example, a study by Duke University researchers found that in 2004, China produced 351,537 graduates in the engineering, computer science, and information technology fields—just over half the number widely reported in news accounts. Further, the quality of Chinese graduates often has been sacrificed to achieve quantity. China’s state-centered and rote learning approach to education is heavy on theoretical and Marxist learning, producing “ivory tower” engineers with few problem-solving and teamwork skills. Engineering curricula are often crowded with ideological courses that detract from the quality of the graduates entering the workforce and from their ability to innovate. The Ministry of Science and Technology, along with other Chinese planners, is not unaware of these defects and has issued new state guidelines and opinions to “further strengthen the cultivation of talents in short supply.” But this is an enormous hurdle to overcome.

A key impediment to accelerating innovation is the inadequate protection of intellectual property. This exists, in part, because there is relatively little history or culture of protecting intellectual property rights and only a recent history of private property. One result is that China’s share of world patents is extremely low, and Chinese officials report that some 99% of Chinese companies own no patents. Officials further acknowledge that the quality of many Chinese patents is poor. This situation may be changing. The State Intellectual Property Office has provided figures on shares of Chinese patents that do show an improvement. In 2005, foreign companies accounted for 54% of patents issued in China, whereas domestic companies accounted for 27%. In 2002, the figures were 73% for foreign companies and 46% for domestic companies.

But enforcement of intellectual property rights in China is likely to remain a problem for some time to come. Even with the best intentions, there is and will remain a serious shortage of intellectual property specialists within the Chinese legal system and companies’ management. Outside observers also believe that the Chinese government faces something of a policy dilemma with respect to accelerating the enforcement of intellectual property rights. Counterfeit goods are far less expensive than branded items. With China’s great divide between people with disposable income and those living in poverty, the balance between the desire to enforce intellectual property rights and to alleviate poverty may easily weigh in favor of the latter.

Forced technology transfer may pose another, perhaps larger, problem—although statistics will never be available to indicate how widespread the problem is. When foreign firms want to enter the Chinese market to sell or produce goods, the price they pay for entry can be an agreement on the kind and amount of technology that will be transferred to Chinese companies. Issues of this kind surface from time to time and indicate that the problem is extensive and important. One example is the nonpayment of royalties to Japanese manufacturers for DVD players, an extremely large market segment. The royalties appear to have been involuntarily waived. Some people in the industry maintain that the Chinese government gave official administrative guidance not to pay the royalties. Certainly, government officials have made many general pronouncements that the payment of royalties by Chinese companies should be avoided wherever possible. Foreign firms undoubtedly tolerate this practice of violation of contractual commitments because being in the Chinese market is profitable or is expected to offer profits in the future.

Forcing technology transfer has its costs. Because of deficiencies in the protection of intellectual property rights, foreign investors withhold core technologies as well as cutting-edge technologies, thus limiting technology transfer to the more routine technologies. There also are reports of companies holding back key process and product components because of concern about intellectual property rights. Chinese planners are well aware of these problems, and such awareness is apparently spurring their increased emphasis on fostering indigenous invention.

Last, a combination of government-promulgated measures impedes innovation. The Chinese government is far from monolithic, and this is true far beyond the division of power with provincial and municipal governments. Some government officials believe that the less intervention, the better; and that the operation of the market, left to some extent unfettered, is the best course of development. But other officials believe in and practice techno-nationalism, implementing policies that ultimately inhibit innovation by alienating foreign firms.

The road forward

On the road into the city of Suzhou, a large rooftop sign proclaims “Development is an Immutable Truth” in English, under massive Chinese characters. The message was from the Chinese leader Deng Xiaoping, and although it has been translated in various other ways over time, the message is unmistakable: There is one acceptable path for China and that is economic development. Deng was “the great helmsman” for the Chinese economy. His opening of the Chinese economy and his economic reforms transformed China into an economic miracle of its own. Two decades ago, it was common for Asia specialists to say that in geographic terms, Japan was a few smallish islands off the coast of China, but in economic terms, it was China that was an island off the coast of Japan. No one would say that today.

Clearly, it would be a serious error to think that China will not evolve into a major source of innovation in the not-too-distant future. When governments concentrate national resources on achieving specific industrial goals, they can succeed. Moreover, foreign firms still see China as a vast and vital market, as well as a major production platform for export. Although foreign companies may be guarded in transferring technology, the greatest form of technology transfer is human, and there is a very large number of Chinese engineers working for foreign firms who someday, perhaps soon, will repatriate to China. Stock options and other forms of economic and psychic rewards of participating in a new frontier within China, along with an improving quality of life and a lower cost of living, will act as magnets. At the same time, U.S. immigration policy effectively pushes Chinese citizens who received PhD’s at U.S. universities to return home, and home is becoming much more inviting.

China—particularly its capacity for innovation—remains a work in progress. Chinese officials have sought to foster innovation through active intervention as well as by allowing much private activity to take place unencumbered. Foreign firms’ participation in China’s economy is essential, as is external collaboration by indigenous firms. To date, the results have been mixed. The key question is whether China can continue with as much state-run intervention as is called for in its current bureaucratic plans and still create a market economy that enables substantial innovation to take place.

The only thing that is sure about China is that whatever is true today will be different in a year, and perhaps unrecognizable in five years. Anyone who traveled to China in the 1980s and saw the empty fields of Pudong across the Huangpu River from Shanghai could not have foreseen the abundance of industrial and scientific development that exists there today.

What one finds generally in China today is a strong will to succeed and an excitement about China’s growth potential and scientific and engineering possibilities. This excitement is somewhat reminiscent of the faith that settlers in the United States had in moving west in the first half of the 19th century. China is engaged in an exciting venture, one of the greatest human experiments of our time. Like the Green Revolution or the manned space flight program, the economic development of a vast continent and even vaster populace is an enormous challenge. Although the world has not seen the first dominant Chinese innovation—an iPod, wonder drug, or Windows operating system—there is no reason to think that such contributions will not be forthcoming, and perhaps sooner than skeptics think. China has assimilated contract high-end manufacturing and is moving into contract design. It would be a mistake to bet against China’s earning a respectable place in the forefront of innovation. The only question is when.

Where the Engineers Are

Although there is widespread concern in the United States about the growing technological capacity of India and China, the nation actually has little reliable information about the future engineering workforce in these countries. U.S. political leaders prescribe remedies such as increasing U.S. engineering graduation rates to match the self-proclaimed rates of emerging competitors. Many leaders attribute the increasing momentum in outsourcing by U.S. companies to shortages of skilled workers and to weaknesses in the nation’s education systems, without fully understanding why companies outsource. Many people within and beyond government also do not seem to look ahead and realize that what could be outsourced next is research and design, and that the United States stands to lose its ability to “invent” the next big technologies.

At the Pratt School of Engineering of Duke University, we have been studying the impact of globalization on the engineering profession. Among our efforts, we have sought to assess the comparative engineering education of the United States and its major new competitors, India and China; identify the sources of current U.S. global advantages; explore the factors driving the U.S. trend toward outsourcing; and learn what the United States can do to keep its economic edge. We believe that the data we have obtained, though not exhaustive, represent the best information available and can help U.S. policymakers, business leaders, and educators chart future actions.

Assessing undergraduate engineering

Various articles in the popular media, speeches by policy-makers, and reports to Congress have stated that the United States graduates roughly 70,000 undergraduate engineers annually, whereas China graduates 600,000 and India 350,000. Even the National Academies and the U.S. Department of Education have cited these numbers. Such statements often conclude that because China and India collectively graduate 12 times more engineers than does the United States, the United States is in trouble. The remedy that typically follows is for the United States to graduate more engineers. Indeed, the Democrats in the House of Representatives in November 2005 proposed an Innovation Agenda that called for graduating 100,000 more engineers and scientists annually.

RATHER THAN TRYING TO MATCH THEIR DEMOGRAPHIC NUMBERS AND COST ADVANTAGES, THE UNITED STATES NEEDS TO FORCE COMPETITORS TO MATCH ITS ABILITY TO INNOVATE.

But we suspected that this information might not, in fact, be totally accurate. In an analysis of salary and employment data, we did not find any indication of a shortage of engineers in the United States. Also, anecdotal evidence we obtained from executives doing business in India and China indicated that those were the countries facing shortages. To obtain better information about this issue, we embarked on a project to obtain comparable engineering graduation data from the United States, China, and India.

U.S. graduation statistics are readily available from the Department of Education’s National Center for Education Statistics. Extensive data on engineering education are also collected by the American Society for Engineering Education and the Engineering Workforce Commission. In order to collect similar data for China and India, we initially contacted more than 200 universities in China and 100 in India. Chinese universities readily provided aggregated data, but not detail. Some Indian universities shared comprehensive spreadsheets, but others claimed not to know how many engineering colleges were affiliated with their schools or lacked detail on graduation rates by major. In the case of China, we eventually obtained useful data from the Ministry of Education (MoE) and, most recently, from the China Education and Research Network (CERN). In India, we obtained data from the National Association of Software and Service Companies (NASSCOM) and the All India Council for Technical Education (AICTE).

What we learned was that no one was comparing apples to apples.

In China, the word “engineer” does not translate well into different dialects and has no standard definition. We were told that reports sent to the MoE from Chinese provinces did not count degrees in a consistent way. A motor mechanic or a technician could be considered an engineer, for example. Also, the numbers included all degrees related to information technology and to specialized fields such as shipbuilding. It seems that any bachelor’s degree with “engineering” in its title was included in the ministry’s statistics, regardless of the degree’s field or associated academic rigor. Ministry reports also included “short-cycle” degrees typically completed in two or three years, making them equivalent to associate degrees in the United States. Nearly half of China’s reported degrees fell into this category.

In India, data from NASSCOM were most useful. The group gathers information from diverse sources and then compares the data to validate projections and estimates. However, NASSCOM’s definition of engineer includes a wide variety of jobs in computer science and fields related to information technology, and no breakdown is available that precisely matches the U.S. definition of engineer, which generally requires at least four years of undergraduate education. Still, the group’s data provide the best comparison. Data from the three countries are presented in Table 1.

TABLE 1
Four-Year Bachelor’s Degrees in Engineering, Computer Science, and Information Technology Awarded from 1999 to 2004 in the United States, India, and China

  1999-2000 2000-2001 2001-2002 2002-2003 2003-2004 2004-2005
United States 108,750 114,241 121,263 134,406 137,437 133,854
India 82,107 109,376 129,000 139,000 170,000
China: MoE and CERN 282,610 361,270
China: MoE Yearbook 212,905 219,563 252,024 351,537 442,463 517,225

Note: Gray-highlighted data may be a substantial overestimate.

We believe that both sets of data from China presented in Table 1 are suspect, but they represent the best estimates available. The CERN numbers are likely to be closer to actual graduation rates but are available for only two years. The MoE numbers do, however, reflect a real trend—that graduation rates have increased dramatically in China.

To better understand the impact of the increases in graduation rates reported in China, we analyzed teacher/student ratios and numbers of colleges. As part of this effort, we visited several schools in China and met with several business executives and an official of the Communist Party.

The surge in engineering graduation rates can be traced to a series of top-down government policy changes that began in 1999. The goals of the changes were twofold: to transform science and engineering education from “elite education” to “mass education” by increasing enrollment, and to reduce engineering salaries. What we found is that even as enrollment in engineering programs has increased by more than 140% over the past five years, China has been decreasing its total number of technical schools and their associated teachers and staff. From 1999 to 2004, the number of technical schools fell from 4,098 to 2,884, and during that period the number of teachers and staff at these institutions fell by 24%. So graduation rate increases have been achieved by dramatically increasing class sizes.
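
A rough calculation from the figures above (assuming, purely for illustration, that the enrollment increase and the staffing decline were spread evenly across institutions) shows how sharply the implied teaching load rose over the period:

\[
\frac{\text{students per teacher, 2004}}{\text{students per teacher, 1999}} \approx \frac{1 + 1.40}{1 - 0.24} = \frac{2.40}{0.76} \approx 3.2,
\]

that is, roughly a tripling of the average number of students per teacher.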

We learned that only a few elite universities, such as Tsinghua and Fudan, had been allowed to lower enrollment rates after they noted serious quality problems as a result of increases they had made. The vast majority of Chinese universities complied with government directives to increase enrollment.

Our interviews with representatives of multinational and local technology companies revealed that they felt comfortable hiring graduates from only 10 to 15 universities across the country. The list of schools varied slightly from company to company, but all of the people we talked to agreed that the quality of engineering education dropped off drastically beyond those on the list. Demand for engineers from China’s top-tier universities is high, but employers complained that supply is limited.

At the same time, China’s National Development and Reform Commission reported in 2006 that 60% of that year’s university graduates would not be able to find work. In an effort to “fight” unemployment, some universities in China’s Anhui province are refusing to grant diplomas until potential graduates show proof of employment. The Chinese Ministry of Education announced on June 12, 2006, that it would begin to slow enrollment growth in higher education to keep it more in line with expected growth in the nation’s gross domestic product. Although Chinese graduation rates will continue to increase for a few years, while the last few high-enrollment classes make their way through the university system, we expect that the numbers of engineering graduates will eventually level off and may even decline.

In India, the growth in engineering education has been largely bottom-up and market-driven. There are a few regulatory bodies, such as the AICTE, that set limits on intake capacities, but the public education system is mired in politics and inefficiency. Current national debates focus on a demand for caste-based quotas for more than half of the available seats in public institutions.

Private enterprise has been India’s salvation. The nation has a growing number of private colleges and training institutions. Most of these face quality issues, but a few of them do provide good education. In 2004, India had 974 private engineering colleges, as compared with only 291 public and government institutions. New training centers have sprung up to address skills gaps that exist between companies’ needs and the capabilities of college graduates. NIIT, an international corporation that provides education and training in information technology in a number of countries, is the largest private training institute and runs more than 700 training centers across India. These centers serve corporations that need to train employees, as well as job seekers trying to break into the information technology industry. The company claims to serve as a “finishing school” for engineers.

Among the universities funded by the government, the Indian Institutes of Technology are best known and reputed to provide excellent education. But they graduate only a small percentage of India’s engineers. For example, during the 2002-2003 academic year, the institutes granted a total of 2,274 bachelor’s degrees, according to school officials. The quality of other universities varies greatly, but representatives of local companies and multinationals told us that they felt comfortable hiring the top graduates from most universities in India—unlike the situation in China. Even though the quality of graduates across all universities was inconsistent, corporate officials felt that with additional training, most graduates could become productive in a reasonable period.

Industry trends in outsourcing

Our research into engineering graduation rates raised many questions. We wondered, for example, about possible links between trends in education and the hiring practices and experiences of U.S. companies engaged in outsourcing. Were companies going offshore because of the superior education or skills of workers in China, India, or elsewhere, or because of a deficiency in U.S. workers? Would companies hire the large numbers of Chinese or Indian engineers graduating from two- or three-year technical programs? What were the relative strengths or weaknesses of engineering graduates when they joined multinationals? What skills would give U.S. graduates a greater advantage, and would offshoring continue even if they had these skills?

To answer some of these questions, we surveyed 58 U.S. corporations engaged in outsourcing engineering jobs. Our findings include:

Degree requirements. We were surprised that the majority of respondents said they did not mandate that job candidates possess a four-year engineering degree. Forty percent hired engineers with two- or three-year degrees, and an additional 17% said they would hire similar applicants if they had additional training or experience.

Engineering offshore. Forty-four percent of respondents said their company’s U.S. engineering jobs are more technical in nature than those sent abroad, 1% said their offshore engineering jobs are more technical in nature, and 33% said their jobs were equivalent. Thirty-seven percent said U.S. engineering employees are more productive, whereas 24% said U.S. and offshore engineering teams are equivalent in terms of productivity. Thirty-eight percent said their U.S. engineering employees produced higher-quality work, 1% said their company’s offshore engineering employees produced higher-quality work, and 40% said the groups were equal.

Engineering shortages in the United States. We asked several questions about company policies in hiring engineers to work in the United States. First, we asked about job acceptance rates, which are an indicator of the competition a company faces in recruiting staff. Acceptance rates of greater than 50% are generally considered good. Nearly one-half of the respondents had acceptance rates of 60% or higher. Twenty-one percent reported acceptance rates of 80 to 100%, and 26% of respondents reported 60 to 79% acceptance rates. Eighty percent said acceptance rates had stayed constant or increased over the past few years.

It is common in many industries to offer signing bonuses to encourage potential employees to accept a job offer. We found, however, that 88% of respondents to our survey did not offer signing bonuses to potential engineering employees or offered them to only a small percentage of their new hires. Another measure of skill supply is the amount of time it takes to fill a vacant position. Respondents to our survey reported that they were able to fill 80% of engineering jobs at their companies within four months. In other words, we found no indication of a shortage of engineers in the United States.

Reasons for going offshore. India and China are the top offshoring destinations, with Mexico in third place. The top reasons survey respondents cited for going offshore were salary and personnel savings, overhead cost savings, 24/7 continuous development cycles, access to new markets, and proximity to new markets.

Workforce issues. Given the graduation numbers we collected for China and India, we expected to hear that Indian corporations had difficulty hiring whereas Chinese companies did not. Surprisingly, 75% of respondents said India had an adequate to large supply of well-qualified entry-level engineers. Fifty-nine percent said the United States had an adequate supply, whereas 54% said this was the case in China.

Respondents said the disadvantages of hiring U.S. engineers were salary demands, limited supply of available people, and lack of industry experience. The disadvantages of hiring Chinese engineers included inadequate communication skills, visa restrictions, lack of proximity, inadequate experience, lack of loyalty, cultural differences, intellectual property concerns, and a limited “big-picture” mindset. The disadvantages of hiring Indian engineers included inadequate communication skills, lack of specific industry knowledge or domain experience, visa restrictions, lack of proximity, limited project management skills, high turnover rates, and cultural differences.

CHINA IS RACING AHEAD OF THE UNITED STATES AND INDIA IN ITS PRODUCTION OF ENGINEERING AND TECHNOLOGY PHD’S AND IN ITS ABILITY TO PERFORM BASIC RESEARCH.

Respondents said the advantages of hiring U.S. engineers were strong communication skills, an understanding of U.S. industry, superior business acumen, strong education or training, strong technical skills, proximity to work centers, lack of cultural issues, and a sense of creativity and desire to challenge the status quo. The key advantage of hiring Chinese entry-level engineers was cost savings, whereas a few respondents cited strong education or training and a willingness to work long hours. Similarly, cost savings were cited as a major advantage of hiring Indian entry-level engineers, whereas other advantages were technical knowledge, English language skills, strong education or training, ability to learn quickly, and a strong work ethic.

Future of engineering offshore. The vast majority of respondents said the trend will continue, and their companies plan to send an even wider variety of jobs offshore. Only 5% said their overseas operations would stabilize or contract.

To complement our survey, we also met with senior executives of a number of U.S. multinationals, including IBM, Microsoft, Oracle, and GE in India and China. All of them talked of major successes, expressed satisfaction with the performance of their groups, and foresaw significant expansion. They said their companies were responding to the big opportunities in these rapidly growing markets. They expected that R&D would be moved closer to these growth markets and that their units would increasingly be catering to worldwide needs.

Graduate and postgraduate engineering education

Our interest in globalization also led us to look at the need for and production of engineers in the United States, China, and India who have advanced engineering or technology degrees or who have pursued postgraduate training in these areas. We traveled to China and India to meet with business executives and university officials and to collect data from a variety of sources.

The business executives said that for higher-level jobs in R&D, they preferred to hire graduates with master’s or PhD degrees. They did not mandate a PhD for research positions, and they said they often found many capable master’s-level graduates. Chinese executives said it was getting easier to hire master’s and PhD graduates, but Indian executives said it was getting harder. In both countries, they reported seeing an increasing number of expatriates returning home and bringing extensive knowledge and experience with them.

The deans and other university officials we met, especially those at top-level institutions, talked about the increasing demand they were seeing for their graduates and the shortages they were experiencing in hiring PhD graduates for faculty positions. They reported frequently having to compete with private industry and universities abroad for such graduates.

In our analysis of actual graduation data, we found that U.S. numbers were readily available from the Department of Education’s National Center for Education Statistics, the American Society for Engineering Education, and the Engineering Workforce Commission. For China and India, the picture was much different, as government officials maintained that little information on such issues is available. Still, we have accumulated some data.

During our trip to China, we were able to examine reports issued by the MoE on the state of education throughout the country. These reports detail degree production across a variety of disciplines, including engineering. Unfortunately, they offer no explanation as to how their statistics are tabulated. We believe that the data are gathered in inconsistent ways from the various Chinese provinces and that there are problems with how degrees are classified and with their accreditation and quality. Although we consider the data suspect, they represent the best information available on Chinese education and allow valid inferences about trends.

Some MoE information is available online, but detailed data, including the production of engineering master’s and PhD graduates, are published only in the ministry’s Educational Statistical Yearbooks. These yearbooks generally are not permitted to leave China. In addition, the data are presented a year at a time and, in some cases, are available only in Chinese. In Beijing, with the help of local students, we combed government libraries and bookstores, searching for these publications. We ultimately were able to assemble 10 years’ worth of data on Chinese graduate engineering degrees.

To obtain graduate statistics for India, we traveled to Bangalore and New Delhi and visited NASSCOM, the AICTE, and the Ministry of Science and Technology and University Grants Commission. From the ministry we obtained useful information about PhD graduates. Obtaining data on master’s degree graduates proved much more difficult.

Although NASSCOM is considered to be an authority on India’s supply of engineering and technology talent, for master’s degree graduates it maintains data only on students who obtain a specialized degree in computer application. We obtained more data on master’s degree graduates from the AICTE, a government body that regulates university and college accreditation and determines how many students each institution may enroll in various disciplines. Each year, the body issues a report titled Growth in Technical Education that includes data on intake capacities. Current versions of the reports are readily available, but archives are difficult to obtain. The data in these reports are not published online, and paper versions of the reports rarely leave India. Our team met with a number of AICTE officials at a variety of venues to obtain physical copies of the reports covering 10 years. For various technical reasons, we could not use data from the reports directly, but we were able to adjust them statistically to obtain what we consider to be valid measurements. We validated our methodology with various AICTE representatives and academic deans.

An added complication with India’s master’s degree data is that students can pursue two different master’s degrees within engineering, but graduates are often counted together. The first is a traditional technical master’s degree in engineering, computer science, or information technology. These degrees, which require two years of study, are similar in structure to master’s degree offerings in the United States and China. The second is a master’s of computer application (MCA) degree, a three-year degree that offers a foundation in computer science to individuals who previously had received a bachelor’s degree in a different field. Most MCA recipients receive an education equivalent to a bachelor’s degree in computer science. For our analysis, we included statistics on MCA degrees but separated them analytically from more traditional master’s degrees.

Table 2 shows our comparative findings related to master’s degrees, and Table 3 shows our findings related to PhD degrees.

TABLE 2
Ten-Year Trend in Engineering and Technology Master’s Degrees in the United States, China, and India (Actual and Estimated Data)

Note: 2001-02 Chinese data (hashed line) from the Ministry of Education represent a significant outlier and thus were removed from our analysis.

TABLE 3
Ten-Year Trend in Engineering and Technology PhD Degrees in the United States, China, and India

Note: 2001-02 Chinese data (hashed line) from the Ministry of Education represent a significant outlier and were removed from our analysis.

In the United States, close to 60% of engineering PhD degrees awarded annually are currently earned by foreign nationals, according to data from the American Society for Engineering Education. Indian and Chinese students are the dominant foreign student groups. Data for 2005 that we obtained from the Chinese government show that 30% of all Chinese students studying abroad returned home after their education, and various sources report that this number is steadily increasing. Our interviews with business executives in India and China confirmed this trend.

The bottom line is that China is racing ahead of the United States and India in its production of engineering and technology PhD’s and in its ability to perform basic research. India is in particularly bad shape, as it does not appear to be producing the numbers of PhD’s needed even to staff its growing universities.

Immigrants provide entrepreneurial advantages

Although our research has revealed some issues of concern for the United States, we also want to focus on what we consider to be the country’s advantages in today’s increasingly globalized economy. We believe that these advantages include the United States’ open and inclusive society and its ability to attract the world’s best and brightest. Therefore, we have studied the economic and intellectual contribution of students who came to the United States to major in engineering and technology and ended up staying, as well as immigrants who gained entry based on their skills.

Economic contributions. In 1999, AnnaLee Saxenian of the University of California, Berkeley, published a study showing that foreign-born scientists and engineers were generating new jobs and wealth for the California economy. But she focused on Silicon Valley, and this was before the dotcom bust. To quantify the economic contribution of skilled immigrants, we set out to update her research and look at the entire nation. She assisted us with our research.

We examined engineering and technology companies founded from 1995 to 2005. Our objective was to determine whether their chief executive officer or chief technologist was a first-generation immigrant and, if so, the country of his or her origin. We made telephone contacts with 2,054 companies. Overall, we found that the trend that Saxenian documented in Silicon Valley had become a nationwide phenomenon:

  • In 25.3% of the companies, at least one key founder was foreign-born. In the semiconductor industry, the percentage was 35.2%.
  • Nationwide, these immigrant-founded companies produced $52 billion in sales and employed 450,000 workers in 2005.
  • Almost 80% of immigrant-founded companies were within two industry fields: software and innovation/manufacturing-related services. Immigrants were least likely to start companies in the defense/aerospace and environmental industries.
  • Indians have founded more engineering and technology companies during the past decade than immigrants from Britain, China, Taiwan, and Japan combined. Of all immigrant-founded companies, 26% have Indian founders.
  • The mix of immigrants varies by state. For example, Indians dominate in New Jersey, with 47% of all immigrant-founded startups. Hispanics are the dominant group in Florida, and Israelis are the largest founding group in Massachusetts.

Intellectual contribution. To quantify intellectual contribution, we analyzed patent applications by U.S. residents in the World Intellectual Property Organization patent databases. Foreign nationals residing in the United States were named as inventors or co-inventors in 24.2% of the patent applications filed from the United States in 2006, up from 7.3% in 1998. This number does not include foreign nationals who became citizens before filing a patent. The Chinese were the largest group, followed by Indians, Canadians, and British. Immigrant filers contributed more theoretical, computational, and practical patents than patents in mechanical, structural, or traditional engineering.

Overall, the results show that immigrants are increasingly fueling the growth of U.S. engineering and technology businesses. Of these immigrant groups, Indians are leading the charge in starting new businesses, and the Chinese create the most intellectual property.

We have been researching this issue further. Preliminary results show that it is the education level of the individuals who make it to the United States that differentiates them. The vast majority of immigrant founders have master’s and PhD degrees in math- and science-related fields. The majority of these immigrant entrepreneurs entered the United States to study and stayed after graduation. We expect to publish detailed findings this summer.

Informing national decisions

The findings of our studies can help inform discussions now under way on how best to strengthen the nation’s competitiveness. The solutions that are most commonly prescribed are to improve education from kindergarten through high school and especially to add a greater focus on math and science; increase the number of engineers that U.S. colleges and universities graduate; increase investments in basic research; and expand the number of visas (called H1B’s) for skilled immigrants.

Improving education is critical. As we have seen from the success of skilled immigrants, more education in math and science leads to greater innovation and economic growth. There is little doubt that there are problems with K-12 education and that U.S. schools do not teach children enough math and science. However, the degradation in math and science education happened over a generation. Even if the nation did everything that is needed, it would probably take 10 to 15 years before major benefits became apparent. Given the pace at which globalization is happening, by that time the United States would have lost its global competitive edge. The nation cannot wait for education to set matters right.

Even though better-educated students will be better suited to take their places in the nation’s increasingly technology-driven economy, education is not the sole answer. Our research shows that companies are not moving abroad because of a deficiency in U.S. education or the quality of U.S. workers. Rather, they are doing what gives them economic and competitive advantage. It is cheaper for them to move certain engineering jobs overseas and to locate their R&D operations closer to growth markets. There are serious deficiencies in engineering graduates from Indian and Chinese schools. Yet the trend is building momentum despite these weaknesses. The government and industry need to pay attention to this issue and work to identify ways to strengthen U.S. industry while also taking advantage of the benefits offered by globalization.

The calls to graduate more engineers do not focus on any field of engineering or identify any specific need. Graduating more engineers just because India and China graduate more than the United States does is likely to create unemployment and erode engineering salaries. One of the biggest challenges for the engineering profession today is that engineers’ salaries are not competitive with those of other highly trained professionals: It makes more financial sense for a top engineering student to become an investment banker than an engineer. This cannot be fixed directly by the government. But one interesting possibility can be seen in China, where researchers who publish their work in international journals are accorded status as national heroes. U.S. society could certainly offer engineers more respect and recognition.

A key problem is that the United States lacks enough native students completing master’s and PhD degrees. The nation cannot continue to depend on India and China to supply such graduates. As their economies improve, it will be increasingly lucrative for students to return home. Perhaps the United States needs to learn from India and China, which offer deep subsidies for their master’s and PhD programs. It is not clear whether such higher education is cost-justified for U.S. students. Given the exorbitant fees they must pay to complete a master’s and the long period it takes to complete a PhD, the economics may not always make sense.

It is clear that skilled immigrants bring a lot to the United States: They contribute to the economy, create jobs, and lead innovation. H1B’s are temporary visas and come with many restrictions. If the nation truly needs workers with special skills, it should make them welcome by providing them with permanent resident status. Temporary workers cannot start businesses, and the nation currently is not giving them the opportunity to integrate into society and help the United States compete globally. We must also make it easier for foreign students to stay after they graduate.

Finally, the United States does need to increase—significantly—its investment in research. The nation needs Sputnik-like programs to solve a variety of critical problems: developing alternative fuels, reducing global warming, eliminating hunger, and treating and preventing disease. Engineers, scientists, mathematicians, and their associated colleagues have vital roles to play in such efforts. The nation—government, business, education, and society—needs to develop the road maps, create the excitement, and make it really cool and rewarding to become a scientist or engineer.

India’s Growth Path: Steady but not Straight

By almost any reckoning, the Indian economy is booming. This year, Indian officials revised their estimated economic growth for 2006 from 8% to 9.3%. This growth has been sustained over the past several years, effectively doubling India’s income every eight to nine years.
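
The doubling claim follows from ordinary compound-growth arithmetic: at a sustained growth rate g, income doubles in ln 2 / ln(1 + g) years, so

\[
\frac{\ln 2}{\ln 1.08} \approx 9.0 \ \text{years} \qquad \text{and} \qquad \frac{\ln 2}{\ln 1.09} \approx 8.0 \ \text{years}.
\]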

Since 1991, the year India removed some of the most crippling controls ever imposed on business activities in a non-communist country, it has not only been attracting large amounts of foreign investment but has also begun luring back many skilled Indians who had chosen to live overseas. It has also lifted millions of Indians out of poverty.

Such a scenario seemed impossible to conceive in 1991, when the Indian economy was on the ropes, as its foreign reserves plummeted to a level that would cover only three weeks of imports and the main market for its exports (the Soviet Union and, by extension, the eastern bloc) unraveled. Forced to make structural adjustments to its economy, India lifted many restrictions on economic activity, as a result of which macroeconomic indicators have improved vastly (Table 1). Foreign investors, once shy, have returned.

The gross domestic product grew at a compounded rate of 9.63% annually during this period, and capital inflows increased dramatically. This change is remarkable because since India’s independence in 1947, it had pursued semi-autarkic policies of self-sufficiency and self-reliance, placing hurdles and barriers in the path of foreign and domestic businesses. Foreign investors shunned India because they were not welcome there. Restrictions kept multinational firms out of many areas of economic activity, and once in, companies were prevented from increasing their investments in existing operations. In 1978, the government asked certain multinationals to dilute their equity to 40% of the floating stock or to divest. IBM and Coca-Cola chose to leave India rather than comply. Six years later, a massive explosion at a Union Carbide chemical plant in Bhopal, which killed more than 2,000 people within hours of the gas leak, brought India’s relations with foreign investors to their nadir.

TABLE 1
Capital Flows in India

                    U.S. $ (billions)                      Percent of GDP
                    1992-93    2004-05    % Growth         1992-93    2004-05
Net capital flows 5.16 31.03 16.1 2.40 4.79
Official flows 1.85 1.51 -1.9 0.88 0.23
Debt 2.38 12.71 15.0 1.11 1.96
FDI 0.32 5.59 27.1 0.15 0.86
Portfolio equity 0.24 8.91 35.1 0.11 1.37
Miscellaneous -0.98 3.90 -0.45 0.60
Current account 59.93 313.41 14.8 27.86 48.34
Capital account 36.67 158.30 13.0 17.05 24.42
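
The “% Growth” column appears to be a compound annual rate over the 12 years separating 1992-93 from 2004-05; as an illustrative check (a reconstruction, not a figure reported in the source):

\[ g = \left(\frac{X_{2004\text{-}05}}{X_{1992\text{-}93}}\right)^{1/12} - 1, \qquad \left(\frac{31.03}{5.16}\right)^{1/12} - 1 \approx 16.1\%, \]

which matches the entry shown for net capital flows.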

The biggest change in 1991 was that India stopped micromanaging its economy, instituting policies that included:

  • Allowing foreign firms to own a majority stake in subsidiaries;
  • Liberalizing its trading regime by reducing tariffs, particularly on capital goods;
  • Making it easier for businesses to take money in and out of India;
  • Lifting restrictions that had prevented Indian companies from raising capital, closing business units that were no longer profitable, and expanding operations without seeking approval from New Delhi; and,
  • Complying with the World Trade Organization (WTO), after considerable internal debate and opposition, by strengthening its patent laws.

This last measure is a brave one, because there is a considerable body of Indian opinion, ranging from anti-globalization activists to open source enthusiasts, aligned against strengthening the patent regime. India has had an ambivalent relationship with the idea of property and private ownership: Squatting on somebody else’s property is not unusual; Indians copy processes and products, not always successfully; they resent multinationals exercising intellectual property rights; they protest when foreign firms establish such rights over products or processes Indians consider to be in public domain; and India has one of the world’s most enthusiastic communities of software developers who prefer the open-source model.

After passing legislation confirming India’s compliance with the WTO intellectual property rights regime, government officials now hope that many more companies will join the 150 multinationals, including GE and Microsoft, that have set up research and development (R&D) labs in India to tap the country’s talent pool of engineers. But India’s legislative will is already under challenge. The Swiss pharmaceutical company Novartis, which makes Glivec, an anti-leukemia drug targeting a particular form of cancer, has tested the new law by seeking to overturn a local ruling that would have prevented Novartis from extending its patent over the drug at a time when Indian generic drug makers want to enter the market. The new Indian law allows patents to be granted on new versions of older medicines, provided the company can show that the new version is a significant improvement on the original. Health activists say millions of lives are at stake; Novartis says only about 7,500 people in India are affected by this form of cancer, and that 90% of them receive the medicine free as part of the firm’s philanthropic activities. The case, which is being heard in an Indian court, will be a measure of India’s determination to continue opening its economy.

Because of its well-founded concern with providing millions of poor people with affordable medicine, India has for years maintained a drug price-control order, which restricted the prices companies could charge for pharmaceuticals. In addition, its patent policies prevented multinationals from patenting products; they could patent only processes. Indian generic drug manufacturers could work around these process patents, which meant an Indian company could manufacture a copycat drug, with virtually no development costs, simply by finding an alternative process for producing it. That is changing now, and with interesting consequences.

Today, most sectors of the Indian economy are open for foreign investment (Table 2). As a result, foreign investment has increased across the board, with the electrical products, electronics products, and telecommunication sectors being the main beneficiaries of the new regime (Table 3). To be sure, these figures appear small compared with the amount of foreign investment India’s immediate rival China attracts yearly. But domestic capital formation is high in India, making India less reliant on foreign investment than is China, and Indian governments have had to contend with a fractious opposition, which has vocally opposed liberalization.

Indeed, the past 15 years haven’t been politically easy. The kind of stories for which India routinely attracts headlines—terrorist attacks, religious strife, caste-based violence, natural disasters, the nuclear standoff with neighboring Pakistan, violence by movements seeking greater autonomy, if not outright independence—have continued to appear with unfailing regularity. On top of that, India has had five parliamentary elections in this period, yielding five different prime ministers leading outwardly unstable coalitions. And yet, the economy has continued to grow, as if on auto-pilot, ignoring these distractions.

Annual economic growth of 8% means that India adds the equivalent of the national income of a medium-sized European economy to itself every year. In theory, it means giving every Indian $200 a year in a country where one in four Indians continues to earn less than a dollar a day. This has raised millions out of poverty, spread stock ownership among the middle class, and made billionaires out of some Indians. There are now more than two dozen Indians on the list of the world’s wealthiest individuals published by Forbes magazine, and you find them increasingly at the World Economic Forum at Davos.
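
The $200 figure can be reproduced only under assumptions the article does not state. As a rough, purchasing-power-based sketch, taking India’s mid-2000s GDP to be on the order of $2.7 trillion in PPP terms and its population to be about 1.1 billion (both assumed here for illustration):

\[ 0.08 \times \$2.7\ \text{trillion} \approx \$216\ \text{billion}, \qquad \frac{\$216\ \text{billion}}{1.1\ \text{billion people}} \approx \$200\ \text{per person per year}. \]

In nominal dollars the increment would be far smaller, so the figure is best read as an order-of-magnitude illustration rather than a precise measure of income growth.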

Indeed, for the past several years, India has been the talk of the town at Davos. Whether among delegates or among speakers on various panels, Indians are ubiquitous. India has used this visibility to distinguish itself from China by emphasizing its pluralist democracy as much as its high-growth potential. At the Zurich airport, where most Davos delegates arrive, India has posters advertising itself as “the world’s fastest-growing free market democracy.” This growth momentum is remarkable because for a long time the economy grew at what the economist Raj Krishna derisively called “the Hindu rate of growth” of some 3% per year.

The un-China

In many ways India is emerging as a major counterpoint to China. To be sure, China is far ahead of India in having built much superior infrastructure. It attracts many times more foreign investment dollars and dominates global trade. China is extending its railway lines to remote parts of its western region, and today there are more skyscrapers in Shanghai than in Manhattan. But while China is building from scratch, India is fixing and tinkering with its creaking infrastructure. China had few railway lines to start with when the Communists assumed power in 1949; India had the world’s largest rail network at Independence in 1947. In keeping with its revolutionary ethos, China eliminated its entrepreneurs, sending them into exile or labor camps; socialist India permitted them to operate, keeping them hidden from public view as though they were the family’s black sheep.

TABLE 2
Foreign Investment Limit in Indian Companies by Sector

Permitted equity (%)   Sector
0            Retail trading, real estate below 25 acres, atomic energy, lotteries, gambling, agriculture and plantations
20-49        Broadcasting
26           Print media and news channels, defense, insurance, petroleum refining
49           Airlines, telecom, investment companies in infrastructure
51           Oil and gas pipelines, trading
51-100       Petroleum exploration
74           Petroleum distribution, mining for diamonds, precious stones, coal, nuclear fuel, telecom, satellites, internet services, banking, advertising
74-100       Airports
100          All other areas

TABLE 3
Sectoral Composition of Foreign Direct Investment (August 1991 – November 2004)

Sector                                   U.S. $ (billions)   % of total
Electrical, electronics, and software          3.8              15.1
Transportation                                 2.9              11.4
Telecommunications                             2.7              10.5
Oil and electricity                            2.5               9.8
Services                                       2.2               8.2
Chemicals                                      1.7               6.0
Food processing                                1.1               4.2
Metals                                         0.5               1.9
Others                                        15.0              32.9
Total                                         32.3             100.0

India can only tinker with its infrastructure and cannot steamroll reforms, because it cannot ignore people at home who oppose its policies. As we shall see, this opposition applies to how India deals with its technology as well as to how it handles its political and religious conflicts. China has the luxury of not dealing with public opinion. Mountains are high and emperors are far away when it comes to making money, but the unitary state asserts itself if anyone disrupts what Beijing considers harmony. In India, adversity in the countryside can attract media attention and broad public outrage, which can be powerful enough to topple a ruling party.

China is overwhelmingly dependent on foreign capital (although less so as its savings mountain rises higher), whereas in India, the domestic private sector, which has always existed, reinvests much of its retained earnings, and its domestic stock markets are relatively efficient at intermediating between savings and investments. As a result, the annual flow of foreign direct investment (FDI) is not a good indicator of the Indian economy’s attractiveness: FDI inflow into India in 2005 was $6.6 billion; in China, the figure was $72.6 billion. But nobody thinks that China is therefore 11 times more attractive than India. In fact, the investment bank Goldman Sachs now says that in the long run India will grow faster than China. India, which ranks 50th in world competitiveness indices (China is 49th), has moved up five notches in recent years; China has fallen three notches, indicating a narrowing gap. The FDI Confidence Index, prepared by the consulting firm A.T. Kearney, which tracks confidence among global investors, ranks India as the world’s second-most desired destination for FDI after China, replacing the United States.

Vision 2020 Technology Priorities

  • Advanced sensors (mechanical, chemical, magnetic, optic, and bio sensors)
  • Agro-food processing (cereals, milk, fruit and vegetables)
  • Chemical process industries (oil and gas, polymers, heavy chemicals, basic organic chemicals, fertilizers, pesticides, growth regulators, drugs and pharmaceuticals, leather chemicals, perfumes, flavors, coal)
  • Civil aviation (airline operations, manufacture and maintenance, pilot training, airports)
  • Food and agriculture (resource management, crop improvement, biodiversity, crop diversification, animal sciences)
  • Electric power (generation, transmission and distribution, instrumentation and switchgear)
  • Electronics and communications (components, photonics, optoelectronics, computers, telematics, fiber systems, networking)
  • Engineering industries (foundry, forging, textile machinery)
  • Healthcare (infectious diseases, gastrointestinal diseases, genetic and degenerative diseases, diabetes, cardiovascular diseases, mental disorders, injuries, eye disorders, renal diseases, hypertension)
  • Materials and processing (mining, extraction, metals, alloys, composite and nuclear materials, biomaterials, building materials, semiconductors)
  • Life sciences and biotechnology (healthcare, environment, agriculture)
  • Road transportation (design and materials, rural roads, machinery, metro systems)
  • Services (financial services, travel and tourism, intellectual property rights)
  • Strategic industries (aircraft, weather survey, radar, space communications, remote sensing, robotics)
  • Telecommunications (networks, switching)
  • Waterways (developing smart waterways)

Sectors that dominate foreign investment in India (software, electrical products, electronics products, telecommunications, chemicals, pharmaceuticals, and infrastructure) require highly skilled professional staff, and India’s sophistication in these sectors surprises many outside the country. How could a country with more than 300 million illiterate people also have the kind of scientific human resources that lead some of the world’s largest corporations to base their R&D labs in India?

Today, GE and Microsoft are among many multinationals that have set up such units in India, tapping the skills of Indian engineers and scientists and patenting discoveries for commercial applications. According to the U.S. Patent and Trademark Office, Indian entities registered 341 U.S. patents in 2003 and had 1,164 pending applications, compared to a mere 54 applications ten years earlier. At home, there were 23,000 applications pending for Indian patents in 2005, up from 17,000 in 2004. Indian authorship of scientific papers also rose from 12,500 articles in the ISI Thomson database in 1999 to 15,600 in 2003.

Recapturing past glory

Raghunatha Mashelkar, director-general of the Council of Scientific and Industrial Research (CSIR), India’s nodal science research institute, says that the 21st century will belong to India: “India will become a unique intellectual and economic power to reckon with, recapturing all its glory, which it had in the millennia gone by.” The glory he is referring to is that of the Indus Valley civilization (2500 BCE), which had developed a sewage system then unrivaled in the world. The mathematical concept of zero was invented in India in the first millennium, and many concepts in the decimal system and geometry were explored by Indian mathematicians. The country’s ancient medical science, ayurveda, is still practiced in India, and some accounts say that in 200 BCE Indians were perhaps the first in the world to smelt iron to make steel.

Capturing the rational impulse of science was a priority for India’s first prime minister, Jawaharlal Nehru, who governed India from 1947 to 1964. He told his audiences he wanted India to cultivate “a scientific temper.” At independence, Nehru said: “Science alone … can solve the problems of hunger and poverty, of insanitation and illiteracy, of superstition and deadening customs.” For him, science would pave the way toward self-sufficiency, which was a cornerstone of his concept of national security. An admirer of the Soviet-style planned economy (India continues to produce five-year plans), Nehru saw great promise in a state-led industrial effort and invested significant resources to build a massive public sector. He called those steel plants and power plants “temples of modern India.”

Nehru’s thoughts continue to reverberate in speeches Indian leaders make. In the science policy the government issued in 2003, it broadened the aims of science, recognizing its “central role in raising the quality of life of the people … particularly the disadvantaged sections of the society.” Nehru’s grandson Rajiv Gandhi, who was prime minister between 1984 and 1989, instituted technology missions to identify and champion specific technologies and to bring the inventions in the labs to market, albeit guided by the state.

The current Indian president (a largely ceremonial post) is Abdul Kalam, a missile scientist who headed India’s elite Defense Research and Development Organization. In his speeches, he has regularly stressed scientific thinking, promoting technological innovation to harness the power of science for broader social and economic goals.

The major initiative in this regard is the Technology Information, Forecasting and Assessment Council (TIFAC), set up as an autonomous organization under the Department of Science and Technology and chaired by R. Chidambaram, a former chairman of India’s Atomic Energy Commission. The council observes global technological trends and formulates preferred technology options for India. Its objectives include:

  • Undertaking technology assessment and forecasting studies in select areas of the economy;
  • Observing global trends and formulating options for India;
  • Promoting key technologies; and,
  • Providing information on technologies.

It has produced feasibility surveys, facilitated patent registration, and prepared two important documents: “Technology Vision 2020 for India” and “A Comprehensive Picture of Science and Technology in India.” The Vision 2020 project provided detailed studies on infrastructure, advanced technologies, and technologies with socioeconomic implications, and it identified key areas on which to focus (see box).

Planning or dreaming?

Listing goals is relatively easy. Does India have the skilled people to achieve them? Does it provide sufficient incentives for R&D? Which sectors are promising? How seriously should the world take the Indian challenge?

Although the quantity and quality of India’s scientific professionals are a matter of some dispute, there is no denying that the Indian Institutes of Technology (IITs) produce what a recent Duke University study calls “dynamic” (as against “transactional”) engineers. Indeed, the saga of the IITs encompasses much of what is forward looking and backward looking in India.

The first IIT was inaugurated by Nehru, who called the IIT “a fine monument of India, representing India’s urges, India’s future in the making.” There are now seven IITs in India, and many states are clamoring for more. But there are concerns about maintaining quality, and a shortage of faculty members is forcing some IITs to recruit from overseas. Retaining faculty is also hard when better-paying jobs are available in the private sector.

The IITs’ extremely harsh selection regime ensures that only the brightest make it. The seven IITs accept only about 4,000 students a year, about one of every 100 who apply. Microsoft chairman Bill Gates calls the IIT “an incredible institution that has really changed the world and has the potential to do even more in the years ahead.”

The IITs have undoubtedly been good for India. They have burnished India’s reputation and produced many talented graduates who have gone on to play a major role in a wide range of businesses. But at home they are criticized as elitist and an impediment to social goals. Some have pointed out that a large number of IIT graduates leave India and that many never return even though the state has subsidized their education significantly. Kirsten Bound, who recently wrote “India: The Uneven Innovator,” a study of India’s science and technology prowess for Demos, the British think tank, as part of a project mapping the new geography of science, described an IIT as “a departure lounge for the global knowledge economy.” Although it is true that some 70% of IIT graduates left India for much of the past 50 years, in recent years the figure has dropped to 30%, according to CSIR’s Mashelkar. Other education activists have complained that instead of lavishing its resources on the IITs, the state should invest more in primary education to tackle the problem of mass illiteracy. Some argue that the IITs should replace their strictly meritocratic admission system with a quota system that guarantees that all social groups are adequately represented.

Although IITs have produced CEOs of leading western multinationals, they are not known for original research. And unlike western universities, they are not known to be incubators of entrepreneurial ideas that spawn new businesses.

IIT faculty and alumni are aware of that and encourage greater innovation. A strong foundation of research is growing in India, but it is not coming from the universities.

The environment for R&D has been evolving slowly but steadily in India. After relaxing curbs on foreign investment in 1991, India agreed to comply with the WTO’s Trade-Related Aspects of Intellectual Property Rights (TRIPS) rules on intellectual property protection. As a developing country, India was allowed a five-year period of adjustment (1995-2000) and a further five-year extension for pharmaceuticals and agricultural chemicals. Meanwhile, it amended its copyright law to be consistent with the Berne Convention on copyrights and became a member of the World Intellectual Property Organization.

Multinationals recognized these changes and began establishing R&D operations. Texas Instruments set up the first real western R&D center in 1985, and today about 150 companies have R&D centers in India, where they have invested more than $1 billion. A survey by PricewaterhouseCoopers found that 35% of multinationals said they were likely to set up R&D centers in India, compared with 22% in China and only 12% in Russia.

The Demos study found that Texas Instruments, Oracle, and Adobe have developed complete products in India. Microsoft has 700 researchers on its rolls in Bangalore, making it the company’s third-largest lab outside the United States. GE employs 2,300 people in Bangalore, making that facility the company’s biggest R&D center in the world. Many of these employees are Indians returning from abroad. Jack Welch, then CEO of GE, who decided to set up the R&D unit in India, said at the opening of the center: “India is a developing country, but it is a developed country as far as its intellectual infrastructure is concerned. We get the highest intellectual capital per dollar here.” Microsoft’s experience, too, has been positive. Its India lab works on digital geographics, hardware, communication systems, multilingual systems, rigorous software engineering, and cryptography. Its list of advisors comprises star faculty from the IITs.

The trouble for India is that this successful commercial R&D culture has not taken root in India’s domestic industries. As India was setting its course for the future, it found itself in a peculiar position. On one hand, it had a political leadership committed to promoting science and technology, and it invested in elite institutions that produced thousands of graduates with cutting-edge skills. On the other, those graduates and institutes simply could not make breakthroughs or develop technologies in which markets had an interest. One formidable barrier was the governing class’s deep distrust of the capitalist model, which resulted in punitive tax laws and other policy measures designed to prevent individuals from earning excessive financial gains from their innovations. Many of those with the talent and drive to innovate decided that India was not the place for them.

The result was that government came to dominate Indian R&D. The state spends $4.5 billion a year directly, and when one adds the amount spent by state-owned companies, the state’s share amounts to an overwhelming 85% of total R&D expenditures in the country. Private firms have argued that they did not invest in R&D in pre-liberalization India because they were prevented from reaping the economic rewards of innovation. Indeed, a 2003 survey of the annual reports of 8,334 companies listed on Indian stock exchanges, conducted by the Administrative Staff College of India, found that 86% of Indian companies spent nothing on R&D. Even InfoSys, arguably one of the leading Indian software firms, spends only 1% of its annual revenue on R&D, and R&D spending is low among all the IT outsourcing firms.

This may seem surprising, given that the Indian software industry has shown enormous growth in recent years and now accounts for nearly 5% of India’s GDP. The number of Indians working in the sector has grown phenomenally too, from 284,000 in 1999 to 1.3 million last year. By next year, IT exports may account for one-third of India’s total exports. Yet despite this growth, thoughtful observers have pointed out that India’s transition from maintaining source code to innovative work has been slow. There are few Indian shrink-wrapped software products and comparatively little intellectual property.

The major reason that India’s science infrastructure is disconnected from markets is that the policy environment removes some of the incentives for the private sector to invest in innovation. The critical link between the lab, the venture capitalist, and the marketplace has not been forged. Instead, India’s government-run research system focuses primarily on basic science with little near-term commercial value or develops products that do not meet market needs. According to T.S. Gopi Rethinaraj, a nuclear engineer who teaches science, technology, and public policy at the National University of Singapore: “One common feature is the general disconnect from commercially relevant and competent products and services.”

Indications that India is beginning to change are emerging in the pharmaceutical industry. India accounts for one-sixth of the global market for pharmaceuticals, and Indian companies are achieving significant success with production of generic drugs for export. The strengthening of intellectual property protection is creating an environment in which the companies believe that they will be able to reap the benefits of their research investments. Dr. Reddy’s Labs, a leading domestic drug company, spends 15% of its revenues on R&D, and other firms are approaching that level. Total R&D spending by Indian pharmaceutical companies rose 300% in the first five years of this decade, and Indian companies have begun making more complex formulations. Indian government labs and private companies have become the leading holders of Indian patents. Industry leaders believe that they could have a considerable cost advantage in the R&D stage of drug development if they can create a network of labs to work together.

Although there is considerable excitement in India with so many western firms setting up R&D units, nationalist-minded Indians worry that the country will not benefit. Some nationalists argue that talented Indians are lured from government labs by the higher salaries at multinationals and that once plugged into the global economy, they lose interest in Indian challenges and issues. But supporters say it is good that multinationals are coming to India, because it keeps Indian talent at home. Many Indians now see each new investment as a vote of confidence in India, and political opposition, though still loud, has lost momentum. Businesses now know that India is a good place to do R&D, and many perceive that it will become even more so in the future.

Still, the public policy debate continues. The IITs have begun offering their students help in understanding the patent process, but counterfeiting remains rife. By one estimate, three-quarters of the software used in India is pirated. Criticism of the strengthening of the Indian patent regime has been most vociferous from social activists, who fear that India’s millions of poor may now find it impossible to access lifesaving medicines. They worry that Indian companies that now supply many inexpensive generic drugs to nonprofit groups for use in Africa will abandon these products. One critic, Sujatha Byravan, executive director at the Council for Responsible Genetics in Cambridge, Massachusetts, writes: “Pressure from international and domestic pharma companies has produced legislation that will create a situation where sick people will end up paying much more than they now do for desperately-needed medicines. Ignoring the ramifications of the patent bill is disingenuous.” More pointedly, she says that the Indian scientific community has been serving the needs of only the Indian middle class, not the hundreds of millions who live in abject poverty. CSIR’s Mashelkar counters that Indian laws permit compulsory licensing of drugs deemed essential, make it very difficult for companies to win extensions of their patents, and allow companies to manufacture drugs for poor countries not capable of producing their own.

The debate will continue, but it appears that over time India will become a more hospitable place to do R&D and that the R&D culture will eventually spread from the labs of the multinationals to India’s domestic companies.

The long and winding road

In spite of all the hopeful signs, many major problems persist in India: The country abounds with avoidable diseases; its trains remain overcrowded; its buses fall over cliffs; its bridges collapse; its water taps run dry (and much of what comes from the tap is not safe); and the electrical system often breaks down, in part because of the widespread theft of power by the rich and poor alike. And there is the problem of the 300 million-plus people who cannot read or write. Economist Lester Thurow has pointed out that mass illiteracy is the yoke that will prevent India from rising higher.

There has been progress, but the work is never done. In 1980, two-thirds of Indians had no formal schooling, a figure that dropped to two-fifths by 2000. But that leaves a very large number of people who cannot read or write. Still, a youthful population engenders hope. President Kalam said recently: “India, with its billion people, 30% of whom are in the youthful age group, is a veritable ocean of talent, much of which may be latent. Imagine the situation when the entire sea of talent is allowed to manifest itself.”

India’s dreams remain elusive, but they are worthy dreams. The country’s emergence as a possible source of technological innovation highlights its ambiguous relationship with influences from abroad. In the Indian political culture major transformations can happen only incrementally, and a government in faraway Delhi cannot afford to ignore the demands from the hinterland.

There is genuine commitment to the Gandhian idea of self-reliance and a preference for home-grown technology and products. But there is also keen interest in modernity. In India, the land of synthesis, tradition and modernity need not be in opposition. It was modern India’s founding father, Mohandas Gandhi, who said: “I do not want my house to be walled in on all sides and my windows to be stuffed. I want the cultures of all lands to be blown about my house as freely as possible. But I refuse to be blown off my feet by any.”

Emerging Economies Coming on Strong

The Council on Competitiveness released its flagship publication, Competitiveness Index: Where America Stands, last November. While the United States remains the global economic leader, the Index makes the case that its position is not guaranteed. The data and our analysis clearly point to a changing global environment, confirming the need to revisit how the United States will sustain its past position of economic strength and dominance under these new circumstances. The growth of emerging economies will reduce the U.S. share of the global economy, but it is unclear exactly how this will affect U.S. prosperity.

THE GLOBAL ECONOMIC LANDSCAPE IS CHANGING DRAMATICALLY, BUT THE UNITED STATES CAN CONTINUE TO PROSPER AS LONG AS IT CAPITALIZES ON ITS STRENGTH IN DIVERSITY AND CREATIVITY.

Two issues in particular must factor into the calculus as we proceed:

Knowledge is becoming an increasingly important driver of value in the global economy. A larger share of trade is also captured by services, and a larger share of assets and investments is intangible. This shift to services, high-value manufacturing, and intangibles creates more opportunities for the United States with its traditionally strong position in knowledge-driven activities and an already high stock of tangible as well as intangible assets.

Multinational companies are evolving into complex global enterprises, spreading their activities across value chains over different locations to take advantage of regional conditions and competencies. This process creates more competition, as regions now must prove their competitiveness in order to attract and retain companies and investments. For the United States, it demands a fundamental rethinking of how states and localities strategize and execute economic development activities.

The United States will almost inevitably be a smaller part of a growing world economy due to the structural changes under way across the globe. However, there is no reason why the United States cannot retain its position as the most productive and prosperous country in the world.

The coming economy will favor nations that reach globally for markets and that embrace different cultures and absorb their diversity of ideas into the innovation process. It will be fueled by the fusion of different technical and creative fields, and it will thrive on scholarship, creativity, artistry, and leading-edge thinking. These concepts are U.S. strengths. These concepts are the nation’s competitive advantage. These concepts are uniquely American—for now.

Growing share of the global economy

In the past five years, China, India, and Russia, together with other fast-growing economies mostly in Asia and Latin America, have averaged almost 7% growth compared with 2.3% in rich economies. According to Goldman Sachs, by 2039, Russia, India, China, and Brazil together could be larger than the combined economies of the United States, Japan, the United Kingdom, Germany, France, and Italy. China alone could be the world’s second-largest economy by 2016 and could surpass the United States by 2041.
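
The overtaking scenarios are, at bottom, compound-growth arithmetic. Using only the growth differential cited above (and none of the detailed assumptions behind the Goldman Sachs projections), an economy growing at 7% a year gains on one growing at 2.3% by a factor of

\[ \left(\frac{1.07}{1.023}\right)^{35} \approx 4.8 \]

over 35 years, so a bloc starting at roughly one-fifth the size of the rich economies would pull about even within three and a half decades if such differentials persisted.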

Emerging Economies’ Share of Key Indicators

Source: World Bank, UNCTAD, U.S. Department of Energy, EIA

Most populous and still growing

Fast-growing populations and economies are translating into a large worldwide increase in middle-income consumers. While industrialized countries will add 100 million more middle- income consumers by 2020, according to projections by A.T. Kearney, the developing world will add more than 900 million, and China alone will add 572 million.

Population and projected growth

Source: U.S. Census

Large professional workforce

In a sample of 28 low-wage countries, the McKinsey Global Institute found about 33 million “young professionals” (university graduates with up to seven years of experience), compared with 18 million in a sample of eight high-wage countries, including 7.7 million in the United States. However, McKinsey found that only 2.8 million to 3.9 million of the 33 million in low-wage countries had all the skills necessary to work at a multinational corporation, compared with 8.8 million in high-wage countries.
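
The contrast is sharper when the numbers are converted into suitability rates; the percentages below are derived from the figures above rather than reported separately by McKinsey:

\[ \frac{2.8\text{--}3.9\ \text{million}}{33\ \text{million}} \approx 8\text{--}12\%, \qquad \frac{8.8\ \text{million}}{18\ \text{million}} \approx 49\%. \]

In other words, a young professional in the high-wage sample was roughly four to six times as likely to be judged ready to work at a multinational.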

Young Professionals, 2003 (Thousands)

Source: McKinsey Global Institute, The Emerging Global Labor Market, Part II: The Supply of Offshore Talent in Services (June 2005)

Technology export leaders

Foreign multinationals have played a critical role in the development of advanced technology capabilities in emerging economies. For example, 90% of China’s information technology exports come from foreign-owned factories. The United States, still the world’s largest overall producer of advanced technology, now has a trade deficit in this area, in part because U.S. technology firms have expanded production globally to meet both foreign and domestic demand.

Top Ten High-Tech Exporters (1986), Billions of 1997 U.S. Dollars

Source: Global Insight, Inc.

U.S. foreign operations outpace exports

Despite their global expansion, the activities of U.S. multinationals are still overwhelmingly based in the United States. The U.S. share of their total employment, investment, and production has changed relatively little even as globalization has accelerated. The primary motivation for moving production offshore is to search for new customers. Overall, 65% of U.S. foreign affiliate sales are to the local market, 24% to other countries, and only 11% are exported back to the United States.

Sales Volumes of U.S. Multinationals

Source: U.S. Bureau of Economic Analysis

Steady increase of offshore investments

For decades, multinational corporations have set up foreign subsidiaries to perform manufacturing and assembly for overseas markets. In recent years, this model has evolved, as companies have developed global infrastructures that allow them to locate other business activities—from customer services and computer programming to R&D—nearly anywhere in the world.

Percentage of U.S. Corporate Investment Spent Offshore

Source: A.T. Kearney, Foreign Direct Investment Confidence Index (2005)

Science’s Social Effects

In 2001, the National Science Foundation (NSF) told scientists that if their grant proposals failed to address the connection between their research and its broader effects on society, the proposals would be returned without review. The response was a resounding “Huh?”

It’s time we faced facts. Scientists and federal funding agencies have failed to respond adequately to a reasonable demand from Congress and the public. The demand: Researchers and their tax-supported underwriters must take a comprehensive look at the broader implications of their science in making decisions about what research to support.

There are exceptions, but scientists and engineers generally have had a difficult time meeting this merit review criterion. Yes, the quantity of responses to what is called the “broader impacts criterion” has risen steadily. But the quality of those responses remains a persistent problem. In order to improve the quality, we need a more interdisciplinary approach to generating and reviewing grant proposals.

In theory, it might be reasonable to think this problem could be addressed by teaching scientists and engineers how to assess the broader effects of their research. In practice, however, such attempts have led to the widespread view that intellectual merit is the primary and scientific criterion, and that broader impacts is a secondary and minor “education” criterion. Too often, the responsibility for satisfying the broader impacts criterion has been taken over by education and public outreach (EPO) professionals. They are hired to facilitate education activities for scientists, who are trained chiefly in science, not in education.

This approach allows scientists to conduct their research on their own while the EPO professionals take care of education and outreach. But it reinforces the idea that research in science and engineering is separate from education in science and engineering, an idea that runs counter to one of the main motivations behind the broader impacts criterion, which is that scientific research and education can and should be integrated.

To our knowledge, all NSF-sponsored workshops in 2005 and 2006 that offered advice to scientists on how to address the broader impacts criterion focused on broader effects only in terms of education and outreach. The danger inherent in this approach is that education and outreach are liable to emphasize a triumphalist view, highlighting only the striking advances of science and technology. This approach does not reflect on the larger moral, political, and policy implications of the advance of scientific knowledge and technological capabilities. Granted, education and public outreach are important elements of the broader impacts criterion. But without equal consideration of the ethical, political, and cultural elements of science, the focus on education and outreach threatens not only to absolve scientists and engineers of the responsibility to integrate their research and education activities, but also to turn the broader impacts criterion into an advertisement for science and technology.

One can hardly blame EPO professionals for marketing themselves as experts who can help with issues of broader effects. Unfortunately, however, EPO professionals have now come to be viewed as the group uniquely qualified to help scientists confused about how to satisfy the broader impacts criterion. EPO activities focus on issues such as expanding the participation of underrepresented groups (for example, by facilitating campus visits and presentations at institutions that serve those groups), enhancing research and education infrastructure (for example, by contributing to the development of a digital library), disseminating research more widely (for example, by developing a partnership with a museum or a science and nature center to develop an exhibit), and benefiting society (for example, by interpreting the results of specialized scientific research in formats understandable for nonscientists).

It is simply a misinterpretation of the broader impacts criterion to label it the education criterion. It would make more sense to place science in its larger societal context. Take, as just one example, the goal of increasing the participation of underrepresented groups. That goal is not fulfilled solely by giving presentations at minority-serving institutions or by including a woman or minority group member on the research team. It should also involve giving some thought to why diversity is important to scientific research (for example, by exploring Philip Kitcher’s ideal of well-ordered science or David Guston’s calls for the democratization of science). The danger is that without such reflection the goal of increasing minority representation will simply appear as another case of identity politics.

It is, of course, true that scientists simply don’t have time to read philosophy or studies of science. But they don’t have time for reading educational theory either, yet sensitivity to questions of teaching remains part of the science portfolio. The same should be true of the ethical and policy implications of their work.

EPO professionals have taken it upon themselves to engage scientists on the level of science’s broader educational effects. We applaud this as long as scientists and engineers participate in EPO activities rather than treat EPO professionals as separate subcontractors. Instead of allowing EPO professionals to shoulder the sole burden of articulating science and technology’s broader effects, more of us ought to share the load. Integrating research and education is a worthy ideal that NSF is concerned to promote, but it hardly exhausts the possibilities inherent in the broader impacts criterion, which encompasses issues such as the democratization of science, science for policy, interdisciplinarity, and issues of ethics and values.

The challenge facing NSF, and the scientific and technical communities generally, is that disciplinary standards of excellence alone no longer provide sufficient warrant for the funding of scientific research. Put differently, Vannevar Bush’s 1945 model for science policy has broken down at two crucial points. First, it is no longer accepted that scientific progress automatically leads to societal progress. As long as this belief was the norm, disciplinary standards within geophysics or biochemistry were sufficient for judging proposals, and the wall separating science from society could remain intact. Second, and following from the first, recognition of the inherently political nature of science has become an accepted part of the landscape. But the point is not that science is subjective; science and engineering daily demonstrate their firm grasp on reality, even if the old dream of scientific certainty has faded, at least for the scientifically literate. No, the point is that science is deeply and inescapably woven into our personal and public lives, from the writing of requests for proposals to decisions made at the lab bench to the advising of congressional committees.

Unlike EPO professionals, researchers on science—historians and philosophers of science, policy scientists, and researchers in science, technology, and society studies—have generally failed to recognize the broader impacts criterion as an opportunity. We have built careers by reflecting on the broader effects of science and technology, but we have offered little help to scientists and engineers perplexed by the demand to assess and articulate those broader effects. Humanists and social scientists who conduct research on science, especially research on the relationship between science and society, should seize the opportunity the broader impacts criterion presents. We should work with scientists to help them reflect on and articulate the broader effects of their research. We should follow the example of EPO professionals, becoming facilitators in the assessment of the effects of research. But we should do so by instilling a critical spirit of reflection in scientists and engineers.

For their part, scientists should embrace, not merely meet (or even attempt to avoid), the broader impacts criterion. We philosophers believe that publicly funded scientists have a moral and political obligation to consider the broader effects of their research. To paraphrase Socrates, unexamined research is not worth funding. But if calls to duty sound too preachy, we can also appeal to enlightened self-interest. Agency officials, from the NSF director on down, are constantly asked to explain the results of the funding NSF receives and distributes. A fresh set of well-thought-out accounts of the broader effects of last year’s funded research is likely to play better on Capitol Hill than traditional pronouncements about how investments in science drive the economy and are therefore necessary to ensure U.S. competitiveness.

Sadly, there is little evidence that proposals deemed strong in terms of the broader impacts criterion enjoy any significant advantage over proposals that are weak on those topics. Often, the criterion is used as a sort of tiebreaker in cases in which reviewers must decide between proposals of otherwise equal intellectual merit. Although in principle there is nothing wrong with occasionally using it this way, tiebreaking should not be the criterion’s only function.

WE PHILOSOPHERS BELIEVE THAT PUBLICLY FUNDED SCIENTISTS HAVE A MORAL AND POLITICAL OBLIGATION TO CONSIDER THE BROADER EFFECTS OF THEIR RESEARCH. TO PARAPHRASE SOCRATES, UNEXAMINED RESEARCH IS NOT WORTH FUNDING.

To encourage scientists and engineers to use the broader impacts criterion to its fullest, NSF should include an EPO professional and a researcher on science both as individual reviewers of proposals and as members of review panels. Such an approach—particularly in the review panels, in which researchers from different disciplines interact with each other—will encourage all reviewers to be more responsive to the broader impacts criterion. This, in turn, will encourage scientists and engineers to be more concerned with the broader effects of their research. Scientists and engineers will be motivated to seek out both EPO professionals and researchers on science to work together on grant proposals. The result? The kind of integrated and interdisciplinary research NSF seeks to support.

Scientists may view these suggestions as attempts at politicizing the (ideally) value-neutral pursuit of science. We suspect that such a reaction may underlie many scientific and technical researchers’ resistance to the criterion, as if assessing and articulating the broader effects of scientific and technical research were somehow outside science and engineering. It’s as if the criterion somehow represents outside interference in science.

We also suspect that one reason EPO professionals have been so successful in engaging scientists and engineers on broader effects is the widely shared view among scientists that any resistance on the part of the public to the advancement of science and technology is simply due to lack of science education. The public certainly ought to know more about science and technology, but there is little evidence that universalizing scientific and technological literacy would by itself produce a wholly supportive public.

If society needs to be educated about science and technology (and it does), scientists and engineers, too, need to be educated about the effect of science and technology on society, as well as the effect of society on science and technology. The broader impacts criterion represents an excellent (perhaps the best) opportunity for scientists, engineers, researchers on science and technology, policymakers, and members of the larger society to engage in mutual education. This promise will be fulfilled only if scientists, engineers, EPO professionals, and researchers on science work together and use the criterion to the fullest.

Finally, concern with the criterion should go beyond helping NSF improve its merit review process, and even beyond helping NSF achieve its larger goals of integration and interdisciplinarity. Insofar as science and technology have effects on our society, asking scientists and engineers to consider and account for those broader effects before they commit themselves to a particular research program, and before taxpayers commit to funding that program, sounds eminently reasonable. This is not to suggest that members of the public should have the final say on every funding proposal. It is to suggest, however, that publicly funded science should not always be judged only on its scientific merit by scientists. We need to explore the possibility of a new ideal of impure science, in which scientists and engineers both educate and learn from others about the relation between science and society.

U.S. Competitiveness: The Education Imperative

U.S. competitiveness and the country’s standing among our global counterparts have been persistent issues in public policy debates for the past 20 years. Most recently they have come to prominence with the publication of reports from the National Academies, the Electronics Industries Alliance, and the Council on Competitiveness, each of which argues that the United States is in danger of losing out in the economic competition of the 21st century.

There is no single cause for the concerns being raised, and there is no single policy prescription available to address them. However, there is widespread agreement that one necessary condition for ensuring future economic success and a sustained high standard of living for our citizens is an education system that provides each of them with a solid grounding in math and science and prepares students to succeed in science and engineering careers.

Unless the United States maintains its edge in innovation, which is founded on a well-trained creative workforce, the best jobs may soon be found overseas. If current trends continue and no action is taken, today’s children may grow up with a lower standard of living than their parents. Providing high-quality jobs for hard-working Americans must be our first priority. Indeed, it should be the central goal of any congressional policy to advance U.S. competitiveness.

The United States is in direct competition with countries that recognize the importance of developing their human resources. The numbers and quality of scientists and engineers being educated elsewhere, notably in China and India, continue to increase, and the capabilities of broadband communications networks make access to scientific and engineering talent possible wherever it exists. The result is that U.S. scientists and engineers must compete against their counterparts in other countries, where living standards and wages are often well below those of the United States. Policies for maintaining U.S. competitiveness must consider how to ensure that U.S. scientists and engineers are educated to have the skills and abilities that will be in demand by industry and will allow them to command salaries that will sustain our current living standards.

Because the foundation for future success is a well-educated workforce, the necessary first step in any competitiveness agenda is to improve science and mathematics education. Unfortunately, all indications are that the United States has some distance to go in preparing students for academic success in college-level courses in science, mathematics, and engineering. Current data show that U.S. students seem to be less prepared than their foreign contemporaries.

The National Assessment of Educational Progress (NAEP), often referred to as the nation’s report card, has tracked the academic performance of U.S. students for the past 35 years. Achievement levels are set at the basic (partial mastery of the knowledge and skills needed to perform proficiently at each grade level), proficient, and advanced levels. Although student performance in mathematics improved between 1990 and 2000, most students do not perform at the proficient level. In the NAEP assessment for grades 4 and 8 in 2003 and for grade 12 in 2000, only about one-third of 4th- and 8th-grade students and 16% of 12th-grade students reached the proficient level.

In science, progress has also been slow. Between 1996 and 2000, average NAEP science scores for grades 4 and 8 did not change, and grade 12 scores declined. For grades 4 and 8 in 2000, only about one-third of 4th- and 8th-grade students achieved the proficient level, and only 18% achieved that level by grade 12.

The United States also fares poorly in international comparisons of student performance in science and mathematics, such as the Program for International Student Assessment (PISA), which is coordinated by the Organization for Economic Cooperation and Development (OECD). PISA focuses on the reading, mathematics, and science capabilities of 15-year-olds and seeks to assess how well students apply their knowledge and skills to problems they may encounter outside of a classroom. In the recently released 2003 PISA results, U.S. students, compared with contemporaries in 49 industrial countries, ranked 19th in science and 24th in mathematics. U.S. students’ average science scores did not change from the first PISA assessment in 2000, whereas student scores increased in several OECD countries. Consequently, the relative position of U.S. students declined as compared with the OECD average.

A separate set of international comparisons—the Third International Mathematics and Science Study (TIMSS)—tracked the performance of students in three age groups from 45 countries. Although U.S. 4th-grade students performed quite well (above the international average in both mathematics and science), by the 8th grade, U.S. students scored only slightly above the international average in science and below the average in mathematics. By the 12th grade, U.S. students dropped to the bottom, outperforming only Cyprus and South Africa. The TIMSS results suggest that U.S. students actually do worse in science and mathematics comparisons the longer they stay in school.

Boosting teacher expertise

Although these findings are not encouraging and there are no simple answers for how to improve K-12 science and mathematics education, doing nothing is not an option. The place to start is to reduce the number of out-of-field teachers. Research has indicated that teachers play a critical role in students’ academic performance. It is unlikely that students will be proficient in science and mathematics if they are taught by teachers who have poor knowledge of their subjects.

The urgency of solving this problem is evident. For example, 69% of middle-school students are taught by math teachers who have neither a college major in math nor a certificate to teach it, and 93% of those same students are taught physical science by teachers with no major or certificate in the subject. Although the situation improves at the high-school level, even there 31% of students are taught by math teachers with neither a college major in math nor a teaching certificate. Likewise, 67% of high-school physics students are taught by similarly unqualified teachers.

Even teachers with basic science or mathematics proficiency may still be poorly prepared to teach these subjects. In a 1997 speech, Bruce Alberts, then president of the National Academy of Sciences (NAS), pointed out that one of the most informative parts of the TIMSS survey was a series of videotapes showing randomly selected teachers from the United States and Japan teaching 8th-grade math classes. Expert reviews of the taped classes found that none of the 100 U.S. teachers surveyed had taught a high-quality lesson and that 80% of the U.S. lessons, compared with 13% of the Japanese ones, received the lowest rating. Clearly, content knowledge must be combined with pedagogical skill to achieve the best educational outcomes.

In 2005, several of my colleagues on the House Science and Technology Committee and I asked NAS to carry out an assessment of the United States’ ability to compete and prosper in the 21st century. In particular, we asked NAS to chart a course forward, including the key actions necessary for creating a vital, robust U.S. economy with well-paying jobs for our citizens. NAS formed a panel of business and academic leaders ably chaired by Norm Augustine, the former chairman of Lockheed Martin. The panel conducted a study that was neither partisan nor narrow and released a report in the fall of 2005 called Rising Above the Gathering Storm.

The NAS report outlines a number of actions to improve the U.S. innovation environment. Its highest-priority recommendation addresses teachers. In particular, the report states that “laying the foundation for a scientifically literate workforce begins with developing outstanding K-12 teachers in science and mathematics.” The report calls for recruiting 10,000 of the best and brightest students into the teaching profession each year and supporting them with scholarships to obtain bachelor’s degrees in science, engineering, or mathematics, with concurrent certification as K-12 science or mathematics teachers.

I believe the report was right on target in identifying teachers as the first priority for ensuring a brighter economic future. To implement the recommendations, I introduced legislation in the last Congress, which was approved by the House Science and Technology Committee, and have introduced largely similar legislation in the current Congress (H.R. 362).

The legislation provides generous scholarship support to science, mathematics, and engineering majors willing to pursue teaching careers, but even more important, it provides grants to universities to assist them in changing the way they educate science and mathematics teachers. It is not sufficient just to encourage these students to take enough off-the-shelf education courses to enable them to qualify for a teaching certificate. Colleges and universities must foster collaborations between science and education faculties, with the specific goal of developing courses designed to provide students with practical experience in how to teach science and mathematics effectively based on current knowledge of how individuals learn these subjects. In addition to early experience in the classroom, students should receive mentoring by experienced and expert teachers before and after graduation, which can be especially helpful in stemming the current trend in which teachers leave the profession after short tenures. Teachers who emerge from the program would combine deep knowledge of their subject with expertise in the most effective practices for teaching science or mathematics.

This approach is modeled on the successful UTEACH program, pioneered by the University of Texas (UT), which features the recruitment of science majors, highly relevant courses focused on teaching science and mathematics, early and intensive field teaching experiences, mentoring by experienced and expert teachers, and paid internships for students in the program.

The UTEACH program, which began as a pilot effort in 1997 with 28 students, has grown to more than 400 students per year and has been successful in attracting top-performing science and mathematics majors to teaching careers. UTEACH students have average SAT scores and grade point averages that exceed the averages for all students in UT’s College of Natural Sciences. Moreover, a high proportion of graduates from the program remain in the classroom: 75% of UTEACH graduates are still teaching five years after graduation, well above the national average of 50%.

In addition to improving the education of new teachers, my legislation provides professional development opportunities for current teachers to improve their content knowledge and pedagogical skills. The activities authorized include part-time master’s degree programs tailored for in-service teachers and summer teacher institutes and training programs that prepare teachers to teach Advanced Placement and International Baccalaureate courses in science and mathematics.

NSF’s key role

The legislation I authored would house most of these education programs at the National Science Foundation (NSF). I strongly believe that NSF’s role is key to success because of the agency’s long history of accomplishment in this area, its close relationship with the best scientists and engineers in the nation, and its prestige among academics and educators in math and science education, which is unmatched by any other federal agency.

The effectiveness of NSF programs in attracting the participation of science, math, and engineering faculty in K-12 science and mathematics education initiatives is demonstrated by the NSF Mathematics and Science Partnership (MSP) program, which aims to improve science and mathematics education through research and demonstration projects to enhance teacher performance, improve pedagogical practices, and develop more effective curricular materials. The program focuses on activities that will promote institutional and organizational change both at universities and in local school districts. It is highly competitive, with a funding rate of 8% in the 2006 proposal cycle. As of the summer of 2006, the program had funded 72 partnerships involving more than 150 institutions of higher education, more than 500 school districts, more than 135,000 teachers, and more than 4.1 million students. Approximately 50 businesses have also participated as corporate partners.

A major component of the MSP program is teacher professional development. Grant awards under the program require substantial leadership from disciplinary faculty in collaboration with education faculty. Of the 1,200 university faculty members who have been involved with MSPs, 69% are disciplinary faculty, with the remainder principally from education schools.

The MSP grants are large enough to allow the awardees to implement substantial, sustained, and thorough professional teacher development activities. For example, the Math Science Partnership of Greater Philadelphia involves 13 institutions of higher education and 46 school districts. This partnership targets teachers of grades 6 through 12, spanning the full breadth of mathematics and science courses and encompassing a wide geographical region, with a focus on the densely populated Philadelphia suburbs.

The preliminary assessment data for the MSP program show that the performance of students whose teachers are engaged in an MSP program improves significantly. Initial findings from nine MSP programs, involving more than 20,000 students, show that 14.2% more high-school students were rated at or above the proficient level in mathematics after one year with a teacher in an MSP program. This reverses the national trend in which a declining number of students achieve this rating each year. Not all of the preliminary data show such dramatic improvement: The corresponding figure for middle-school students is 4.3%, and the first data evaluating improvement in science suggest that gains are more modest than they are in mathematics.

It is too soon to expect final evaluations of the MSP partnerships. The goal of all teacher professional development is to improve student performance, but there is a substantial time lag between announcing an MSP grant program and the final analysis of data measuring student improvement. Even among partnerships funded in the first year, many are still working with their first cohorts of teachers. However, the initial data trends are promising.

The main lessons from the MSP program thus far are that it has succeeded in attracting substantial participation by science, mathematics, and engineering faculty, along with education faculty; it has generated widespread interest in participation; and it shows preliminary success in reaching the main goal of improved student performance. NSF’s track record shows it is the right place to house the proposed program to improve the education of new K-12 science and mathematics teachers.

Solving the attrition problem

The programs I have described for increasing the number of highly qualified science and mathematics teachers address the long-term problem of ensuring that the nation produces future generations of scientists, engineers, and technicians, as well as a citizenry equipped to function in a technologically advanced society. But there is also the problem of ensuring adequate numbers of graduates in science and technology (S&T) fields in the near term.

The legislation I plan to move through the Committee on Science and Technology in this Congress includes provisions aimed at improving undergraduate science, technology, engineering, and mathematics (STEM) education with the goal of attracting students to these fields and keeping them engaged. A serious problem with undergraduate STEM education is high student attrition. In most instances, attrition is not because of an inability to perform academically, but because of a loss of interest and enthusiasm.

This leak in the STEM education pipeline can be addressed in many ways. Certainly, increased attention by faculty to undergraduate teaching and the development of more effective teaching methods will help. In addition, there is a role for industry and federal labs to partner with universities for activities such as providing undergraduate research experiences, student mentoring, and summer internships.

The murky supply and demand picture

Although a well-educated S&T workforce of adequate size is generally regarded as an essential component of technological innovation and international economic competitiveness, there is disagreement and uncertainty about whether the current supply of and demand for such workers are in balance and about the nation’s future ability to meet its needs for such workers. The supply side of the equation centers on whether our education system is motivating and preparing a sufficient number of students to pursue training in these fields and whether the country will be able to continue to attract talented foreign students to fill openings in the S&T workforce, a third of which is currently made up of individuals from abroad. The demand side of the equation is clouded by increasing evidence that technical jobs are migrating from the United States.

In general, the migration of high-tech jobs mirrors what has happened in the manufacturing sector during the past 20 years. In the case of manufacturing, the decline in U.S.–based jobs has been attributed to lower production costs in low-wage countries, improved infrastructure in foreign countries, and the increased productivity of foreign workers. Now this same trend is encompassing high-tech jobs, which generally require a technical education, often for the very same reasons.

The overseas migration of manufacturing led to a deep restructuring of the hourly workforce: a switch to service jobs with generally lower wages and benefits and an increase in temporary workers. The trend for technical workers could result in similar dislocations of currently employed scientists and engineers and affect future employment opportunities. In addition, it is likely that current well-publicized trends will influence the career choices of students—a result that could accelerate the migration of jobs.

Some policy groups have advocated training more scientists and engineers to ensure that the nation can meet future demand and as a solution to the offshoring phenomenon. Advocates frequently cite increased graduation rates of scientists and engineers in China and India as one justification for this policy. Industry also frequently states that there is a shortage of trained scientists and engineers in the United States, forcing companies to move jobs overseas that would otherwise remain here. In addition, these groups claim that we need to train more scientists and engineers to maintain U.S. technology leadership, which will result in greater domestic employment across the board. However, many professional societies and organizations (representing scientists and engineers) dispute these assessments.

Regardless of viewpoint, the most remarkable aspect of the debate about the supply and demand for S&T workers and the effects of offshoring is that the arguments are based on very little factual data. A recent RAND report, commissioned by the President’s Office of Science and Technology Policy, pointed out that the information available to policymakers, students, and professional workers is not adequate to make informed decisions either about policies for the S&T workforce or about individual career or training opportunities. The RAND study includes eight specific ways in which federal agencies can improve data collection in this area. Unfortunately, the Bush administration has not comprehensively enacted the RAND recommendations. A Government Accountability Office report also highlights the lack of data on the extent and policy consequences of offshoring.

At a roundtable discussion on June 23, 2005, the Democratic members of the Committee on Science and Technology attempted to frame what is known and not known about supply and demand for the U.S. S&T workforce; to delineate factors that influence supply and demand, including the offshoring of S&T jobs; and to explore policy options necessary to ensure the existence of an S&T workforce in the future that meets the needs of the nation. (The papers presented at the roundtable are available on the committee’s Web site.)

On the basis of available data on unemployment levels and inflation-adjusted salary trends of S&T workers, Michael Teitelbaum of the Alfred P. Sloan Foundation and Ron Hira of Rochester Institute of Technology concluded that no evidence exists for a shortage; in fact, the available data suggest that a surplus may exist. For example, Institute of Electrical and Electronics Engineers surveys of its membership show higher levels of unemployment during the past five years than for any similar time period during which such surveys have been conducted (starting in 1973) and also show salary declines in 2003 for the first time in 31 years of surveys. Teitelbaum indicated that there may well be exceptions in demand for some subfields that are not captured by available data, and Dave McCurdy, president of the Electronics Industries Alliance, said that it is necessary to look industry by industry in assessing the actual state of shortage or surplus.

Discussion of the effects of offshoring on the S&T workforce is constrained by the lack of reliable and complete data. However, the data that are available suggest that offshoring is growing and becoming significant. Hira compared data for major U.S. and Indian information technology (IT) services companies that showed significant differences for employee growth in 2004: up 45% for Wipro, up 43% for Infosys, and up 66% for Cognizant (three Indian companies) versus an 11% decline for Electronic Data Systems, no growth for Computer Sciences Corporation, and an 8% increase for Affiliated Computer Services (three U.S. companies).

Hira also described examples of high-level engineering design jobs moving offshore and provided anecdotal evidence that venture capitalists are beginning to pressure start-up companies to include offshoring in their business plans. As an indirect indicator of the increase in offshoring, Teitelbaum presented unpublished data from a study funded by the Sloan Foundation that showed substantial growth in Indian employment in software export companies (from 110,000 to 345,000 between 1999–2000 and 2004–2005) and in IT-enabled services companies (from 42,000 to 348,000 for the same interval).

The panelists agreed that the data available for characterizing the S&T workforce and for quantifying the impact of offshoring are inadequate. George Langford, the immediate past president of the National Science Board Committee on Education and Human Resources, noted the need for better information on science and engineering skill needs and on utilization of scientists and engineers. Both Hira and Teitelbaum suggested the need for government tracking of the volume and nature of jobs moving offshore, and particularly services jobs, for which few reliable data are available.

The policy recommendations from the roundtable fell into two areas. The first was that better data are needed to characterize the state of the S&T workforce and particularly to quantify the nature and extent of the migration of S&T jobs. This recommendation was shared by all the panelists.

The second set of recommendations focused on education and training. The thrust of these recommendations was that U.S. S&T workers will need to acquire skills that will differentiate them from their foreign competitors. This implies the need to identify the kinds of skills valued by industry and the need for much better information about the skill sets that industry can easily acquire abroad. This information should then inform the reformulation of science and engineering degree programs by institutions of higher education. In addition, the identification of skills requirements will allow the creation of effective retraining programs for S&T workers displaced by offshoring.

Finally, the panelists agreed that it is necessary to make careers in S&T more appealing to students. Specific recommendations included funding undergraduate scholarships and generous graduate fellowship programs and providing paid internships in industry.

There is much that the federal government, states, and the private sector can do in partnership to bring about the result we all seek: ensuring that the United States succeeds in the global economic competition. I believe that the Gathering Storm report provides an excellent blueprint for action. The question is simply this: Are we willing to invest in our children’s future? I know that I am not alone in answering “yes” to that question. We know what the problem is and we have solutions. What we need now is the will to stop talking and start taking substantive action.

Growing Old or Living Long: Take Your Pick

The 20th century witnessed two profound changes in regions of the world where people are well educated and science and technology flourish: Life expectancy nearly doubled, and fertility rates fell dramatically. As a result, individuals and populations are aging.

Virtually all educated people are aware of the graying of the United States, yet relatively few are as aware of its implications for science, technology, and human culture. Longer life is a remarkable achievement, but now we need to apply what we are learning in the natural and social sciences to redesign human culture to accommodate long lives. We need to find cures for Alzheimer’s disease and arthritis, develop technologies that render many age-related frailties such as poor balance invisible in the way eyeglasses now compensate for presbyopia, and begin seriously rethinking cultural norms, such as the timing of education and retirement.

Longevity is the largely unexpected consequence of improvements in general living conditions. Genetically speaking, we are no smarter or heartier than our relatives were 10,000 years ago. Nonetheless, in practical terms we are more biologically fit than our great-grandparents. Robert Fogel and his colleague Dora Costa coined the term “technophysio evolution” to refer to improvements in biological functioning that are a consequence of technological advances. They point out that technologies developed mostly in the past century vastly improved the quality and sustainability of the food supply. Subsequent improvements in nutrition were so dramatic that average body size increased by 50% and life expectancy doubled. The working capacity of vital organs greatly improved. Breakthroughs in manufacturing, transportation, energy production, and communications contributed further to improvements in biological functioning. Medical technology now enables full recovery from accidents or illnesses that were previously fatal or disabling.

Even technophysio evolution may be too narrow a term. Just as dramatic as the technologies are the acceptance and incorporation of the advances into everyday life. Not only was pasteurization discovered, it was implemented in entire populations. Not only were insights into the spread of disease observed in laboratories, community-wide efforts to dispose of waste were systematically undertaken. Not only was child development better understood, child labor laws prevented little ones from working long hours in unsafe conditions. Culture changed. Life expectancy increased because we built a world that is exquisitely attuned to the needs of young people.

Remember, however, that advances of the 20th century did not aim to increase longevity or alleviate the disabling conditions of later life. Longer life was the byproduct of better conditions for the young. The challenge today is to build a world that is just as responsive to the needs of very old people as to the very young. The solutions must come from science and technology. Unlike evolution by natural selection, which operates across millennia, improvements in functioning due to technological advances can occur in a matter of years. In fact, given that the first of the 77 million Baby Boomers turned 60 in 2006, there is no time to waste. To the extent that we effectively use science and technology to compensate for human frailties at advanced ages, the conversation under way in the nation changes from one about old age to one about long life, and this is a far more interesting and more productive conversation to have.

Psychological science and longevity

In psychology, as in most of the biological and social sciences, research on aging has focused mostly on decline. And it has found it. The aging mind is slower and more prone to error when processing information. It is less adept at considering old information in novel ways. Memory suffers. In particular, working memory—the ability to keep multiple pieces of information in mind while acting on them—declines with age. The ability to inhibit extraneous information when attempting to focus attention becomes impaired. Declines are especially evident on tasks that require effortful processing that relies on attention, inhibition, working memory, prospective memory, and episodic memory.

These changes begin in a person’s 20s and 30s and continue at a steady rate across the adult years. They occur in virtually everyone, regardless of sex, race, or educational background. In all likelihood, these effects are accounted for by age-related changes in the efficiency of neurotransmission.

Despite these changes in cognitive processing, the subjective experience of normal aging is largely positive. By experiential and objective measures, most older people remain active and involved in families and communities. The majority of people over 90 live independently. The National Research Council report The Aging Mind: Opportunities in Cognitive Research observed that performance on laboratory tasks does not map well onto everyday functioning. The committee speculated that much of the discrepancy occurs because people spend most of their time engaged in well-practiced activities of daily routines where new learning is less critical. Research shows that in areas of expertise, age-related decline is minimal until very advanced ages.

Arguably even more interesting and important is growing evidence that performance—even on basic processes such as semantic or general memory—improves under certain conditions. One of the first such studies was reported by Paul Baltes and Reinhold Kliegl in 1992. They demonstrated rather striking improvement in memory with practice. Baltes and Kliegl first enlisted younger and older people’s participation in a study of memory training. They assessed the participants’ baseline performance and, as expected, younger participants outperformed older participants. However, after this initial assessment, participants attended a series of training sessions in which they were taught memory strategies such as mnemonics. They found that older people’s memory performance benefited from practice so much that after only a few practice sessions, older people performed as well as younger people had before they had practiced. Younger people’s performance also improved with training, of course, so at no point in the study did older people outperform younger people at the same point in training. But the fact that older people improved to the equivalent of untrained younger people speaks to the potential for improvement.

More recently, scientists have begun to investigate social conditions that also may affect performance. Tammy Rahhal and her colleagues reasoned that because there are widespread beliefs in the culture that memory declines with age, tests that explicitly feature memory may invoke performance deficits in older people. They compared memory performance in younger and older people under two experimental conditions. In one, the instructions stressed the fact that memory was the focus of the study. The experimenter repeatedly stated that participants were to “remember” as many statements from a list as they could and that “memory” was the key. In the second condition, experimental instructions were identical except that the instructions emphasized learning instead of memory. Participants were instructed to “learn” as many statements as they could. Once again, rather remarkable effects were observed. Age differences in memory were found when the instructions emphasized memory, but no age differences were observed in the condition that instead emphasized learning.

In another study, Thomas Hess and his colleagues documented deficits in performance when participants were reminded about declines that accompany aging before they began the experiment. In their study, participants read one of three newspaper articles before completing a memory task. One simulated article reaffirmed memory decline and raised concerns that it may be worse than previously documented. In another condition, participants read a simulated article that described research findings suggesting that memory may improve with age. The third article was memory-neutral. In Hess’s study, younger people outperformed older people in all three conditions, but the gap was significantly reduced in participants who read the positive account of memory. Most important, Hess’s team identified a potential mediator of these performance differences. Participants were required to write down as many words as they could remember, and those who had read the positive account about memory were more likely to use an effective memory strategy, called semantic clustering, in which similar words are grouped together. These strategic efforts were not observed in participants who were reminded of age deficits. Such findings point to the role of motivation in cognitive performance.

Thus, although there is ample evidence for cognitive deficits with age, the story about aging is not a simple story of decline. Rather, it is a qualified and more nuanced story than the one often told. Even in areas where there is decline, there is also growing evidence that performance can be improved in relatively simple ways. This poses a challenge to psychology to identify conditions where learning is well maintained, to find ways to frame information so that it is best absorbed, and ultimately to improve cognitive and behavioral functioning by drawing on strengths and minimizing weaknesses.

My students, colleagues, and I had been studying age-related changes in motivation for several years. We began to wonder whether changes in motivation would affect performance on cognitive tasks, and we set out to explore what we call socioemotional selectivity theory (SST), a life-span theory of motivation.

Motivation matters

SST was initially developed to address an apparent paradox in the aging literature. Despite losses in many areas, emotional well-being in older people is as good as, if not better than, that of their younger counterparts. Studies of specific components of emotional processing (such as physiological responses, facial expression, neural activation, and subjective feelings) suggest that this system is well maintained at older ages. Experience-sampling studies in which participants carry electronic pagers and report emotions at random times throughout their days show that negative emotions are experienced less frequently in older people and positive emotional experiences are just as frequent. Older people are more satisfied with their social relationships than are younger people, especially regarding relationships with their children and younger relatives. Fredda Blanchard-Fields and her colleagues find that older people solve heated interpersonal problems more effectively than do younger adults. Many social scientists refer to such findings as the “paradox of aging.” How could it be that aging, given inherent losses in critical capabilities, is associated with an improved sense of well-being?

Within the theoretical context of SST, there is no paradox. SST is distinguished from other life-span theories in that its principal focus concerns the motivational consequences of perceived time horizons. Instead of relying on the more traditional yardstick of chronological age, SST considers the effects of continually changing temporal horizons on human development. The theory maintains that two broad categories of goals shift in importance as a function of perceived time: those concerning the acquisition of knowledge and those concerning the regulation of feeling states. When time is perceived as open-ended, as it typically is in youth, people are strongly motivated to pursue information. They attempt to expand their horizons, gain knowledge, and pursue new relationships. Information is gathered relentlessly. In the face of a long and nebulous future, even information that is not immediately relevant may become so somewhere down the line.

In contrast, when time is perceived as constrained, as it typically is in later life, people are motivated to pursue emotional satisfaction. They are more likely to invest in sure things, deepen existing relationships, and savor life. Under these conditions, people are less interested in banking information and instead invest personal resources in the regulation of emotion. In this way, SST specifies the direction of the age-related motivational shift and offers hypotheses about social preferences and goals as well as the types of material that people of different ages are most likely to attend to and remember. To be clear, the theory does not speak against experience-based change. Rather, it postulates that some of the age differences long thought to reflect intractable, unidirectional change instead reflect changes in motivation. The theory thus contributes to a more nuanced interpretation of age differences.

One key tenet of SST is that perceived time horizons, not chronological age, account for age differences in goals and preferences. Our research team has examined this theoretical postulate in a variety of ways in a number of studies. We hypothesized that older people would prefer emotionally meaningful goals over informational goals but that these preferences would change systematically when time horizons were manipulated experimentally. In several studies, we showed that younger people display preferences similar to those of the old when their time horizons are shortened, and older people show preferences similar to those of the young when their time horizons are expanded. Importantly, similar changes occur when natural events, such as personal illnesses, epidemics, political upheavals, or terrorism, create a sense of shortened time horizons. Under such circumstances, the preferences of the young resemble those of older people. In other words, when conditions create a sense of the fragility of life, younger as well as older people prefer to pursue emotionally meaningful experiences and goals.

Thus, when findings like those described above began to appear in the literature, my students, colleagues, and I began to apply postulates from SST to the study of age differences in cognitive processing. The human brain does not operate like a computer. It does not process all information evenly. Rather, motivation directs our attention to goal-relevant information and away from irrelevant information. We see what matters to us. Imagine walking around a city block with the goal of finding a friend. You see very different things than you would see if you took the same walk while trying to find a particular species of bird. Indeed, in the latter scenario you might walk right by your friend without notice. In the former, you would surely miss the bird.

In an initial study, my former student Helene Fung and I reasoned that because older people prefer emotional goals, they may remember emotional information better than emotionally neutral information. This was an important idea to test because the standard practice in psychological science is to avoid emotional stimuli in tests of memory in order to minimize contamination of “pure” cognitive processes. We wondered if by doing so, experimenters were inadvertently handicapping the performance of older adults. A substantial literature on memory and persuasion shows that people are more likely to remember and be persuaded by messages that are relevant to their goals. Thus, we reasoned that marketing messages that promised emotionally meaningful rewards may be more effective with older people than those that promise to increase knowledge or expand horizons.

First, we hypothesized that older people would be more likely than young people to remember advertisements that promised emotional rewards. Second, consistent with the theory, we hypothesized that modifying their time perspective would alter this preference. Recall that according to SST, age differences are due to differences in time horizons, not chronological age. Fung and I worked with a graphics design firm to develop pairs of advertisements for a range of products. The ads in each pair were identical except for the slogans. In each pair, one version had a slogan that promised an emotional reward and the other promised a future-relevant goal. For example, in a camera ad one slogan read “Capture those special moments” and the other version read “Capture the unexplored world.” In another set one slogan read “Stay healthy for the ones you love” and the matched slogan read “Stay healthy for your bright future.”

The results supported both hypotheses. In one study, older people remembered the emotional slogans and the products they touted better than did younger people, supporting our first hypothesis. To test our second hypothesis, we showed a subset of participants both versions of the ads at the same time and asked them to indicate their preference. Some were simply asked to indicate the one they liked best. Others, however, were presented with the following instruction before they were asked to indicate their preference: “Imagine that you just got a call from your physician who told you about a new medical advance that virtually insures you will live about 20 years longer than you expected and in relatively good health. Please look at these ads and tell us which one you prefer.” In this time-expanded condition, age differences were eliminated.

The positivity effect

Findings from this initial study suggested that in older people, memory of emotional information was superior to memory of other types of information. My colleagues Susan Charles and Mara Mather and I began to wonder whether such effects would be limited to emotionally positive material (most advertisements associate products with positive promises) or if there would be heightened attention to emotionally negative information as well. On this point, the theory was equivocal. Reasoning from our theory, we thought that there are (at least) two ways that emotional goals might influence older adults’ attention and memory. The first possibility was that all information relevant to emotional goals is more salient under time-limited conditions. This emotionally relevant focus would bias attention and memory in favor of both positive and negative information. The second possibility is that information that furthers emotional goals in general is more salient. This emotionally gratifying focus would bias attention and memory in favor of information that fosters positive emotional experiences and against information that generates negative emotional experiences.

A substantial literature in social psychology, albeit based exclusively on young adults, shows superior memory of negative information. Negative information is also widely believed to be weighted more heavily than positive information in impression formation and in decisionmaking. The burning question was whether such findings, long presumed to represent “human” preferences, actually represented preferences of young people.

We conducted a study in which young, middle-aged, and older adults viewed positive, negative, and neutral images on a computer screen and were then tested for their memory of the images. We found an age-related pattern in which the ratio of positive to negative material recalled increased with age. Younger people recalled equal numbers of positive and negative images. Middle-aged people showed a small but significant preference in memory for positive images. In older people, the preference for positive was striking. Older people remembered nearly twice as many positive images as negative or neutral images.

We were excited by this finding because it pointed to a particular type of information that was relatively well remembered. However, this first behavioral study did not allow us to know whether the negative images were not initially processed at all or were stored but then were less likely to be retrieved from memory. We conducted a second study using essentially the same images and procedures. In this subsequent study, we collaborated with neuroscientist John Gabrieli’s research team. We included event-related functional magnetic resonance imaging to examine brain activation while participants viewed the images. After the viewings, participants again recalled as many of the images as they could, and we computed the ratio of positive to negative images they recalled. The behavioral findings perfectly replicated those from the first study. Older adults remembered more positive images than negative images. We also observed that whereas amygdala activation increased in younger adults in response to both positive and negative images, amygdala activation was greater in older adults only in response to the positive images. These findings suggested that positive and negative stimuli are differentially encoded, pointing to attentional as well as memory processes.

At that point, we began to think that attention and memory can operate in the service of emotion regulation. That is, focusing on positive memories and images makes people feel good. Reasoning again from our motivational perspective, Mather and I posited that older people, at a subconscious or conscious level, may “disattend” to negative images. In both of the studies described above, participants had been required to look at a single image at a time. We asked whether older people, given a choice, would disengage from negative images. We designed a study in which pairs of photographs of 60 different faces were presented to participants on a computer screen. Each pair included one neutral and one emotional version of the same face. Twenty of the face pairs included a happy expression, 20 a sad expression, and 20 an angry expression. In the task, each trial consisted of the following sequence: a fixation point was displayed in the center of the screen for half a second; the neutral and emotional versions of one face were displayed in the right and left positions on the screen for one second; the faces disappeared from the screen; and a small gray dot appeared in the center of the screen location where one of the photographs had been. The dot remained on the screen until the participant pressed one of two response keys on the keyboard.

Participants were told that the study was investigating perceptual processes and that their task was to respond to a small dot displayed on the screen as quickly and accurately as possible. If they saw the dot appear on the right side of the screen, they should press the red key (the “k” key marked with a red sticker); if they saw the dot appear on the left side, they should press the blue key (the “d” key marked with a blue sticker). They were told that each time, before the dot appeared, they would see two faces on the screen and that they did not need to respond to these faces. Instead, they should just wait for the dot and respond to it as quickly as they could. Younger people responded to the dots with the same speed whether they were behind positive or negative faces. Older adults were significantly faster when the dot appeared behind the positive face than the negative face, indicating that when a neutral face was paired with a positive face, they were attending to the positive face, and that when the neutral face was paired with a negative face, they attended to the neutral face. This study provided further evidence for the favoring of positive over negative material in attentional processing in older adults.
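
To make the logic of this dot-probe task concrete, a minimal sketch of how such trials might be represented and scored follows. It is an illustrative reconstruction, not the study’s actual code: the Trial structure, field names, and timing constants are assumptions drawn only from the description above.

```python
# Hypothetical sketch of dot-probe trials and their scoring (not the study's code).
from dataclasses import dataclass
from statistics import mean

@dataclass
class Trial:
    valence: str          # "happy", "sad", or "angry" face paired with a neutral face
    emotional_side: str   # "left" or "right": where the emotional face appeared
    dot_side: str         # side where the probe dot then appeared
    rt_ms: float          # response time to the dot, in milliseconds

FIXATION_MS, FACES_MS = 500, 1000   # half-second fixation, one-second face pair, per the text

def mean_rt_by_probe_location(trials):
    """Compare mean response times when the dot replaced the emotional face versus
    the neutral face, separately for positive (happy) and negative (sad or angry)
    trials. Assumes at least one trial in each cell. Faster times indicate where
    attention had been directed."""
    results = {}
    for sign, valences in [("positive", {"happy"}), ("negative", {"sad", "angry"})]:
        behind_emotional = [t.rt_ms for t in trials
                            if t.valence in valences and t.dot_side == t.emotional_side]
        behind_neutral = [t.rt_ms for t in trials
                          if t.valence in valences and t.dot_side != t.emotional_side]
        results[sign] = {"dot_behind_emotional": mean(behind_emotional),
                         "dot_behind_neutral": mean(behind_neutral)}
    return results
```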

With Quinn Kennedy, then a graduate student in my laboratory, Mara Mather and I began to reconsider the possible role of motivation in autobiographical memory. There is evidence in the literature that people remember their personal pasts more positively over time, but virtually all of the studies suffer from the inability to corroborate the accuracy of the memories. Maybe older people did have cheerier pasts than younger people. In this study, we were able to capitalize on the fact that I had collected data from an order of Catholic nuns in 1987. The project had been conducted, at the nuns’ request, to assess physical and emotional well-being in preparation for the aging of their religious community. In 1987, the mean age of the nuns was 66. In 2001, we returned and 300 of the 316 surviving nuns who had originally participated agreed to complete the questionnaires as they remembered completing them in 1987. A booklet describing media and religious events of 1987 was provided to help prime their memories for that year. Into this survey we embedded three experimental conditions. In one condition, the nuns were repeatedly encouraged to focus on their emotional states as they completed the questionnaires. In another condition, the sisters were repeatedly instructed to be as accurate as possible. Nuns in the control condition were simply asked to complete the questionnaire as they had in 1987. We then calculated the difference between 1987 and 2001 reports. Findings showed that both the oldest participants and younger participants who were focused on emotional states showed a tendency to remember the past more positively than they originally reported. In stark contrast, the youngest participants and older participants who were focused on accuracy tended to remember the past more negatively than originally reported. Among the nuns who completed the questionnaire without priming, the older nuns remembered the past more positively than they had originally reported it, and the younger nuns remembered it more negatively than they had reported it originally.
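
For readers who want to see the arithmetic behind these comparisons, the sketch below shows one way the 2001-minus-1987 difference scores might be computed for each condition and age group. It is a hypothetical illustration under assumed field names and scoring conventions, not the analysis actually used in the study.

```python
# Hypothetical sketch of the difference-score logic described for the nun study.
from collections import defaultdict
from statistics import mean

def positivity_shift(records):
    """records: iterable of dicts with keys 'condition' ('emotion', 'accuracy', or
    'control'), 'age_group' ('older' or 'younger'), 'score_1987', and 'score_2001',
    where higher scores mean a more positive report of well-being.
    Returns the mean 2001-minus-1987 shift for each condition/age-group cell;
    positive values mean the past was remembered more positively than originally
    reported, negative values mean more negatively."""
    shifts = defaultdict(list)
    for r in records:
        shifts[(r["condition"], r["age_group"])].append(r["score_2001"] - r["score_1987"])
    return {cell: mean(deltas) for cell, deltas in shifts.items()}
```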

As noted above, there is substantive evidence that working memory declines with age. However, to the best of our knowledge, working memory of emotional information had never been examined empirically. Richard Davidson conceptualized affective working memory as the memory system that keeps feelings online as people engage in goal-directed behavior. Together with Joseph Mikels, Greg Larkin, and Patricia Reuter-Lorenz, we designed a study that examined working memory of positive and negative emotion in older and younger adults. Mikels and Reuter-Lorenz had previously developed a novel experimental paradigm to measure affective working memory. Images that had been normed as positive or negative were presented one at a time to participants on a computer screen. An image was presented briefly, and after a delay a second image was presented and removed. From memory, participants had to judge which of the two images was more negative (or, on positive trials, which was more positive). As a comparison task, participants completed a similar task that demanded judgments about the relative brightness of two images. As predicted, younger adults performed significantly better than older adults on the brightness comparison, which tested visual working memory. However, no age differences were observed on the emotion comparisons, which assessed emotional working memory. Even more interesting was an interaction effect. Younger adults performed better than older adults on the negative emotion trials. But older adults outperformed younger adults on the positive emotion trials.

To summarize, whereas younger adults favor negative information as much or more than positive information, by middle age this preference appears to have shifted to a preference for positive information. Older adults show a decided preference in memory and attention for positive information. Although longitudinal studies are needed before conclusions about change over time can be drawn, cross-sectional comparisons suggest that the effect may emerge across adulthood. This “positivity effect” has been demonstrated in a range of experimental tasks that assess even the most vulnerable aspects of cognitive processing, such as working memory. Theoretically, we argue that the pattern represents a shift in goals from those aimed at gathering information and preparing for the future to those aimed at regulating emotional experience and savoring the present.

The dark side of the positivity effect

We maintain that in general a focus on positive information benefits well-being. However, there are probably conditions when a chronic tendency to focus on positive material is maladaptive. One such context, we presumed, is decisionmaking, especially when options include both positive and negative features. When making decisions, negative features of options often have higher diagnostic value. If a person who is deciding whether to renew a health care plan remembers that she likes her physician but forgets that the plan does not pay for the hip surgery she needs, a suboptimal decision could be made.

Corinna Löckenhoff and I designed another study with two primary aims: to see whether in a decision context older people would review positive features of options more than negative features; and if this was the case, to see if we could eliminate the effect by modifying goals with instructions. Using computer-based decision scenarios, 60 older and 60 younger adults were presented with positive, negative, and neutral information about ostensible health care options. Some scenarios presented characteristics of physicians. Others presented features of health care plans. The information was hidden behind colored squares, and participants had to click on the square to see the information. They were told that positive information was behind white squares and negative information was behind black squares. We then observed how often participants examined the positive information versus the negative information. Later we tested their memory for the information. As we predicted, older adults reviewed and recalled a greater proportion of positive information than did younger adults. Most important, participants in one group were repeatedly reminded to “focus on the facts” and in this group the preference for positive information disappeared.
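
The information-search measure described here reduces to a simple proportion. The sketch below is a hypothetical illustration of that computation; the data layout and function name are assumptions, not the study’s actual materials.

```python
# Hypothetical sketch of the review-proportion measure: of the valenced squares a
# participant chose to uncover, what share were positive rather than negative?
def positive_review_proportion(clicks):
    """clicks: sequence of 'positive', 'negative', or 'neutral' labels for the
    squares a participant uncovered, in the order they were clicked."""
    pos = sum(1 for c in clicks if c == "positive")
    neg = sum(1 for c in clicks if c == "negative")
    return pos / (pos + neg) if (pos + neg) else float("nan")

# Example: three of the four uncovered squares carried valence, two of them positive.
print(positive_review_proportion(["positive", "positive", "neutral", "negative"]))  # ~0.67
```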

In one of our most recent studies, our research team, including Mikels, Löckenhoff, and Sam Maglio, collaborated with Stanford economist and physician Alan Garber and with Mary Goldstein, a geriatrician based at the Palo Alto Veterans Administration. The study examined whether older adults would make better decisions by focusing on their feelings about different options. We enlisted older and younger adults to make a series of health-related decisions. We presented the information about options under one of two instructional conditions. In one, participants were asked to focus on remembering the details about the options when making their choices. In the other, participants were asked to focus on their feelings about the options. For each decision, options were constructed so that one was clearly the better choice. Younger people performed better than older people when instructions asked them to focus on details. However, when participants were instructed to focus on their emotional reactions as they reviewed the options, the age difference was eliminated. Older people’s decision quality was as good as that of younger participants. Focusing on feelings when making decisions may be a good strategy for older adults.

Human need is the basis for virtually all of science. If we rise to the challenge of an aging population by systematically applying science and technology to questions that improve quality of life in adulthood and old age, longer-lived populations will inspire breakthroughs in the social, physical, and biological sciences that will improve the quality of life at all ages. Longevity science will reveal ways to improve learning from birth to advanced ages and to deter age-related slowing in cognitive processing. Longevity science will draw enormously on insights about individuals’ genomic predispositions and the environmental conditions that trigger the onset of disease, as well as identifying genetic differences in individuals who appear resilient despite bad habits. Longevity science will help us understand how stress slowly but surely affects health. Most of the challenges of longer-lived populations will require interdisciplinary collaborations. Psychological science must be a part of this process.

Transparency in Jeopardy

Alasdair Roberts provides an excellent sense of the history and key issues of efforts to hold governments accountable by requiring the disclosure of information they possess. Roberts has a comprehensive knowledge of global trends, which he describes and analyzes eloquently. He is not as successful, however, at providing a workable theory for when transparency is appropriate in the face of the competing demands for homeland and national security.

Roberts, an associate professor in the Maxwell School of Citizenship and Public Affairs and director of the Campbell Public Affairs Institute at Syracuse University, begins the book, which is essentially a series of essays, by examining the heroic cause of revealing government secrets. He opens by describing the German Parliament in Berlin, built in the wake of the reunification of East and West Germany, where a milestone in transparency took place with the release of millions of dossiers held by the Stasi, the East German secret police. Topped with a grand cupola made of glass, the building serves as a metaphor for the book, or at least for its optimistic agenda of increasing transparency in government.

Roberts offers a paean to transparency on the march. The United States passed the Freedom of Information Act (FOIA) in 1966. Twenty years later, only 11 countries—other wealthy democracies—had promulgated similar statutes. By the end of 2004, however, 59 countries had adopted right-to-information laws. What prompted this change? Roberts says many of the former Communist countries of Eastern Europe followed the German pattern and opened secret files as part of the shift to democracy. Public interest groups, of which Transparency International is perhaps the best known, pushed for transparency statutes as a check against corruption and other wrongdoing. International institutions, including the World Bank, encouraged aid recipients to pass transparency statutes, and many Third World countries have recently passed FOIA-type laws for the first time.

Roberts shares a number of examples—taken from far-flung locations such as India, Thailand, Japan, Uganda, Mexico, and Great Britain (which only recently began to implement its first FOIA)—in which transparency has served as a crucial tool for improving government activities. The examples show how transparency has helped address problems such as corruption in public works, favoritism in school admission, tainted blood supplies, and all manner of other excesses of governmental efforts to control information and thus their populations.

Paralleling this optimistic story of increasing transparency, however, is a counter-narrative of increasing secrecy. The 9/11 attacks have given new vigor to the perspective that secrecy is required to protect national security. The Bush administration’s push for greater secrecy began even before 9/11, most notably with Vice President Cheney’s insistence that his energy task force not be subject to openness requirements. The Watergate-era reforms that are viewed so positively by supporters of transparency are seen by Cheney and other officials as lamentable incursions into the powers of the presidency.

Roberts catalogs the many shifts toward secrecy during the Bush administration, including more restrictive FOIA policies, new classification and declassification rules, widespread use of the “sensitive but not classified” designation, the dismantling of government Web sites, and refusal to answer press and congressional queries. Somewhat surprisingly and somewhat persuasively, he concludes that the administration’s secrecy policies have not been entirely successful. He argues that whereas the transparency revolution that began with Watergate has become entrenched in statute, the Bush counterrevolution has not. I am not as sanguine as Roberts on this point. Because there have been so many recent secrecy efforts and because the government is still in the midst of defining the rules for the “long war” against terrorism, there are serious reasons to doubt that the current secrecy initiative has run its course.

Roberts goes on to explain the three trends he sees as threatening the march toward greater transparency, and here he is perhaps most compelling. The first trend is the development of what he calls “opaque networks.” Although the 9/11 Commission and the 2004 reform of the U.S. intelligence system made information-sharing a major priority in the fight against terrorism, much less appreciated has been the way in which information-sharing undermines the efforts to promote open government. Roberts writes: “Transparency within the network is matched by opacity without.” For instance, cities and states that work with the federal government on homeland security face strict new limits on what they can disclose under their own FOI laws. At the international level, Roberts reports that many of the new democracies of Eastern Europe specifically passed national security and other limits on transparency at the insistence of the United States. To participate in new information-sharing networks, these countries had to promise not to disclose any secrets exchanged, even if their own antisecrecy laws said otherwise.

The second theme, called “the corporate veil,” highlights how privatization reduces transparency. In recent decades, an unusual political coalition supporting transparency developed: Public interest groups that favored government accountability teamed with companies that favored limits on government power. More recently, however, the government has increasingly outsourced work to the private sector. Companies have then resisted requests to disclose information related to the contracts, arguing that disclosure would jeopardize competitive secrets. Transparency advocates, meanwhile, fear that this increased secrecy in government contracts will lead to unaccountability, so that work will be performed less well, at higher cost, and possibly corruptly.

Roberts provides a number of examples of this trend. One is the outsourcing of prisons, in which corporate managers resist disclosing embarrassing details such as the number of escapes or the level of medical care. Another involves companies that develop commercial databases and sell their products to government agencies. In the United States, with certain exceptions, the Privacy Act assures citizens the right to access their own files when the data is held in a government “system of records.” But the act usually does not apply when the government obtains the same information about citizens from a private database company. These examples underscore the need to broaden the debate about when the government should outsource.

The third theme, on “remote control,” highlights the role of supranational institutions. Transparency statutes have been promulgated largely at the national or subnational level. Government power, however, is shifting to supranational institutions such as the European Union, the World Trade Organization, and the International Monetary Fund, which largely lack transparency statutes. Roberts concludes that promoting transparency within such organizations will be an uphill battle, with success depending on political factors in each setting.

Critique falls short

Although Roberts provides a rich sense of how transparency regimes operate in different countries, he is less successful in providing a normative theory for deciding when more transparency is appropriate. There is no empirical evidence presented to show where there are net benefits of transparency. There is no philosophically rigorous attempt to define categories where transparency should be favored or disfavored. There are few concrete policy implications, except for a general sense that more transparency would be good. Instead, the book succeeds at an intermediate level, with informed description and thoughtful discussion of particular practical problems.

In particular, Roberts does not effectively explain how transparency can be both increasing and decreasing at the same time. Early on, he describes transparency on the march, but in the remainder of the book he describes transparency in eclipse. How can both be true?

This apparent contradiction can be explained. On the one hand, the volume of information available to the public has increased enormously because of computerization. A simple FOIA request today might garner thousands or even millions of emails or data fields, on a scale unimagined when the law was created. In ways that are threatening to government officials, a leaked document today can be uploaded instantly to a blog and then quickly reach the mainstream media. At the same time, however, the categories of information subject to secrecy rules also are increasing. Together, these two factors can explain the anguish felt by transparency advocates and government officials alike. Advocates note the new categories of secrecy and lament the loss of access. Officials note the ways in which technology hastens the spread of secrets. Both sides of the debate see how they are worse off than before. Both are correct that they have lost something compared to what existed previously.

Roberts also falls short in clarifying the interplay between secrecy and security. Early on, he accurately describes how the 9/11 attacks have bolstered individuals and groups who want to keep secrets in the name of national security. In response, as a transparency advocate, Roberts makes the vague claim that “In the long run, it may be a policy of openness rather than secrecy that best promotes security.” He says, for instance, that greater transparency may reveal the weaknesses in new homeland security programs and lead to their strengthening.

Roberts’s hope that openness best promotes security has in fact become a major theme for security researchers in recent years. Most of them are active in computer security, where there is an oft-used maxim that “there is no security through obscurity.” This maxim is most persuasive in the context of open-source software projects. In such settings, when a security flaw is publicized, a corps of software programmers can leap into action to write an effective patch. Furthermore, the publicity alerts users of the software to the problem and encourages them to protect their own systems once a patch is developed. In short, publicity helps the defenders (the software writers and system users) but does not tell much to the attackers.

The difficulty is that sometimes publicity helps the attackers but not the defenders. The maxim that favors secrecy is “loose lips sink ships.” Revealing the path of the convoy helps would-be attackers but does nothing to aid defenders. “No security through obscurity” and “loose lips sink ships” cannot both be universally true. Instead, it is a key research task, including for transparency advocates such as Roberts, to determine the conditions under which disclosure is likely to help or hurt security. In other writings, I have tried to contribute to that research project, especially by identifying the costs and benefits of disclosure to attackers and defenders in various settings. One key theme is that secrets work well against a first attack, when the attackers might fall for a trap. Secrets work much less well against repeated attacks, such as when a hacker can try repeatedly to find a flaw in a software program or system firewall.

Better understanding of the relationship between secrecy and security will be crucial in coming years as the nation seeks to build a society that is both open and secure. By so ably documenting the current trends toward both openness and secrecy, Roberts has provided a crucial underpinning for that debate.

Moonstruck

Once this history of the Apollo program reaches its stride in chapter six, it presents a persuasive and telling critique of human spaceflight. The preface and the introductory chapter suffer from an excess of hollow rhetoric, dismissing the space race of the 1960s as “shallow and trivial” and the Apollo program as a “glorious swindle” in which “nearly every mission came close to catastrophe.” The rhetorical excess gradually abates in the next four chapters, but these devolve into a conventional historical narrative of the origins of the space race, based primarily on a modest and incomplete selection of secondary sources. The story is fine, but little distinguishes it from the vast existing literature on the topic.

An argument begins to emerge in chapter six with the treatment of Sputnik. From that point on, Dark Side of the Moon becomes an informed, focused, persuasive, and finally scathing indictment of the Apollo program in particular and human spaceflight in general. DeGroot explores the contrast between rhetoric and reality, the politics of fear, the triumph of public relations, the manipulation of public opinion, and finally the “lunacy” of those who nurtured Utopian visions of humankind’s future in space. Astronaut Michael Collins captures DeGroot’s opinion of the Moon, “this monotonous rock pile, this withered, Sun-seared peach pit.” It is little wonder to DeGroot that humans have not been back to the Moon in more than 30 years. The marvel is that the National Aeronautics and Space Administration (NASA) is gearing up to return in the coming decade.

Two historical figures dominate DeGroot’s tale. Wernher von Braun is the evil genius who planted in the U.S. psyche the image of humans effortlessly zipping around the solar system. DeGroot revels in von Braun’s Nazi past and his miracle conversion to American democracy, conceding all the while the charm and charisma of the man and his legendary powers of persuasion. Von Braun had the “mind of a drug dealer,” says DeGroot, and “the instincts of a populist.” He used his talents shamelessly and successfully to promote his “selfish fantasies” and his own “will to supremacy.” This “trickster,” this “spin-doctor for space . . . probably the only rocket man with a full-time publicist” was “intent on leading Americans down the lunar path.” The von Braun exposed by DeGroot is a familiar figure in U.S. space history, though seldom is he pilloried so wickedly.

Paired with von Braun in responsibility for the Apollo program is President John F. Kennedy. DeGroot implies that Kennedy’s contribution was even more odious because he never really cared for space, an admission he made in a famous meeting of November 1962, barely two months after his rousing “this new ocean” speech at Rice University. Indeed, DeGroot presents Kennedy as ruing his 1961 commitment to put men on the moon before the end of the decade. During his last two years in office, the combative president searched for ways to escape the suffocating financial burden that Apollo imposed on his political freedom of movement. Kennedy’s 1963 United Nations speech, suggesting that the United States and the Soviet Union go to the Moon together, was just the most obvious of his many attempts to get out from under the space race. But once Kennedy was assassinated, the Moon mission became politically untouchable, a monument to a slain president who did not really care that much. DeGroot appears to find Kennedy’s opportunism and duplicity as shameful as von Braun’s self-serving hucksterism. By October 1963, says DeGroot, “Kennedy’s space program was a dog’s dinner.”

These two central characters are surrounded by a bevy of supporting actors, carried along on a wave of almost spiritual enthusiasm for human spaceflight. Politicians proclaimed mindlessly that whoever controlled the heavens would control Earth. Scientists endorsed the Moon mission in the expectation that more federal dollars would flow into space science. The aerospace industry quickly perceived a windfall. Even so critical an observer as Secretary of Defense Robert McNamara embraced Apollo because he realized that NASA funds would support launch-vehicle development without burdening his already swollen defense budget. The public at large accepted all the justifications for Apollo offered by these public figures but loved the enterprise in the first instance because of the great public theater. NASA played its part by commodifying Apollo and packaging the astronauts as heroes. Brian Duff, NASA’s director of public affairs, said that his job was “to keep up a drumfire of positive public attention on the program and never let up the visibility of it.” Apollo was marketed like candy to an adoring public and a cynical power elite.

By comparison, the critics were few and ineffective. President Dwight Eisenhower resisted the space race from the start and remained critical throughout the Apollo program. His NASA administrator, T. Keith Glennan, shared his views and lost his job to James E. Webb when the Kennedy administration took office. Webb kept his reservations to himself and tried with mixed results to steer some of the avalanche of Apollo funding to a “balanced” space program and to contracting policies at home that were redistributive and socially responsible. Columbia University sociologist Amitai Etzioni labeled Apollo “the moon-doggle” and spoke against it to all who would listen. Newsweek columnist Edwin Diamond likened Apollo to potlatch, the tradition among some Native American tribes of competing for prestige by seeing who could give away more of his own property. Social critic Lewis Mumford called Apollo “an extravagant feat of technological exhibitionism” and subsequent proposals mere “technological disguises for infantile fantasies.” But these pioneer critics, blazing the trail that DeGroot follows in this book, were shouting into the wind. After Kennedy’s assassination, not even the fatal Apollo 204 fire of 1967 could deflect the United States from the Moon mission. The Moon landings were “a homage to Kennedy” and a gambit in the Cold War. Nothing could stop them, but then again nothing since has convinced Americans to repeat the performance.

DeGroot’s critique of Apollo operates at the visceral level. His distaste for the program is palpable. To him it was a silly, expensive, dangerous, empty stunt of no lasting significance. He little notes and barely credits Kennedy’s rationale of demonstrating to the world the superiority of U.S. science and technology. Surely that was worth something—maybe not $25 billion (almost $150 billion in 2005 dollars), but something. Lyndon Johnson once said famously that the value of satellite reconnaissance repaid the entire investment, military and civilian, that the United States made in space. So it is possible to think about the costs and benefits of Apollo, but DeGroot displays no interest in the topic. He does not even embrace the claim by Amitai Etzioni and other Apollo critics that the money spent could have been better invested in social programs at home. DeGroot seems to believe that Apollo was exorbitant at any price, whatever the opportunity costs.

DeGroot’s distaste for Apollo seems to flow not from a calculus of public investment but rather from a profound, but largely unexplored, distrust of technological innovation or progress. “Virtually every major technological development, including television, nuclear power, and the Internet,” he claims, has “rendered man more efficient in his moral corruption.” Such a catholic indictment, which gathers in its sweep everything from refrigeration to CAT scans, bears some of the flavor of the postmodern malaise, the perception that the 20th century and the whole sorry outcome of modernity have been disastrous for the human race. Though DeGroot writes clearly, even effervescently, without resort to the arcane vocabulary of cultural theory, he appears to harbor some of the suspicion of modern technology that moves so many critics of the neoliberal capitalist globalism that triumphed in the Cold War.

Perhaps DeGroot honed his dismay over modernity in his previous book, a history of nuclear weapons. Still, DeGroot has the honesty to admit that Apollo, for all its irrationality, was, as his subtitle concedes, “magnificent.” “Every once in a while,” he writes, “astronauts and cosmonauts demonstrated that, even though their race was trivial, there was something magnificent about it.” He reports some of the heroic acts in space and some of the terrifying near misses that the early spacefarers survived. And of course he recognizes those who dared and died. But he can muster no similar encomium for the human spaceflight program of the ensuing three decades. In his view, the misbegotten attempt to replay Apollo over and over again has all of the flaws of that unique program and none of its magnificence.

Dump deterrence? Not yet

In Beyond Nuclear Deterrence, Alexei Arbatov and Vladimir Dvorkin, two well-known and respected Russian security experts, analyze and then propose modifications to the current nuclear deterrent relationship between Russia and the United States. Their goal, they explain, is to “transform the current state of mutual nuclear deterrence . . . into a new mode of relations based on mutual management of nuclear weapons.”

The authors point out that despite the substantial easing of tensions since the end of the Cold War, a truly cooperative U.S.-Russia relationship has not developed, and the irrational and dangerous policy of mutual assured deterrence continues to prevail. They argue that this mutual nuclear standoff represents “a latent but real barrier to . . . cooperation and integration,” and that “transforming deterrence as part of forging closer security relations with the West would certainly advance Russia’s progress toward democracy and economic integration with the West.”

Arbatov and Dvorkin argue that continued reliance on deterrence perpetuates concerns about the potentially hostile intentions of the other party, the risk of an inadvertent or accidental nuclear attack, the possible diversion of nuclear weapons to rebel groups or terrorists, and the feasibility of a disarming first strike. Moreover, they note, although the likelihood of a massive nuclear exchange has diminished, nuclear-weapons states have become more explicit about contemplating the use of nuclear weapons in response to various, often non-nuclear, threats. Finally, the authors decry the fact that since the end of the Cold War, the United States has not only spurned the pursuit of nuclear arms control, greater transparency, and confidence-building measures but has also considered creating new nuclear weapons.

For Arbatov and Dvorkin, nuclear deterrence is “ultimately irrational” and should be replaced for a number of reasons: It is stunningly irrelevant to the real threats and challenges of the age of non-state terrorism, it places limits on the ability of the United States and Russia to cooperate in dealing with new threats, and it is a terrible waste of national resources.

To enable the United States and Russia to abandon or at least modify mutual assured deterrence, the authors suggest three steps: further reduce and de-alert a significant portion of the Russian and U.S. strategic nuclear arsenals, develop and deploy a joint ballistic missile early-warning system and a joint missile proliferation monitoring system, and develop and deploy a joint ballistic missile defense (BMD). The goal would be to “envelop the major portion of the technical assets of the [United States and Russia], making war between them operationally and technically impossible, and bringing them to a close strategic alliance.”

Additionally, Arbatov and Dvorkin propose a number of arms control steps to complete the process of reining in and ultimately transforming the strategic nuclear relationship. These include completing the counting rules for the Moscow Treaty (formally called the Strategic Offensive Reductions Treaty, or SORT), a reference to the U.S. claim that only “operationally deployed” warheads are limited; designing appropriate verification provisions; and extending the treaty’s life so that it does not expire on the same day that the final limits come into effect. They also endorse further reductions of strategic forces to 1,000 to 1,200 warheads by 2017 in a SORT II, limiting and reducing tactical nuclear weapons, limiting the number of submarines and mobile intercontinental ballistic missiles (ICBMs) on patrol, and forswearing the first use of nuclear weapons.

The authors conclude by stating that the key to a better, more secure world is to remove mutual nuclear deterrence “as the foundation of the operational strategic relationship between the United States and Russia; as the material embodiment of the two states’ confrontational military relations; as an impediment to their security and political cooperation; and as a huge drain on their financial resources and scientific technological innovations.”

Flawed prescriptions

In an era in which neoconservatives have railed against arms control as useless in dealing with nuclear weapons, it is refreshing to read a book dedicated to reviving interest in traditional arms control. It would certainly make sense for the United States to ratify the Comprehensive Nuclear Test Ban Treaty (CTBT) and seek a Fissile Material Cutoff Treaty. It would also be helpful if the next administration reversed the current pro-nuclear-use rhetoric. It is hard to see how threatening to use nuclear weapons in response to virtually any threat helps persuade Iran or North Korea to abandon their interest in a nuclear arsenal to deter the nuke-happy United States. Beyond Nuclear Deterrence helps us face the fact that we have been marking time in arms control since the signing of the CTBT in 1996 and that nuclear arms control and proliferation issues do not go away if ignored but tend to get more complicated and critical.

Arbatov and Dvorkin’s essential premise—that deterrence between the United States and Russia is irrational if not downright dangerous—is fair enough. But their prescriptions for escaping from this brave new world seem flawed. The United States does not have an adversarial relationship with Russia because of deterrence; there is deterrence because many people still believe a potentially adversarial relationship could emerge. Until U.S. leaders can surmount the sense that Russia is an unpredictable if not confrontational partner, they will be inclined to hedge against a potential deterioration in relations. The move from confrontation to cooperation that Arbatov and Dvorkin believe can be jump-started by “transforming deterrence” will be a slow one. The special relationship between the United States and the United Kingdom, which seems so politically inevitable, took at least 150 years to begin to develop.

The no-man’s land between Cold War confrontation and close cooperation that inhibits doing away with deterrence also militates against intimate technical coordination of missile defense programs and early-warning systems. When challenged about the destabilizing effects of missile defenses, it has traditionally and ironically been the last refuge of U.S. hard-liners to offer to share the technology with the Russians and/or the rest of the world (as President Reagan did when he announced his Strategic Defense Initiative in the 1980s). But there is absolutely no indication that Congress, the Defense Department, or even “soft-on-security” liberals have any intention of or interest in actually sharing sophisticated BMD technology or command-and-control decisions with Russia.

The same political inhibitions probably hold true for de-alerting as well. If strategic force levels are reduced to 1,000 to 1,200 warheads, as Arbatov and Dvorkin propose, but the present political and strategic environment endures, then the tendency of military planners will be to maintain the alert status of the smaller remaining forces, particularly if they are believed to be vulnerable. On the other hand, if the United States and Russia move away from a deterrent relationship, then U.S. alert levels will either go down as a result or cease to be perceived as particularly threatening by the Russians.

What handicaps the Arbatov/Dvorkin study is the sense of special pleading that it conveys: the understandable efforts of the authors to offer remedies for the serious shortcomings of Russia’s present-day and likely future strategic nuclear posture. Rather than providing an objective analysis of how to enhance a stable strategic relationship between nations that are not natural allies, the book gives the impression that early-warning and missile defense cooperation on the one hand and lower force levels and selective strategic force constraints on the other seek either to make up for Russian weaknesses or play down U.S. strengths.

This is not to say that all the authors’ recommendations are without merit. Reducing nuclear force levels as they propose would clearly be desirable, as would changes in declaratory policy and the renunciation of first use. The United States should also be very concerned about a Russian early-warning system that is so broken that it increases the likelihood of miscalculation in a crisis, and it should proceed with the Joint Data Exchange Center, which has languished for the past eight years. (The center is meant to exchange missile launch data from early-warning systems between the two countries.) And the obvious solution to Russian concerns about force vulnerability is to deploy more survivable systems such as submarines or single-warhead mobile ICBMs.

But perhaps the most important issue the authors raise is the one of how to move beyond deterrence. This move is not likely to take place at the operational level before there are changes at the political level. After all, the United States is not in a deterrent relationship with France or Israel, and those countries are not concerned about the threat from U.S. hard-target kill capabilities, force-reconstitution capabilities, or BMD systems. It is Russia’s political leadership that has to decide whether supporting Iran’s nuclear ambitions and smothering political dissent is more important than abandoning deterrence and moving into a new, more benign or even cooperative relationship with the United States.

The End Is Near

James Lovelock, THE REVENGE OF GAIA

At age 87, James Lovelock remains the indefatigable proponent of the Gaia hypothesis, which depicts Earth as a living entity. In The Revenge of Gaia, he warns that Gaia is not well. Earth is running a worrisome fever, and unless drastic action is taken immediately, the coming heat wave will prove catastrophic. By the end of the century, Lovelock fears, humanity will be reduced to a small fraction of its current size, largely limited to arctic and polar refuges.

Lovelock’s warnings are rather apocalyptic but hardly exceptional. Almost all environmental scientists worry about global warming, and for good reason. Lovelock’s prescription for avoiding collapse, on the other hand, is rarely encountered. The only thing that can save the world, he vigorously argues, is nuclear power.

Such a pro-nuclear stance is tantamount to heresy in the environmental community. But Lovelock is a fiercely iconoclastic thinker, willing if not eager to attack core assumptions of the movement itself. In doing so, he provides a useful tonic for environmentalism, challenging its party-line positions while urging everyone to think beyond their conventional assumptions. One can only hope that his advocacy of fission power and his attacks on renewable energy generate respectful debate among green thinkers rather than excommunication.

The Revenge of Gaia is always bracing, but the solidity of its arguments is another matter. Though never less than informative and provocative, Lovelock is not always convincing. Like many environmental writers, he tends to exaggerate, to find certainty where it is not warranted, and to ignore contradictory evidence. As a result, Lovelock’s three key contentions require further scrutiny. First, does the Gaia hypothesis provide a suitable template for understanding climate change and its associated dangers? Second, will global warming prove as catastrophic as it is portrayed here? And third, is nuclear power really as benign as Lovelock thinks it is?

Lovelock devotes little space to outlining his well-known Gaia hypothesis. He does, however, argue against rationalist critics who dismiss it as a form of mysticism. The Gaia model is highly useful for understanding global climate, he contends, and as a result it is coming to be accepted by increasing numbers of earth-systems scientists.

The crucial issue for assessing Gaia as a scientific concept is what exactly is meant by “alive.” Unfortunately, Lovelock tends to equivocate. He allows that the “living Earth” is something of a metaphorical conception, which helps us grapple with the self-regulatory capacity of the planet’s intertwined biotic and climatic systems. Yet Lovelock seemingly cannot help portraying Gaia not only as a singular, living being, but as one possessing will and volition—and with revenge on her mind. In the end, he wisely gestures toward consigning the more ineffable aspects of the thesis to the domain of religion. Unfortunately, the scientific and spiritual dimensions of the Gaia perspective tend to be conflated elsewhere in the text.

As powerful as Gaia’s self-regulatory properties may be, Lovelock argues, they are about to be overwhelmed by the greenhouse gases spewed out by human activities. As global warming intensifies, positive feedback cycles will kick in, accelerating the destruction. The late 21st century, in this grim vision, will be marked by such heat and drought as to reduce most of the world to arid waste. Lovelock’s map of a greenhouse Earth, with an average temperature 5° Celsius higher than at present, shows forests persisting only in Canada, northern Eurasia, Japan, Patagonia, Madagascar, the Himalayas, and Tasmania. The rest of the world will supposedly be covered by desert and scrub.

Although global warming is now virtually irrefutable, both the rapidity and the extent of temperature increase remain open issues. More pronounced difficulties are encountered with predictions of vegetation change. Lovelock focuses strictly on enhanced evaporation and thus foresees an arid future. Greater rates of evaporation over expanded oceans, however, will also generate increased precipitation, although where and when the extra rain will fall remains obscure. Drought will probably intensify in many if not most equatorial and mid-latitude locations, but monsoonal rainfall could well be augmented, perhaps even leading to a greening of the Tibetan plateau. Such a possibility, however, is nowhere to be seen in The Revenge of Gaia.

Confidence in Lovelock’s climatology is not enhanced by an examination of his maps of current and ice-age vegetation zones. In the former case, he depicts South America as potentially supporting forests everywhere but the Atacama—even on the arid, windswept plateau of Patagonia. Lovelock loses much of his paleoclimatological credibility with his map of full glacial conditions (5° Celsius colder than at present), which shows all land areas not covered by ice as having been forested. In actuality, the Pleistocene world at glacial maximum was inhospitably dry as well as cold, with equatorial rainforests reduced to scattered refugia and with deserts covering areas such as the Argentine pampas that are now relatively humid.

Lovelock’s idiosyncratic account of glacial-age vegetation is rooted in his core concern about Gaia overheating. Earth’s fate, he reminds us, is death by fire; as solar radiation slowly intensifies, our planet will eventually burn. Gaia’s response to this inexorable if distant threat, he contends, is to cool itself through the development of huge glaciers, which reflect solar radiation back into space. During the Pleistocene glacial episodes, when Gaia’s cooling mechanisms were functioning smoothly, plants grew so vigorously that they supposedly pumped much of the carbon dioxide out of the atmosphere, helping maintain the beneficially cool conditions. With vast expanses of the ocean’s surface remaining below 12° Celsius, moreover, continual mixing ensured that nutrients remained in circulation, thus generating highly productive marine ecosystems.

By releasing stored carbon, humanity undercuts this vital regulatory mechanism, forestalling future glaciation and thus sending the world prematurely into its hotly catastrophic future. More immediate disaster, we are warned, awaits the maritime realm. When ocean temperatures exceed 12° Celsius, mixing ends, nutrients fall to the sea floor, and marine “deserts” emerge. “This may be one of the reasons why Gaia’s goal,” Lovelock writes, “appears to be to keep the Earth cool.”

Although intriguing, Lovelock’s depiction of marine productivity is marred by hyperbole. He ignores other mixing mechanisms, such as upwelling, and absurdly depicts coastal waters off California, Peru, and Namibia as virtually lifeless deserts. His interpretation of ice-age vegetation, moreover, is decidedly heterodox. The extremely low carbon dioxide level at the time was a major constraint on plant life; subarctic marine algae may have thrived, but far more botanically diverse terrestrial ecosystems were under considerable stress. Lovelock, to the extent that he acknowledges this, contends that biodiversity is itself a response to the already dangerous levels of interglacial warming and is thus overrated. “So rich biodiversity,” he concludes, “is not necessarily something highly desirable and to be preserved at all costs.”

Lovelock’s casual dismissal of concerns about biodiversity reveals an almost perverse enthusiasm for ice-age conditions. One gets the impression that he would cheer the readvance of continental glaciers. It is difficult to argue, however, that a mile of ice sitting over the Midwestern corn belt would be less devastating to human society—or to natural ecosystems—than would a full 5° of global warming.

Nuclear salvation?

Lovelock’s concerns about climate change may be commonplace within the environmental movement, but not so his proposed solutions. He scorns the usual green energy strategies, both for economic and, more intriguingly, environmental reasons. He rejects biomass production because it requires the expansion of agriculture, further weakening Gaia by eliminating natural ecosystems, and condemns wind turbines for spreading a hideous blight across rural landscapes. And although he allows that solar and tidal power might supplement our energy requirements, he denies that they are effective enough for full reliance. As a result, only nuclear power will do. Hoping that clean fusion power will soon become viable, Lovelock argues that we must first embrace conventional fission reactors.

Lovelock’s endorsement of nuclear power seemingly comes with no reluctance. He accuses environmentalists of failing to understand Paracelsus’s still-relevant dictum: Poison lies in the dose rather than the substance itself. Lovelock considers nuclear waste so benign that he would volunteer to bury Britain’s entire stock in his own backyard. Chernobyl and other nuclear catastrophe sites, he reminds us, are now verdant wildlife sanctuaries. Outrageously but perhaps correctly, he argues that oxygen, not pollution, ultimately causes most cancer. Lovelock comes to the brink of contending that environmental fear-mongering, however well-intentioned, has helped bring us to the edge of disaster by preventing the spread of nuclear energy.

Certainly environmentalists, Lovelock among them, have tended to exaggerate threats to nature. In regard to nuclear power, hyperbole is encountered on both sides: The dangers posed by waste materials have no doubt been overstated, but long-lived radioactive contamination can hardly be considered harmless. Nonetheless, Lovelock’s arguments about energy ought to be carefully assessed. At the very least, a scientifically and economically informed reconsideration of nuclear power is in order, considering the profound threats posed by continued fossil-fuel reliance.

Green Prometheanism

Intriguingly, Lovelock’s advocacy of technological solutions extends well beyond nuclear power. He even gives an appreciative if skeptical hearing to audaciously high-tech proposals for combating global warming, such as the building of gargantuan sunshades in outer space. The Holy Grail of Lovelockian technoenvironmentalism would appear to be the artificial synthesis of food out of elements extracted directly from the inorganic environment, which would allow Gaia to reclaim most agricultural landscapes.

Lovelock’s vision of manufacturing food could hardly be further removed from the small-scale organic farming favored by most green activists. Such technological enthusiasms are more commonly encountered among so-called cornucopian optimists, who dismiss all ecological concerns. But there is nothing intrinsically contradictory in Lovelock’s Promethean environmentalism. Decoupling the economy from nature, as he envisages, would be the only way to restore sizable areas to nature without generating a global depression. If, on the other hand, all six billion humans were to be fed strictly through organic farming methods, it is doubtful that any significant areas of unperturbed habitat would remain.

James Lovelock was not the first environmentalist to advocate green Prometheanism, but his predecessors have largely been either ignored or reviled as giddy apologists of a cancerous industrial system. Perhaps Lovelock’s stature will allow such ideas to be given real consideration, but I am not hopeful. Much of the broader environmental movement continues to be informed by the romantic, antimodernist mindset that it adopted in the 1960s. The world is indeed experiencing an ecological crisis, but only a hard-headed approach, one that takes economics and engineering as seriously as climatology, can provide the needed solutions.

Deep Competitiveness

Competitiveness is the new buzzword in Washington, DC. Many public and private leaders proclaim that the United States faces a new and formidable competitiveness challenge. Nancy Pelosi and House Democrats unveiled their Innovation Agenda in late 2005. President Bush announced his American Competitiveness Initiative in the 2006 State of the Union Address. And Congress has introduced several major legislative packages addressing competitiveness. But even if Congress were to enact all of the proposed policies—a good thing—they would not go far enough to ensure the nation’s continued technological leadership. Part of the reason why rhetoric is not being sufficiently translated into action is that many people in and out of official circles simply lack a sense of urgency about the situation. That must change.

Seventeen years ago, I wrote my doctoral dissertation to explain why some states responded to the competitive and economic restructuring challenges of the 1980s with sound and significant policy initiatives, whereas other similarly situated states did not. The answer was in some ways profoundly simple: States in which there was a broad and highly developed consensus about the need to act did more, and did it better, than states where consensus was less broad and less developed. In short, a widely shared understanding of the need to act, coupled with the right analysis of the problem, matters.

That lesson is relevant today at the national level. Even with the numerous reports, books, editorials, conferences, and hearings highlighting the “gathering storm” of global competitiveness, many leaders are seemingly still not completely convinced. Indeed, the prevailing mood in many quarters and among much of the economic policy punditry is one of complacency. For these skeptics, the case simply has not been made that the United States faces a significant competitiveness challenge. For example, in reference to reports citing a shortage of U.S. graduates in science, technology, engineering, and mathematics, Newsweek economics columnist Robert Samuelson dismisses the concern under the headline “A Phony Science Gap?” The Washington Post’s Sebastian Mallaby agrees, calling it “The Fake Science Threat.” Mallaby adds that the United States need not feel threatened because China is, after all, just a “low-wage country that crams on science.” He further claims that China’s efforts in moving aggressively ahead with science and technology–led economic development are irrelevant because “innovation depends neither on low wages nor science.”

Really? Although a low-wage country that crams on science might not produce the next Intel, Google, or Apple (although it has produced technology companies such as Lenovo and Legend), it can and does attract (and sometimes coerce) innovation-based multinational companies to set up production there. Developing countries do not need to grow strong domestic companies to have a more innovation-based economy as long as they are able to attract innovation-based activities. In other words, low wages and high science are a powerful combination. By way of example, R&D investments by U.S.-based firms in China grew from $5 million in 1994 to $506 million in 2000, and multinational companies are establishing more than 200 new R&D laboratories per year in China.

Even when economists and pundits do acknowledge a threat, they dismiss it by pointing out that the United States has successfully faced challenges before. Why should this time be any different? When discussing the issue of the off-shoring of jobs, Morgan Stanley’s Stephen Roach argued in the New York Times, “This is exactly the same type of challenge farmers went through in the late 1800s, sweatshop workers went through in the early 1900s, and manufacturing workers in the first half of the 1980s.” Robert Samuelson wrote, “Ever since Sputnik (1957) and the ‘missile gap’ (1960), we’ve been warned that we’re being overtaken technologically.”

What such observers fail to realize is that one reason the United States survived such technological challenges is precisely because it took them seriously. In response to Sputnik, the government created the National Aeronautics and Space Administration and the Defense Advanced Research Projects Agency and beefed up funding for education in science, technology, engineering, and mathematics. Similarly, when the nation faced competitiveness challenges in the late 1970s and 1980s, leaders from both parties in government, as well as from industry and academia, acted with creativity and resolve. Policymakers responded with a host of major policy innovations, including the Stevenson-Wydler Act, the Bayh-Dole Act, the National Technology Transfer Act, and the Omnibus Trade and Competitiveness Act. They created a long list of programs and initiatives to boost innovation and competitiveness, including the Small Business Innovation Research program, the Manufacturing Extension Partnership, and Cooperative Research and Development Agreements. They put in place the R&D tax credit and lowered capital gains and corporate tax rates. They created a host of new collaborative research ventures, including the semiconductor consortium SEMATECH, the National Science Foundation’s (NSF’s) Science and Technology Centers and Engineering Research Centers, and the National Institute of Standards and Technology’s Advanced Technology Program.

Moreover, Washington did not act alone. Virtually every state transformed its practice of economic development to stress technology-led economic development. Many states realized that R&D and innovation were drivers of the new economy and that state economies prosper when they maintain a healthy research base closely linked to the commercialization of technology. For example, Pennsylvania, under the leadership of Governor Richard Thornburgh, established the Ben Franklin Partnership Program to provide matching grants primarily to small and medium-sized firms to work collaboratively with the state’s universities.

All these steps, coupled with efforts by the private sector and universities, helped the United States to respond effectively to that competitiveness challenge. Today, it may very well be that the United States will successfully confront its new challenges. But success is much more likely if the nation and its various leaders act with the resolve and creativity demonstrated in the past.

And action should reflect a sense of urgency, because many other countries, including most of those in Southeast Asia and Europe, have made innovation-led economic development a centerpiece of their national economic strategies during the past decade. In doing so, many of these nations looked to the United States for guidance. Why? The answer is simple. They know that moving up the value chain to more innovation-based economic activities is a key to boosting future prosperity and that losing this competition can result in a relatively lower standard of living as economic resources shift to lower value–added industries.

Consider what some nations and regions have done. Europe’s Lisbon Agenda has set an ambitious, if somewhat unrealistic, goal of making Europe “the most competitive and dynamic knowledge-based economy in the world by 2010.” Many European nations, including Belgium, Finland, the Netherlands, Sweden, Switzerland, and the United Kingdom, are not only boosting R&D funding but also introducing policy changes and government initiatives to more effectively transfer technology from universities and government laboratories to the private sector for commercialization. Canada has announced a national innovation strategy that focuses on boosting the production and commercialization of knowledge; improving the skill level of workers through expanding activities such as adult learning, producing more students with advanced degrees, and revising immigration policies; improving the environment for innovation by building in tax and regulatory competitiveness; and strengthening communities by promoting the growth of high-tech clusters, among other actions. As part of its effort, Canada set a goal to rise from 15th to 5th among countries in the Organization for Economic Cooperation and Development (OECD) in its ratio of R&D to gross domestic product by 2010. South Korea set a goal in 1997 to raise R&D’s share of the government’s budget from 3.6% to 5%, and the figure already has hit 4.7%. Many other nations have set similar goals. As a result, whereas investments in R&D as a share of gross domestic product actually decreased in the United States from 1992 to 2002, comparative investment levels increased in most other nations, including Japan (15%), Ireland (24%), Canada (33%), Korea (51%), Sweden (57%), China (66%), and Israel (101%).

The seriousness of these competitors also is evident in the statistics for R&D tax credits. When the United States adopted its R&D tax credit (a 20% credit on the incremental increases in research investments) in the early 1980s, it was a policy leader and had the most generous tax treatment of R&D among OECD nations. But today, while Congress debates whether to make the credit permanent (or even whether to extend it for a few years), many other nations have forged ahead to provide much more generous tax treatment of R&D. The result is that by 2004 the United States ranked 17th among OECD nations in tax treatment of R&D. For example, the United Kingdom and Australia provide what is equivalent to a 7.5% flat credit on R&D, meaning that their effective credit is almost twice that of the United States. Japan’s credit is almost three times as generous as that of the United States, and for small companies, Japan’s credit is four times as generous. China provides a 150% deduction on R&D expenses, provided that R&D spending increased 10% over the prior year. Canada, in an explicit effort to attract U.S. corporate R&D, is even more generous. Large companies are eligible for a flat 20% credit and small firms can receive a 35% credit. In many provinces, equally generous credits can be added on. Even France, a nation that many pundits deride as a socialist basket case, has acted with resolve, adopting in 2004 a credit essentially equivalent to a 40% incremental R&D tax credit.

Given the generosity of these tax policies, it is perhaps not surprising that U.S. majority–owned affiliates have been investing twice as fast in foreign countries as they have been in the United States during most of the past decade. Many of these projects are in developing nations. The United Nations reports, for example, that of 1,773 “greenfield” R&D projects set up between 2002 and 2004, more than half (953) were from companies in developed countries establishing projects in developing nations, with 70% of these in China and India.

In response to such developments, some observers not only minimize the competitive challenge to the United States but actually define it away, claiming that countries do not really compete against each other. Mallaby expressed this widely held view when he wrote in the Washington Post in early 2006: “The science lobby should also stop pretending that countries compete the same way companies do . . . the ‘China threat’ argument ignores the ways that competition between countries, unlike companies, is a positive-sum game.”

To be sure, there are aspects of competition between nations that are beneficial. But it also seems clear that if other nations move up the value chain to high value–added innovation-based economic activities, the United States will pay at least some cost. Even with continued entrepreneurial innovation and scientific progress, worldwide demand for software, airplanes, pharmaceuticals, microelectronics, instruments, and other high value–added goods and services is not unlimited. For the same reason that companies want to be in these higher-margin businesses, so too do countries. As a result, whereas the conventional approach to competition (firms compete, countries do not) provides some important insights, it is simply not an adequate guide to explaining how nations achieve or sustain competitive advantage, particularly in an economy driven by knowledge and innovation.

This view of competition not only serves to minimize the importance of the challenge, it also confines the scope and character of policy proposals in response. According to this view, if U.S. aviation, machine tool, semiconductor, or software firms lose in competition to firms in other nations, or if U.S. firms move high value–added facilities to other nations, all will be well as long as the United States maintains flexible labor and capital markets. The “lost” resources simply will flow into other industries, creating new firms in more innovative and higher value–added sectors.

Policies promoting competitiveness

If this view accurately describes today’s economic environment, then many of the recommendations proposed in Washington today, such as boosting education and training, ensuring an adequate supply of engineers, and helping displaced workers, will suffice. (This assumes that the nation’s political leaders have the will to implement them effectively, which is no small task.) In this scenario, if the United States loses domestic high value–added innovation-based production to foreign competition, U.S. workers will have the skills to take advantage of new opportunities.

But what if the conventional view is not sufficient to explain industrial and economic change, particularly in an economy in which knowledge is increasingly the major factor of production? What if a significant share of knowledge is embedded in organizations, not just in individual workers? What if there are significant “spillovers” from firm activities? What if there are considerable “first-mover” advantages, including learning effects, which let firms translate early leads into dominant positions? What if there are significant network effects that mean that advancement in one industry (say, broadband) results in advancement in a host of others (such as Internet video or telemedicine)? What if lost higher value–added activities end up being replaced with lower value–added ones? What if, when you lose it, you cannot easily recover it?

I would argue that these factors more accurately describe the workings of the 21st-century knowledge-based global economy. Accordingly, a better guide to today’s economic reality can be found in the disciplines of what some observers call evolutionary or growth economics. In such models, losing corporate competitions in knowledge-based industries means losing much more than just the firms. It means losing deeply embedded knowledge that is hard to replicate. It means that it can be very difficult to recreate value from the dispersed pieces represented by unemployed workers, used machinery, and underutilized suppliers. Perhaps the simplest way to put it is this: If the United States were to lose a company such as Boeing, the nation likely could not rely on market forces, even a dramatic drop in the value of the dollar, to later recreate a domestic civilian aviation industry. To do so would require recreating not just the firm, but its complex web of suppliers, professional associations, university programs in aviation engineering, and other knowledge-sharing organizations.

In this view, a robust national competitiveness policy needs to be grounded in a simple understanding: Like it or not, in an increasingly global economy most nations enact policies designed to tilt corporate investment decisions in their favor. This means that the United States needs to develop a comprehensive competitiveness policy focused on ensuring that innovative activities, as well as innovative people, are attracted to, stay in, and grow in the United States.

Part of this policy, of course, must focus on accelerating government funding of frontier research and improving education at all levels to ensure that U.S. workers have the skills needed for high-wage jobs. Toward these ends, policymakers already have taken a number of steps or proposed programs and initiatives. President Bush proposed increasing research funding for the physical sciences by $50 billion over 10 years, calling for large increases at NSF, the National Institute of Standards and Technology, and the Department of Energy. The National Innovation Act of 2005, introduced by Sen. John Ensign (R-NV) and Sen. Joe Lieberman (I-CT), includes a number of measures to boost spending on science and math education and authorizes the doubling of the NSF budget. Another bipartisan Senate proposal, the Protecting America’s Competitive Edge Acts, also would boost funding for science and math education and federal support for research.

Congress should enact and fully fund these and other related measures. But even if policymakers do so, they should not think they are done with competitiveness and can move on to other matters. Winning the new global competitiveness race will require at least a decade of careful attention to the issue by government leaders, businesses, and universities. In particular, four steps will be critical for the next phase of the competitiveness agenda.

Work to create a global trade regime based on markets, not mercantilism. Companies in the United States, no matter how innovative and lean, now find it difficult to expand innovation-based activities domestically because many other nations are not competing on a level playing field. Many nations, particularly in Asia, are practicing what might be called market mercantilism: putting in place liberalized investment rules coupled with a host of other policy actions—some legitimate, some distorting and illegitimate—to attract foreign investment and boost domestic innovation-based growth.

For these nations, achieving an “innovation economy” is the goal at any cost. They do not want to wait the 20 or more years it takes to get there if they limit their policy actions to legitimate means, such as boosting university research, passing strong intellectual property protection rules, and investing in infrastructure and skills. Rather, they take a shortcut, turning a blind eye while domestic firms (and sometimes government agencies themselves) steal foreign intellectual property, pressure foreign firms to share intellectual property in order to gain access to their consumer markets, manipulate standards to favor domestic firms, and engage in massive government intervention to keep the value of their currencies below what the market would otherwise produce. When China pressures U.S. companies to open R&D laboratories as a quid pro quo for selling in the Chinese market, that is not capitalism; it is mercantilism. When 70% of the software used in India is pirated, that is mercantilism. When Japan’s central bank engages in massive purchases of the dollar to keep the value of the yen low and thus artificially lower prices of Japanese exports, that is mercantilism. When the European Union reclassifies information technology (IT) products under its Combined Nomenclature rules so that it can engage in a back-door exercise to raise tariffs on U.S. products, that is mercantilism. Such steps not only violate the spirit and the letter of global trade agreements, they seek to substitute the actions of government for the allocative efficiencies of markets, leading to a global misallocation of resources and lower global productivity.

As a result, the United States must work harder to ensure that national economic development strategies around the world are based on positive-sum strategies such as investing more in science and technology, building infrastructure, and boosting education, and not on negative-sum mercantilist strategies. Competition to see who has the best university system, the largest share of scientists and engineers, the best broadband infrastructure, and the best system for protecting intellectual property makes all nations better. Therefore, the United States should continue to push for expanded global market integration and reduction of tariffs and other nontariff barriers, while at the same time working with the World Trade Organization and other international bodies to move the world trading system to one based more on markets and less on mercantilism.

To complement such outward-looking efforts, the federal government needs to take even more robust steps to improve the nation’s competitive readiness. This means supporting more basic research and expanding the domestic supply of skilled workers. But it also means that the government should take steps to make it more likely that companies invest in innovation-based activities domestically, particularly by addressing the cost differential between the United States and “low-wage countries that cram on science.”

Overhaul the corporate tax code to spur innovation. The tax code can be a powerful tool not only for boosting innovation but for helping level the playing field between the United States and other nations, particularly lower-wage nations and those that manipulate their currency levels. Accordingly, the government should create a new knowledge tax credit that allows companies to take a 40% credit on incremental increases in expenditures on research and experimentation, global standards-setting, and workforce training. Companies could take the credit if their R&D-to-sales ratio had increased over a defined prior base period. Companies not meeting this requirement still would be allowed to take a credit equaling 10% of research and training expenses that exceed 60% of research expenses in the prior base period. The Senate PACE-Finance legislation would be an important step forward, calling for a doubling of the R&D tax credit to 40%.
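
To make the arithmetic concrete, here is a minimal sketch of the two-tier credit as described above. The function name, the dollar figures, and the collapsing of qualifying expenses into a single number are hypothetical simplifications, not part of the proposal itself.

```python
# Hypothetical illustration of the proposed knowledge tax credit, simplified
# to the two cases described above. All figures are invented.

def knowledge_credit(qualifying_spend, base_spend, rd_to_sales, base_rd_to_sales):
    """qualifying_spend: current expenditures on research, standards-setting,
    and training; base_spend: the same expenditures in the prior base period."""
    if rd_to_sales > base_rd_to_sales:
        # 40% credit on the incremental increase in qualifying expenditures.
        return 0.40 * max(qualifying_spend - base_spend, 0)
    # Otherwise, 10% credit on expenses exceeding 60% of the base-period level.
    return 0.10 * max(qualifying_spend - 0.60 * base_spend, 0)

# A firm that raised its R&D intensity and spent $50M against a $40M base:
print(knowledge_credit(50e6, 40e6, rd_to_sales=0.08, base_rd_to_sales=0.06))  # $4M
# A firm whose R&D intensity fell, spending $50M against a $40M base:
print(knowledge_credit(50e6, 40e6, rd_to_sales=0.05, base_rd_to_sales=0.06))  # $2.6M
```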

But even more is needed. The government should create a flat 40% credit for company expenditures on research at universities, federal laboratories, and research consortia and on support for education and training in U.S. schools and universities. One reason for this more generous collaborative R&D credit is that more of the benefits of collaboration spill over to the economy than is the case with proprietary in-house R&D. The additional cost for this new knowledge credit would be approximately $22 billion per year.

In order to pay for the new tax incentives, Congress could institute a modest business activity tax (BAT) of the kind proposed by Gary Hufbauer at the Institute for International Economics. As a consumption tax, the BAT would be levied on all domestic sales of goods and services less purchases from other U.S. firms that are also subject to the BAT. Purchases of all intermediate materials and raw materials from firms that have already paid the BAT on their value-added would thus be exempted, and purchases of software and equipment would be exempt and thus effectively expensed. Such a tax not only would pay for these and other tax incentives to spur innovation and investment but would do so in a way that would be “border-adjustable”—that is, imports would also be subject to the tax and exports would be exempt—in contrast to current corporate taxes that are not.
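
As a rough numerical sketch of how the BAT base described above would be computed (the firm and its figures are invented, and the taxation of imports at the border is left out):

```python
# Hypothetical sketch of the business activity tax (BAT) base described above.
# All figures are invented for illustration.

def bat_base(domestic_sales, purchases_from_bat_firms, software_and_equipment):
    # Exports are exempt (the tax is border-adjustable), so only domestic sales enter.
    # Purchases from U.S. firms that have already paid the BAT on their value-added
    # are subtracted, and software and equipment purchases are effectively expensed.
    return domestic_sales - purchases_from_bat_firms - software_and_equipment

# A firm with $80M of domestic sales, $40M of BAT-paid inputs, and $10M of
# software and equipment would be taxed on a $30M base:
print(bat_base(80e6, 40e6, 10e6))  # 30,000,000.0
```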

Create new research partnerships. Simply spending more money on R&D will not be enough. One of the key lessons from the policy innovations of the 1980s and 1990s was the importance of “institutional innovation.” For example, the Bayh-Dole Act opened up a whole new avenue for increasing the commercialization of university research. Thus, the government needs to envision and implement new models of innovation partnerships. It is not enough simply to fund more proposals from individual investigators, although that will be important. The government must do more to boost university/industry partnerships and to mobilize collective talents around key technological challenges. This is needed, in part, because there are still large gaps between the for-profit research community and the nonprofit research community, which includes universities, hospitals, and federal labs, among others. The for-profit community does not always know what capabilities and results the nonprofit research community has produced, or could produce, that would be useful, whereas nonprofits often do not fully understand industry’s needs.

To help bridge this divide, Congress should establish an Industry Research Alliances Challenge Grant initiative to coinvest with industry-led research alliances. Industry members would establish technology “road maps” and use them to make targeted investments in research conducted at universities or federal laboratories. This initiative would increase the share of federally funded university and laboratory research that is market-relevant, and in so doing better adjust the balance between curiosity-directed research and research more directly related to societal needs. To jump-start this, the federal government should provide $2 billion per year to fund up to 100 industry/university research alliances. To be eligible for funding, industry-led consortia would have to include at least 10 firms, agree to identify generic science and technology needs that the firms share, provide support that at least matches federal funds, and invest the funds in universities and federal laboratories through a competitive selection process. Such a process would not entail the government “picking winners and losers,” because industry, in conjunction with academic partners, would identify the broad technology areas critical for research.

The government also needs to do more to build viable state/federal innovation partnerships. Historically, the federal innovation system has focused on larger firms and on the 30 or so largest research universities. But in the new economy, entrepreneurial startups and small and medium-sized enterprises are playing an increased role in the nation’s innovation system. Moreover, many colleges and research universities not among the “top 30” have developed significant science and technology strengths and play key roles in working with industry in their regions.

States are well positioned to work with these kinds of firms and universities, and each state has in fact developed initiatives to promote technology-based economic development. But because the benefits of innovation typically spill over state borders, states invest less in innovation-based economic development than is in the national interest. Congress could encourage states to focus more on technology-based economic development by appropriating $1 billion annually for a competitive matching grant fund to co-invest in state-supported technology-based initiatives.

Make digital transformation of the economy within 10 years a national goal. The digital economy—that is, the ubiquitous use of IT in all applications and industries that can be digitized—is the source of all of the recent rebound in productivity growth. Moreover, accelerating digital transformation, particularly in the service sector, will be a key driver in the future not only of economic growth but of progress in an array of areas, including education, environmental protection, government, health care, homeland security, law enforcement, and transportation. Unfortunately, a number of market problems have caused some bottlenecks in this transformation. Problems have included classic “chicken-or-egg” dynamics of product deployment, as well as active industry resistance from some sectors threatened with digital disintermediation. This lag in digital transformation is especially visible in the health care sector, though many other sectors, including education, much of government, construction, and transportation, also have fallen behind. Moreover, in a growing number of IT application areas, including deployment and adoption of broadband telecommunications, the United States lags behind other nations. To catalyze advances, the government needs to develop tax, regulatory, procurement, and other policies not only to remove a host of barriers to digital transformation but also to encourage companies, nonprofit organizations, governments, and individuals to catch the coming digital wave.

Action is needed on each of these fronts—now. In 1942, with the first inklings that the war effort might finally be turning the Allies’ way, Winston Churchill famously proclaimed: “This is not the end. It is not even the beginning of the end. But it is, perhaps, the end of the beginning.” Perhaps recent times can be viewed similarly. With all the yeoman’s work that has highlighted the importance of the competitiveness issue, perhaps it is the end of the beginning. Now the nation must redouble its efforts to see that rhetoric is translated into action.

Archives – Winter 2007

Blue Mesa, Utah

James Sanborn is noted for his work with American stone and related materials that evoke a sense of mystery and the forces of nature. He is probably best known for the Kryptos sculpture installed at Central Intelligence Agency headquarters in 1990, which displays encrypted messages that continue to stump code-breakers to this day. From his series of topographical projections, Sanborn created this piece at Blue Mesa, Utah, in 1995. The repeated word “lux,” Latin for “light” and also the International System unit of illuminance, is projected onto the landscape.

Sanborn has exhibited at the High Museum of Art, the Los Angeles County Museum of Art, the Phillips Collection, and the Hirshhorn Museum. He has been commissioned to create artwork for the Massachusetts Institute of Technology, the Central Intelligence Agency, and the National Oceanic and Atmospheric Administration. Sanborn was born in 1945 in Washington, D.C., and raised in Alexandria, Virginia.

Don’t Know Much Trigonometry

A new poll revealed that 86% of Americans are aware that China and India are working to produce more workers with technical skills, and only 49% believe that the United States would rank at or near the top of the global economy 20 years from now. In addition, 70% said that general science and math skills would be “very important” for college graduates “in all areas of study in the 21st century,” but only 46% said that students should be required to study more science and math in college. The survey of 1,000 registered voters was conducted by the Winston Group for the American Council on Education.

The same week the poll was released, the Brookings Institution’s Hamilton Project, which is devoted to crafting a policy strategy that will not only spur overall economic growth but also expand opportunities for all Americans, presented a panel discussion that included Harold Varmus, Lawrence Summers, Robert Rubin, and former Compaq CEO Michael Capellas, who explained that science and technology are critical to innovation and a healthy economy. Their collective brainpower was hardly necessary to identify the obvious. Few Americans would argue against the need for a strong foundation in science and technology. Where we need insight is in convincing Americans that they need to do something to nurture research and reap the benefits.

The country has developed an effective strategy for building a highly skilled cadre of industry researchers and university faculty. It’s called immigration. Whether this approach will be successful in the future remains to be seen.

Research is necessary but not sufficient to sustain a world-leading economy. Innovation requires a much more complex social fabric to succeed. Scientifically and technologically literate people are needed in courtrooms and elementary school classrooms, in Congress and in statehouses, on factory floors and in customer service centers, in corporate boardrooms and Wall Street financial firms, in nursing homes and in operating rooms. Only when knowledge pervades all aspects of the nation’s life will it be able to mine the full value of breakthroughs achieved at the frontiers of human understanding of the natural world and of humans themselves.

The polling data tell us that we have not yet convinced the general public that knowledge of science and technology should not be the protected domain of the research elite. The awe-inspiring complexity of modern science and engineering is repelling as well as impressing the public. How can anyone be expected to even begin to understand what is well understood by only a handful of people who have devoted their entire lives to study? The challenge for the scientific and engineering community is to identify the body of knowledge that is useful and accessible to the public and to smaller groups of people who have a need for more knowledge in specific areas.

Encouraging signs of new approaches are appearing. New science museums, increasingly sophisticated websites with medical information and research news, and the new professional master’s degree in science are examples of effective innovations. The need is obvious, and the work is clearly unfinished.

Forum – Winter 2007

The future of nuclear deterrence

Re: “Nuclear Deterrence for the Future” (Thomas C. Schelling, Issues, Fall 2006). I add some comments that derive from my work with nuclear weapon technology and policy since 1950. More can be found at my Web site, www.fas.org/RLG/.

To add to Schelling’s brief sketch of the attitudes of various presidents toward nuclear weapons: Ronald Reagan had a total aversion to nuclear weapons, and Jimmy Carter was not far behind. In both cases, aides and other government officials managed to impede presidential initiatives toward massive reductions of nuclear weaponry. A more complete discussion has just appeared (James E. Goodby, At the Borderline of Armageddon: How American Presidents Managed the Atom Bomb, reviewed by S. M. Keeny Jr., “Fingers on the Nuclear Trigger,” in Arms Control Today, October 2006; available at http://www.armscontrol.org/act/2006_10/BookReview.asp).

As a member of the National Academy of Sciences’ Committee on International Security and Arms Control (CISAC) since it was created in 1980 for bilateral discussions with Soviet scientists, I share Schelling’s view of the importance of such contacts. CISAC’s bilaterals have since been expanded (1988) to similar discussions with the Chinese and (1998) with India. In addition, CISAC has published significant studies over the years (see www7.nationalacademies.org/cisac/). Would that more in the U.S. government had read them!

The August 1949 Soviet nuclear test intensified pressures for the deployment of U.S. defenses and counter-force capability, but it was clear that these were intended to deter attack, not to provide direct defense of the U.S. population.

But if deterrence is the strategy for countering nations, the U.S. stockpile is in enormous excess at perhaps 12,000 nuclear weapons, of which 6,000 may be in deliverable status. As observed by former Defense Secretary Les Aspin, “Nuclear weapons are the great equalizer, and the U.S. is now the equalizee.” It gives false comfort, not security, that the U.S. nuclear weapons stockpile is much greater than that of others.

Deterrence of terrorist use of nuclear weapons by threat of retaliation against the terrorist group has little weight, when we are doing everything possible to detect and kill the terrorists even when they don’t have nuclear weapons. Instead, one should reduce the possibility of acquisition of weapon-usable materials by a far more aggressive program for buying highly enriched uranium (HEU) at the market price (because it can readily be used to power nuclear reactors when it is diluted or blended down) and for paying the cost of acquiring and disposing of worthless excess weapon plutonium (Pu) from Russia as well—worthless because the use of free Pu as reactor fuel is more costly than paying the full price for uranium fuel.

The enormous stocks of “civil Pu” from reprocessing operations in Britain, France, and soon Japan would suffice to make more than 10,000 nuclear warheads. There is no reason that these should not be adequately guarded, with sufficient warning time in case of a massive attack by force so as to keep stolen Pu from terrorist hands, but it is not clear that this is the case.

Weapon-usable material might be obtained from Pakistan, where substantial factions favor an Islamic bomb and where both HEU and Pu are in the inventory. Dr. A. Q. Khan ran an active program to sell Pakistani equipment and knowledge, including weapon designs, to Libya, North Korea, and probably Iran; he was immediately pardoned by President Musharraf, and neither U.S. nor international investigators have been allowed to question him.

One terrorist nuclear explosion in one of our cities might kill 300,000 people, 0.1% of the U.S. population. While struggling to reduce the probability of such an attack, we should also plan for such losses. Otherwise, by our own reaction, we could destroy our society and even much of civilization, without another nuclear weapon being used.

U.S. ratification of the Comprehensive Test Ban Treaty would be a step forward in encouraging rationality and support for U.S. actions, and direct talks between the U.S. government and those of North Korea and Iran are long overdue. For those who believe in the power of ideas, the ability to communicate them directly to influence others is an opportunity that should not have been rejected.

RICHARD L. GARWIN

IBM Fellow Emeritus

Thomas J. Watson Research Center

Yorktown Heights, New York

RLG2 at us.ibm.com


Thomas C. Schelling has been a dominant figure in the development of the theory of nuclear deterrence and its implications for strategic policy for almost the entire history of nuclear weapons. It is our great fortune that he continues to be a thoughtful commentator on the subject as the spread of nuclear capability threatens to undercut the restraint that has prevented the use of these weapons since 1945. Schelling is right to point out how close the world came to catastrophe several times during that period, averted by the wisdom, or luck, of sensible policies that made evident the disaster that would be unleashed if deterrence failed.

Even in the face of growing unease as more countries acquire or threaten to acquire nuclear arms, he continues to believe that deterrence can be maintained and extended to cover the new players. But he rightly points out that that will not happen without policies, especially U.S. policies, that demonstrate the case for continued abhorrence of their use. He argues that we should reconsider the decision not to ratify the Comprehensive Test Ban Treaty (CTBT), a treaty that would add to the psychological pressure against the use of the weapons. And he says that we should not talk about possible circumstances in which we would use them ourselves, implying that we might do so, and that we should be quiet about the development of any new nuclear weapons.

But I worry that Schelling is too restrained, or polite, in his discussion of these aspects of U.S. policy. I see nothing from this administration supporting CTBT ratification, nor do I see a groundswell of U.S. public opinion pressing for it. Moreover, from articles in the press and Washington leaks, it is my impression that the administration is far down the road of committing the nation to the development of new forms of nuclear weapons, so-called “bunker busters” being the most publicized example. It is hard for me to imagine any single weapons development decision that would do more to undermine America’s (and the world’s) security. Such a move would be tantamount to saying that the weapons can be useful; other nations would get the message. Schelling rightly states that “The world will be less safe if the United States endorses the practicality and effectiveness of nuclear weapons in what it says, does, or legislates.” But that is where we appear to be headed.

Schelling also believes that much can continue to be accomplished to keep deterrence viable through the education of leaders in apolitical settings in which national security can be seen in perspective and where the exposure of national leaders to the disastrous effects of the use of nuclear weapons can most effectively take place. He singles out CISAC (the Committee on International Security and Arms Control) of the National Academy of Sciences as one such setting, and emphasizes that not only our own leaders but also those of Iran and North Korea must be included, difficult as that may be.

It is essential that Schelling’s views be given the attention they deserve if deterrence is to remain “just as relevant for the next 60 years as it was for the past 60 years.”

EUGENE B. SKOLNIKOFF

Professor of Political Science, Emeritus

Massachusetts Institute of Technology

Cambridge, Massachusetts


The Nobel Prize–winning economist Thomas C. Schelling has published an insightful analysis in Issues. He emphasizes the continuing value of the deterrence concept in the post–Cold War world, notwithstanding the profound changes in the international order.

Schelling refers to the past contributions of the National Academy of Sciences’ CISAC—the Committee on International Security and Arms Control. We would like to supplement Schelling’s account with remarks on CISAC’s past role and future potential contribution. In particular, Schelling refers to the numerous international meetings organized by CISAC among scientists of diverse countries who share a common interest in both science and international security affairs. In fact, CISAC was established as a standing committee of the NAS for the explicit purpose of continuing and strengthening such dialogues with its counterparts from the Soviet Union. It has since broadened these dialogues to include China and India and has originated multilateral conferences on international security affairs within the European Community. Moreover, separate from such international activities, CISAC has conducted and published analyses of major security issues such as nuclear weapons policy and the management of weapons-usable material (http://www.nas.edu/cisac).

It is important to neither overstate nor understate the significance of contacts among scientists of countries that may be in an adversarial relationship at a governmental level but share expertise and constructive interest in international security matters. CISAC’s meetings with foreign counterparts are not, and could not be, negotiations. Rather, the meetings are conducted in a problem-solving spirit; no common agreements are documented or proclaimed, but each side briefs its own government on the substance of the discussions. Thus the bilateral discussions have injected new ideas into governmental channels, at times with substantive and constructive consequences.

Schelling proposes that CISAC should extend this historical pattern to include a counterpart group from Iran. Although the NAS is currently involved in various forms of interactions with Iranian science, these have not included contacts by CISAC or other groups with Iranian scientists in the field of arms control and international security. We believe that, based on CISAC’s experience in the past, such contacts would be of value in displacing some of the public rhetoric with common understanding of the scientific and technical realities.

W. K. H. PANOFSKY

Emeritus Chair

RAYMOND JEANLOZ

Chair

Committee on International Security and Arms Control

National Academy of Sciences

Washington, DC


Bioscience security issues

In his comprehensive discussion of the problem of “Securing Life Sciences Research in an Age of Terrorism” (Issues, Fall 2006), Ronald M. Atlas rightly closes by noting that “further efforts to establish a culture of responsibility are needed to ensure fulfillment of the public trust and the fiduciary obligations it engenders to ensure that life sciences research is not used for bioterrorism or biowarfare.” This raises the question of whether practicing life scientists are able to generate a culture of responsibility.

As we move toward the sixth review conference of the Biological and Toxin Weapons Convention (BTWC) in Geneva (November 20 to December 8, 2006), we can be sure that attention will be given to the scope and pace of change in the life sciences and the need for all to understand that any such developments are covered by the prohibitions embodied in the convention.

Yet despite the acknowledgement at previous Review Conferences of the importance of education for life scientists in strengthening the effectiveness of the BTWC, and despite the encouragingly wide participation of life scientists and their international and national organizations in the 2005 BTWC meetings on codes of conduct, there is still, as a number of States Parties noted then, a great need among life scientists for awareness-raising on these issues so that their expertise may be properly engaged in finding solutions to the many problems outlined by Atlas. My own discussions in interactive seminars carried out with my colleague Brian Rappert, involving numerous scientists in the United Kingdom, United States, Europe, and South Africa over the past two years, have impressed on me how little knowledge most practicing life scientists have of the growing concerns in the security community about the potential misuse of life sciences research by those with malign intent (see M. R. Dando, “Article IV: National Implementation: Education, Outreach and Codes of Conduct,” in Strengthening the Biological Weapons Convention: Key Points for the Sixth Review Conference, G. S. Pearson, N. A. Sims, and M. R. Dando, eds. (Bradford, UK: University of Bradford, 2006), 119–134; available at www.brad.ac.uk/acad/sbtwc).

I therefore believe that a culture of responsibility will come about only after a huge educational effort is undertaken around the world. Constructing the right educational materials and ensuring that they are widely used will be a major task, and I doubt that it is possible without the active participation of the States Parties to the Convention. It is therefore to be hoped that the Final Declaration of the Review Conference includes agreements on the importance of education and measures, such as having one of the inter-sessional meetings before the next review in 2011 consider educational initiatives, to ensure that a process of appropriate education takes place.

MALCOLM DANDO

Professor of International Security

Department of Peace Studies

University of Bradford

Bradford, United Kingdom


As Ronald M. Atlas’ excellent discussion shows, it is important for the bioscience and biotechnology communities to do whatever they can—such as education and awareness, proposal review, pathogen and laboratory security, and responsible conduct—to prevent technical advances from helping those who deliberately intend to inflict harm. It is equally important to assure policy-makers and citizens that the technical community is taking this responsibility seriously.

Making a real contribution to the first of these problems, however, will be very difficult. New knowledge and new tools are essential to fight natural, let alone unnatural, disease outbreaks; to raise standards of living; to protect the environment; and to improve the quality of life. We cannot do so without developing and disseminating capabilities that will inevitably become available to those who might misuse them. It is hard to imagine that more than a tiny fraction of proposals with scientific merit, or publications that are technically qualified, will be foregone on security grounds. Research in a “culture of responsibility” will proceed with full awareness of the potential risks, but it will proceed.

Why, then, pay the overhead involved in setting up an oversight structure? First, it may occasionally work. Despite the difficulty of predicting applications or consequences, occasionally an investigator may propose an experiment that has security implications that a group of suitably composed reviewers cannot be persuaded to tolerate. Second, this review and oversight infrastructure will be essential to dealing with the second, and more tractable, of the problems described above: that of retaining the trust of the policy community and avoiding the “autoimmune reaction” that would result if policymakers were to lose faith in the scientific community’s ability to monitor, assess, and govern itself. Overbroad and underanalyzed regulations, imposed without the science community’s participation or support, are not likely to provide security but could impose a serious price. Whenever “contentious research” with weapon implications is conducted that raises questions regarding how or even whether it should have been performed at all, scientists will need to be able to explain the work’s scientific importance, its potential applications, and why any alternative to doing that work would result in even greater risk.

Self-governance is an essential part of this picture—not because scientists are more wise or more moral than others, but because the subject matter does not lend itself to the discrete, objective, unambiguous criteria that binding regulations require. The pervasively dual-use nature of bioscience and biotechnology, and the great difficulty in foreseeing its application, make it impossible to quantitatively score a research proposal’s potential for good and for evil. Informed judgment will require expert knowledge, flexible procedures, and the ability to deal with ambiguity in a way that would be very difficult to codify into regulation.

One of the key challenges in this process will be assuring a skeptical society that self-governance can work. If a self-governed research enterprise ends up doing exactly the same things as an ungoverned research enterprise would have, the political system will have cause for concern.

GERALD L. EPSTEIN

Senior Fellow for Science and Security

Homeland Security Program

Center for Strategic and International Studies

Washington, DC


A better war on drugs

Jonathan P. Caulkins and Peter Reuter provide a compelling analysis of the misguided and costly emphasis on incarcerating drug offenders (“Reorienting U.S. Drug Policy,” Issues, Fall 2006). But the effects of such a policy go well beyond the individuals in prison and also extend to their families and communities.

As a result of the dramatic escalation of the U.S. prison population, children in many low-income minority communities are now growing up with a reasonable belief that they will face imprisonment at some point in their lives. Research from the Department of Justice documents the fact that one of every three black male children born today can expect to go to prison if current trends continue. Whereas children in middle-class neighborhoods grow up with the expectation of going to college, children in other communities anticipate time in prison. This is surely not a healthy development.

Large-scale incarceration also results in a growing gender gap in many urban areas. In high-incarceration neighborhoods in Washington, DC, for example, there are only 62 men for every 100 women. Some of the “missing” men are deceased or in the military, but many are behind bars. These disparities have profound implications for family formation and parenting.

Policy changes of recent years have imposed increasing penalties on drug offenders in particular, ones that often continue even after a sentence has been completed. Depending on the state in which one lives, a felony drug conviction can result in a lifetime ban on receiving welfare benefits, a prohibition on living in public housing, and ineligibility for higher-education loans. Such counterproductive policies place barriers to accessing services that are critical for the reintegration of offenders back into the community. And notably, these penalties apply to drug offenses, and only drug offenses. Such is the irrational legacy of the political frenzy that created the “war on drugs,” whereby evidence-based responses to social problems were replaced by “sound-bite” policies.

Finally, although Caulkins and Reuter provide a sound framework for a vastly reduced reliance on incarceration for drug offenders, we should also consider how to reduce our reliance on the criminal justice system as a whole. This is not necessarily an argument regarding drug prohibition, but rather an acknowledgement that there are a variety of ways by which we might approach substance abuse. In communities with substantial resources, for example, substance abuse is generally addressed as a public health problem, whereas in disadvantaged neighborhoods public policy has emphasized harsh criminal justice sanctions. Moving toward a drug-abuse model that emphasizes prevention and treatment for all would result in an approach that is both more compassionate and more effective.

MARC MAUER

Executive Director

The Sentencing Project

Washington, DC


Jonathan P. Caulkins and Peter Reuter draw on their extensive expertise to give us a sobering assessment of the legacy of the war on drugs. The massive size of government expenditures alone casts doubt on this war’s cost-effectiveness. Expanding consideration to the full social costs, including HIV/AIDS risk from both illicit intravenous drug use and within-prison sexual behavior, would paint an even more distressing picture about the appropriateness of current policy.

The authors’ assessment of the social desirability of incarcerating many fewer people caught up in the drug war makes sense. Yet some obvious mechanisms for achieving a reduction in the number of people incarcerated, such as decriminalizing marijuana possession and reducing the length of sentences for drug-related offenses, are unlikely to be politically feasible even with state corrections costs squeezing out other valued services; there are just too few politicians with the courage to address the issues raised by Caulkins and Reuter.

The forced abstinence advocated by Mark Kleiman seems to be one of the most promising approaches. However, its implementation runs up against the barrier of what has often been called the “nonsystem” of criminal justice. Felony incarceration is a state function, whereas criminal justice is administered at the local level. Sending the convicted to prison involves no direct cost for counties; keeping them in a local diversion program does. Without diversion options, judges will continue to send defendants who would benefit from drug treatment and enforced abstinence to prison.

How might this barrier be overcome? One possibility would be to give counties a financial incentive and a financial capability to create appropriate local alternatives for drug offenders and other low-level felons. This could be done by giving each county an annual lump-sum payment related to a historical average of the number of state inmates from that county serving sentences for these low-level felonies. The county would then have to pay a fee to the state proportional to its current stock of inmates serving such sentences. The fees would be set so that a county that continued business as usual would see annual payments to the state approximately equal to the lump-sum grant. Counties that found ways to divert low-level felons to local programs, however, would gain financially because they would be placing fewer inmates in state facilities. These potential savings (avoided fees) would provide an incentive to innovate as well as a revenue source for funding the innovation. Almost certainly, drug treatment and enforced abstinence programs would be among those selected by innovative counties. One can at least imagine legislators having the political courage to try such an approach.
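
A back-of-the-envelope sketch may make the incentive mechanism clearer. The per-inmate fee and the county figures below are invented; the essential feature is that the grant is fixed by history while the fee tracks current incarceration, so diversion pays.

```python
# Hypothetical illustration of the county incentive described above.
# The lump-sum grant is pegged to a historical average of low-level felony
# inmates from the county; the fee is proportional to the current stock.

FEE_PER_INMATE = 25_000          # assumed annual fee paid to the state per inmate
HISTORICAL_AVG_INMATES = 400     # assumed historical average for this county

lump_sum_grant = FEE_PER_INMATE * HISTORICAL_AVG_INMATES   # $10.0M per year

def county_net_position(current_inmates):
    """Annual net gain (or loss) to the county relative to business as usual."""
    fees_owed = FEE_PER_INMATE * current_inmates
    return lump_sum_grant - fees_owed

print(county_net_position(400))  # business as usual: net 0
print(county_net_position(300))  # divert 100 inmates locally: keep $2.5M for programs
```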

DAVID L. WEIMER

Professor of Political Science and Public Affairs

Robert M. La Follette School of Public Affairs

University of Wisconsin–Madison


The Caulkins/Reuter piece is a wonderful, compressed map of what the policy world of drugs would look like without the combined inertia of large, self-interested organizations and well-fed public fears. No major landmark is missing. The argument for reducing prison sentences avoids being naïve about who is in prison for what, but pours out powerfully in terms of the limited benefits bought for the dollars and lives that our present policies exact. Even the national focus on terrorism and a stunning shortage of budget dollars for domestic needs have not prevented tens of billions of dollars from being wasted.

Caulkins and Reuter move to the frontiers of policy in two places. First, they urge us to take seriously Mark Kleiman’s case for coercing abstinence by individual checks on present-oriented users on parole or probation. Kleiman has now added to this prescription a remarkable and promising theory of how to best allocate monitoring capacity among addicts subject to court control.

The authors mention only in passing the huge increase in adolescent use of highly addictive prescription opioids, even though these drugs are used by roughly 8 to 10% of high school seniors, an order of magnitude beyond heroin use by this group. Some significant and growing part of that use comes in the form of drugs that are available from abroad and are advertised on the Internet. Reducing the effects of this global form of business structure will require new steps by Internet service providers and credit card companies as well as far more imagination by the Drug Enforcement Administration.

PHILIP HEYMANN

James Barr Ames Professor of Law

Harvard Law School

Cambridge, Massachusetts


U.S. aeronautics

Todd Watkins, Alan Schriesheim, and Stephen Merrill’s “Glide Path to Irrelevance: Federal Funding for Aeronautics” (Issues, Fall 2006) fairly depicts the state of U.S. federal investment in this sector. For most of the past century, the United States has led the world in “pushing the edge of the aeronautics envelope,” based, in part, on a strong national aeronautics research strategy.

The indicators of decline in U.S. preeminence in aerospace are noted in the paper. This decline is due, in part, to the strategic investments other nations have made in aeronautics research. These investments have created a strong international civil aeronautics capability. In contrast, the United States has systematically decreased its investment in civil aeronautics research over the past decade and has underinvested in the fundamental and high-risk research needed to develop the excitement, knowledge, and people to shape aeronautics in the future.

One area of concern is the potential decline of intellectual capital in the U.S. aeronautics enterprise. Much of our historical strength has been due to the knowledge and expertise of our people. One consequence of a weak aeronautics research program is that we will not stimulate intellectual renewal at a pace that will maintain or increase our national capability in aeronautics.

R. JOHN HANSMAN

Professor of Aeronautics and Astronautics

Director, MIT International Center for Air Transportation

Massachusetts Institute of Technology

Cambridge, Massachusetts


There is an egregiously inverted statement in the otherwise excellent article by Todd Watkins, Alan Schriesheim, and Stephen Merrill on federal support for R&D in the various fields of aeronautics. Under the heading of “increasing the performance and competitiveness of commercial aircraft” is the statement that “One positive note is that Boeing’s new 787 Dreamliner appears to be competing well against the Airbus A-350.” It should read that the attempt by Airbus to compete with the 787 Dreamliner by developing the A-350 has so far come a cropper.

The 787 grew out of a competition of market projections between the two companies, in which Airbus projected a large demand for hub-to-hub transportation and designed the extra-large and now-troubled A-380, while Boeing projected that efficient long-range network flying would dominate the future intercontinental air routes and designed the medium-sized 787 for that purpose. The 787 sold well from the start, with over 450 firm orders on the books to date. It is now in production, and the first aircraft will go into service in about two years. There is, as yet, no A-350.

Well-publicized lags in orders and manufacturing glitches in the A-380 have set it back two years, causing potential cancellation of some of the approximately 150 orders already on the books and an increase in costs that jeopardizes Airbus’ potential response to the 787. Airbus’ initial response to the 787, once it realized that the plane was selling well, was to hastily propose an A-350 design that was essentially an upgrade and extension of the existing A-330. That went over so poorly that it was abandoned, and Airbus has started over with a completely new design that has not yet been completed. Called the A-350 XWB (for extra wide body; they have to compete on some basis), that design has been caught in the cost backwash from the A-380 troubles, so that as of this writing a decision on whether to go ahead with it, based on Airbus having the “industrial and financial wherewithal to launch the program” (Aviation Week & Space Technology, October 16, 2006, p. 50), has yet to be made. As a result of all these poor management decisions and the resulting technical issues, Airbus has dropped this year to a 25% market share in transport aircraft (same source as above), and there has been turmoil at the top, with two changes of CEO and more restructuring to come.

Competing well, indeed!

S. J. DEITCHMAN

Chevy Chase, MD


A new science degree

In “A New Science Degree to Meet Industry Needs” (Issues, Fall 2006), Michael S. Teitelbaum presents a very informative and persuasive case for a new type of graduate degree program, leading to a Professional Science Master’s (PSM) degree. I should confess that I didn’t need much persuasion. I observed the initiation of some of the earliest examples during the mid-1990s from the vantage point of the Sloan Foundation Board, on which I then served. More recently, I became chair of an advisory group assisting the Council of Graduate Schools in its effort to propagate PSM programs more widely. I’m a fan of the PSM degree!

Beyond endorsing and applauding Teitelbaum’s article, what can I say? Let me venture several observations. What are the barriers to rapid proliferation of PSM degrees? I don’t believe they include a lack of industry demand or student supply. As Teitelbaum noted, there is ample evidence of industry (and government) demand. And, in a long career as a physics professor, I’ve seen numerous undergraduate science majors who love science and would like to pursue a science-intensive career. But many don’t want to be mere technicians, nor do they wish to traverse the long and arduous path to a Ph.D. and life as a research scientist. Other than the route leading to the health professions, those have pretty much been the only options for science undergraduates.

The barriers do include time and money. Considering the glacial pace characteristic of universities, one could argue that the appearance of 100 PSM programs at 50 universities in just a decade is evidence of frenetic change. The costs of such new programs are not negligible, but they are not enormous, and there are indications of potential federal support for PSMs.

That leaves the cultural and psychological barriers. In research universities, science graduate programs are examples of Darwinian evolution in its purest form. Their sole purpose is the propagation of their faculties’ research scientist species, and survival of the fittest reigns. In such environments, the notion of a terminal master’s program that prepares scientists for professions other than research is alien and often perceived as less than respectable. (Here, it may be worth remembering the example of a newly minted Ph.D. whose first job was as a patent examiner. In his spare time, Albert Einstein revolutionized physics.) As Teitelbaum noted, that attitude seems to be changing in universities where there is growing awareness of the rewards of attending to societal needs in a globally competitive world.

Finally, a too-frequently overlooked observation: Many nondoctoral universities have strong science faculties who earned their doctorates in the same graduate programs from which come the faculties of research universities. Such universities are perfectly capable of creating high-quality PSM programs that are free of the attitudinal inhibitions of research universities.

DONALD N. LANGENBERG

Chancellor Emeritus

University System of Maryland


The Council of Graduate Schools (CGS), an organization of 470 institutions of higher education in the United States and Canada engaged in graduate education, recognizes that U.S. leadership in research and innovation has been critical to our economic success. Leadership in graduate education is an essential ingredient in national efforts to advance in research and innovation. Therefore, CGS endorses the concept of the Professional Science Master’s degree (PSM) and believes it is at the forefront of innovative programming in graduate education. We commend the Alfred P. Sloan Foundation and the Keck Foundation for their efforts in recognizing the need for a new model of master’s education in the sciences and mathematics and for providing substantial investments to create the PSM.

For more than a decade, national reports, conferences, and initiatives have urged graduate education to be responsive to employer needs, demographic changes, and student interests. And U.S. graduate schools have responded. The PSM is a strong example of this response. PSM degrees provide an alternative pathway for individuals to work in science and mathematics armed with skills needed by employers and ready to work in areas that will drive innovation.

Because of this strong belief in the PSM model for master’s education in science and mathematics, the CGS has assumed primary responsibility from the Alfred P. Sloan Foundation for supporting and promoting the PSM initiative. As Michael S. Teitelbaum notes, there are currently more than 100 different PSM programs in more than 50 institutions, and new programs are continuing to be implemented. CGS has fostered the development and expansion of these programs and has produced a “how-to” guide on establishing PSM programs. The book outlines the activities and processes needed to create professional master’s programs by offering up a number of “best practices.”

PSM programs are dynamic and highly interdisciplinary, which enables programs to respond quickly to changes in the needs of employers and relevant industries. The interdisciplinarity of programs creates an environment where inventive applications are most fruitful, leading to the production of innovative products and services. PSM programs are “science plus”! They produce graduates who enter a wide array of careers, from patent examiners, to forensic scientists leading projects in business and government, to entrepreneurs starting their own businesses.

At CGS, we believe that the PSM degree will play a key role in our national strategy to maintain our leadership in the global economy. Just as the MBA was the innovative degree of the 20th century, the PSM can be the innovative degree of the 21st century. Therefore, CGS has worked diligently to encourage national legislators to include support and funding for PSM programs in their consideration of competitiveness legislation. It is the graduates of these programs who will become key natural resources in an increasingly competitive global economy. Now is the time to take the PSM movement to scale and to embed this innovative degree as a regular feature of U.S. graduate education. The efforts of all— CGS, industry, federal and state governments, colleges and universities, and countless other stakeholders—are needed to make this a reality.

DEBRA W. STEWART

President

Council of Graduate Schools

Washington, DC

www.cgsnet.org


Improving energy policy

Robert W. Fri’s thoughtful and perceptive analysis of federal R&D (“From Energy Wish Lists to Technological Realities,” Issues, Fall 2006) accurately describes the hurdles that technologies must cross in moving from laboratories to production lines. Research must be coupled with public policy to move innovative technologies over the so-called valley of death to commercialization. The good news is that such a coupling can be spectacularly successful, as with high-efficiency refrigerators.

Unfortunately, the level of both public- and private-sector R&D has fallen precipitously, by more than 82%, as a share of U.S. gross domestic product, from its peak in 1979. Deregulation of the electric power market, with its consequent pressures on investment in intangibles like R&D, has contributed to a 50% decline in private-sector R&D just since 1990. An additional tonic for successful public/private–sector partnerships would be new tax or regulatory incentives for the energy industry to invest in its future.

REID DETCHON

Executive Director

Energy Future Coalition

Washington, DC


Robert Fri’s article explores principles for wise energy technology policy. It is interesting to consider Fri’s principles in the context of climate change. Mitigating climate change is fundamentally an energy and technology issue. CO2 is the dominant anthropogenic perturbation to the climate system, and fossil-fuel use is the dominant source of anthropogenic CO2. Because of its long lifetime in the atmosphere, the concentration of CO2 depends strongly on cumulative, not annual, emissions over hundreds of years, from all sources and in all regions of the world. Thus, unlike a conventional pollutant, whose annual emission rate and concentration are closely coupled, CO2 requires that global emissions peak and then decline, eventually to virtually zero. The effort devoted to emissions mitigation, relative to society’s business-as-usual path, therefore grows exponentially over time. This is an unprecedented challenge.
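
A toy calculation illustrates why cumulative emissions, rather than the final annual rate, govern the concentration. The airborne fraction and the tons-per-ppm conversion used below are round numbers chosen only for illustration; they are not the authors’ figures.

```python
# Toy illustration: atmospheric CO2 concentration tracks cumulative, not annual,
# emissions. Assumed round numbers: ~2.1 GtC of emissions per ppm of CO2, and
# roughly half of emitted CO2 remaining airborne; these are illustrative only.

GTC_PER_PPM = 2.1
AIRBORNE_FRACTION = 0.5

def concentration_rise(annual_emissions_gtc):
    """Approximate ppm rise implied by a list of annual emissions (GtC/yr)."""
    cumulative = sum(annual_emissions_gtc)
    return AIRBORNE_FRACTION * cumulative / GTC_PER_PPM

# A constant 8 GtC/yr for 50 years and a path that ramps from 8 down to 0 over
# 100 years add roughly the same amount of CO2, even though the second path's
# final annual emission rate is zero:
constant_path = [8.0] * 50
declining_path = [8.0 * (1 - year / 100) for year in range(100)]
print(round(concentration_rise(constant_path)))   # ~95 ppm
print(round(concentration_rise(declining_path)))  # ~96 ppm
```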

Economically efficient regimes that stabilize the concentration of CO2 in the atmosphere, the goal of the Framework Convention on Climate Change and ultimately a requirement of any regime that stabilizes climate change, have much in common. The emissions mitigation challenge starts immediately, but gradually, and then grows exponentially. Economically efficient regimes treat all carbon equally—no sectors or regions get a free ride. And, ultimately all major regions of the world must participate, or the CO2 concentrations cannot be stabilized.

In climate change, cost matters. Cost is not just money, but a measure of the resources that society diverts from other useful endeavors to address climate change. Cost is a measure of real effort, and it matters for the usual reasons, namely that the lower the cost, the more resources remain available for other endeavors. This can mean maintaining an acceptable rate of economic growth in developing nations, where such growth is a central element in an overall strategy to address climate change. But cost is also important for a more strategic reason. The cumulative nature of CO2 emissions means that the climate that the present generation experiences is largely determined by the actions of its predecessors and its own prior actions. Our ability to change the climate we experience is limited. The same is true for every generation. Thus, no generation can capture the full climate benefits of its own emissions mitigation actions. Each generation must therefore behave altruistically toward its descendants, and we know that societies are more likely to be altruistic if the cost is low. Technology can help control cost and thus can both reduce the barriers to beginning action now and reduce future barriers, if the technologies continue to develop.

Fri’s first principle, provide private-sector incentives to pursue innovations that advance energy policy goals, means placing either an implicit or explicit value on greenhouse gas emissions. For climate, the major changes that ultimately accompany any stabilization regime imply not only a value on greenhouse gas emissions, but one that, unlike conventional pollutants like sulfur, can be expected to rise with time rather than remain steady. Given the long-lived nature of the physical assets associated with many parts of the global energy system, it is vitally important to communicate that the value of carbon is not simply a transitory phenomenon and that the core institutional environment in which that value is established can be expected to persist, even though it must evolve and even though the value of carbon must be reassessed regularly.

Fri’s second principle, conduct basic research to produce knowledge likely to be assimilated into the innovation process, takes on special meaning in the context of climate change. The century-plus time scale and exponentially increasing scale of emissions mitigation give enormous value to a more rapidly improving energy technology set. The technology innovation process is messy and nonlinear, and developments in non-energy spheres in far-flung parts of the economy and the world can have huge impacts on the energy system. For example, breakthroughs in materials science make possible jet engine development for national defense, which in turn makes possible a new generation of power turbines. Progress in understanding fundamental scientific processes, particularly in such fields as materials, biological, and computational science, is the foundation on which innovation will be built and the precursor to the birth of technologies for which we do not yet have names.

Fri’s third principle, target applied research toward removing specific obstacles to private-sector innovation, speaks to the variety of issues that revolve around moving forward energy technologies that can presently be identified. The particulars of such energy systems as CO2 capture and storage; bioenergy; nuclear energy; wind; solar; hydrogen systems; and end-use technologies in buildings, industry, and transport vary greatly. The interface between the public role and private sector will differ across these technologies and from place to place. Technology development in Japan or China is of a very different character than technology development in the United States.

Finally, Fri’s fourth principle, invest with care in technologies to serve markets that do not yet exist, is just good advice.

JAE EDMONDS

Laboratory Fellow and Chief Scientist

LEON CLARKE

Senior Research Economist

Pacific Northwest National Laboratory

Joint Global Change Research Institute at the University of Maryland College Park

College Park, Maryland


Energy clarified

The fall Issues includes two letters on energy and security by David L. Goldwyn and Ian Parry. They reference an article by Philip E. Auerswald, “The Myth of Energy Insecurity” (Issues, Summer 2006). I endorse their views and would like to add another.

Future policies on domestic energy supplies should not invoke the idea that the market will signal economically optimum actions affecting oil and gas supply. The price of oil has been controlled since about 1935 and has not been market-driven since then. Market-influenced, at times, but not driven. A little history is needed to understand how we have arrived at our present cost experience for oil.

The East Texas oil field was discovered in about 1931 and so flooded the oil market that the price dropped to the famous 10 cents per barrel. In 1935, the state of Texas empowered the Railroad Commission to set oil production rates in the state. The goal was to stabilize prices and to prevent the wasting of oil reserves that were consequent to excessive rates of production. This regulatory process depended on two sets of data. One was the maximum permitted rate of production for each well in Texas. This rate was derived from certain tests that recorded the down-hole pressure in a well at different oil flow rates. A formula calculated the maximum economic production rate for that well. The other data set was the monthly demand forecast from each oil purchaser. These data were combined to determine how many days each Texas well could produce oil in a specific month. These producing days were termed “the allowable.”
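
As a purely illustrative sketch of how those two data sets might be combined (the Railroad Commission’s actual formulas were more involved, and every number here is invented):

```python
# Hypothetical sketch of a proration ("allowable") calculation of the kind
# described above. The real Railroad Commission formulas were more involved;
# all numbers here are invented.

def allowable_days(max_daily_rates_bbl, forecast_demand_bbl, days_in_month=30):
    """Producing days granted to each well so that statewide output, with every
    well producing at its maximum permitted rate, meets forecast demand."""
    statewide_capacity = sum(max_daily_rates_bbl) * days_in_month
    fraction_needed = min(forecast_demand_bbl / statewide_capacity, 1.0)
    return fraction_needed * days_in_month

# Three wells with maximum permitted rates of 200, 500, and 800 barrels per day,
# against purchaser nominations of 22,500 barrels for the month:
print(allowable_days([200, 500, 800], 22_500))  # 15.0 producing days
```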

Soon Louisiana, Oklahoma, and New Mexico imposed similar regulatory regimes on oil production in their states and, thereby, controlled the domestic oil supply and, consequently, the price of oil. An examination of oil prices during this period illustrates how firmly the price regulation held.

In 1971, a fateful event occurred. The production allowables in Texas and other oil-producing states reached 100% and there was no longer an excess of supply in the United States. It was then that the fledgling OPEC organization realized that control of oil prices had passed from Texas, and other regulating states, to them. Again, they copied the Texas Railroad Commission example and have set allowable production rates for their members. Throughout this 70-year history, oil prices have been controlled. The current OPEC regulatory process is messier and responds to more exogenous factors than simply demand. However, oil prices still are not fundamentally market-driven. Oil production resembles the production of electronic microprocessors. The largest cost is incurred before production begins and the marginal cost of production is almost irrelevant in setting the market price. The price objective must include the recovery of fixed costs.

I believe that this understanding is important because if research, new drilling opportunities, and the development of alternate energy resources are to be driven by market price signals, the resulting policies and strategies will be built on shifting sand.

JOE F. MOORE

Retired Chief Executive Officer

Bonner & Moore Associates

Houston, Texas


In a recent letter (Forum, Spring 2006), Paul Runci, Leon Clarke, and James Dooley respond to our article “Reversing the Incredible Shrinking Energy R&D Budget” (Issues, Fall 2005). Runci et al. do a great service by weighing in on the under-studied issues surrounding both the trends and the conclusions that can be gleaned by exploring the history of investment in energy R&D.

The authors begin with the point that the declines seen in R&D investments are not limited to the United States and indeed are an issue of global concern. We agree strongly, and in 1999 published a depressing comparison of the downward trends in energy investment in many industrialized nations. This trend alone, at a time of greater local and global need, warrants further exploration in both analytical and political circles.

A primary point made by Runci et al. is that the U.S. energy research budget has not declined to the degree we claim. In fact, they argue, funding levels are relatively stable if much of the 1970s and early 1980s—when levels rose in an “OPEC response”—is excluded. Although their letter is correct in pointing out some recent evidence of stabilization, at least in some areas, in our paper we emphasized “shrinking” for three reasons that we contend remain valid. First, although energy investment may be roughly comparable today to that of the late 1960s, the energy/environmental linkages recognized today are more diverse and far-reaching than they were four decades ago, interactions that were briefly perceived in the late 1970s. Second and third, our contention that funding levels have declined by all significant measures is further bolstered by two more analytical observations: (1) real declines are forecast in the 5-year budget projections, and (2) perhaps even more ominously, private-sector investment in energy research has decreased in many areas. Comparing any of these figures to the growth of R&D investment in other sectors of the economy, such as health and medical technology, makes recent trends even more disturbing.

It may be true that, as they say, “the perceived benefits of energy R&D reflect society’s beliefs about the value of energy technology in addressing priority economic, security, and environmental challenges.” However, Runci et al. misinterpreted our use of historical data with their comment that “communicating the evolution of funding levels during the past several decades is not sufficient to fundamentally alter the predominant perceptions of the potential value of energy R&D.” We did not intend, as they suggest, to use historical data to make claims about the benefits of energy R&D. We simply documented the past 60 years of large federal R&D programs to illustrate that a major energy technology initiative would fit easily within the fiscal bounds of past programs.

Though we may engage in a spirited debate over past trends, our main point is that the scope and diversity of energy-related issues currently facing our nation and the world argue simply and strongly for greater attention to energy: the largest single sector of the global economy and the major driver of human impact on the climate system. More specifically, we developed scenarios of 5- to 10-fold increases in energy R&D investment by adapting a model devised by Schock et al. of Lawrence Livermore National Laboratory to estimate the value of energy R&D in mitigating environmental, health, and economic risks. Details of our valuation methodology are available at www.rael.berkeley.edu.

This response in no way conflicts with Runci et al.’s emphasis on the need to alter public perceptions about the value of energy R&D. On this point we fully agree. Continuing to develop quantitative valuation techniques provides one way to inform and, ideally, influence “beliefs” about what is possible in the energy sector.

ROBERT M. MARGOLIS

National Renewable Energy Laboratory

Washington, DC

DANIEL M. KAMMEN

University of California, Berkeley


Research integrity

Michael Kalichman’s “Ethics and Science: A 0.1% Solution” (Issues, Fall 2006) makes several correct and critically important points with respect to ethics in science and the responsible conduct of research (RCR). Scientists, research institutions, and the federal government have a shared responsibility to ensure the highest integrity of research, not because of some set of regulatory requirements but because it is the right thing to do. However, as Kalichman states, the university research community (the Council on Governmental Relations included) was justifiably unhappy with a highly prescriptive, inflexible, and unfunded mandate for the RCR requirement proposed in 2000 by the Office of Research Integrity. I would take issue with his characterization that, after suspending the RCR requirement, efforts by research institutions to enhance RCR education “slipped down the list of priorities.”

Rather, it remains a critical component of graduate and undergraduate research education. This education is offered in specialized areas like human subjects protections or radiation safety, tailored to the unique needs of the student. Sometimes, special seminars are organized; in other cases, the material is integrated into academic seminars. It occurs in classrooms, online, and at the bench.

It is, however, one of an ever-growing set of competing priorities for university resources, limited by the federal government’s refusal to hold up its end of the partnership in supporting the research enterprise.

That leads to my main point of contention with Kalichman: the 0.1% solution. In the past 10 years, there has been significant growth in requirements related to the conduct of research. Whether by expansion of existing requirements, new laws and regulations, government-wide or agency-specific policies, or agency or program “guidance,” the impact has been that institutions are scrambling to implement policies and educate faculty, students and administrators on a plethora of new requirements. In many cases, significant organizational restructuring is necessary to achieve adequate compliance, in addition to the education and training of all the individuals involved. Some examples: expanded requirements and new interpretations of rules to protect human participants in research; new laws and extensive implementing regulations for institutions and researchers engaged in research on select agents; requirements for compliance with export control regulations under the Departments of Commerce and State; and expected policy recommendations or guidance from the National Science Advisory Board on Biosecurity on the conduct and disposition of research results of so-called dual-use research.

SCIENTISTS, RESEARCH INSTITUTIONS, AND THE FEDERAL GOVERNMENT HAVE A SHARED RESPONSIBILITY TO ENSURE THE HIGHEST INTEGRITY OF RESEARCH, NOT BECAUSE OF SOME SET OF REGULATORY REQUIREMENTS BUT BECAUSE IT IS THE RIGHT THING TO DO.

So to suggest a 0.1% set-aside from direct costs of funding for RCR education raises the question: What percentage do we set aside for these other research compliance areas, particularly those with national security implications, which some view as equally important as RCR? Or do we expand the definition of RCR to include all compliance areas, in which case the percentage set aside would have to be significantly higher?

Of course, there is a solution ready and waiting to be used, and it was described quite well in an essay in the Fall 2002 Issues by Arthur Bienenstock, entitled “A Fair Deal for Federal Research at Universities.” The costs to implement RCR and the other regulations described above are compliance costs that are properly treated as indirect; that is, they are not easily charged directly to individual grants but are costs that benefit and support the research infrastructure. As Bienenstock explained, the cap imposed by the Office of Management and Budget (OMB) on the administrative component of university facilities and administrative (F&A) rates means that for institutions at the cap (and most have been at the cap for a number of years), increased compliance costs are borne totally by the institution. So instead of the federal government paying its fair share of such costs as outlined in OMB Circular A-21, universities are left to find the resources to comply or to decide whether they can afford to conduct certain types of research, given the compliance requirements.

It seems, then, that a reevaluation of the government/university partnership on this issue is needed. If, as we have been told over the years, OMB is unwilling to consider raising or eliminating the cap on compliance cost recovery through F&A rates, are OMB and the research-funding agencies willing to consider a percentage set-aside from direct costs to help pay for compliance costs?

TONY DECRAPPEO

President

Council on Governmental Relations

Washington, DC


Regarding the subject of Michael Kalichman’s article: With the average cost of developing a new drug now approaching $1 billion, according to some studies, U.S. pharmaceutical research companies have a critical vested interest in ensuring the scientific integrity of clinical trial data. Concerns about the authenticity of clinical studies can lead to data being disqualified, placing approval of a drug in jeopardy.

Because human clinical trials that assess the safety and effectiveness of new medicines are the most critical step in the drug development process, the Pharmaceutical Research and Manufacturers of America (PhRMA) has issued comprehensive voluntary principles outlining our member companies’ commitment to transparency in research. Issued originally in 2002, the Principles on Conduct of Clinical Trials were extensively reevaluated in 2004 and reissued with a new and informative question-and-answer section (www.phrma.org/publications/principles and guidelines/clinical trials).

On the crucial regulatory front, pharmaceutical research companies conduct thorough scientific discussions with the U.S. Food and Drug Administration (FDA) to make sure scientifically sound clinical trial designs and protocols are developed. Clinical testing of a specific drug is conducted at multiple research sites, often located at major U.S. university medical schools. To help guarantee the legitimacy of clinical trials, clinical investigators must inform the FDA about any financial holdings in the companies sponsoring clinical testing if the amount exceeds a certain minimal sum. Potential conflicts of interest must be reported when a product’s license application is submitted. In addition, companies have quality-assurance units, separate and independent from clinical research groups, that audit trial sites for data quality.

In many cases, an impartial Data Safety Monitoring Board that is independent of companies has also been set up and is given access to the clinical data of America’s pharmaceutical research companies. The monitoring boards are empowered to review clinical trial results and stop testing if safety concerns arise or if it appears that a new medicine is effective and should be provided to patients with a particular disease.

After three phases of clinical testing, which usually span seven years, analyzed data are submitted in a New Drug Application (or NDA) to the FDA. The full application consists of tens of thousands of pages and includes all the raw data from the clinical studies. The data are reviewed by FDA regulators under timeframes established by the Prescription Drug User Fee Act. Not every application is approved during the first review cycle. In fact, the FDA may have significant questions that a company must answer before a new medicine is approved for use by patients.

This impartial review of data takes, on average, 13 person-years to complete. Impartiality is guaranteed by stringent conflict-of-interest regulations covering agency reviewers. At the end of the process, a new drug emerges with full FDA-approved prescribing information on a drug label that tells health care providers how to maximize the benefits and minimize the risks of the drugs they use to treat patients. The medical profession can and should have confidence that data generated during drug development are of the highest quality and that the information has received intense regulatory scrutiny.

ALAN GOLDHAMMER

Associate Vice President, Regulatory Affairs

PhRMA

Washington, DC


Safer chemical use

Lawrence M. Wein’s “Preventing Catastrophic Chemical Attacks” (Issues, Fall 2006) has quite rightly drawn attention to (1) the lack of an appropriate government response since the 9/11 attacks, (2) the inadequacy of plant security measures as a truly preventive approach, and (3) the need for primary prevention of chemical mishaps through the use of safer chemical products and processes, rather than Band-Aid solutions. The latter need embodies the idea of “inherent safety” coined by Trevor Kletz and of “inherently safer production,” well known to the American Institute of Chemical Engineers but infrequently practiced—and politically resisted—at the nation’s many antiquated facilities that manufacture, use, and store chemicals. Chlorine, anhydrous ammonia, and hydrofluoric acid do pose major problems for which there are known solutions, but a myriad of other chemical products and manufacturing facilities, such as those producing or using isocyanates or phosgene and those handling chemicals at refineries, pose serious risks for which solutions must also be implemented.

Inherently safer production means primary prevention approaches that eliminate or dramatically reduce the probability of harmful releases by making fundamental changes in chemical inputs, production processes, and/or final products. Secondary prevention involves only minimal change to the core production system, focusing instead on improving the structural integrity of production vessels and piping, neutralizing escaped gases and liquids, and improving shutoff devices rather than changing the basic production methods. The current technology of chemical production, use, and transportation, inherited from decades-old design, is largely inherently unsafe and hence vulnerable to both accidental and intentional releases.

Wein’s call for the use of safer chemicals has to be backed up by law. Regulations are needed that embody the provisions already legislated in the Clean Air Act and the Occupational Safety and Health Act, putting enforcement teeth into industry’s general legal duty to design, provide, and maintain safer production and transportation of high-risk chemicals.

Requiring industry to actually change its technology would be good but is likely to be resisted. However, one way of providing firms with the right incentives would be to exploit the opportunity to prevent accidents and accidental releases by requiring industry to (1) identify where in the production process changes to inherently safer inputs, processes, and final products could be made and (2) identify the specific inherently safer technologies that could be substituted or developed. The first analysis might be termed an Inherent Safety Opportunity Audit (ISOA). The latter is a Technology Options Analysis (TOA). Unlike a hazard or risk assessment, these practices seek to identify where and what superior technologies could be adopted or developed to eliminate the possibility, or dramatically reduce the probability, of accidents and accidental releases, and promote a culture of real prevention.

A risk assessment, such as required by “worst-case analysis,” is not sufficient. In practice, it is generally limited to an evaluation of the risks associated with a firm’s established production technology and does not include the identification or consideration of alternative inherently safer production technologies. Consequently, risk assessments tend to emphasize secondary accident prevention and mitigation strategies, which impose engineering and administrative controls on an existing production technology, rather than primary accident prevention strategies. Requiring industry to report these options would no doubt encourage more widespread adoption of inherently safer technologies, just as reporting technology options in Massachusetts under its Toxic Use Reduction Act has encouraged pollution prevention.

NICHOLAS A. ASHFORD

Professor of Technology and Policy

Director of the Technology and Law Program

Massachusetts Institute of Technology

Cambridge, Massachusetts

Update: U.S. flexibility on farm subsidies key to trade progress

In “In Agricultural Trade Talks, First Do No Harm” (Issues, Fall 2005), I argued that negotiations at the World Trade Organization risked further impoverishment of the world’s poor because the talks lacked a critical focus on how changes in the global trading system would affect small-scale farmers in developing countries. In low-income countries, 68% of the people sustain themselves through farming, so their fate largely determines the overall welfare of their countries.

When the Doha Round of trade negotiations was launched in autumn 2001, against the backdrop of the 9/11 terrorist attacks, there was wide agreement to emphasize the needs of developing countries in order to strengthen their economies and buttress global stability and security. Despite that moment of clarity, many of the major players, including the United States, soon reverted to their traditional priority of gaining market access abroad for their own firms and agricultural interests. After the article appeared, the negotiations bogged down and finally were suspended in July 2006.

The main sticking point is agricultural trade. The United States, which has resisted any significant cuts in its annual $15 billion to $20 billion in farm subsidies, has been blamed by most countries for the current impasse. The subsidies are widely seen as inducing overproduction, dumping below-cost goods on world markets, and lowering prices for farmers elsewhere. In the current round, U.S. negotiators insisted that the United States be allowed not only to spend up to $22.6 billion in subsidies under any new agreement but also to gain wide access to markets in other countries, including low-income countries such as India, where 58% of the population—most of it desperately poor—depends on farming. India responded by saying that it would negotiate over any commercial matter but that the livelihoods of its struggling farmers were not on the negotiating table.

The impasse has prevented progress on the overall negotiations. Most countries have refused to make their best offers for reduced tariffs on manufactured goods or services until they have an acceptable deal on agriculture. An economic analysis of the U.S. stance raises critical questions. Agriculture accounts for only 1.4% of the U.S. gross domestic product, less than 2% of employment, and 5.6% of total exports. For the U.S. economy and its firms and workers, the vast majority of export gains from the Doha Round would come in the manufacturing and service sectors. Even in the agricultural sector, gains would come primarily from economic growth in developing countries that are able to grow their way out of poverty and increase the purchasing power of their households. In much of the developing world, countries that were net agricultural exporters a generation or even a few years ago are now net importers under current trade rules and tariff levels. It is not tariffs that reduce poor countries’ imports; it is the poverty of their citizens.

In effect, the United States has chosen to put at risk much larger potential export and employment gains for manufacturing and service sectors for the sake of trying to maximize access to markets where its agricultural exports are already growing strongly. What accounts for the U.S. strategy? The most important factor is the political economy of agriculture. Agribusiness and very large farms collect most of the billions in subsidies. The value of these subsidies leads them to spend heavily on lobbying and campaign contributions. Coupled with the presence of farm votes in many political swing states, the farm lobby wields disproportionate influence.

In previous rounds of global trade talks, the United States and other wealthy countries have been able to protect their farm sectors while prying open markets abroad. However, a new balance of power has emerged in the global trading system, as large developing countries such as India, China, Brazil, and Indonesia make their presence felt. Among these countries’ key concerns is the issue of employment and livelihoods. Because small-scale agriculture is the main occupation in many developing countries, these countries are reluctant to open their agricultural markets too quickly, lest poor farmers be displaced before manufacturing and other jobs can be created for them.

They have formed a group called the G33 to propose that they should be allowed to exempt 10% of agricultural tariff lines from tariff cuts for products that are crucial to their livelihoods and rural development. An additional 10% of their agricultural tariff lines could be selected for cuts that are smaller than those generally agreed on. Tariffs on the remaining 80% of farm goods would be reduced by an agreed-on overall formula.

The daunting task at the center of the negotiations is to find a trade deal that captures the positive potential of global trade to accelerate economic growth and job creation, while recognizing and allowing sufficient flexibility to deal with the reality of job destruction that increased trade always entails. This is true for both rich and poor countries, although the poor have much less scope to absorb the shocks and make the transition. The G33 proposal is a reasonable attempt to balance these competing objectives for countries with high shares of employment in agriculture.

The United States has the ability to break the current impasse. To do so, it must step back from its current maximal demands and instead accept the G33 proposal for flexibility on agricultural trade liberalization while reducing its overly ambitious demand to maintain high levels of trade-distorting subsidies for wealthy U.S. farmers. A new proposal along these lines would breathe life back into the Doha talks. Other countries would then feel pressure to come forward with proposals in other sectors. Until this happens, there will be no agreement on a new trade regime.

Commuting in America

Everybody has ideas about how to solve traffic congestion, but the job is trickier than it seems, as a new report examining recent trends in commuting patterns makes clear. Commuting in America III, published in October 2006 by the National Academies’ Transportation Research Board, finds that commuting patterns continue to evolve in complex and often surprising ways. The difficulty of accurately predicting the future presents obvious problems for policymakers.

Two major demographic forces are affecting commuting patterns: the declining influence of the baby boom generation and the simultaneous advent of a large immigrant population joining the workforce. As the baby boom generation retires, growth of the working population, ages 18-65, is projected to slow dramatically between 2010 and 2030. Would fewer workers mean fewer drivers? Not necessarily. First, the percentage of people 65 and older who are working has increased in recent years (from 11.2% in 1990 to 12.7% in 2005) and is expected to rise even further; on the other hand, older workers tend to shift from single-occupancy vehicles (SOVs) to more carpooling, walking, and working at home. Second, and more important, the projections of the future number of workers are highly dependent on immigration estimates, which have been wrong before and are likely to be wrong again, especially because immigration rates can be abruptly changed by an act of Congress. Immigrants, meanwhile, seem to be contributing to congestion relief because so many of them carpool, walk, ride bicycles, and take transit. Yet the data indicate that the longer immigrants are in the country, the more they convert to the preferred U.S. mode of travel: the SOV.

Although driving alone to work continued to increase through 2004, there were signs of stabilization during the 1990s as growth rates slackened. Most significantly, there were five metropolitan areas where the SOV share actually declined from 1990. All losses were quite small: under 1%, except for Seattle, with a decline of about 1.5%.

SUBURBAN SPRAWL, RURAL DEVELOPMENT, IMMIGRATION, AN AGING POPULATION, CHANGING WORK HABITS. WHAT EFFECTS ARE THEY HAVING ON THE NATION’S TRANSPORTATION SYSTEMS?

The expanding size of metropolitan areas is having substantial repercussions for commuting and travel time. This geographical expansion has largely been driven by the desire among Americans for bigger, better, cheaper housing in suburban areas. As workers have moved, employers have followed. Yet these shifts in the location of workplaces have not led to shortened commutes. Rather, they have allowed workers to move even further from the city center in search of cheap housing and enabled workers living in rural areas and even other metropolitan areas to compete for those jobs. (In the process, commutes of more than 60 minutes and “extreme” commutes of more than 90 minutes have become more common.) Consequently, many commuters now cross multiple counties on their way to and from work. These cross-county patterns indicate that solutions to traffic congestion will increasingly need to be regional in nature.

Another complicating factor in commuting patterns is that an increasingly large number of people do not drive directly to and from work; they stop multiple times along the way: to buy coffee in the morning, to drop off and pick up children at day care or school, to visit the gym, to buy groceries, and so forth. These “trip chains” save time and have probably been the central factor in the continued growth of SOVs (the number of new solo drivers grew by almost 13 million in the 1990s). At the same time, this commuting pattern makes alternatives such as transit and carpooling even less attractive.

Yet another factor complicating commutes is that the peak morning and evening commuting periods have steadily lengthened. For instance, the number of people beginning their commutes before 6 a.m., and even before 5 a.m., has risen significantly.

Worker schedules and commuting flow patterns continue to make alternatives to the SOV problematic, with one exception: working at home is the only “mode of transportation” other than driving alone that has grown throughout the entire baby boom period. Aided recently by the telecommunications revolution, it has now passed walking as a way to get to work and ranks third, behind driving alone and carpooling, in most metropolitan areas. There may be ways for policymakers to abet this natural growth.

Working Suburbs

2000 metro flow map (% of total commuters)

Commuting patterns have become more diverse and complicated, with some people commuting from rural to suburban areas to work and some commuting between metro areas.

Where the cars are

Share of increase in commuting flows, 1990–2000

Growth in suburb-to-suburb commuting now swamps growth in the “traditional” commute from suburb to central city.

Percentage of workers commuting across county lines

More than 34 million workers now leave their home counties to go to work, up 85% from 1980.

Congestion intensity by size of urban area, 1982–2002 (Ratio of travel time in peak hours to off-peak travel time)

Congestion is increasing in all metropolitan areas, but most markedly in those areas with more than 5 million people (there are now 12 of them, accounting for one-third of the nation’s population).

Rock around the clock

Daily trips per capita

One reason for increased congestion during peak commuting periods is that more people, especially time-pressed working mothers, are doing more family and personal business on their way to and from work.

Number of workers leaving for work by departure time, 1990-2000

The fastest rate of growth took place among commuters leaving home before 6:30 a.m. This group, which accounted for 36% of total commuter growth in the decade, was probably motivated at least in part by a desire to avoid heavy congestion during peak commuting periods.

Percentage of workers commuting over 60 minutes and under 20 minutes by metro size

The number of extreme commutes has grown substantially in recent years. In 2005, more than 10 million workers needed more than 60 minutes to drive to and from work; a third of those needed more than 90 minutes.

Mode shifts

Non-auto trends in mode shares

Working at home is the only “mode of transportation,” other than driving alone, in which there has been continuous growth since 1980. The number of people working at home almost doubled between 1980 and 2000, increasing by 2 million workers. By 2005, it had increased by another 600,000, to 4.2 million.

Transportation choices of over-55 workers

As workers age, there is a significant shift away from single-occupant vehicles (from about 80% to 68%), slight gains in carpooling, and major shifts to walking and working at home.

Mode use by years in United States

Immigrants constitute only 13.5% of all workers, but are significant users of non-single occupant vehicle modes of transportation. For example, immigrants constitute more than 40% of larger carpools. But the longer they live in the United States, the more likely they are to drive a car alone to work.

Improving Public Safety Communications

At 9:59 a.m. on September 11, 2001, the first of many evacuation orders was transmitted to police and firefighters in the World Trade Center’s North Tower. Police heard the order, and most left safely. But firefighters could not receive the order on their communications equipment—even as people watching television at home knew of the tragedy unfolding. When the tower fell 29 minutes after the first evacuation order, 121 firefighters were still inside. None survived.

Although the number of lives lost on 9/11 was especially great, there is nothing unusual about loss of life due to failures in the communications systems used by first responders: firefighters, police, paramedics, and members of the National Guard. Such failures occur across the country during large disasters, such as Hurricane Katrina, and during emergencies too small to make the news, such as police car chases and burning houses. When public safety communications systems do not work, the lives of first responders and the citizens they protect are at risk.

Clearly, the nation’s public safety communications system is broken, and fundamental changes in technology and public policy are needed. Incremental changes here and there will not suffice. The weaknesses of the current system can be addressed only by developing a nationwide broadband communications network designed as an integrated infrastructure. Fortunately, the resources for such a move are now available; we need only the vision to use them well. In 2009, as part of the transition to digital television, the federal government plans to transfer a large portion of premium spectrum—24 megahertz (MHz)—from analog TV to public safety use. A block of this size, unencumbered with old equipment, is an extraordinary opportunity. Moreover, this segment of spectrum is around 700 MHz, which means it has physical properties that are particularly useful when designing a communications system that must cover a large geographic region, as would be required to adequately serve all first responders. But unless policymakers make concerted efforts to capitalize on the expanded spectrum for public safety, this rich opportunity will be lost.

In a strangely unrelated effort, the federal government also has plans to invest $3 billion to $30 billion and a significant amount of spectrum in the Integrated Wireless Network (IWN) program, which is intended to provide communications services for a tiny fraction of first responders. These resources could instead be used to serve all first responders.

Prospects for critical progress

Public policies on communications systems for public safety have evolved over many decades, and most of them have long outlived their usefulness. In particular, these policies are based on assumptions that local agencies should have maximal flexibility at the expense of standardization and regional planning, that commercial carriers have little role to play, that public safety should not share spectrum or infrastructure, and that narrowband voice applications should dominate. These policies have led to a system that fails too often, costs too much, consumes too much spectrum, and provides too few capabilities. Moreover, public safety requirements have changed since 9/11, and the technology has changed as well, so there are many reasons to consider a fundamental change in policy.

Public safety communications systems will remain inadequate as long as primary responsibility rests with local governments. Tens of thousands of independent uncoordinated agencies simply cannot design and operate a public safety communications infrastructure that meets the country’s post-9/11 needs. Public safety officials must start planning over large geographic regions and large blocks of spectrum, and this requires fundamental reform. Policy reforms should include shifting some responsibility and authority for decisions about public safety communications infrastructure from many independent local government agencies to the federal government, expanding the role of commercial service providers, allowing public safety to share spectrum with others, and expanding capabilities beyond traditional voice communications. Since the TV band spectrum to be reallocated to public safety has few legacy systems that must be accommodated or moved, it is an excellent place to launch a new policy.

Taking a new approach to public safety communications holds promise of making progress in a number of critical areas:

PUBLIC SAFETY COMMUNICATIONS SYSTEMS WILL REMAIN INADEQUATE AS LONG AS PRIMARY RESPONSIBILITY RESTS WITH LOCAL GOVERNMENTS.

Interoperability. Interoperability is the ability of individuals from different organizations to communicate and share information. Its lack is often cited as a major problem for public safety. For example, when first responders from multiple public safety agencies arrived at Columbine High School after the shooting in 1999, interoperability problems were so great that the responders had to rely on runners to carry written messages from one agency’s command center to another. Interoperability is a problem only because decisions are made by local agencies, each of which has the flexibility to choose technology that is incompatible with that of its neighbors.

Spectral efficiency. Many public safety agencies have expressed concern that a shortage of public safety spectrum is coming, even assuming they do get 24 MHz of television spectrum. This shortage may have more to do with ineffective policy than technical necessity, because much greater efficiency is possible. If public safety systems have a spectrum shortage, their communications capacity will be inadequate during large emergencies. If the nation responds to the shortage by simply allocating more spectrum to public safety without improving efficiency, this wasted spectrum will be unavailable for other purposes such as inexpensive Internet access and cellular phone services.

Dependability and fault tolerance. Critical pieces of the system should rarely fail. Of course, some failures are inevitable when a hurricane the size of Katrina hits, but this need not bring down an entire system. In a fault-tolerant design, other parts of the system will continue to operate, compensating for failures to the extent possible. This can occur only if systems are designed coherently across large regions. Moreover, today’s policies make it difficult for first responders to use commercial systems, even when these are the only systems that survive a disaster such as a hurricane.

Advanced capabilities. Current public safety communications systems primarily provide voice. There are many other services that could be useful, including broadband data transfers, real-time video, and geolocation, which would enable dispatchers to track the precise location of first responders during an emergency.

Security. Systems can be designed so that hostile parties cannot easily attack a system or eavesdrop on first responders. The greatest challenge will be in protecting interagency communications, because protection must run “end to end,” and today the agencies at each end of the conversation often have dissimilar technologies.

Cost. The uncoordinated actions of local agencies greatly increase costs. The amount of infrastructure deployed in a region today depends more on the number of local governments involved than on the region’s size or population. Moreover, the rapid growth of commercial wireless services has led to mass production and low costs. Thus, equipment used by public safety could be much cheaper than was once possible, if it is similar enough to equipment used in commercial markets.

Recent efforts at reform have tended to address one problem at a time, which can make matters even worse. For example, the government has reallocated spectrum to address spectrum scarcity, but in a manner that may lead to new interoperability problems. There are grant programs specifically intended to improve interoperability, but some grants will be spent in ways that improve interoperability while degrading dependability and wasting spectrum. The right way to improve systems is to address all objectives together rather than piecemeal.

Alternative visions

Within the overarching goal of developing a national broadband network, there are a number of possible paths forward. Allowing first responders to make use of multiple systems will increase the chances that some system is available and expand the capabilities that first responders can use. There should still be a primary system, which would at minimum support mission-critical voice communications, and possibly more.

Today, primary public safety communications systems are designed and run by thousands of independent local agencies, and this leads to interoperability failures, inefficient use of spectrum, lower dependability, and higher costs. One obvious response is to continue to rely on government agencies but to move away from flexibility and toward standardization and a consistent nationwide architecture defined by one or more federal agencies.

Even with a national architecture defined at the federal level, the federal government may or may not actually operate the infrastructure. Certainly, one option is for a federal agency such as the Department of Homeland Security (DHS) to deploy and operate the nationwide system. The government would pay directly for the infrastructure (although not necessarily for the mobile devices used by first responders that connect to this infrastructure). Another option is for local or regional entities to continue operating their own systems but to be required to design the systems so that they operate seamlessly within a national architecture. This approach is not unprecedented. For example, the Internet consists of many thousands of independent networks under separate administrative control, all of which operate and cooperate using protocols and architectures approved by the Internet Engineering Task Force. Similarly, many telephone companies around the world use consistent standardized technology.

One government program is already in place to develop a nationwide wireless network explicitly for law enforcement and homeland security. This network will be developed by federal contractors under the direction of the Departments of Homeland Security, Justice, and Treasury. The IWN will support 80,000 federal agents and officers. Even though it will be available to only a few percent of first responders—those from federal agencies—the network must still cover the entire country. The program is expected to cost between $3 billion and $30 billion.

One challenge in developing a nationwide system for all first responders is migrating from current systems without a disruption. This transformation becomes vastly simpler with the spectrum made available by the digital TV transition. Such a shift creates the opportunity to construct a nationwide system using some or all of that new spectrum and allows local agencies to gradually migrate from current systems to the new one over a period of years. As the agencies abandon their outdated technology and old spectrum allocations, some of these bands could become available for other uses.

Another approach to developing a nationwide system for serving first responders is to employ commercial companies. This approach has advantages. Multiple networks already operate in much of the country, and competition between these carriers drives costs down and quality up. However, commercial carriers rarely offer services designed to meet public safety standards for mission-critical communications. This is not surprising; most public safety agencies would not use these services regardless of their quality or price. Perhaps if they would, adequate services would emerge.

An alternative is to seek bids for a new nationwide system that would be specifically designed to serve public safety and would be run by a commercial provider. Many European nations have adopted this approach. For example, the British government has signed a contract with British Telecom to build a wireless system and operate it for 19 years. The system is intended for public safety, although it covers not just first responders but other public service agencies and even community health centers. Thus, the United Kingdom will gain the efficiency and dependability of a national system, with no possibility of interoperability problems, all provided through the existing expertise of British Telecom.

In the United States, Verizon is reportedly considering making a similar proposal, wherein the company would operate in 12 MHz of spectrum in the 700-MHz band that is currently intended for public safety after the digital television transition. Based on media reports, it appears that Verizon would serve public safety users only, in return for a fee. No spectrum or infrastructure would be shared with users who are outside of public safety.

Further efficiencies could be gained if a network serves both first responders and commercial users, where the former have priority. First responders need a system with great capacity during major emergencies, but most of the time they require little capacity, so capacity sits idle. Consumers can use this capacity. Cyren Call has requested 30 MHz in the 700-MHz band to establish just such a network in the United States. The network itself would be built and operated by a number of commercial carriers operating in different regions, while Cyren Call would be the network manager, setting service requirements, negotiating deals with equipment and service providers, overseeing compliance with requirements, and managing the flow of payments. Public safety agencies would pay for services on this network much as consumers pay for cellular services today.

The challenge when serving both consumers and first responders is to reconcile public safety’s demands for dependability, security, and coverage with the public’s demands for low cost. For example, a system serving only public safety would naturally be designed to maximize coverage, but a company deriving much of its revenues from commercial users will focus on population centers. Cyren Call proposes to bring terrestrial wireless coverage to 99.3% of the U.S. population, but to cover only 63.5% of the nation’s total area—mostly urban areas—or 75% of the area within the contiguous states. (The company proposes using slower satellite communications to cover the remaining rural areas.)

CURRENT POLICIES ARE SO WASTEFUL THAT A POLICY CHANGE COULD EASILY REDUCE THE COST OF PUBLIC SAFETY COMMUNICATIONS INFRASTRUCTURE, IN ADDITION TO SAVING LIVES AND SAVING SPECTRUM.

The biggest challenge when many public safety agencies are served by a single commercial company is ensuring that the company has an incentive in perpetuity to provide good services at reasonable prices. If the only choices for public safety are to pay whatever this company asks or to discontinue wireless communications for first responders, then public safety is at risk. A traditional solution is to impose regulations on costs and quality, as is done with utilities. It is not clear whether such regulation would deter commercial companies, such as Cyren Call and Verizon, from entering this market. But there would be other, nonregulatory ways to mitigate this risk.

For example, individual public safety agencies have little power to negotiate with a nationwide company, so this task can be given to a single national entity, such as a federal agency or national consortium that represents all public safety agencies. The government also might require companies to sign contracts that clearly define performance standards across many criteria, including but not limited to dependability, security, coverage, and quality of service, so companies will not be rewarded for cutting corners. Contracts could run for long periods, so renewals can be negotiated well in advance. If a contract is not renewed, this leaves more time to create an alternative. In addition, the government might stipulate that public safety users do not have to pay for their last few years of service under a contract. If the contract is renewed, then payments continue without interruption. If not, the company must provide several years of services without payment, a situation that would increase the company’s incentive to renew or enable public safety agencies to use the money they saved to prepare for whatever is next.

More extreme measures would make the company as dependent on public safety as public safety is dependent on the company. For example, when the government allocates spectrum to a company, the government can require that if the company fails to negotiate a deal acceptable to public safety, then the spectrum license is immediately revoked, even if the majority of the network’s users are not associated with public safety. License renewal also could depend on input from DHS and other responsible public safety agencies. To go even further, the government might require the company to surrender its infrastructure to the next contract winner if the negotiation fails. Similar measures have been proposed for highly subsidized telecommunications providers “of last resort” in rural areas. Under this arrangement, there is no risk that vital public safety infrastructure will become unavailable, because it can always be reassigned. The challenge here is giving the company adequate incentive to invest in infrastructure it could lose someday. Again, this requires long-term contracts and early negotiations.

In return for enacting provisions that protect public safety from monopoly service providers, the government might offer provisions that protect commercial carriers from other risks. For example, the government might guarantee that payments from public safety will not fall below a given level, even during the transition period when many public safety agencies are not yet making use of the new network.

Commercial companies also go bankrupt—especially new companies with innovative business plans. Contracts must address this possibility, so critical infrastructure will not be lost to public safety and there will be no disruptions in service. This problem is not new. Companies that operate other forms of critical infrastructure do go bankrupt from time to time, so there are models to follow.

The nation also has a variety of options to choose from in developing secondary systems to support first responders, assuming that the mission-critical voice communications are provided through a primary system. These possibilities are not mutually exclusive, so several could be adopted. The possibilities include:

Cellular carriers. Cellular carriers can compete to offer services to public safety, and the diversity of current networks can greatly increase dependability and coverage, even if individual commercial networks do not always meet all of public safety’s requirements. Cellular carriers also can provide new services that are not offered by the primary system.

THERE IS NO REASON TO INVEST BILLIONS OF PUBLIC DOLLARS IN A NETWORK THAT SERVES ONLY FEDERAL FIRST RESPONDERS, WHEN THE VAST MAJORITY OF FIRST RESPONDERS WORK FOR STATE AND LOCAL AGENCIES.

A nationwide commercial carrier. As with the Cyren Call and Verizon proposals, a commercial company could provide services to public safety across the nation, but on a secondary basis, focusing on services such as broadband that are not widely available today to public safety. One such proposal comes from M2Z Networks, which has offered to provide free services to first responders in return for 20 MHz of spectrum near 2.1 gigahertz, which is less valuable than spectrum in the 700-MHz band. The company also pledges to provide broadband services to most of the nation’s population and to return 5% of the revenues to the federal government. The company’s network would cover 95% of the nation’s population, so presumably the percentage of area covered would be considerably less than that proposed by Cyren Call. Since the services are free, there obviously is no danger of M2Z Networks overcharging. However, it is still necessary to worry about whether public safety’s service requirements will be met adequately and in perpetuity.

Municipal infrastructure. Municipal systems that blanket a city with wireless broadband coverage, or just serve strategically placed hot spots, are proving that they can play a useful role for public safety. These Wi-Fi–based municipal systems are relatively low-cost, provide high data rates, and can serve many needs, including but not limited to public safety. Although this technology is currently not capable of providing some mission-critical applications over a large region, it is fine for certain uses. These uses include fixed applications, such as transferring data from a remote surveillance camera to a command center, and applications where lives do not depend on ubiquitous and instantaneous access, such as transferring arrest reports from a police car back to the station.

Ad hoc networks. Ad hoc networks are ideally suited for applications where all devices are mobile or are transported to an emergency as needed. Such networks might be set up quickly among portable devices placed in a burning building or among fast-moving police cars. This is also an effective solution where much of the communication is local—for example, to enable public safety devices operating within an urban subway system to communicate with each other at high data rates.

Satellite networks. Satellite systems can cover vast regions and are largely immune from earthquakes, hurricanes, and most terrorist attacks. Thus, they may play an important role in sparsely populated areas where terrestrial coverage can be expensive, or in areas where terrestrial systems have been destroyed by a recent disaster. However, they are generally not the first choice where good terrestrial options are available. The time it takes a signal to travel to a satellite and back is inherently problematic for some applications, including basic voice communications. Today’s mobile satellite devices tend to be more expensive, larger, heavier, and more power-hungry than their terrestrial counterparts, which makes the satellite devices less attractive for many first responders.

Next steps

The challenge, of course, will be in devising public policies that will help reach these goals. If the nationwide broadband system is to be run by a commercial company, a number of complex issues must be worked out with the companies that come forward. If the system is to be run by government entities, policymakers could begin the process today. This latter process is essentially the same regardless of whether the network will ultimately be run by one federal entity or a collection of local or regional entities. The best approach would be for policymakers to pursue both paths in parallel.

The first step is to establish the technology and architecture for a nationwide broadband network that will meet the long-term needs of public safety. The Federal Communications Commission (FCC) and DHS would presumably have roles to play in this process, with plenty of input from public safety organizations, equipment manufacturers, wireless service providers, and other stakeholders, as well as from disinterested experts. The process itself should resemble the development of an open standard more than the typical rulemaking of a regulatory body or the opaque pronouncements that are possible from an executive-branch agency. Ultimately, an architecture should be adopted that is based on open standards, for which no entity (other than the federal government) owns intellectual property. It would include a broadband backbone, which is likely to be based on the versatile Internet protocol (IP), and standards for wireless communications. It would incorporate gateways to legacy public safety systems, as well as potential secondary systems such as commercial cellular carriers, municipal Wi-Fi systems, ad hoc networks, and satellite systems.

Given the stakes of such a fundamental shift in public safety infrastructure, the government should take the time to consider a variety of current and emerging technical options and to seriously investigate the long-term implications of each. Thus, the government should provide funds to such agencies as the Homeland Security Advanced Research Projects Agency and the National Science Foundation specifically to engage forward-looking researchers outside of government, much as it has used the Defense Advanced Research Projects Agency when considering major shifts in technology for military use.

It also is time to reevaluate the IWN. There is no reason to invest billions of public dollars in a network that serves only federal first responders, when the vast majority of first responders work for state and local agencies. One possibility is to greatly expand this program so that it supports all first responders. But if such a vast change in scope is not practical, then the network should be shelved so funding can be reallocated to a more complete solution to the problems of communications for public safety and homeland security.

Either the FCC must make spectrum available for this network, presumably at 700 MHz, or the Department of Commerce must make the IWN spectrum available for this purpose. Assuming the former, this need not increase the total amount of spectrum going to public safety, but it does mean that the FCC must abandon the policy of granting local public safety agencies maximal flexibility regarding the use of spectrum at 700 MHz. None of the proposals for spectrum allocation currently before the FCC meets this requirement.

If the nationwide network is to be government-run, the federal government must provide funding to build the nationwide infrastructure, although much or all of the funding for the mobile devices held by first responders might come from local agencies. In the long run, the money saved by an efficient system should be far greater than the amount spent, but not during the initial transition period. One possible source of funds is auction revenues from the TV spectrum that will be allocated for commercial use.

In parallel with pursuing the path toward a government-run nationwide infrastructure, serious attention also should be given to the proposals of the commercial companies Cyren Call, Verizon, M2Z Networks, and perhaps others to come. A commercial public safety network may have the potential for greater benefits than a government-run system. This is especially true if the network also serves users outside public safety, so the system can be put to good use between emergencies, leading to much greater efficiencies in the use of expensive infrastructure and scarce spectrum. However, a commercial system also carries greater challenges and risks. In particular, the government must take steps to ensure that commercial companies will meet public safety’s requirements, including requirements for coverage, dependability, and security, and that requirements and fees can safely evolve over time as technology and needs change. Commercial companies will have strong incentive to cut costs and raise prices where they can, and public safety may be in a poor position to negotiate. Moreover, commercial companies that hope to derive their profits from paid subscribers will naturally try to avoid serving sparsely populated areas. It is not clear yet whether these issues can be resolved to the satisfaction of all. None of the proposals to date are sufficiently specific to address these issues. Because the risks and rewards of this approach are both great, more detailed consideration of these proposals is warranted.

Regardless of whether public safety’s new nationwide network is operated by government or a commercial company, if it serves only public safety, then the spectrum allocated to this network will sit idle much of the time. Instead, the spectrum should be shared with another user who would have secondary access. Given that public safety would not need the spectrum often, secondary rights might be auctioned for almost as much as dedicated spectrum. Thus, for example, if public safety had exclusive access to 12 MHz and primary access to 24 MHz that is shared with commercial systems, then this might be far better for both public safety and commercial users than giving public safety exclusive access to just 24 MHz. This could also generate greater auction revenues.

Because commercial carriers could play a more important role in public safety, either as primary or secondary service providers, the government should adopt policies that would increase their dependability. Policymakers should first provide market incentives for carriers to be more dependable. Carriers are rewarded for investing in better service only if customers are willing to pay more as a result. Today, customers cannot know which carrier provides the most dependable service, with or without a major disaster, so no one will pay more for a dependable service. If the FCC released annual report cards on each commercial carrier’s dependability and security, then the carriers might have an incentive to compete with rival carriers to be more dependable and secure. If the government later comes to view these carriers as critical infrastructure, policymakers should take the additional step of increasing their priority with respect to power restoration after a disaster.

Critics may argue that the nation cannot afford the cost of such policy changes. In fact, current policies are so wasteful that a policy change could easily reduce the cost of public safety communications infrastructure, in addition to saving lives and saving spectrum.

Critics also may complain that these proposed steps to reform will take too long to come to fruition. Certainly, it would be better if there were a quick fix, but the nation has been spending time and money on quick fixes for years with little effect. More than five years have passed since 9/11, and the nation still waits for failed policies to suddenly become effective. It is time to at least start the process of meaningful reform to meet truly long-term needs for public safety and homeland security.

The New U.S. Space Policy: A Turn Toward Militancy?

At first reading, the Bush administration’s new National Space Policy looks much like the Clinton policy enunciated a decade ago. Supporters of the Bush policy in fact state that it is little different, except that the language is perhaps a bit less diplomatic. On closer examination, however, and more importantly, in the context of actions taken during the past six years, the changes are dramatic. Some ambiguous language and departures from current policies and programs reveal a kind of incoherence and disingenuousness—and militancy—about U.S. space policy in the 21st century.

Released by the White House Office of Science and Technology Policy late on a Friday afternoon before the 2006 Columbus Day weekend, the policy provides overarching guidance for the United States’ multiple space programs. Initially, there was little reaction, which was almost certainly the point of burying the story on a slow weekend. The document was actually signed by President Bush on August 31 but then held for a few weeks and released with as little fanfare as possible, thus continuing the administration’s approach of maintaining a low-profile space policy to avoid too much scrutiny and controversy. But why?

First and foremost, the Bush policy describes a U.S. space program that is focused on security. Although this makes obvious sense, the blunt and even confrontational language of the new policy puts the United States at odds with the priorities of the other spacefaring nations. For many countries, space assets are regarded primarily as tools of globalization. To be fair, the new policy recognizes that “those who effectively utilize space will enjoy added prosperity and security and will hold a substantial advantage over those who do not.” But this is almost a throwaway sentence, given that the rest of the document emphasizes the military uses of space. “Freedom of action in space,” the authors write, “is as important to the United States as air power and sea power.” Well, yes, but does that mean that other countries can then demand similar rights and expectations regarding their security in space as well? To assert a right in the international community is to assume that others can assert a similar right.

Consider this language from the new space policy: “The United States rejects any claims to sovereignty by any nation over outer space or celestial bodies, or any portion thereof, and rejects any limitations on the fundamental rights of the United States to operate in and acquire data from space.” This is a firm and unwavering warning, perhaps one desperately needed, to the Russians or the Chinese that they should forget about colonizing Jupiter. Well and good, but closer to home, the U.S. language raises an important question: If the United States can claim complete freedom to operate in space, does this right then extend to every other nation on Earth as well?

A key principle in the new policy states: “The United States considers space systems to have the rights of passage through and operations in space without interference. Consistent with this principle, the United States will view the purposeful interference with its space systems as an infringement on its rights.” In other words, the United States considers space to be something like the high seas. And yet, when it is in the U.S. national interest, the United States acts against vessels in the maritime commons, as when it—rightly—forces North Korean ships to submit to inspection. But does such an absolute declaration of sovereign right really help the cause of cooperation in space? Even on Earth, the high seas are not immune to international governance; why should space be any different?

In response to questions from the press and in related public statements at the United Nations and elsewhere, the administration does clarify that these rights of passage apply to all nations, not just the United States. However, the United States is emphatic that the rights it asserts will be guaranteed not by international law but by the threat of force, thereby providing a rationale for the development of new enforcement capabilities. Additionally, in other parts of the document, the section on right of access is apparently superseded, or contradicted, by other policy priorities.

Perhaps most revealing is that the very first principle in the administration’s policy states: “The United States is committed to the exploration and use of outer space by all nations for peaceful purposes, and for the benefit of all humanity. Consistent with this principle, ‘peaceful purposes’ allow U.S. defense and intelligence-related activities in pursuit of national interests.” Let us imagine, for a moment, what the U.S. reaction would have been in, say, 1972, had the Soviet Union made a similar declaration. National interests are, of course, whatever governments deem them to be, and in the Soviet case, those interests might have included spying on the United States. This is not to say that the United States does not have a good case for arguing for the unimpeded use of space for the kind of observation and communication that would hamper rogue states and terrorists. (It does, and it should make it clear that it will not accept limits on its ability to protect itself.) But to state flatly that all defense and intelligence purposes fall under the “peaceful use of space” is to invite other nations to claim exactly the same right. The peaceful uses of space do in fact include observation and warning of attack, but the language of the administration’s policy is so broad that it reads more like a blanket claim to hegemony in space than a reasonable demand that it, like any nation, be allowed to traverse the skies in its own defense.

Perils of ambiguity

The new policy also suffers from ambiguous language that, whether intentional or not, seems designed to hide important issues under a canopy of imprecision, perhaps in an attempt to tamp down potential objections from those in Congress and others interested in the direction of U.S. space policy. To take one example, the policy demands that the United States preserve its rights, capabilities, and freedom of action to “dissuade or deter others from either impeding those rights or developing capabilities intended to do so.” But what does that mean? Given the dual-use nature of space technology, almost anything shot out of the atmosphere might qualify, and it is not clear what capabilities demonstrated by other nations might specifically be targeted as threats. Or to take the worst case, given the broadness of the policy’s language, what can other countries do in space that will not be considered threatening by the United States? Indeed, should any nation violate this overly broad construction of a threat, the administration vows to “take those actions necessary to protect its space capabilities, respond to interference and deny, if necessary, adversaries the use of space capabilities hostile to U.S. national interests.” This language contradicts the earlier passage on rights. Although the specific language is not all that new, other nations could be forgiven, in light of recent U.S. actions, for viewing statements like that not as an assertion of defense but as a promise of aggression. In fact, the policy directs the secretary of Defense to “develop capabilities, plans, and options to ensure freedom of action in space, and if directed, deny such freedom of action to adversaries.” The implicit assumption is that in time of war, the United States reserves the right to violate the rights of others. It would be useful, though, if that were made explicit, in the same way that the rules of war in other international domains, specifically air, land, and sea, are made explicit. And although the words “space weapons” are never uttered, they can be heard if one listens closely.

The 2001 Space Commission report (chaired by Donald Rumsfeld) sees space becoming a battlefield along with land, air, and the seas. If one accepts this premise, then the United States would be remiss not to prepare for that inevitability. The Joint Doctrine for Space Operations, published by the Office of the Joint Chiefs of Staff in August 2002, states that “The United States must be able to protect its space assets and deny the use of space assets by its adversaries.” The 2004 U.S. Air Force Counterspace Operations Doctrine document states that “U.S. Air Force counterspace operations are the ways and means by which the Air Force achieves and maintains space superiority. Space superiority provides freedom to attack as well as freedom from attack.” Even as recently as June 2006, John Mohanco, the State Department’s deputy director of multilateral nuclear security affairs, told the Conference on Disarmament that “The high value of space systems for commerce and in support of military operations long has led the United States to study the potential of space-related weapons to protect our satellites from potential future attacks, whether from the surface or from other spacecraft. As long as the potential for such attacks remains, our government will continue to consider the possible role that space-related weapons may play in protecting our assets.”

Less than a week after the new National Space Policy was released, Robert Luaces, the alternate representative of the United States to the United Nations General Assembly First Committee, made a statement on the policy. He said: “The international community needs to recognize, as the United States does, that the protection of space access is a key objective. …It is critical to preserve freedom of action in space, and the United States is committed to ensuring that our freedom of space remains unhindered. All countries should share this interest in unfettered access to, and use of, space, and in dissuading or deterring others from impeding either access to, or use of, space for peaceful purposes, or the development of capabilities intended to serve that purpose.” Although the words may be intended to persuade other countries of benign U.S. intentions, actions sometimes speak louder.

The Missile Defense Agency, in its fiscal year (FY) 2007 budget documentation, cited plans to ask for $45 million in FY 2008 to begin research on a test bed for space-based interceptors. The test bed is presented as purely defensive, but that characterization rests on intent rather than technological capability, and clearly intent can change. Space-based interceptors add to the inventory of space technology R&D programs with potential military applications, including several microsatellite programs as well as some gee-whiz efforts such as “Rods from God,” which would develop the capability to hurl metal rods from space with such force that they would create the equivalent of a radiation-free nuclear weapon.

Other countries, Russia and China in particular, are interested in many of the same technologies as the United States, especially ground-based laser antisatellite weapons (ASATs), co-orbital microsatellites, air-launched direct-ascent ASATs, and missile defenses. Threat assessments of other countries’ capabilities and intents in these areas vary widely; exaggeration is common, and possible, because the technological difficulties involved are not well understood. Although the United States adamantly rejects arms control as a limitation on what it could do, arms control would constrain others from doing things that place important U.S. assets at risk. If the United States proceeds with development of these technologies, at staggering cost, others can and will do the same, only in a cheaper, easier, defensive mode.

Hence, the real danger of the new space policy could well be the perpetuation of the false belief that space assets can be defended. In reality, it is impractical if not impossible from a technical perspective to defend space assets. They are easily seen objects traveling in known orbits and hence much easier to target than the incoming missiles that the United States seems convinced it can shoot down with missile defense. The only way to protect assets is to outlaw attacks and the technologies that enable attacks, and to try to implement a regime under which attacks can be verified. But the new policy specifically rejects new legal regimes or other restrictions that would inhibit U.S. access to space. Attacks on satellites should be strongly stigmatized, in the same way that the use of chemical or biological weapons is stigmatized, with assurances of severe retribution sanctioned by the international community.

Noteworthy additions

Beyond the more sensational areas of the new policy, there are other new, noteworthy areas of emphasis. The policy states the need to develop space professionals, addressing the kind of workforce issues raised in a 2005 study by former Johnson Space Center director George Abbey and former presidential science adviser Neal Lane. It also expresses much-needed support for enhanced space situational awareness, a goal that might receive the attention it deserves because of support from General James Cartwright, commander of U.S. Strategic Command.

Additionally and somewhat curiously, considerable attention is given to support for space nuclear power. This probably refers not to the large propulsion systems of the past used to send spacecraft to the outer solar system and beyond, but to specialized equipment used in microsatellites and therefore of considerable potential military value. The need to improve development and procurement practices that have resulted in space projects routinely running over budget and behind schedule is addressed, as is the need for more interagency space partnerships. The latter can be expected to yield more partnerships between the National Aeronautics and Space Administration (NASA) and the Department of Defense, which is probably the only way in which the NASA budget will be increased. On that note, the policy offers support for a “human and robotic program of space exploration” but fails to note that the administration is not requesting a NASA budget adequate to fulfill the vision of reaching the Moon, Mars, and beyond.

International cooperation is given a nod, with the secretary of State given the lead in “diplomatic and public diplomacy efforts, as appropriate, to build an understanding of and support of U.S. national space policies and programs and to encourage the use of U.S. space capabilities and systems by friends and allies.” Unfortunately, the more those friends and allies hear and understand the U.S. policy, the less inclined they might be toward working with the United States. The Times of London perhaps summed up the international perspective best in its October 19, 2006, article entitled “America Wants it All—Life, the Universe, and Everything,” in which it stated that space was no longer the final frontier, but the 51st state of the United States. It went on to say that, “The new National Space Policy that President Bush has signed is comically proprietary in tone about the U.S.’s right to control access to the rest of the solar system.” Although the ambiguities and contradictions of the U.S. policy statement make it difficult to characterize its purpose so bluntly, this negative perception certainly exists.

In a period when the importance of strategic communications is increasingly recognized in the fight against terrorism, that same importance must be recognized as extending to other policy areas as well. Although the policy does not include blatant language about space weapons and U.S. intentions to dominate space, the echo of such sentiments can be heard. Because space is inherently not the purview of any one country and is increasingly becoming globalized, setting unattainable goals with thinly veiled threats as the implied means for carrying them out is not in the best interests of the United States. The country would be better served by convincing others that it is not in their interests— economic, societal, or political—to interfere with space assets, and by using the rule of law to its benefit by establishing codes of conduct for space that would identify nefarious actors and prescribe sanctioned action against them.

A unilateral declaration that the skies belong to the United States is not the answer. Space is too important to be left to debates behind closed doors and reports released after hours to avoid attention and scrutiny. Space may well be a battlefield in the 21st century, but U.S. responses to that reality, including diplomatic as well as military measures, need to be debated openly and with a full understanding of U.S. responsibility to its citizens and its partners in the international community. Friday afternoon press releases and incoherent policy are no substitute for a realistic way forward in space. Americans—and the peoples of the world with whom we will share space, whether we like it or not— deserve better.

None Dare Call It Hubris: The Limits of Knowledge

During the past four decades, many of us have come to terms with an increasing realization that there may be a limit to what we as a species can plan or accomplish. The U.S. failure to protect against and respond to Hurricane Katrina in the summer of 2005 and the apparent futility of the plan to democratize and modernize Iraq provide particularly painful evidence that we seem to be operating beyond our ability to plan and implement effectively, or even to identify conditions where action is needed and can succeed.

Our disappointing performances in New Orleans and Iraq might be less disheartening if they were the most complex problems we need to address, but they are child’s play compared to the looming problems of global terrorism, climate change, or possible ecosystem collapse; problems that are not only maddeningly complex but also potentially inconceivably destructive.

Our current approach to framing problems can be traced back to the 1972 publication of the Club of Rome’s The Limits to Growth, which posed the still-unanswered question: How much population growth and development, how much modification of natural systems, how much resource extraction and consumption, and how much waste generation can Earth sustain without provoking regional or even global catastrophe? Since that time, the way we think about human activity and the environment and the way we translate this thinking into our science policy and subsequent R&D, public debate, and political action have been framed by the idea of external limits—defining them, measuring them, seeking to overcome them, denying their existence, or insisting that they have already been exceeded.

For technological optimists these limits are ever-receding, perhaps even nonexistent, as science-based technologies allow progressive increases in productivity and efficiency that allow the billion and a half people living in industrialized and industrializing nations today to achieve a standard of living that was unimaginable at the beginning of the 20th century. For the pessimists, there is global climate change, the ozone hole, air and water pollution, overpopulation, natural and human-caused environmental disasters, widespread hunger and poverty, rampant extinction of species, exhaustion of natural resources, and destruction of ecosystems. In the face of these conflicting perceptions, it makes no sense to try to use external limits as a foundation for inquiry and action on the future of humans and the planet. It is time to look elsewhere.

All sides in the limits-to-growth debate would probably agree on the following two observations: First, the dynamic, interactive system of complex biogeochemical cycles that constitutes Earth’s surface environment is falling increasingly under the influence of a single dominant life form: us. Second, this life form, notable for its ability to learn, reason, innovate, communicate, plan, predict, and organize its activities, nonetheless exhibits serious limitations in all these same areas.

During the past 150 years, scientific and technological innovation has facilitated enormous growth: The population of Earth has increased approximately sixfold, the average life span of those living in the industrialized nations has doubled, agricultural productivity has increased by a factor of five, the size of the U.S. economy alone has increased more than 200-fold, the number of U.S. scientists has increased by more than 17 times, and the volume of globally retrievable information stored in analog and digital form has expanded by incalculable orders of magnitude. At the same time, 20% of the planet’s bird species have been driven into extinction, 50% of all freshwater runoff has come to be consumed, 70,000 synthetic chemicals have been introduced into the environment, the sediment load of rivers has increased fivefold, and more than two-thirds of the major marine fisheries on the planet have been fully exploited or depleted.

As Joel Cohen has brilliantly illustrated in his book How Many People Can the Earth Support?, there are many possible futures available to us. The only certainty is that present trajectories of growth cannot, and therefore will not, be maintained indefinitely. (Thomas Malthus got this point right more than 200 years ago. He simply failed to appreciate the productivity gains that science and technology could deliver.) The central question that faces us is whether we will be able to position ourselves to choose wisely among alternative future trajectories or will simply blunder onward. The markets will indeed adjust to the eventual depletion of fossil-fuel reserves, for example, but will likely be too shortsighted to prevent global economic disruption on an unprecedented scale, a situation that could even lead to global war.

If we continue to define our problem as external to ourselves—as limits imposed by nature and the environment—then we consign ourselves to a future of blundering. The limits that matter are internal. They are the limits on our collective ability to acquire, integrate, and apply knowledge.

Although it is difficult to isolate these limits neatly from one another, it is helpful to separate them into six large categories: limits of the individual, of sociobiology, of socioeconomics, of technology, of knowledge, and of philosophy. Although these might at first seem to be insurmountable shortcomings, I believe that our best hope for finding our place in nature and on the planet resides in embracing our limits and recognizing them as explicit design criteria for moving forward with our knowledge production and organization. I see potential for progress in each.

Individual limits. We all operate out of self-interest, which is entirely rational. Community spirit and altruism may be motivating factors, but given that we cannot know the effects of our individual actions on the larger systems in which we are enmeshed, the only reasonable alternative is for each of us to pursue our conception, however imperfect, of our own interests. Yet as social systems grow more and more complex and as they impinge more and more on natural systems, our individual vision inevitably captures less and less of the big picture. Our only option is to accept the limits of individual rationality and to take them into account in formulating public policy and collective action.

Sociobiological limits. During the course of our development, humanity’s special capabilities in areas such as toolmaking, language, self-awareness, and abstract thought have rendered us extraordinarily fit to engage in the competitive business of individual and species survival. We compete among ourselves at every organizational level and with other species in virtually every ecological niche. Cooperation, therefore, most often occurs at one level (a tribe or a nation, for example) in order to compete at a higher level (a war between tribes or nations). But at the highest levels—the behavior of an entire species competing with or dominating billions of other species—we have run out of reasons to cooperate or structures to foster effective cooperation. We need to consciously search for ways to transcend our sociobiological limits on cooperation.

Socioeconomic limits. We have done our best to make a virtue out of our individual and sociobiological limits through market economics and democratic politics. Yet we are unable to integrate the long-term consequences of our competition-based society into our planning processes. Our competitive nature values the individual over the group, but the aggregation of individual actions constantly surprises us. Despite our best intentions, our actions are consistent with a global economy predicated on the expectation of continued growth and development derived from ever-increasing resource exploitation. Thus, for example, we all climb into our cars in the morning thinking only that this is the most convenient way to get to work. We are not deliberately choosing to waste time in traffic jams, exacerbate the trade deficit, and pump greenhouse gases into the atmosphere.

We find it extraordinarily difficult to anticipate or accurately account for the costs and risks incurred over the long term by such group behavior. Indeed, those costs and risks vary wildly from individual to individual and from group to group. An example of this is the cost/benefit calculation that must have been made regarding New Orleans, where the probability of catastrophic flooding is low and the cost of protecting the city is high. At every level of the political system, the individual perspective outweighed the collective, with the result that adequate protection for the whole community lost out. Because of these complexities, efforts to advance the long-term interests of the whole by controlling the short-term behavior of the individual are doomed to failure, which is one of the lessons of the global collapse of communism.

Technological limits. To evade the behavioral limits of biology and economics, we have turned to technology. Indeed, technology, harnessed to the marketplace, has allowed industrialized societies to achieve amazingly high standards of living. In doing so, however, we have put our future into the hands of the lowest bidder. Cheap oil and coal, for example, ensure our continued dependence on the internal combustion engine and the coal-burning power plant. The problem we face is not a shortage of polluting hydrocarbon fuels, but an excess. History shows that we will develop increasingly efficient energy technologies but that gains in efficiency will be more than offset by the increased consumption that accompanies economic growth. The increased efficiency and cleanliness of today’s cars when compared with those built in 1980 are an example. Technology has allowed us to pollute less per mile of driving, but pollution has declined little because we drive so many more miles. Too often we choose technologies that save us from today’s predicament but add to the problems of tomorrow.

Knowledge limits. There is absolutely no a priori reason to expect that what we can know is what we most need to know. Science uses disciplinary organization to recognize and focus on questions that can be answered. Disciplines, in turn, are separated by methodology, terminology, sociology, and disparate bodies of fact that resist synthesis. Although disciplinary specialization has been the key to scientific success, such specialization simultaneously takes us away from any knowledge of the whole.

Today the whole encompasses six billion people with the collective capability of altering the biogeochemical cycles on which we depend for our survival. Can science generate the knowledge necessary to govern the world that science has made? Do we even know what such knowledge might be? Producing 70,000 synthetic chemicals is easy compared to the challenge of understanding and dealing with their effects. Despite the billions we have spent studying our interference with the planet’s biogeochemical cycles, we really do not have a clue about what the long-term result will be. And we have even less knowledge about how to organize and govern ourselves to confront this challenge.

The intrinsic difficulties of creating a transdisciplinary synthesis are compounded dramatically by a dangerous scientific and technological illiteracy among senior policy-makers and elected officials. An ironic effect of technology-created wealth is the growth of an affluent class that prizes individualism over civic engagement and that feels insulated from the need to understand and confront complex technology-related social issues.

Philosophical limits. The scientific and philosophical intellectuals of “the academy” remain focused on the relatively simple question of understanding nature. The much more complicated and challenging—and meaningful—quest is to understand nature with a purpose, with an objective, with an end. What is the purpose of our effort to understand nature: to learn how to live in harmony with nature or to exploit it more efficiently? For thousands of years, philosophical inquiry has been guided by such fundamental questions as “Why are we here?” and “How should we behave?” Such questions were difficult enough to confront meaningfully when our communities were small, our mobility limited, and our impact restricted. In today’s hyperkinetic world, how can we possibly hope to find meaning? The literal answers provided by science amount to mockery: We are here because an expanding cloud of gas some 15 billion years ago eventually led to the accretion of planets, the formation of primordial nucleotides and amino acids, the evolution of complex organisms, the growth of complex social structures in primates, and the dramatic expansion of cognitive and analytical capabilities made possible by the rapid evolution of neocortical brain structures. Such explanation is entirely insufficient to promote the commonality of purpose necessary for planetary stewardship. We lack a unified or unifiable metaphysical basis for action, just when we need it most.

I list these limits—which no doubt could be parsed and defined in many different ways—not to bemoan them, but to acknowledge the boundary conditions that we face in learning how to manage our accelerating impact on Earth. How can we create knowledge and foster institutions that are sensitive to these boundary conditions? This is a sensitivity that we have hardly begun to develop and that will not be found in any of the compartmentalized traditional disciplines that we nurture so earnestly.

Not only do we perpetuate traditional disciplines, we assign inordinate significance to distinctions in a strict hierarchy: Disciplines trump other disciplines based on their quantitative capacities. The academy remains unwilling to fully embrace the multiple ways of thinking, the different disciplinary cultures, orientations, and approaches to solving problems that have arisen through hundreds if not thousands of years of intellectual evolution. Our science remains culturally biased and isolated: Western science is derivative of a philosophical model of domination and the manipulation of nature, as opposed to the acceptance of natural systems and dynamics.

The problems that we face are not hierarchical, nor do they fall within strict disciplinary categories. They require multiple approaches and an integration of disciplines; we cannot expect biologists alone to solve the problem of the loss of biodiversity. Because each academic discipline has a Darwinian focus on its own survival, none has the impetus or the capacity to develop a formal language to make itself comprehensible to other disciplines. We have not developed the means for chemists to talk to political scientists, and for political scientists to talk to earth scientists, and for earth scientists to talk to engineers. The debate must engage a broad community of disciplines, and not just the expertise found within the universities but also the wisdom and expertise developed in commerce, industry, and government.

We need new ways to conceive of the pursuit of knowledge and innovation, to understand and build political institutions, to endow philosophy with meaning for people other than philosophers. We trumpet the onset of the “knowledge society,” but we might be much better off if we accepted that, when it comes to our relations with nature, we are still pretty much an “ignorance society.” Our situation is reminiscent of Sherman McCoy, the protagonist of Tom Wolfe’s Bonfire of the Vanities, who fancies himself a “Master of the Universe” just as his life is taken over by events far beyond his control. We have the illusion of understanding and are not humbled by the fact that we do not understand. We refuse even to consider the possibility.

Hubris, exemplified in the demands we make on science, is a major obstacle to coming to grips with our situation. We are obsessed with trying to predict, manage, and control nature, and consequently we pour immense intellectual and fiscal resources into huge research programs—from the Human Genome Project to the U.S. Global Change Research Program—aimed at this unattainable goal. On the other hand, we devote little effort to the apparently modest yet absolutely essential question of how, given our unavoidable limits, we can manage to live in harmony with the world that we have inherited and are continually remaking.

Concepts such as sustainability, biodesign, adaptive management, industrial ecology, and intergenerational equity—new principles for organizing knowledge production and application—offer hints of an intellectual and philosophical framework for creating and using knowledge appropriate to our inherent limits. Sustainability is a concept as potentially rich as justice, liberty, and equality for guiding inquiry, discourse, and action. Biodesign seeks to mimic and harness natural processes to confront challenges in medicine, agriculture, environmental management, and national security. Adaptive management acknowledges the limits of acquiring predictive understanding of complex systems, and although the prospect of their control is illusory, the genesis of increasingly sophisticated data sets should impart increasing “predictability” to the bandwidth in which systems may behave. Industrial ecology responds to our tendency to organize and innovate competitively, and looks to natural systems for a model of innovation that can enhance competitiveness while reducing our footprint on the planet. Intergenerational equity seeks to apply core societal values such as justice and liberty across boundaries of time as well as space. Of course, we will need many other new ways to think about and organize our actions, but these few indicate a beginning.

Common to all such approaches is the idea that more flexibility, resilience, and responsiveness must be built into all institutions and organizations—in academia, the private sector, and government—because society will never be able to control the large-scale consequences of its actions. In today’s ignorance society, we must define some measure of rationality and recognize that the only way to reduce uncertainty about the future is to take action and carefully observe the outcomes. We must establish threshold criteria for, or at least attempt to define, the range of potential scenarios for which some degree of planning either to promote or obstruct a given outcome should be contemplated. The latter is the more difficult, particularly if a major risk or disaster begins to emerge. Yet we should not succumb to the paralysis of the “precautionary principle,” which saps innovation and risk-taking. The more institutional and organizational innovation we conduct, the better the chances that we will learn how to deal with the implications of our own limits.

The ideological and institutional struggle between communism and market democracy can be viewed as one such set of competing innovations, albeit poorly planned and exceedingly costly. A key result of this innovation competition is the certain knowledge that rational self-interest cannot be successfully suppressed indefinitely and that legal systems that foster dissent and freedom of choice provide a fertile culture for innovation. We now urgently need to conceptualize a new series of innovations, at much lower cost and shorter run-time, to push this result further and apply it to the problem of ensuring that our global society can continue to be sustained by the web of biogeochemical cycles that makes life possible in the first place.

From the Hill – Winter 2007

Federal R&D funding stuck on hold

The outgoing Republican majority in Congress left town in December having passed only 2 (defense and homeland security) of the 11 appropriations bills needed to fund the fiscal year (FY) 2007 budget. Although Congress has endorsed the Bush administration’s proposed large increases for select physical sciences funding agencies as part of the president’s American Competitiveness Initiative, these and other increases for R&D programs are on hold and may not survive in the next congressional session. However, Speaker-Elect Nancy Pelosi (D-CA) reiterated her support for President Bush’s proposal to increase funding for basic research in the physical sciences and engineering.

In the meantime, agency programs will be operating at either the FY 2007 House figure or FY 2006 funding levels, whichever is lower. Thus, even R&D programs slated for large increases in 2007 must operate at flat funding levels through mid-February (when work on the budget is expected to be completed), whereas R&D programs slated for steep cuts find their operating funds reduced sharply.

Congress approved an FY 2007 Department of Defense (DOD) budget that includes $76.8 billion for R&D. Nearly the entire $3.5 billion or 4.8% increase would go to weapons development programs, but some research activities will see budget increases.

Congress rejected the administration’s request for a 20% cut in DOD’s science and technology (S&T) investments. Instead, it voted to maintain spending near the 2006 funding level of $13.6 billion. A profusion of congressional earmarks would boost DOD support of basic and applied research above 2006 levels. Basic research would climb 4.8% to $1.5 billion, and applied research would increase 0.8% to $5.2 billion. The research-oriented Defense Advanced Research Projects Agency (DARPA) would see its budget increase 1.4% to $3 billion.

The Air Force and the Missile Defense Agency (MDA) would be the big winners in weapons development funding. Air Force R&D would climb 10.7% to $24.4 billion, and MDA development would surge 22.1% to $9.4 billion after a steep cut in 2006.

Congress, meanwhile, cut the Department of Homeland Security’s (DHS’s) R&D funding for the first time in 2007. DHS R&D would fall 22% to $1 billion, even as the total DHS budget would keep increasing. Funding for most DHS R&D activities would decline. Only DHS R&D activities in cybersecurity, interoperable communications, and radiological and nuclear countermeasures would receive increases in 2007.

The radiological and nuclear countermeasures R&D portfolio would receive a significant increase as part of its move from the Science and Technology directorate to a separate Domestic Nuclear Detection Office (DNDO). Congress increased DNDO R&D from $209 million within S&T to $273 million, up 31%.

Congressional dissatisfaction over DHS management continues to grow. The final DHS budget withholds $65 million in 2007 R&D funds (and an additional $60 million in management funds) until DHS provides Congress with detailed reports on financial management and performance measures. The bill also rescinds $125 million in previously appropriated R&D funds that DHS has not yet spent.

Although a still-increasing defense budget would help defense R&D increase by 4.5% to $81.2 billion, a flat overall domestic budget would keep nondefense R&D flat in real terms with a 2% increase to $58.8 billion. Defense R&D would make up 58% of the federal R&D portfolio, a ratio not seen since the height of the Cold War.

Although Congress has so far supported the president’s American Competitiveness Initiative, proposed increases for its three key physical sciences agencies are on hold until next year. The National Science Foundation (proposed to go up 8.3% to $4.5 billion), the Department of Energy’s Office of Science (up 15.3% to $3.8 billion), and Commerce’s National Institute of Standards and Technology laboratories (up 20% to $377 million) could receive the full requested increases for their R&D programs, but the new 110th Congress could chisel away at these increases in order to shift funds to other programs.

Also, Congress has so far supported the administration’s proposal to boost the National Aeronautics and Space Administration’s (NASA’s) R&D funding by $858 million or 7.6% to $12.2 billion; all of the increase and more would go to development efforts for the next generation of human space vehicles, leaving NASA research funding in steep decline.

The National Institutes of Health (NIH) budget would decline for the second year in a row to $28.5 billion; all but three of NIH’s institutes and centers would see their budgets shrink for the second year in a row.

Many agencies face uncertain prospects in final FY 2007 appropriations because of big differences between House- and Senate-proposed funding levels. As a result of all the budget work left unfinished, most R&D funding agencies are in the now-familiar situation of going two, three, or even more months into a new fiscal year without a final budget. For the U.S. science and engineering community, the longer the delays, the more likely it is that proposed funding increases for R&D programs will get carved away and transferred to other programs.

New Bush climate plan falls short, critics say

After four years of work, the Bush administration on September 20, 2006, unveiled a strategic plan for using technology to reduce the risk of climate change. However, it was immediately criticized for falling short of what is needed to deal with the issue.

The Climate Change Technology Program (CCTP) Strategic Plan outlines $3 billion in spending across agencies for technology research, development, demonstration, and deployment to reduce greenhouse gas emissions. It sets six complementary goals: reducing emissions from energy use and infrastructure; reducing emissions from energy supply; capturing and sequestering carbon dioxide; reducing emissions of other greenhouse gases; measuring and monitoring emissions; and bolstering the contributions of basic science to addressing climate change. The plan outlines near-term, mid-term, and long-term approaches toward attaining these goals and examines what is needed to meet varying levels of emissions reductions. Overall, the plan essentially reiterates the administration’s position that basic scientific research and voluntary actions are adequate to solve the problem.

Department of Energy officials described the plan at a hearing held by the House Science Committee Subcommittee on Energy. The House Government Reform Committee held a hearing the following day. The overriding message from lawmakers and witnesses was that although the plan adequately describes necessary research, it fails to deliver on innovative solutions or the deployment of new technologies. Lawmakers and witnesses emphasized that without regulations limiting the emissions of greenhouse gases, companies will not adopt new technologies. The same point was made in a recent Government Accountability Office (GAO) report.

The hearing of the Government Reform Committee went beyond an examination of the strategic plan to investigate whether a new Climate Change Advanced Research Projects Agency is needed to forge groundbreaking research. Such a new agency would be modeled after the successful DARPA. CCTP Director Stephen Eule argued that such a new agency would take away funds from other agency activities. Eule also clarified that the role of the strategic plan is not to set goals for reducing greenhouse gas emissions but rather to suggest technological opportunities.

The hearing also featured testimony from GAO Director of Natural Resources and Environment John Stephenson, who stated that changes in the format and content of the administration’s reports make it difficult to determine whether stated increases in climate change funding are “a real or definitional increase.”

EPA revises clean air standard

Just days before a court-imposed September 27, 2006, deadline, the Environmental Protection Agency (EPA) revised one standard for human exposure to fine particulate matter (PM) but kept another the same, despite a recommendation for change from its science advisory board.

The revised standard, which will take effect in 2015, lowers the 24-hour exposure to PM from 65 micrograms per cubic meter of air to 35 micrograms. The EPA said it would leave unchanged the yearly allowable average exposure of 15 micrograms, even though 20 of the 22 members of the EPA’s science advisory board, as well as the American Medical Association, had recommended that it be reduced to 13 to 14 micrograms. EPA Administrator Stephen Johnson said there is “insufficient evidence” to justify a lower standard for annual average exposure.

Toxicological and epidemiological studies have shown fine PM to be associated with the aggravation of heart and respiratory disease, asthma attacks, lung cancer, chronic bronchitis, and premature death. “Of the many air pollutants regulated by EPA, fine particles likely pose the greatest threat to public health due to the number of people exposed,” said William Wehrum, acting assistant administrator, EPA Office of Air and Radiation.

At a July 13 hearing of the Senate Environment and Public Works Subcommittee on Clean Air, Climate Change, and Nuclear Safety, a key Republican and Democrat clashed over the need for new standards. Subcommittee Chairman George Voinovich (R-OH) disputed the need for new regulation, especially given the difficulty that many counties have had in meeting existing standards. He said that penalties associated with nonattainment status can be a substantial economic burden on counties and that adopting the new standards could significantly increase the number of counties in violation of the rule. Voinovich noted that the provisions of the Clean Air Act require the EPA merely to assess air quality standards every five years, not necessarily to revise them.

Noting the country’s burgeoning health care costs, Sen. Thomas Carper (D-DE) said, “The cost of breathing dirty air is a far higher burden on the economy than paying for air pollution control.” Carper stated that because of rising health care costs, even industry groups such as the National Association of Manufacturers recommend addressing chronic diseases such as asthma.


“From the Hill” is prepared by the Center for Science, Technology, and Congress at the American Association for the Advancement of Science (www.aaas.org/spp) in Washington, D.C., and is based on articles from the center’s bulletin Science & Technology in Congress.

Not Safe Enough: Fixing Transportation Security

In August 2006, British authorities announced that they had uncovered a plot in which liquid explosives would be used to destroy airliners en route from England to the United States. When the U.S. Transportation Security Administration (TSA) responded by banning all liquids and gels from passenger aircraft cabins, it was widely reported that such action was necessary because existing screening equipment could not detect the kind of explosives involved in the plot. This is perhaps understandable, given the daunting technological challenge of developing detectors that are not only sensitive enough to identify explosives from among the wide array of liquids routinely carried on aircraft, but also compact and affordable enough to be widely deployed and quick enough to not unduly impede passenger flow.

But this response comes 11 years after the similar Bojinka plot (a plan to bomb 12 U.S.-registered aircraft flying across the Pacific) was foiled in 1995, five years after the 9/11 hijackings that produced unprecedented attention and funding for aviation security, and two years after the 2004 law that sought to implement the National Commission on Terrorist Attacks Upon the United States’ (9/11 Commission’s) recommendation to prioritize the development and deployment of effective checkpoint explosives detectors. By itself, this is sobering; taken together with widespread evidence that such limited progress is the rule rather than the exception across transportation security programs, it makes clear that a fundamental reassessment of how the United States is approaching this issue is in order.

Although customs agents or checkpoint screeners are often the focus of published reports on security lapses, the shortcomings in the current system do not rest primarily with the front-line personnel charged with implementing transportation security. Rather, the problem is that the Bush administration and Congress have failed to adequately address the key, big-picture questions about the management, funding, accountability, and, above all, priorities for the emerging transportation security system. Although progress in bolstering transportation security has been made since 9/11, additional action is needed to remedy continuing deficiencies.

Before September 11, 2001, U.S. transportation security was limited in extent and purpose. Transit police and subway surveillance cameras sought to deter or detect criminal activity. Customs agents at ports looked for smugglers. In aviation, the only sector that had received significant security policy attention and resources from the federal government, the emphasis was overwhelmingly directed overseas. The events of 9/11 altered all of that. The federal government responded with a flurry of initiatives, including:

  • The Aviation and Transportation Security Act of 2001, which created TSA to be responsible for the security of all transportation modes and established deadlines for the implementation of specific aviation security measures.
  • The Maritime Transportation Security Act of 2002, which set security guidelines for ports and ships.
  • The Homeland Security Act of 2002, which established the Department of Homeland Security (DHS) by combining 22 federal agencies, including TSA, the Coast Guard, the Customs Service, and the Federal Emergency Management Agency (FEMA).
  • The Intelligence Reform and Terrorism Prevention Act of 2004, which turned many of the 9/11 Commission’s recommendations, including those relating to transportation security, into statutory mandates.

The expanded policy attention was accompanied by a substantial rise in federal funding for transportation security, which increased from well under $200 million in fiscal year (FY) 2001 (almost all of which went for aviation security) to more than $8.5 billion in FY 2006, with 70% devoted to aviation.

As a result, improvements have been made in many areas of transportation security during the past five years. Passenger aircraft are much less vulnerable to another 9/11-style of attack. More air and sea cargo is being scrutinized in some fashion. More attention has been given to vulnerability assessments of the nation’s transportation infrastructure and to security training for law enforcement and transportation workers. Above all, there is greater awareness of the terrorist threat. But major questions remain about the effectiveness of all elements of the new system.

The Heritage Foundation and the Center for Strategic and International Studies jointly reported that DHS “is weighed down with bureaucratic layers, is rife with turf warfare, and lacks a structure for strategic thinking and policymaking.” According to a 2005 survey of homeland security officials and independent experts, “the department remains underfinanced and understaffed, and suffers from weak leadership.” The 9/11 Commission found that “The current efforts do not yet reflect a forward-looking strategic plan systematically analyzing assets, risks, costs, and benefits.” The successor to the 9/11 Commission, the 9/11 Public Discourse Project (PDP), reported that, as of December 2005, progress in implementing the commission’s transportation security recommendations rated a “C” for airport checkpoint screening for explosives, a “C–” for the National Strategy for Transportation Security, a “D” for checked bag and cargo screening, and an “F” for airline passenger prescreening.

Aviation security

Among the layers of aviation security, TSA’s intelligence division is now more relevant to its agency leadership and decisionmaking process than was its predecessor within the Federal Aviation Administration. But although it is twice as large as its predecessor, it remains significantly understaffed, and its agents are now spread much more thinly, with responsibilities for all transportation modes, not just aviation.

Progress has been reported in airport perimeter security through a reduction in airport access points, an increase in surveillance of individuals and vehicles entering airports, and some improvement in airport-employee background checks. However, little has changed in the old system’s divided responsibilities for access control, with the federal government, airport authorities, and to a lesser extent the airlines all having a role.

The number of names currently on the “no-fly” list used to prevent known or suspected terrorists from boarding commercial aircraft and the “automatic selectee” list that subjects them to additional security scrutiny is now much larger than before 9/11. However, administrative problems and the intelligence community’s concern about sharing sometimes highly classified names with the private airlines that still implement these programs have resulted in many of those identified in the government’s consolidated terrorist watch list still not appearing on either aviation security list.

The prescreening system in use today is largely the same as the one that selected 10 of the hijackers on 9/11, with the added consequence that selectees are now subject to a search of their persons and carry-on bags. Testing of the follow-on program, called Secure Flight, has progressed very slowly. Privacy groups remain concerned about the program’s potential, describing it at a hearing of a DHS advisory panel on privacy rights as “an unprecedented government role, designating citizens as suspect” and criticizing TSA for being “incredibly resistant” to providing the public with necessary information. In September 2006, Congress reported that passenger names were still not being checked against the full terrorist watch list and cut funding for Secure Flight by more than 50%.

Problems have persisted in the screening of passengers and checked bags, which has received by far the most post-9/11 policy attention and funding. The current federalized checkpoint screening workforce is more numerous, better paid, more experienced, and better trained than its pre-9/11 private-contractor counterpart. But a series of reports by the DHS inspector general has documented continuing poor performance.

By the latter part of 2003, all checked bags were being screened for explosives, compared with just 5% before 9/11. But because of shortages of equipment and screener personnel, some of the screening is being done with canine bomb-sniffing teams and manual bag searches rather than explosives-detection equipment. There are continuing concerns about the capabilities of existing explosives-detection equipment and about placing the machines in airport lobbies rather than integrating them with baggage conveyor systems, an arrangement that offers both operational and security benefits. TSA has indicated that under current funding levels, this “optimal” baggage screening system will not be fully deployed until 2024.

The onboard security layers underwent the most immediate transformation after 9/11. U.S. airlines and international airlines flying to the United States were required to install reinforced cockpit doors. The “Common Strategy” training program for flight crews (which as of 9/11 had called for accommodation with hijackers) was revised to take into account the 9/11 tactics. The number of federal air marshals was increased dramatically, and agents are now assigned to domestic as well as international flights.

Nevertheless, deficiencies have been noted in each of the onboard security measures. The effectiveness of the hardened cockpit doors has been questioned because of reported crew failures to secure the doors throughout flights. The quality of the new security training has also been doubted, with the Association of Flight Attendants indicating that “we still have not been trained to appropriately handle a security crisis or terrorist attack onboard our airplanes.” The rapid expansion of the air marshal program has produced operational and management problems, including an abbreviated training curriculum, and budget constraints have prevented the program from reaching its target staffing levels.

In addition, general aviation security has not been substantially upgraded. There is no security screening for pilots and passengers or for baggage and cargo. Threat and vulnerability assessments have yet to be undertaken for most general aviation airports. Similar shortcomings continue to exist in air cargo security. Reportedly, only 5% of all air cargo is currently screened, and the Government Accountability Office (GAO) has indicated that aircraft carrying cargo continue to be highly vulnerable to terrorist sabotage. TSA has proposed a set of regulations for air cargo security, but as is typical for rulemaking, the process is moving very slowly. Even if finalized, the new rules would provide few details on how the freight industry, which is expected to implement the security program, is to fulfill this unfunded mandate.

Maritime security

Although the potential vulnerability of the U.S. maritime sector was noted before 9/11 (for example, by the 2000 federal Interagency Committee on Crime and Security), little was done to address that vulnerability. Since 9/11, the situation has changed somewhat, although far less has been done than in the aviation sector.

Under the Homeland Security Act of 2002, the Coast Guard was made primarily responsible for port security, but the DHS inspector general, the GAO, and the service’s own commandant have all reported that the Coast Guard has had difficulty fulfilling its new responsibilities alongside its traditional rescue, drug-interdiction, and other missions under current funding levels. And a 2005 maritime security report observed that “A compressed and disjointed timeline for implementing the act has definitely affected what the MTSA (Maritime Transportation Security Act) has actually accomplished…. Overall, too many facility plans are little more than lists of activities that individually and collectively fall far short of the goals set by MTSA.”

The Customs and Border Protection division of DHS has principal responsibility for container security. Its Container Security Initiative (CSI) identifies and prescreens “high-risk” containers bound for the United States from the largest ports outside the country. The Customs-Trade Partnership Against Terrorism (C-TPAT) creates government/industry partnerships that offer expedited customs processing for shipping companies that reduce their security vulnerabilities. But in March 2006, congressional investigators reported that only a little more than one-third of the high-risk containers identified by the CSI program were actually inspected and that only about one-fourth of the companies receiving preferential treatment under C-TPAT had had their security practices validated. They also found that, despite the spending of more than $300 million and the priority supposedly attached to the undertaking, fewer than 40% of cargo containers entering U.S. ports were being screened for nuclear or radiological material.

Because of these and other perceived shortcomings, Congress in October 2006 enacted the SAFE Ports Act, which seeks to expedite the development and deployment of more advanced inspection detectors, codify and revise the CSI and C-TPAT programs, require that port security grants be based on risk assessment, establish a port security training and exercise program, and require DHS to develop a plan to enhance the security of the maritime supply chain and speed the resumption of trade after a terrorist attack. It remains to be seen whether the new law will be accompanied by the sustained funding and implementation efforts likely to be necessary if it is to prove more effective than previous attempts to boost maritime security.

Land transportation security

Although terrorists have repeatedly demonstrated both an interest in and the capability to attack the land transportation modes, exemplified most recently by the bombings of Madrid commuter trains in 2004 and London subways in 2005, a report noted that “the least emphasis has been placed on this area because it was perceived as least pressing, and also because it is hardest to protect.”

With a total FY 2007 budget for land transportation security of just $37.2 million and with only 100 land transportation security inspectors, TSA and DHS continue to play a minimal part in securing this mode. Under congressional prodding, DHS has provided about $460 million to date in security grants to rail and transit systems. The Department of Transportation (DOT) land-transportation modal agencies (including the Federal Railroad Administration, the Federal Transit Administration, the Federal Highway Administration, and the Federal Motor Carrier Safety Administration) have retained substantial security roles, but these too are operating with very limited resources (less than $60 million overall in the current budget).

These limited federal investments, directed primarily at rail and transit systems, have financed a number of security measures, including vulnerability assessments, increased law enforcement presence, enhanced surveillance, expanded worker security training, and limited deployment of explosives-detection capability, such as canine teams. However, with an almost complete absence of data on the effectiveness of these measures, it is difficult to discern what impact they have had. In addition to a lack of resources, land transportation security efforts have been hindered by poorly defined roles and responsibilities for the federal agencies involved, inadequate policy planning, limited information sharing, and insufficient security training for front-line personnel.

Learning from 9/11?

Despite the massive increase in attention and resources devoted to transportation security, and especially aviation security, in the wake of the 9/11 catastrophe, many systemic weaknesses continue.

Reactivity and incident-driven decisionmaking still predominate in transportation security, as is evident in the persistent focus on aviation-passenger screening, the short-lived priority given to rail and transit security after the 2004 Madrid and 2005 London rail and transit bombings, the recent attention to the long-standing problem of liquid explosives in the wake of the foiled London plot to blow up airliners, the limited use of rulemaking in making permanent revisions in the transportation security baseline, and the absence of strong policy planning.

An important goal of both the pre- and post-9/11 aviation security systems was a layered approach to transportation security, under which the failure of a single component does not lead to a systemwide failure. That goal continues to be honored more in the breach than in the observance. Existing layers are either flawed (still the case in aviation security), incomplete (container security relies overwhelmingly on a single layer, the prescreening of cargo), or virtually nonexistent (in all land modes, with the partial exception of mass transit).

As was true on 9/11, security is still not being engineered into and integrated with basic transportation operations. Evidence of such integration is scarce in aviation and maritime security, but the most glaring example is land transportation. Federal spending in FY 2006 on the security of this entire mode totaled just $317 million (3.7% of all federal transportation security expenditures), and the August 2005 legislation to reauthorize the single largest federal investment in transportation (grants for highway construction and safety and public transit) contains very little on security design or performance standards. That law authorized grants totaling $286 billion through 2009, of which just over $30 million is statutorily required to be spent on transit security programs.

Transportation security is not being handled as a national security issue. In its final report in 1997, the Gore Commission on Aviation Safety and Security observed that terrorists “know that airlines are often seen as national symbols. When terrorists attack an American airline, they are attacking the United States.” For these reasons, the commission recommended that the federal government treat aviation security as a national security issue. The available evidence clearly indicates that as of September 11, 2001, this goal had not been achieved, and aside from checkpoint and baggage screening, there is little indication that the government is currently treating either aviation security or transportation security as a whole as a top-priority matter. The $31 billion DHS budget proposed by the administration for FY 2007 represents just 3.5% of total discretionary spending. Although this is certainly well above pre-9/11 funding levels, it stands in stark contrast to the $439 billion allocated to the Department of Defense and is more on a par with the $34 billion provided for the Department of Housing and Urban Development.

Unanswered questions

An inability or unwillingness to address three particular shortcomings of the pre- and post-9/11 transportation security system continues to limit progress. Without better guidance than has been provided to date on fundamental policy questions concerning priorities, roles, and funding sources, neither DHS nor TSA nor the other federal and nonfederal components of transportation security will be able to succeed, regardless of the best efforts of their workforces.

How is security to be prioritized and balanced with other societal imperatives, including fiscal responsibility, economic efficiency, and civil liberties? Before 9/11, many other values were allowed to outweigh security considerations. Although it was perhaps true in the immediate aftermath of the terrorist attacks that the country and its leaders were willing to subordinate other priorities to homeland security needs, with the passage of time and in the absence of further incidents, other claims have predictably and necessarily been reasserted. Although there is no formal dual mandate for TSA, like the one that had required the Federal Aviation Administration (FAA) to both regulate and promote civil aviation, countervailing pressures can still be seen within the current aviation security program, whether in the form of arbitrary congressional limits on the screener workforce driven by budgetary pressures or the abandonment of the more ambitious CAPPS II airline passenger prescreening system in the face of strong opposition from privacy advocates. Outside of aviation, the Customs and Border Protection division, the Coast Guard, and the various DOT land-transportation modal authorities face their own dual mandates in trying to balance the newly received security mission with their older, more established, and far better–funded legacy or core missions.

There is thus an urgent unmet need for full debate on the costs and benefits of proposed security measures in order to determine the proper balance among security for society, individual rights, personal convenience, and financial cost. There are no easy answers here, and to pretend otherwise, or even worse to ignore such a need, was an invitation for disaster on 9/11 and continues to be so.

How should transportation security be organized? Who should be responsible for what? One facet of the 9/11 aviation security failure was the lack of accountability afforded by a system of divided responsibilities. For the most part, little has been accomplished post-9/11 to clarify the situation. Other than for passenger aviation, the security roles and responsibilities of federal, state, local, and private entities in all transportation modes remain largely undefined. The apparent abandonment of the Aviation and Transportation Security Act of 2001’s vision of TSA as the primary federal agency responsible for transportation security, as well as the loss by DHS of certain intelligence coordination functions envisioned for it in the Homeland Security Act of 2002, has actually led to a post-9/11 proliferation of federal agencies responsible and presumably accountable for discrete elements of transportation and homeland security.

How are security measures to be funded? This question has not so much been poorly answered as ignored by policymakers. In the pre-9/11 aviation security system, documented screening-performance shortcomings were not fixed and mandated explosives-detection systems were not deployed largely because of the unresolved question of who should pay. After 9/11, the November 2004 legislation deleted the 9/11 Commission’s recommendation that the national transportation security plan provide a means for adequately funding its security measures. Even today, key federal security efforts for air cargo, ports, and mass transit are little more than unfunded mandates. In the absence of clear cost-allocation decisions by the federal government, attempts to increase security investments in areas such as airport access control, airline flight crew security training, general aviation, port security, rail transportation, mass transit, highways, bridges, tunnels, and pipelines will continue to be deferred or denied.

Turning the tide

To address the current weaknesses in transportation security, policymakers must tackle the following systemic and policy flaws.

First, federal policymakers must treat transportation security as a matter of national security, with commensurate resources and policy focus, rather than as the second-tier activity suggested by current funding levels and bureaucratic clout. For this to occur, it is imperative that the artificial budget and policy distinctions between national defense and homeland security be eliminated so that security and counterterrorism efforts at home and abroad can be better integrated and that a more comprehensive assessment be undertaken to determine the optimum allocation of roles and resources. One attempt to do so is the Unified Security Budget developed by the Center for Defense Information (CDI), which compiles in one overall budget account national security programs for military forces, homeland security, and international affairs. Whether or not one accepts the policy choices made in the CDI budget (which calls for cutting $62 billion from the military budget and transferring $52 billion of this to homeland security and international affairs), its aim to “give Congress a look at the big picture, and provide the basis for a better debate over this nation’s security priorities” and to “be a tool of decision-making about cost-effective trade-offs across agency lines” is one that must be realized if we are to optimize not only transportation security but national security properly understood.

Second, the transportation security system must prioritize budgets and policy measures, within and across program lines, based on relative risks rather than on responding to the latest incident, bureaucratic inertia, or pork-barrel politics. Most independent analyses of current transportation security policymaking suggest that such priority setting is absent, and without it neither the agencies, the administration, the Congress, nor the public can properly evaluate the critical decisions being made about the large increases in federal funding that currently support homeland security. Decisions about the optimum number, and therefore cost, of airport checkpoint screeners, federal air marshals, port security assessments, canine teams for mass transit, or transportation security intelligence analysts should not be made in isolation. Yet there currently does not appear to be any more of a basis for TSA to prioritize across all transportation modes than there was for the FAA to do so within aviation security. To begin improving this situation, Congress should reject the National Strategy for Transportation Security mandated by the Intelligence Reform and Terrorism Prevention Act of 2004 and submitted by DHS in November 2005. As previously noted, the 9/11 PDP gave this document a “C–,” finding that it “lacks the necessary detail to make it an effective management tool.” In requiring a resubmission of the plan, Congress should clarify and strengthen the existing statutory mandate by returning to the original 9/11 Commission recommendation that the document clearly assign transportation security roles and missions and provide a means for funding implementation of the plan.

Third, the goal ought to be to build a comprehensive and sustainable baseline of standard security across all transportation modes and intermodal connections rather than the current series of largely unconnected systems based on ad hoc, reactive decisionmaking. The pattern of incident, heightened attention, and gradual diminution of perceived threat and security effort is almost certain to be repeated as the 9/11 events recede in memory. When faced with a similar quandary after the foiling of the Bojinka plot in 1995, the FAA created the Baseline Working Group (BWG) to try to raise the standard security level that would be operable system-wide in the absence of specific threats or disasters. Although the BWG was overtaken by events (the crash of TWA flight 800 and the appointment of the Gore Commission), the concept of an unexceptional baseline standard of security continues to be a sound one, not only for aviation but for all transportation modes.

The following key elements of such a baseline can be found in the 2002 National Research Council (NRC) report Making America Safer:

  • Identify and repair the weakest links in vulnerable systems and infrastructures.
  • Build security into basic system designs where possible.
  • Build flexibility into systems so that they can be modified to address unforeseen threats.
  • Search for technologies that reduce costs or provide ancillary benefits to civil society to ensure a sustainable effort against terrorist threats.

Consistent with this approach, Congress should amend the 2005 surface transportation reauthorization legislation to extend the kind of security consciousness it already applies to mass transit to highway and bridge construction programs as well. The pursuit of technologies or policies that confer ancillary benefits is particularly important because, as the NRC report observed, “such multiuse, multibenefit systems have a greater chance of being adopted, maintained, and improved.” Examples in transportation security range from improving the physical safety and security of passengers and workers against nonterrorist criminal acts to reducing the opportunity for cargo theft. Another is comprehensive security training for the transportation workforce, which would include instruction in how to recognize, report, and respond to suspicious activity, as well as self-defense. Such training is often stated as a goal of existing programs, but as in so many other areas, the unanswered question of who pays has stymied progress. And despite numerous complaints about the content and quality of existing training for airline flight crews, transit workers, and others, little has been done to evaluate or improve its effectiveness, with the exception of the extensively studied training of airport checkpoint screeners.

Fourth, there must be improvement in the quality and flow of relevant security information to state, local, and private stakeholders and to the general public, rather than the current approach that continually calls for “heightened alert” to a nonspecific threat while offering largely unfounded reassurances about current vulnerabilities. The GAO’s January 2005 designation of homeland security information-sharing as “high-risk” demonstrates the continuing need to improve information flow in homeland security, including transportation systems. This is not surprising given the complexity of the task; on the other hand, the clearest and most unchallenged lesson from 9/11 was in this very area. The great challenge, and the enormous opportunity, lies in integrating the many parties involved, or potentially involved, in collecting transportation security information. Apart from the federalized aviation passenger and baggage screening workforce, the vast majority of the individuals who are likely to see suspicious incidents, implement or enforce security measures, or respond to security incidents are nonfederal workers: private-sector employees; local police, fire, and rescue personnel; and, of course, the law-abiding passengers and shippers who wish to help in fighting the terrorist threat. Furthermore, DHS should heed the advice of the Heritage Foundation and the Center for Strategic and International Studies to scrap the existing nationwide alert system and replace it with “regional alerts and specific warnings for different types of industries and infrastructure,” an approach used effectively in the case of the foiled 2006 London bomb plot.

Finally, there must be a clear assignment of transportation security roles to federal and nonfederal agencies, and those assigned must be held fully accountable for security performance rather than continuing the old and current systems of often ill-defined “shared responsibilities” in which no agency is held accountable. It is particularly important that this be done with respect to land transportation, where the federal role has been particularly ill-defined. One potentially useful step would be to have TSA assume responsibility for managing the security of the Washington, DC, Metro transit system. Although much thought would need to be given as to how to make this work, having TSA in charge would appear to offer important advantages, including prioritizing the defense of one of the two most likely terrorist targets in land transportation (the other being the New York City subway system), providing a test bed in which innovative security systems and procedures could be demonstrated for other transit systems, gaining valuable operational knowledge for TSA that would lead to more informed security rulemaking for transit systems, and enabling TSA to coordinate planning for the evacuation of the federal and nonfederal workforce from Washington in the event of a national emergency.

Securing the nation’s transportation systems is very, very difficult. Would-be defenders are confronted with the daunting assignment of protecting a vast nationwide array of airports, ports, tracks, roads, tunnels, bridges, stations, cargo, passengers, and workers. They must do so in a manner that minimizes disruption to commerce, inconvenience to riders, and costs to customers, shippers, and taxpayers. They must continually defeat a terrorist enemy that can choose the time, place, and method of attack, using publicly available information on security vulnerabilities. And they must cope with the fact that the nature of this particular threat means that protections must be maintained even though incidents may be few and far between.

In response to all of these challenges, the federal government has, to date, done considerably more with respect to transportation security than was the case before 9/11. Even so, many independent analyses have concluded that these efforts fall far short of the need. Most glaringly, more than five years after 9/11, the federal government has yet to come to grips with basic questions about priorities, roles, and funding. Unless and until it does so, significant and sustained progress is unlikely.

Avoiding Gridlock on Climate Change

For the twelfth consecutive year, nearly 190 nations convened in November 2006, this time in Nairobi, to address the critical issue of climate change. Unfortunately, the atmosphere at these two-week annual conclaves most resembles a medieval trade fair: a hearty reunion of thousands of well-tailored diplomats (some countries send as many as 100 representatives), plus additional thousands of nongovernmental “observers,” some manning colorful information booths, others intent on picturesque mayhem to attract squadrons of riot police. Hundreds of media representatives also join the party in search of a provocative sound bite or an attention-grabbing image.

These UN mega-conferences have by now developed a predictable pattern. Considerable time is occupied by tedious problems of coordinating positions and tactics, both inside the huge national delegations and within blocs of countries such as the European Union and other regional or “like-minded” coalitions. There are the usual dire warnings—fully justifiable—of impending global catastrophe. There are trivial protocol debates and ritualistic ministerial speeches exhorting complicated and unrealistic actions. There are cultural diversions such as boat rides on the Rhine or dance performances in Marrakech. As the end nears, all-night negotiating sessions contribute to a sense of destiny.

But despite the customary self-congratulatory finale, the results at Nairobi, as at preceding meetings, were embarrassingly meager. This process has been going on every year since the 1995 Berlin Conference of Parties to the United Nations (UN) Framework Convention on Climate Change. The treaty, with its integral accompanying explanatory texts, definitions, and regulations, has meanwhile grown from under 40 pages to several hundred pages of numbing complexity. Much of the negotiating has centered on how the industrialized countries can dilute (for example, by emissions trading or by arbitrary estimates of emissions absorbed by ecological land use) the unrealistic emission targets that they accepted in the midnight hours nine years ago in Kyoto and on how much money should flow from the “rich” to the “poor” countries, which have no targets at all.

The process is inevitably slow because of the large number of parties involved. But is it necessary to have everyone at the table? In actuality, only 25 nations, about half of them in the “developing” category, are responsible for about 85% of the world’s greenhouse gas emissions. None of the other 160-plus countries accounts for even 1%! Many of the world’s largest emitters are, in fact, “newly industrializing” nations that shun even a hint of voluntary restraints by claiming affinity with the poorest developing countries. Yet India’s emissions are greater than those of Japan and Germany, Brazil emits more than the UK, and China’s emissions exceed those of every country but the United States. Moreover, greenhouse gas emissions from these soi-disant “poor” nations are growing far more rapidly than those of the “rich.”

The climate meetings, obsessively focused on short-term targets and timetables applying only to industrialized nations, have become trapped in a process that is unmanageable, inefficient, and impervious to serious negotiation of complex issues that have profound environmental, economic, and social implications extending over many decades into the future. The Kyoto Protocol, lamely defended by its proponents as “the only game in town,” now best serves the interests of politicians whose rhetoric is stronger than their actions and of those commercial interests and governments that want no meaningful actions at all—notably, Saudi Arabia, Kuwait, and other Near East oil producers, and the U.S. administration, which is not unhappy with the treaty’s lack of progress.

Lessons from the ozone history

It is worth recalling that the 1987 Montreal Protocol on Substances That Deplete the Ozone Layer, later characterized by the heads of the UN Environment Program and the World Meteorological Organization as “one of the great international achievements of the century,” was negotiated by only about 30 nations in nine months, with delegations seldom exceeding six persons and with minimal attention from outside observers and media. I doubt whether the ozone treaty could have been achieved under the currently fashionable global format.

We might draw some useful lessons from the ozone history. In the late 1970s, the ozone science was actually much more disputed than the climate science of today, and the major countries that produced and consumed chlorofluorocarbons (CFCs) were hopelessly deadlocked over the necessity for any controls at all. In this situation, the first international action on protecting the ozone layer was neither global, nor even a treaty. Rather, it was an informal accord among a loose coalition of like-minded nations, including Australia, Canada, Norway, Sweden, and the United States, to individually and separately ban the use of CFCs in aerosol spray cans.

This measure alone resulted in a temporary 30% drop in global CFC consumption (temporary because these “wonder chemicals” were continuing to find new uses in numerous industries). But the action was nevertheless significant for the future. The resultant technological innovations demonstrated to the skeptics (in this case the European Community, Japan, and the Soviet Union) that controls were feasible, at least for this class of products. It also gave the United States and other proponents of a strong treaty the moral and practical high ground in later negotiations to restrict all uses of CFCs. Yet, if anyone had actually proposed a 30% reduction target, it would surely have been rejected as impossible.

An important lesson here is that a specific policy measure, not an abstract target, could stimulate unanticipated technological innovation. The policy measure drove the agreement on targets in the later ozone protocol, not vice versa. In contrast, the half-hearted performance of most governments with respect to climate policy measures has not matched their political rhetoric about the urgency of targets.

Another important lesson from the Montreal history was that not all countries need to agree in order to take a substantial step forward. It is also relevant to note that, in contrast to Kyoto, developing nations did accept limitations on their CFC consumption, but only when they were assured of equitable access to new technologies. Technology development is the missing guest at the Kyoto feast.

An architecture of parallel regimes

Is it sensible to continue trying to solve the extremely complex climate problem in a single common format with 190 countries having widely variant national interests? Given the halting progress of the Kyoto Protocol, supplementary agreements appear inevitable and have, in fact, already appeared in the form of bilateral and other arrangements, including innovative actions at state and municipal level. The Montreal experience suggests that pursuing activities involving fewer countries and more narrowly defined objectives, in parallel to the Kyoto process, might be more effective than focusing solely on targets and timetables in a global forum.

The climate problem could be disaggregated into smaller, more manageable components with fewer participants—in effect, a search for partial solutions rather than a comprehensive global model. An architecture of parallel regimes, involving varying combinations of national and local governments, industry, and civil society on different themes, could reinvigorate the climate negotiations by acknowledging the diverse interests and by expanding the scope of possible solutions.

To be sure, even here success would require a degree of genuine political will among at least a significant number of key governments. Nonetheless, by focusing on specific sectors and policy measures in smaller, less formal settings with varying combinations of actors and by not operating under UN consensus rules, the possibilities for achieving forward motion would be increased. The process and results could be termed protocols or forums or agreements, but their essential character would more closely resemble a pragmatic working group than a formal diplomatic negotiation. Following are some examples.

Energy technology research and development. The most important issue not addressed by the Kyoto approach is that solving the climate problem will require an energy technology revolution on a global scale that would enable, at acceptable costs, emissions reductions of 60 to 70% or more in the course of this century. As early as 1998, a group of scientists from Pacific Northwest National Laboratory and other institutions, led by Jae Edmonds, initiated the pioneering Global Energy Technology Strategy Program (GTSP). Employing integrated modeling analysis, they demonstrated that the magnitude of emissions reductions required to avoid dangerous climate change would entail prohibitive costs in the absence of a new generation of “post-modern” technologies. They showed that technological breakthroughs in several areas, including carbon capture and sequestration, biotechnology (biomass), hydrogen, nuclear power, solar and wind, and end-use (efficiency) technologies, could save trillions of dollars compared to “innovation as usual.”

These insights initially went largely unheeded in the prevailing efforts to make the Kyoto Protocol functional. Current worldwide energy R&D is clearly inadequate. It is ironic that governments were negotiating emission-reduction targets while simultaneously reducing their budgets for energy technology R&D. In effect, they were implicitly counting on a reluctant private sector, stimulated by the modest Kyoto targets, to somehow generate a monumental transformation of the world’s existing fossil-fuel-based energy system.

Given the stakes, energy research arguably merits a degree of public sector commitment comparable to that devoted not long ago to aerospace and telecommunications. The amounts involved are not exorbitant, especially when compared to expenditures in other parts of our entertainment- and defense-oriented economies. For example, a U.S. tax of $8 per ton of carbon, equivalent to two cents per gallon of gasoline, would generate about $12 billion annually—about six times what the government currently spends on energy R&D. Investments in an energy technology revolution would also yield political dividends by stimulating economic growth, job opportunities, and commercial spin-offs, as did the space program.
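
The arithmetic behind these figures is straightforward. The following back-of-the-envelope check is only a sketch: it assumes U.S. fossil-fuel emissions of roughly 1.5 billion metric tons of carbon per year and a carbon content of about 2.4 kilograms per gallon of gasoline, figures supplied here for illustration rather than drawn from the sources cited above.

\[
\$8 \text{ per ton C} \times 1.5\times 10^{9} \text{ tons C per year} \approx \$12 \text{ billion per year}
\]
\[
\$8 \text{ per ton C} \times 0.0024 \text{ tons C per gallon of gasoline} \approx \$0.02 \text{ per gallon}
\]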

Why not, then, convene an open forum for like-minded countries, North and South, with participation also from industry, universities, and research institutions? The industrialized nations could contribute to the energy technology research programs of developing countries, recognizing that technological solutions may vary in different regions. The participants could also commit to increase basic and applied research budgets and to collaborate in technology development and diffusion.

Transportation. A large proportion of global carbon dioxide emissions comes from the transportation sector, particularly automobiles. As China and India, each with populations of over one billion, expand automobile use, their resultant emissions will dwarf those of the industrialized North. Yet, it is no secret that much more fuel-efficient vehicles, or even cars that do not need fossil fuel at all, are feasible.

Is it not conceivable that the 15 or 20 major automakers of the world, together with the ministers of industry of their respective nations, could convene in a medium-sized conference hall and hammer out a schedule for introducing low-carbon and then no-carbon vehicles? The topics could range from new fuels and engines to strong but lightweight structural materials. No auto manufacturer could complain of competitive disadvantage, for they would all operate under the same constraints, and new technologies would be shared. Moreover, the companies would be encouraged to pool their intellectual talent in order to arrive at technological solutions sooner and at lower cost. (Their respective advertising departments would doubtless later find ways to differentiate their products for consumers’ tastes.) Interestingly, this type of collaboration was fostered by the Montreal Protocol among companies in the race to eliminate CFCs.

Power generation. A similar process of collaborative technology research, development, and diffusion could be applied to electric power generation. Even though there are many more producers worldwide, one can conceive of arrangements on a subnational or cross-border basis to stimulate cooperation on emissions-reducing technologies. This is already occurring among U.S. states and Canadian provinces. Agreements could also be reached on commitments to progressively introduce technology standards for future new power plant construction, which would influence private sector long-term investment planning.

Agriculture, coal, and adaptation technologies. Relevant governments and industries could collaborate on the development of biofuels, biomass, and land-use and agricultural practices to promote carbon absorption. Clean coal as well as carbon capture and sequestration technologies would be an important subject for North-South collaborative agreements involving major coal-producing and consuming countries such as Australia, China, the Czech Republic, Germany, India, Poland, and the United States. Partners from industrialized countries could also work together with developing countries on new adaptation technologies. For example, the Netherlands could share its extensive experience in flood prevention and water management with countries such as Bangladesh, Egypt, and island states of the Pacific and Caribbean.

Other technology R&D agreements. Separate protocols or collaborations, involving flexible coalitions of governments, industry, universities, and civil society, could be envisaged to promote other technological innovations, including energy efficiency, hydrogen, nuclear, solar, wind power, and biotechnology. A working example is the ITER project to explore fusion energy.

Government procurement policies. Because of their buying power, governments can, through procurement policies, provide a powerful market stimulus for technological innovation. Industrialized and developing country governments could develop procedures to aggressively promote energy efficiency, non-fossil fuels, and innovation through their automobile fleet procurement, building standards, and other commercial policies. In the ozone history, the U.S. Department of Defense played an unexpectedly critical role in accelerating the global phase-out of CFC 113 by revising its procurement standards.

Regional cooperation. Regional forums would provide another opportunity to bring together industrialized and developing countries for scientific, technical, and policy cooperation, as well as for the diffusion of new technologies. The 2005 Arctic Climate Impact Assessment was a model of innovative scientific and policy-related cooperation on climate issues among diverse nations and peoples bordering on the Arctic. The Asia-Pacific Partnership on Climate and Clean Development, established in 2005, involves Australia, China, India, Japan, South Korea, and the United States, which together account for about 60% of the world’s energy use. The partnership, which includes industry participation, is intended to complement, not replace, the Kyoto Protocol, and promotes collaboration in eight areas: aluminum, buildings and appliances, cement, cleaner use of fossil fuels, coal mining, power generation and transmission, renewable energy and distributed generation, and steel. One could envision similar models in other regions.

Looking forward. There are no easy answers; we could begin by admitting that over a decade of global negotiations has not brought notable progress. We should be open to new ideas. Parallel regimes would complement the Kyoto Protocol’s focus on short-term emissions targets and trading by developing the essential technological and policy conditions for the steep emissions reductions that will be necessary in future decades. In order to influence long-term private investment decisions in energy, transport, and infrastructure, policy-oriented parallel regimes should be reinforced by clear signals that the price of carbon emissions will rise indefinitely—for example, gradually increasing carbon taxes and cap-and-trade systems similar to the successful U.S. experience in the 1990s with sulfur dioxide. U.S. leadership in this process, as in the Montreal ozone protocol, would be critically important, but any given regime could make practical progress even absent U.S. involvement.

Parallel regimes would enable motivated governments to move away from the mega-conference syndrome and its accompanying trade-off mentality, and instead to focus on pragmatic problem-solving coalitions in smaller and less formal settings. Public-private partnerships drawing on industry expertise, local communities, and civil society would be characteristic of this approach. Negotiations and consultations would be reduced to a manageable number of countries and delegations, and would be more specialized and technical in their scope. Providing reports on these activities to the wider audience of the annual Conference of Parties to the Framework Convention could stimulate other countries to join one or another regime of interest and could gradually transform the Convention into a forum for dissemination of new ideas and practical results, rather than an instrument for illusory consensus, rhetoric, and delay.

Glide Path to Irrelevance: Federal Funding for Aeronautics

The nation’s 100-year preeminence in aviation is in serious jeopardy. So, too, are the medium- and long-term health and safety of the U.S. air transportation system. The peril stems from a lack of national consensus about the federal government’s role in civilian aviation generally and about the National Aeronautics and Space Administration’s (NASA’s) role in aviation technology development in particular. Aeronautics—the first “A” in NASA—is now vastly overshadowed in resources, managerial attention, and political support by the agency’s principal mission of space exploration and discovery. Indeed, most people have no idea that NASA is the leading, and essentially the only, agency that is organizationally and technically capable of supporting the nation’s leadership in air transportation, air safety, and aircraft manufacturing.

The aeronautics community supports an expansive public R&D program, with NASA playing a lead role. But during the past seven or eight years, successive administrations and Congresses have reduced NASA’s aeronautics budget without articulating how the program should be scaled back. In these circumstances, NASA has tried to maintain a sprawling program by spreading diminishing resources across existing research establishments and many objectives and projects—too many to ensure their effectiveness and the application of their results.

With its plans to return humans to the Moon and eventually send them to Mars, the Bush administration has added to the problem by further reducing the aeronautics budget. The budget request for fiscal year (FY) 2006 and succeeding years anticipates a 50% reduction in NASA’s aeronautics R&D spending and personnel by 2010. The current NASA management understands that such resources will not support an expansive program and proposes to refocus efforts on fundamental research, avoiding costly demonstration projects. That may appear to be a reasonable strategy given the current outlook for funding, but it risks losing the support of industry stakeholders and other intended users of NASA-developed technologies. They operate in a risk-averse environment and often depend on outside suppliers to deliver well-proven technologies. This is especially the case in public goods research, such as safe, efficient air-traffic management and environmentally benign aviation operations, in which the argument for NASA involvement is strongest. Thus, with either its previous peanut-butter-spreading approach or its current fundamental research focus, we believe that the agency is on a glide path progressively leading to the irrelevance of the first A in NASA.

The administration’s 2006 budget proposal exposed the lack of agreement between the government and the aeronautics community about the federal government’s role in aeronautics. NASA’s former associate administrator, Victor Lebacqz, acknowledged as much in defending the president’s budget request before the House Science Committee. He said that there currently are two contending points of view. One point of view, reflected in a host of remarkably consistent blue-ribbon commissions and national panel reports, is that the aviation sector is critically important to national welfare and merits government support to ensure future economic growth and national competitiveness. This view implies an expansive public and private R&D program. The other view, reflected in the administration’s budget submission, is that the aviation industry is approaching maturity, with aviation becoming something of a commodity, and that the government can therefore retrench and leave technology development to the private sector. Lebacqz neglected to mention what in our view is the most compelling case for reinvigorating national investment in aerospace technologies: clear public-good objectives (mobility, safety, and environmental protection) served by NASA’s R&D involvement.

At any rate, the proposed retrenchment had a galvanizing effect. Congress rejected the proposed cut and restored NASA’s Aeronautics Research Mission Directorate (ARMD) budget. At the same time, Congress passed the NASA Authorization Act, which called on the administration to prepare a policy statement on aeronautics as a basis for further discussion with Congress. A new NASA administrator and associate administrator withdrew proposed plans to scale back support for aeronautics and set to work on a new plan for ARMD.

These were encouraging signs that a potentially fatal retrenchment could be avoided. But in his FY 2007 budget proposals for NASA, the president proposed a further 18% cut in aeronautics, to $724 million. This is in comparison to the $16.8 billion total NASA request, mostly targeted on space. If enacted, the resulting aeronautics budget in real terms would be less than one-half what it was in 1994.

Thus, it is long past time for a sustained high-profile national dialogue about the public value of national investments in aeronautics, distinct from space, and the very real continuing threat to NASA’s unique role and capabilities in aeronautics.

World leadership in air transportation and aircraft manufacturing is widely viewed as a cornerstone of U.S. economic welfare and national security. Department of Transportation statistics are revealing. U.S. residents already have the highest per capita level of air travel in the world, and use is rising steadily. Domestic commercial flights, the backbone of the U.S. travel industry, carried 660 million passengers in 2005. The Federal Aviation Administration predicts one billion passengers by 2015. General aviation already flies 150 million more passengers than do commercial flights. Air cargo has grown 7% annually since 1980, making it by far the fastest-growing mode of freight transportation during the past two decades. It now accounts for more than one-quarter of the overall value of U.S. international merchandise trade, steadily gaining ground on the maritime sector, which has a two-fifths share. JFK International Airport alone handled $125 billion worth of international air cargo in 2004; this total ranks ahead of the value of cargo through the Port of Los Angeles, the nation’s leading maritime port.

Aviation’s national economic impact does not stop with the air transport system. Aerospace exports in 2005 made up nearly 30% of all U.S. exports in the category that the Department of Commerce labels “advanced technology products.” Census Bureau trade figures indicate that aerospace, mainly airplanes and parts, delivered a surplus to the United States of nearly $37 billion in 2005, which significantly defrayed an $82 billion deficit in all other advanced technology categories. Indeed, for years aerospace has regularly logged the widest positive trade margin among U.S. manufacturing industries.

As for aeronautics’ military significance, the Department of Defense’s (DOD’s) guiding doctrine relies significantly on air superiority and on aircraft rapid-strike and force-deployment capabilities. Moreover, a variety of aeronautics technologies, such as stealth and unpiloted remote-sensing aircraft and airborne command and control systems, have transformed military operations not only in the air but on the ground and at sea. This centrality is reflected in procurement strategy: A 2005 RAND analysis found that the DOD spends on the order of a third of its procurement budget on aerospace, including about $40 billion every year to buy aircraft and other air systems.

Nonetheless, recent signs that the nation’s preeminence in aviation may be imperiled have occasioned deep concern. At least 12 studies of U.S. activity in aeronautics published during the past half decade by the National Academies and various industry and government bodies have called attention to the vulnerability of the United States’ traditional leading position. In its final report, the Commission on the Future of the United States Aerospace Industry, widely known as the Walker Commission, stated that “the critical underpinnings of this nation’s aerospace industry are showing signs of faltering” and warned bluntly, “We stand dangerously close to squandering the advantage bequeathed to us by prior generations of aerospace leaders.” In 2005, the National Aerospace Institute, in a report commissioned by Congress, declared the center of technical and market leadership to be “shifting outside the United States” to Europe, with a loss of high-paying jobs and intellectual capital to the detriment of the United States’ economic well-being.

The clear message is that the United States must overcome a series of major challenges—to the capacity, safety, and security of the nation’s air transportation system, to the nation’s ability to compete in international markets, and to the need to reduce noise and emissions—if the nation’s viability in this sector, let alone international leadership, is to be ensured.

National needs fall into four broad areas. The first three involve classic public or quasi-public goods in which there is little disagreement that the federal government should play a central role. These categories are air traffic control, emissions and noise reduction, and air safety and security. In practice, the central federal role falls to NASA. No other organization has remotely comparable capabilities. Were it not for NASA, little R&D would be performed, key supporting infrastructure would not exist, and new technologies would not be developed, because the benefits appropriable by private enterprise are too limited or too widely diffused to attract investment. The fourth category centers on commercial competitiveness. Here, there is much more policy debate about the role of the federal aeronautics enterprise. And the ideological tone of this debate carries over to, and dwarfs and distorts, discussion of the other three areas.

The following discussion highlights the four categories and the related policy debates.

Modernizing a strained air transportation system. Air transportation in the United States has, in a sense, fallen victim to its own popularity. The system is severely strained because of capacity limits, delaying tens of millions of passengers and many billions of dollars in cargo. In the face of growing demand, passenger airlines’ on-time records have been deteriorating. Only slightly more than three-quarters of all flights on major U.S. carriers in 2005 arrived within 15 minutes of being on time. To improve on-time performance records, airlines have extended scheduled flight times. Over short-haul routes (less than 500 miles), air travel is essentially no longer faster than earthbound alternatives: door-to-door travel speeds work out to between 35 and 80 miles per hour. The Walker Commission calculated that barring transportation system improvements, the delays will cost the U.S. economy $170 billion between 2002 and 2012, with annual costs exceeding $30 billion by 2015.
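
To see how door-to-door speeds end up so low, consider an illustrative short-haul trip; the time allotments below are assumptions made for this sketch, not figures from the studies cited here.

\[
\text{effective speed} = \frac{\text{trip distance}}{\text{door-to-door time}}
\approx \frac{400 \text{ miles}}{1.5 \text{ hr gate-to-gate} + 3.5 \text{ hr of airport access, security, boarding, and egress}} = 80 \text{ mph}
\]

A 250-mile trip that consumes seven hours door to door works out to roughly 35 mph, the low end of the range cited above.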

Yet demand represents only one side of the equation. The air-traffic management system, although generally judged to be safe, reliable, and capable of handling today’s traffic flow, largely relies on 1960s technology and operational concepts and resists innovation. The system’s limitations, along with other factors such as airport runway capacity, place severe constraints on future expansion. The skies and landing patterns will become even more cluttered as hundreds of air taxis join the fleets annually during the next decade, thanks to the introduction of relatively inexpensive so-called microjets. In a 2003 report, a National Academies’ committee was emphatic: “Business as usual, in the form of continued, evolutionary improvements to existing technologies, aircraft, air traffic control systems, and operational concepts, is unlikely to meet the challenge of greatly increased demand over the next 25 to 50 years.”

Significant technical hurdles remain:

  • The need to accommodate an increased variety of vehicles and venues. Such aircraft include air taxis, unpiloted aircraft, aircraft that use tilt-rotor propulsion systems to achieve nearly vertical takeoff and landing, “lighter-than-air” aircraft, and other aircraft that do not need runways.
  • Heightened security and reliability of voice, data, and video connections to in-flight aircraft.
  • Increased use of automation and satellites in handling traffic flow.
  • Use of synthetic vision, cockpit display of traffic information, and controller displays to improve awareness of aircraft separation.
  • Systems engineering and real-time information management and communication for moving from local traffic control to regional and nationwide traffic flow control and optimization.
  • Prediction and direct sensing of the magnitude, duration, and location of wake vortices.
  • Safety buffers against monitoring failures and late detection of potential conflicts.

Curtailing environmental degradation. Efforts during the past half century, primarily supported by the federal government, have paid off in significant reductions of both the noise and emissions emanating from turbine engines. But the growth of air traffic over the period has more than offset technological progress. In fact, objections to aircraft noise and emissions have been the primary barriers to building new airports or adding new runways at existing airports. These two steps are key to relieving pressure on the nation’s overburdened air transportation system, simultaneously increasing system capacity and travel speeds.

Technical needs here include:

  • Low-emission combustors to reduce emissions of nitrogen oxide and particulate matter
  • Alternative energy sources
  • Structures and materials to reduce drag and improve aerodynamics
  • Understanding aviation’s effect on climate and the need to balance nitrogen oxide and carbon dioxide emissions
  • Improved dispersion models, which look at how pollutants disperse in, react with, and interact with the atmosphere
  • Standardized methods for measuring particulate emissions
  • Improved engine and airframe noise-reduction technologies
  • Reducing sonic boom to enable a new generation of commercial supersonic transports

Enhancing safety and security. The air transportation system has an excellent safety record. From 2002 to mid-May 2006, U.S. commercial aviation, both passenger and cargo, saw a total of 59 fatalities resulting from eight events, yet carried well over 2 billion domestic passengers on more than 40 million flights. However, as forecast demand accelerates during the next 25 to 50 years, there is little assurance that historical trends will continue. Indeed, National Transportation Safety Board Chairman Mark Rosenker released a report in late 2005 suggesting that near-misses between passenger jets at the nation’s most congested airports occur “with alarming frequency.” At least 326 “runway incursions,” close calls that could have led to accidents, occurred at U.S. airports in 2004. Rosenker put much of the blame on the technologies currently in use. Moreover, the 9/11 terrorist attacks did more than expose the vulnerabilities of the air transportation system; they focused attention on new homeland security requirements that call for system capabilities not previously anticipated.

Looking forward, the roadmap of safety-related technology needs involves:

  • Fault-detection and control technologies to enhance aircraft airworthiness and resiliency against loss of control in flight
  • Prediction, detection, and testing of propulsion system malfunctions
  • Technologies to reduce fatalities from in-flight fires, post-crash fires, and fuel tank explosions, including self-extinguishing fuels
  • On-board weather and hazard identification
  • Systems using synthetic vision and digital terrain recognition to allow all-weather visibility
  • Technologies to reduce weather-related accidents and turbulence-related injuries
  • Understanding human error in maintenance and air-traffic control
  • Blast-resistant structures and luggage containers
  • More sensitive, accurate, and faster technology for passenger screening
  • Intelligent autopilots able to respond to anomalous flight commands
  • Reduced vulnerability of Global Positioning System guidance

Increasing the performance and competitiveness of commercial aircraft. Several recent reports share the view that European competition, which already has eroded U.S. dominance of commercial large jet sales, threatens one of the nation’s few standouts among value-added exports. The U.S. share of this global market plummeted from 71.1% in 1999 to about 50% today, with the U.S. company Boeing and the European company Airbus now trading the market leader spot from year to year. In 2005, Airbus took orders for more aircraft (1,055) than Boeing (1,002), though Boeing’s aircraft were higher in total value. One positive note is that Boeing’s new 787 Dreamliner appears to be competing well against the Airbus A350. U.S. companies that manufacture military airframes continue to dominate worldwide, in large part because of the sheer size of the Pentagon’s procurement budgets. But these companies rely increasingly on foreign suppliers, particularly those in countries targeted for sales, squeezing the second and lower tiers of the U.S. defense industrial base.

Two indicators of industry health are employment and R&D. Trends in both areas are worrisome. In February 2004, total U.S. aerospace employment hit a 50-year low of 568,700 workers, the majority in commercial aircraft, engines, and parts. This level was more than 57% below the peak of 1.3 million workers in 1989. By the end of 2005, employment had nudged back up to 626,000 workers. Meanwhile, the aerospace share of U.S. R&D investment dropped from about 19% of the total in 1990 to only 5% in 2002; the comparable figure in Europe was 7%. Although the United States could obtain advanced aircraft and air-traffic management systems from foreign suppliers if U.S. manufacturers fail to remain competitive, the implications of such dependency are troubling well beyond the clear national security concerns and beyond the aeronautics industry itself. Aerospace has among the highest economic and jobs multipliers because it draws on a wider variety of other high-value sectors—computers, electronics, advanced materials, precision equipment, and so on—than nearly any other industry.

In terms of providing public goods, the technical issues in this category relate primarily to improving aircraft efficiency and performance. Technological advances may help increase high-technology employment and reduce imports. Other potential positive public externalities include transportation time savings, increased system capacity, reduced energy dependence, reduced environmental impact, and reduced public infrastructure needs. Related technical challenges include:

  • Improved propulsion systems, both the evolution of high-bypass turbofan engines burning liquid hydrocarbon fuels and the development of engines using hydrogen as fuel
  • New airframe concepts for subsonic transports, supersonic aircraft, runway-independent vehicles, personal air vehicles, and uninhabited air vehicles
  • Composite airframe structures combining reduced weight, high-damage tolerance, high stiffness, low density, and resistance to lightning strikes
  • High-temperature engine materials and advanced turbomachinery
  • Enhanced airborne avionic systems
  • The application of nanotechnology for advanced avionics and high-performance materials
  • Passive and active control of laminar and turbulent flow on aircraft wings

Advances in each of these areas would be welcome. But given the severity of budget constraints, advancing every area is probably not possible. So where to set priorities? We urge a focus on cross-cutting enabling technologies and on maintaining and upgrading NASA’s unique national testbed facilities. Some technologies under development will have application primarily in one of the four major categories described above. Others are crucial in more than one area and play enabling roles across the board; these technologies are so interrelated that progress, or the lack of it, in one can affect progress in the others. The following general technical capabilities or enabling technologies are particularly central:

Modeling and simulation. A 2003 National Research Council report provides a detailed set of recommendations that would provide “the long-term systems modeling capability needed to design and analyze evolutionary and revolutionary operational concepts and other changes to the air transportation system.” Modeling and computer simulation are also significant factors in lowering manufacturing costs, which could help make commercial supersonic aircraft economically viable. Taking a broader view, modeling and simulation, among other information technology applications, will contribute not only to automating and integrating the air transportation system but also to reducing aviation transit times, fatal accident rates, noise and emissions, and the time-to-market cycle for new technologies.

Human factors. In aviation safety, human factors are critical and need more support. Air traffic controllers are central to the efficiency and safety of the airspace, especially during periods of inclement weather and poor visibility. Unfortunately, the stereotype of the harried and perhaps burned-out controller has a significant basis in aeromedical research. In addition, pilot errors, often related to fatigue, regularly lead to fatal crashes, including the crash of an American Connection commercial flight in late 2004 that left 13 dead. Such errors are particularly problematic in general aviation, leading to, for example, the accidents that killed U.S. Senator Paul Wellstone and John F. Kennedy Jr. With the expected increase in automation in both individual aircraft and the total air transportation system, significantly better human interfaces and decision-aid technologies will be required to deal with the decisionmaking complexities and data overloads such systems will generate. The Walker Commission, concurring that human factors research could help “enhance performance and situational awareness . . . in and out of the cockpit,” predicted it would be a “primary contributor” to tripling the capacity of the U.S. air transportation system by 2025. In addition, research on the impact on people (and structures) of the sonic boom pressure waves created by supersonic flight is needed to inform both vehicle design and safety regulations.

Distributed aeronautics communications networks. In the final analysis, the most complex problem of all may well be the integration of national and worldwide air, space, and ground communication networks. A highly automated, high-throughput, secure, and accident-free national airspace system will be extraordinarily information-dense and highly distributed geographically, and it will have to meet decisionmakers’ needs for essentially real-time data analysis and presentation with worldwide on-demand availability. Technologies currently in use have barely begun to address these needs. To help in moving ahead, the National Academies’ Committee for the Review of NASA’s Revolutionize Aviation Program recommended exploring “revolutionary concepts” related to distributed air-ground airspace systems, including the distribution of decisionmaking between the cockpit and ground systems and the reorganization of how aircraft are routed, with significant implications for airspace usage and airport capacity.

Even if NASA aeronautics program expenditures are stabilized and focused along these lines, ARMD managers will continue to face severe constraints. The first limitation is high fixed personnel costs. Total expenditures (salaries and fringe benefits) for aeronautics workers, including large contingents of civil service personnel as well as contractors, were slightly more than $400 million in fiscal 2006. This total is in the neighborhood of 45% of the aeronautics budget, even after assuming that NASA-projected workforce reductions occur. Yet even that assumption is in jeopardy, because the latest congressional authorization of NASA’s budget restricted the agency’s ability to reduce its workforce.

The second limitation is that certain fixed administrative costs incurred by the agency arise from its responsibilities as defined in the Space Act, which obligates NASA to maintain certain critical national facilities (wind tunnels and the like) and aeronautics core competencies. Overhead, such as general and administrative (G&A) costs, is normally determined for each center and applied as a percentage of the labor cost involved in the program at that center. G&A costs in the proposed 2007 budget total more than $250 million at the four major aeronautics-related NASA labs alone: Ames, Glenn, Langley, and Dryden. G&A costs at the labs are high because of the obligation to support their aging facilities and equipment.
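To make that overhead convention concrete, here is a minimal sketch in Python with purely hypothetical numbers; the 30% rate and $10 million labor figure are illustrative assumptions, not actual NASA or center-specific values.

    # Minimal sketch of the overhead convention described above.
    # The G&A rate and labor cost below are hypothetical illustrations,
    # not actual NASA or center-specific figures.

    def charged_cost(labor_cost: float, ga_rate: float) -> float:
        """Direct labor plus G&A overhead, with G&A applied as a
        percentage of the labor cost incurred at a given center."""
        return labor_cost * (1 + ga_rate)

    # Example: a program spending $10 million on labor at a center with an
    # assumed 30% G&A rate is charged $13 million in total.
    print(f"Total charged cost: ${charged_cost(10_000_000, 0.30):,.0f}")

Under this convention, a center’s fixed burden, such as aging facilities, is recovered through the rate applied to program labor, which is one reason the aeronautics programs at these labs carry high overhead.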

A third limitation is that an ever-growing part of NASA’s extramural program is earmarked by Congress for particular projects. In the past decade, the number of earmarks in NASA’s budget exploded more than 30-fold to 198. Earmarks totaled $568.5 million in fiscal 2006, fully eight times more in dollar terms than a decade before.

The issue is not so much whether any particular earmarked program or institution has technical merit or will substantially help a favored local constituency. Many surely do in isolation. But when it comes to effectively managing technology and ensuring maximum returns on public investments, NASA is rapidly losing the flexibility to optimize across its R&D portfolio—by field, level of risk, potential users and suppliers, time horizon, national systemic needs, core competencies, and so on. In our view, this risks turning NASA’s aeronautics activities into not so much a coordinated strategic national portfolio as a hodgepodge of unrelated pet projects.

In short, after earmarks, personnel costs, and fixed G&A costs, NASA for fiscal year 2006 was left with roughly the same amount of money for discretionary R&D spending as several multinational high-technology firms each spend on R&D in a single week. At times, the results in the research trenches seem almost surreal. Langley administrators recently sent a memo to employees cutting all spending for gas on agency-related travel and for new wireless connectivity, as well as pushing back—again—roof repairs and badly needed information technology maintenance and upgrades. Outdated computers, no more wireless connectivity, and bad roofs at one of the nation’s premier research institutions?

To us, this is stunning neglect of the national interest in the future of aeronautics technologies. At current and proposed funding levels, NASA and the nation cannot hope to come close to fulfilling national needs in the face of an already strained air transportation system; fierce and increasing international competition in aircraft markets; the environmental challenges of noise, emissions, and fuel efficiency; and demands for improved air safety and homeland security. NASA’s ARMD is the nation’s only organizationally and technically capable option for overall leadership in aeronautics technologies. Unfortunately, it is largely hidden from public view, structurally, financially, and politically buried in a space agency on a mission to Mars. How many additional hundreds of millions of delayed air travelers, or how many more national commissions warning about the perilous future of U.S. aeronautics, will it take to get policymakers to put the A back in NASA?

Nuclear Deterrence for the Future

The most significant event of the past 60 years is the one that did not happen: the use of a nuclear weapon in conflict. One of the most important questions of the next 60 years is whether we can repeat this feat.

The success that we have had in avoiding the construction and deployment of nuclear weapons by a large number of nations has been far better than anybody anticipated 40 or 50 years ago. Likewise, the fact that nuclear weapons have not been used is rather spectacular.

The British scientist, novelist, and government official C.P. Snow was quoted on the front page of the New York Times in 1960 as saying “unless the nuclear powers drastically disarmed, thermonuclear war within the decade was a mathematical certainty.” I think he associated with enough scientists and mathematicians to know what mathematical certainty was supposed to mean. We now have had that mathematical certainty compounded more than four times without any use of nuclear weapons.

When Snow made that statement, I did not know anyone who thought it was outrageous or exaggerated. People were really scared. So how did we get through these 60 years without nuclear weapons being used? Was it just plain good luck? Was it that there was never any opportunity? Or were there actions and policies that contributed to this achievement?

The first time when it seemed that nuclear weapons might be used was during the Korean War, when U.S. and South Korean troops retreated to the town of Pusan at the southern tip of Korea. The threat was serious enough that Britain’s prime minister flew to Washington with the announced purpose of persuading President Truman not to use nuclear weapons in Korea.

The Eisenhower administration, or at least Secretary of State John Foster Dulles, did not like what he called the taboo on the use of nuclear weapons. He said, “Somehow or other we must get rid of this taboo on nuclear weapons. It is based on a false distinction.” And the president himself said, “If nuclear weapons can be used for purely military purposes on purely military targets, I don’t see why they shouldn’t be used just as you would use a bullet or anything else.” The United States even announced at a North Atlantic Treaty Organization (NATO) meeting that nuclear weapons must now be considered to have become conventional.

U.S. policy had changed considerably by the time Lyndon Johnson became president. In 1964 he said, “Make no mistake. There is no such thing as a conventional nuclear weapon. For 19 peril-filled years no nation has loosed the atom against another. To do so now is a political decision of the highest order.”

Those 19 peril-filled years are now 60 peril-filled years. President Kennedy started, Johnson continued, and Secretary of Defense Robert McNamara spearheaded a powerful effort to build up enough conventional military strength within the NATO forces so that they could stop a Soviet advance without the use of nuclear weapons. Both Kennedy and Johnson had a strong aversion to the idea of using nuclear weapons.

During the 1960s, the Soviets officially ridiculed the idea that there could be a war in Europe that did not instantly— in their words, automatically—go nuclear, but their actions were very different from their public announcements. They spent huge amounts of money developing conventional weaponry, especially conventional air weaponry in Europe. This investment would have made no sense if a European war were bound to become nuclear, especially from the outset. It seems to me that the Soviets recognized the possibility that the world’s nations might get along without actually using nuclear weapons, no matter how many of them were in the stockpiles.

I find it noteworthy that as far as I know, the United States did not seriously consider using nuclear weapons in Vietnam. Of course, I’ll never really know what was in Richard Nixon’s or Henry Kissinger’s mind, but at least we know that they were not used.

Remarkably, Golda Meir did not authorize the use of Israel’s nuclear weapons when the Egyptians presented excellent military targets. At one point, two whole Egyptian armies were on the Israeli side of the Suez Canal, and there were no civilians anywhere in the vicinity. This was a perfect opportunity to use nuclear weapons at a time when it was not clear that Israel was going to survive the war. And yet they were not used. We can guess at some of the reasons, but I think it was Meir’s long-range view that it would be wise to maintain the taboo against the use of nuclear weapons because eventually any country could become a nuclear target.

When Great Britain was defending the Falkland Islands, it had several opportunities when nuclear weapons might have been effective, but Margaret Thatcher decided that they were not an option. The Soviets fought and lost a degrading and demoralizing war in Afghanistan without resorting to nuclear weapons. Some observers have argued that the Soviets had no viable targets; I believe that they did have opportunities but nevertheless decided against using nuclear weapons. I believe that the underlying rationale against their use was the same for these countries as it was for Lyndon Johnson: The many peril-filled years in which nuclear weapons were not used had actually become an asset of global diplomacy to be treasured, preserved, and maintained.

Maintaining the streak

Will the world be able to continue this restraint as more nations acquire nuclear weapons? Since Lyndon Johnson’s statement, India and Pakistan have developed nuclear weapons. Even in my lifetime, I expect to see a few more countries do so. How do we determine whether these new nuclear powers share the commitment to avoid the use of these weapons?

From a U.S. perspective, two ideas are worth considering. The country should reconsider its decision not to ratify the Comprehensive Test Ban Treaty. It was an opportunity to have close to 180 nations at least go through the motions of supporting the principle that nuclear weapons are subject to universal abhorrence. Nominally, the treaty was about testing, but I believe that it could have served a more fundamental purpose by essentially putting another nail in the coffin of the use of nuclear weapons.

I also believe that even if U.S. leaders believe that there are circumstances in which they would use nuclear weapons, they should not talk about it. And if they want to develop new weapons, they should do so as quietly as possible—even avoiding congressional action if possible. The world will be less safe if the United States endorses the practicality and effectiveness of nuclear weapons in what it says, does, or legislates.

The National Academy of Sciences Committee on International Security and Arms Control (CISAC), the Ford Foundation, the Aspen Institute, and other institutions have sponsored numerous international meetings on arms control, and these meetings have almost always included representatives of India and Pakistan. I believe that it was extremely important for them to hear at firsthand from U.S. scientists and political leaders about the dangers associated with the use of nuclear weapons. I believe that India and Pakistan also learned from watching Cold War leaders forego the use of those weapons because they feared where it might lead. Because I think that India and Pakistan have absorbed some of the lessons of this experience, I worry less about what might develop in an India-Pakistan standoff.

Now it is important to teach the Iranians that if they do acquire nuclear capability, it is in their national interest to use such weapons only as a means to deter invasion or attack. The president of Iran was recently quoted as saying that Iran still intended to wipe Israel off the face of the earth. My guess is that if they think about it, they are not going to try to do it with nuclear weapons. Israel has had almost a half century to think about where to store its nuclear weapons so that it would be able to launch a counterattack if its existence is threatened. Iran does not want to invite a nuclear attack. Every Iranian should be aware that the use of nuclear weapons against Israel or any other nuclear power is an invitation to national suicide. It is important that not only a few intellectuals in Iran understand this, but that people throughout the country share this awareness. I would like to see a delegation of Iranians participating in future CISAC meetings.

All new nuclear powers would benefit from knowing that it took the United States 15 years after the development of nuclear weapons to begin to think about the security and custody of the weapons themselves. This did not happen until Robert McNamara had his eyes opened by a study done by Fred Ikle of the RAND Corporation that revealed that U.S. nuclear weapons did not even have combination locks on them, let alone any police dogs to guard them on German airfields. McNamara initiated what became known as “permissive action links.” It took about four years to have the permissive action links developed to his satisfaction and then finally installed on the land-based warheads. If the Iranians do develop nuclear weapons, it is critical that it not take them 15 years to think about the custodial problems. Will control be granted to the army, navy, air force, or palace guard? Will security be adequate at storage facilities? We have witnessed enough instability across the globe to know that governments fail and that the branches of the armed forces sometimes take different sides in civil conflicts. Iran needs to think through what will happen to the weapons in the event of a government failure. Will some part of the government or military be able to maintain control, or will they watch Israeli commandos arrive to take charge of the weapons?

A nuclear Iran would need to act rapidly on questions of security, custody, and the technological capacity to disarm the weapons if they lose control of them. CISAC could be of enormous help to the Iranians in relaying the lessons from decades of U.S. experience in learning how to manage custody of nuclear weapons.

An even more important task will be to prepare for the extremely remote possibility that a terrorist group could acquire such weapons. It will be essential but very difficult to persuade them that nuclear weapons are valuable primarily as means of persuasion and deterrence, not destruction.

About 20 years ago, I began thinking about how a terrorist group might use a nuclear weapon for something other than just blowing up people. A good example occurred during the Yom Kippur war of 1973. The United States resupplied Israel with weapons and ammunition, but the United States was not allowed to fly from European NATO countries or to refuel its planes in Europe. All of the refueling was done in the Azores. It struck me then that if I were a pro-Palestinian terrorist and had a nuclear weapon, I would find a way to make clear that I had it and that I would detonate it near the air fields in the Azores if the United States did not stop landing planes loaded with ammunition for Israel. This strategy had a number of fallback positions: If it failed to deter the United States from refueling in the Azores, it might deter Portugal, which owned the Azores, from allowing the refueling to take place, and if that failed, it might deter the individuals working at the airport and doing the refueling. If we ever have to face the prospect of nuclear-armed terrorists, I want them to be thinking along these strategic lines rather than thinking about attacking Hamburg, London, or Los Angeles.

My hope for CISAC is that it will see its mission broadly: educating itself, U.S. leaders, and anyone who will be in a position to influence the decision to use a nuclear weapon. Thinking of extending this mission to Iran is difficult, and to North Korea even more so. I think it is important to keep in mind that if terrorists do acquire nuclear weapons, it would probably be by constructing them after acquiring fissile material, and that means that there is going to be quite a high-level team of scientists, engineers, and machinists of all kinds working over a significant period of time, probably in complete seclusion from their families and jobs with nothing to do but think about what their country and other countries are going to do once a bomb is ready. And I think they will probably come to the conclusion that the last thing they want to do is waste it killing Los Angelenos or Washingtonians. I believe they will think about sophisticated strategic ways to use a weapon or two or three if they have them.

This means we may be living in a world for the next 60 years in which deterrence is just as relevant as it was for the past 60 years. One difference will be that the United States will find itself being deterred rather than just deterring others. Although the United States likes to think of itself as always in the driver’s seat, in reality it was deterred by Soviet power from considering the use of nuclear weapons in several instances. I believe that the United States did not seriously consider rescuing Hungary in 1956 and Czechoslovakia in 1968 because it was sufficiently deterred by the threat of nuclear war.

My hope is that the United States will continue to succeed in deterring others from using nuclear weapons, and that others will succeed in deterring the United States.

A New Science Degree to Meet Industry Needs

All of us are aware of urgent calls for new and energetic measures to enhance U.S. economic competitiveness by attracting more U.S. students to study science, mathematics, and engineering. In the case of scientists, one reason for the lack of science-trained talent prepared to work in industry (and some government positions) is that the nation does not have a graduate education path designed to meet industry’s needs. A college graduate with an interest in science has only one option: a Ph.D. program, probably followed by a postdoctoral appointment or two, designed to prepare someone over the course of about a decade for a university faculty position. If the need for scientists to contribute to the nation’s competitiveness is real, the nation’s universities should be offering programs that will prepare students in a reasonable amount of time for jobs that will be beneficial to industry. What is needed is a professional master’s degree.

The demand for more science-trained workers appears to be real. In 2005, 15 prominent business associations led by the Business Roundtable called for whatever measures are necessary to achieve no less than a 100% increase in the number of U.S. graduates in these fields within a decade. In 2006, a panel of senior corporate executives, educators, and scientists appointed by the National Academies called for major national investments in K-12 science and mathematics, in the education of science and math teachers, and in basic research funding to address what it saw as waning U.S. leadership in science and technology. This National Academies report was endorsed by leading education associations and served as a basis for several legislative proposals (such as the Bush administration’s American Competitiveness Initiative) now moving through the Congress. Supportive articles and editorials have dominated journalistic coverage of these arguments.

Few would contest the general proposition that it would be highly desirable for the nation to encourage more of its students to become knowledgeable about science, mathematics, and technology—at all levels of education, from K-12 through graduate school. The current century, like the past half-century, is one in which all citizens, no matter their level of education, need to possess considerable understanding of science and technology and to be numerate as well as literate. Indeed, it would be reasonable to argue that such knowledge is now close to essential if young Americans are to become knowledgeable citizens who are able to understand major world and national issues such as climate change and biotechnology that are driven by science and technology, even if their own careers and other activities do not require such knowledge. Efforts to improve math and science teaching at the K-12 and university levels make a great deal of sense.

So too do calls for substantial federal support for basic scientific research. Such research is a public good that can produce benefits for all, yet it is unlikely to be adequately supported by private industry because its economic value is so difficult for individual firms to capture. Moreover, there is considerable truth in the various reports’ claims that support for basic research in the physical sciences and mathematics has lagged well behind the dramatic increases provided for biomedical research.

The key question, though, is not whether the goals are appropriate but whether some of the approaches being widely advocated are the best responses to claimed “needs” for scientists and engineers with the capabilities needed to maintain the competitiveness of the U.S. economy. Improving the quality of U.S. K-12 education in science and math is indeed a valuable mission. But if the proximate goal is to provide increased numbers of graduate-level scientists of the kinds that nonacademic employers say they want to hire, a focus on K-12 is necessarily a very indirect, uncertain, and slow response.

Increased federal funding for basic research also is a worthwhile contribution to the public good, but its effects on graduate science education would be primarily to increase the number of funded slots at research universities for Ph.D. students and postdocs who aspire to academic research careers. Extensive discussions with nonacademic employers of scientists indicate that they do wish to recruit some Ph.D.-level scientists (more in some industries, fewer in others), but also that they value the master’s level far more highly than do most U.S. research universities.

In addition to the graduate-level science skills that a rigorous master’s education can deliver, employers express strong preferences for new science hires with

  • broad understanding of relevant disciplines at the graduate level and sufficient flexibility in their research interests to move smoothly from one research project to another as business opportunities emerge
  • capabilities and experience in the kind of interdisciplinary teamwork that prevails in corporate R&D
  • skills in computational approaches
  • skills in project management that maximize prospects for on-time completion
  • the ability to communicate the importance of research projects to nonspecialist corporate managers
  • the basic business skills needed to function in a large business enterprise

In light of employers’ stated needs, there appears to be a yawning gap in the education menu. U.S. higher education in science, often proudly claimed as the world leader in quality, is strong at the undergraduate and doctoral levels yet notably weak at the master’s level.

No one planned it this way. The structure of the modern research university is a reasonable response to the environment created by the explosive growth of federal research in the decades after World War II. But that period of growth is over, the needs of industry have evolved and become more important, and now the nation faces a gap that has significant negative implications for the U.S. science workforce outside of academe. That gap can be filled with the creation of a professional science master’s (PSM) degree designed to meet the needs of today and of the foreseeable future.

For at least the past half-century, even outstanding bachelor’s level graduates from strong undergraduate science programs have been deemed insufficiently educated to enter into science careers other than as lowly “technicians.” Over this period, rapid increases in federal support for Ph.D. students (especially as research assistants financed under federally supported research grants) propelled the Ph.D. to become first the gold standard and then the sine qua non for entering a science career path. More recently, and especially in large fields such as the biomedical sciences, even the Ph.D. itself has come to be seen as insufficient for career entry. Instead, a postdoc of indeterminate length, also funded via federal research grants, is now seen as essential by academic employers of science Ph.D.s.

Over the same period, the average number of years spent in pursuit of the Ph.D. lengthened in many scientific fields. More recently, the number of years spent in postdoc positions has also increased. The result has been a substantial extension of the number of years spent by prospective young scientists as graduate students and postdocs. Postgraduate training is now much longer for scientists than for other professionals such as physicians, lawyers, and business managers.

The lengthening of time to Ph.D. and time in postdoc coincided with deteriorating early career prospects for young scientists. Indeed, many believe that the insufficiency of entry-level career positions for recent Ph.D.s was itself an important cause of the lengthening time to Ph.D. and lengthening postdoc periods. As Ph.D.-plus-postdoc education became longer and career prospects for those pursuing them more uncertain, the relative attractiveness of the Ph.D. path in science waned for many U.S. students, even those who had demonstrated high levels of achievement as undergraduate science majors.

COMPETITIVENESS HAWKS WANT TO EXPAND THE SCIENTIFIC WORKFORCE. PROGRAMS TO TRAIN MORE PEOPLE WITH PROFESSIONAL SCIENCE MASTER’S DEGREES COULD HELP.

Yet there was this odd gap. Had the same talented students chosen to pursue undergraduate degrees in engineering, they would have had the option of pursuing one of the high-quality engineering master’s degrees that are highly regarded by major engineering employers. But there was no such alternative graduate education path for those who would have liked to pursue similar career paths in science.

Estimates by the National Science Board suggest that surprisingly small proportions (well under one-fifth) of undergraduate majors in science continue on to any graduate education in science. This low level of transition to graduate education has prevailed during the same period that numerous reports have been sounding alarms about the insufficiency of supply of U.S.-trained scientists.

What has happened in the sciences, though not in engineering, is that as heavy research funding made the Ph.D. the gold standard, the previously respectable master’s level of graduate education atrophied. Indeed, many graduate science departments have come to see the master’s as a mere steppingstone to the Ph.D. or as a low-prestige consolation prize for graduate students who decide not to complete the Ph.D. At least some members of graduate science faculties came to look down their collective noses at the master’s level, and some graduate science departments simply eliminated the master’s degree from their offerings entirely.

The PSM degree, a newly configured graduate science degree supported by numerous U.S. universities with financial backing from the Alfred P. Sloan Foundation and the Keck Foundation, was designed to meet the strongly expressed desires of nonacademic science employers for entry-level scientists with strong graduate education in relevant scientific domains, plus the knowledge they need to be effective professionals in nonacademic organizations. In only a few years, the number of PSM programs has grown from essentially zero to more than 100, offered at over 50 campuses in some 20 states. They are by no means clones of one another, but they do generally share many core characteristics.

They are two-year graduate degrees, generally requiring 36 graduate credits for completion. The credits are course-intensive, with the science and math courses at the graduate level. In addition, many PSM degrees offer cross-disciplinary courses (such as bioinformatics, financial mathematics, industrial mathematics, biotechnology, and environmental decisionmaking). Most PSM curricula include research projects rather than theses; some of the projects are individual, some are team-based. Courses in business and management are also common. Depending on the focus of the PSM degree, there may also be courses offered in patent law, regulation, finance, or policy issues. Finally, many PSM programs provide instruction in other skills important for nonacademic employment, such as communication, teamwork, leadership, and entrepreneurship.

One of the most important elements of nearly all PSM degrees is an internship with an appropriate science employer; most of these take place during the summer between the first and second year. These offer PSM students the chance to see for themselves what a career in nonacademic science might be like, and they likewise afford employers the opportunity to assess the potential of their PSM interns as future career hires.

Many industry and government scientists have been enthusiastic supporters of emerging PSM degree programs in fields relevant to their own activities. They serve as active advisors to PSM faculty, offering guidance on the science and nonscience curricular elements. Over 100 employers have offered PSM students paid internships, and many have mentored them in other ways. Employers also often provide tuition reimbursement to their own employees who wish to enhance their own scientific skills by undertaking a PSM degree while still employed full-time. Employers also often serve as champions for the PSM initiatives with university administrators and state and local officials.

Perhaps most important, employers have been offering attractive entry-level science career paths to PSM graduates. Data are incomplete, but since 2002 we know that at least 100 businesses have hired PSM graduates, with good starting salaries by the standard prevailing for scientists: generally in the $55,000 to $62,000 range. In addition, over 25 government agencies have hired PSM graduates, starting them at $45,000 to $55,000. Hiring employers indicate that they value PSM graduates’ scientific sophistication, but also their preparation to convey technical information in a way that is comprehensible to nontechnical audiences and more generally to work effectively with professionals in other fields such as marketing, business development, legal and regulatory affairs, and public policy.

Meanwhile, faculty involved in PSM programs have found the students to be highly motivated additions to their graduate student numbers. The programs have also facilitated valuable faculty contacts with business, industry, and government. Finally, at the national level, the rapidly increasing PSM movement has begun to contribute efficiently and nimbly to U.S. science workforce needs.

PSM curricula are configured by their faculty leaders to respond to the human resource needs expressed by nonacademic employers of scientists. In the fast-changing scene of scientific R&D, the PSM degrees are attractively agile. Universities that seek to contribute to economic advance in their regions see the PSM degrees as responsive to nonacademic labor markets for science professionals in ways that are quite attractive to science-intensive employers. Finally, as two-year graduate degrees, PSM programs are “rapid-cycle” programs that can respond quickly to calls for increased numbers of science professionals.

If PSM degrees produce science-educated professionals with capacities that nonacademic employers value, why have they not yet been embraced by all universities with strong science graduate programs? Are there reasons why one might expect some faculties to be skeptical or negative about such new degrees?

There is, first, inevitable inertia to be overcome, rendered more powerful because of the diminished status of master’s science education over the past decades. Nonetheless, there have been numerous energetic and committed faculty members who have perceived a strong need for this kind of graduate science education. For them and others, however, the incentive structures do not generally reward such efforts. As has often been noted, research universities and federal funding agencies generally reward research—publications, research grants and the overheads that accompany them, and disciplinary awards—rather than teaching, and certainly tenure decisions relate primarily to research achievements. Master’s-level students themselves often are seen as contributing little to faculty research activities, since their focus is primarily on graduate-level coursework rather than working as research assistants on funded research grants.

One difference among research universities may be the extent to which they envision their role as contributing directly to the economic advancement of their region or country. Among the leaders in PSM innovation and growth have been a number of prominent public and/or land-grant research universities such as Georgia Tech and Michigan State. From their early days, these and similar institutions have seen themselves as engines of economic prosperity, and important parts of their financial resources come from state legislatures that consider such economic contributions to be essential. One can also think of a number of leading private research universities that include regional economic prosperity among their goals, and it is notable that some of these universities have also pursued PSM degree programs.

With over 100 PSM degrees in operation or development around the country and the pioneer programs of this type generally prospering, one could easily conclude that there has been at least a proof of concept. Still, the programs are mostly quite new and relatively small, and hence the numbers of PSM graduates are still modest.

The challenge over the coming few years is to move the PSM concept to scale. This will not be easy, although there is reason for optimism. Ultimate success will depend on recognition by both government science funders and universities of the odd gap that prevails in U.S. graduate science education, as well as on continuation of the attractive early career experiences of PSM graduates and enthusiasm for their capabilities on the part of science-intensive employers.

The recent series of reports urging action to encourage more U.S. students to study science and mathematics could be well answered by support for PSM initiatives. In addition to the energy and money the nation may devote to convincing more teachers and young people to pursue undergraduate education in science and math, it would also make a great deal of sense to focus attention on the large number of science majors who already graduate from college yet decide not to continue toward graduate education and careers in science. The PSM initiatives currently under way at over 50 U.S. universities offer an alternative pathway to careers in science, one that could transform this situation and that has real prospects for near-term success.

Ethics and Science: A 0.1% Solution

Science has an ethics problem. In South Korea, Woo Suk Hwang committed what is arguably the most publicized case of research misconduct in the history of science. The range of Hwang’s misconduct was unusual but not extraordinary. He misjudged the ethical challenges presented by a newly developing field of research, he paid insufficient attention to accepted standards of responsible conduct, and he had a role in the fabrication of many key research findings. What made this case extraordinary was that it involved human embryonic stem cell research, a field of inquiry that is being watched more closely by the global public than perhaps any before it. The impact of this scandal is profound for Hwang, for his country, for all of science, and for stem cell research in particular.

The United States is not immune to cases of research misconduct. In one of several examples in 2005, Paul Kornak, a researcher with the Veterans Administration in Albany, New York, admitted that he had forged medical records. The forgeries made it possible for individuals to enter drug trials for which they were not qualified, and one of those individuals subsequently died, apparently as a result of his participation. Although cases such as this receive limited media attention, they deserve our attention as much as the case of Hwang. The problem we face is not just how to minimize the occurrence of such cases, nor is it just about the biomedical sciences and human health. The more fundamental problem is the need to define more clearly what constitutes responsible conduct in all areas of academic inquiry.

Standards of conduct should include much more than just avoiding behavior that is clearly illegal. During the past 15 years, numerous studies have provided evidence that on the order of one-third of scientists struggle with recognizing and adhering to accepted standards of conduct. This does not mean that large numbers of scientists are knowingly engaging in research misconduct, but it is reasonable to conclude that many lack the tools, resources, and awareness of standards that would serve to sustain the highest integrity of research. The pursuit of knowledge is a noble end, but we scientists owe more to the public and to ourselves than to ignore the ethical foundations of what we do. If we expect our colleagues to act responsibly, then we must provide them with the knowledge and support they need.

In academia, we recognize that the remedy for gaps in knowledge and skills is education and training. Because the purpose of science is to have an impact on the human condition, the conduct of science is defined by ethical questions. What should be studied, what are the accepted standards for the conduct of research, and what can be done to promote the truthful and accurate reporting of research? The answers to these questions are not normally found in a K-12 education or in college. Surveys of researchers indicate that these questions are only rarely answered during research training. Something more is required. Institutions of higher education are the logical places to fill this gap.

In the area of research ethics, scientists have obligations to the public that grants them the privilege to conduct research, to private and public funders who expect that research will be conducted with integrity, to the scientific record, and to the young people they train. These are not mere regulatory obligations; they are also the right thing to do. That said, these obligations are addressed in part by a National Institutes of Health (NIH) requirement, now in place for 15 years, that those supported by NIH training grants should receive training in the responsible conduct of research (RCR). The domain of RCR training includes not only the ethical dimensions of research with human subjects, but every dimension of responsible conduct in the planning, performance, analysis, and reporting of research. This RCR requirement stimulated the creation of educational materials and resources and encouraged the participation of research faculty in the teaching of RCR courses.

Such a requirement is appropriate and important, but limiting the required training to the select few that receive NIH funding unintentionally sends the wrong message. Under these circumstances, it is not unexpected for faculty and trainees to assume that RCR training is just one more bureaucratic hurdle rather than something that has real value. The way to remedy this perception is to implement training programs that engage all researchers.

Expanding RCR training to all will not be easy. In December 2000, the Office of Research Integrity (ORI) and the Public Health Service (PHS) announced that all researchers supported by PHS grants would be required to receive RCR training. Many in the academic community were justifiably unhappy that the policy was a highly prescriptive and unfunded “one size fits all” mandate. The requirement was suspended in February of 2001, just two months after its announcement. The ORI’s decision to suspend the requirement was precipitated by concerns that it had not been developed through appropriate rulemaking procedures. Whatever the shortcomings of that effort, the need for RCR training for all researchers still exists.

Before the requirement was suspended, an RCR education summit was convened by multiple federal agencies. The goal of the summit was to address the roles of the federal government and federally funded research institutions in meeting a common interest in effective RCR education for all scientists. In that meeting, Jeffrey Cohen, who was then director for education at the Office for Human Research Protections, clearly articulated the apparent dilemma. On the one hand, a federal requirement for RCR education could readily result in a prescriptive and inflexible program that would not be effective. On the other hand, in the absence of a federal mandate, research institutions had only rarely created programs to promote RCR.

The good news is that the initial announcement of a requirement stimulated many institutions to begin developing programs for RCR training. Unfortunately, once the requirement was suspended, efforts to enhance RCR education slipped down the list of priorities. The U.S. experience appears to be that although research institutions talk about the importance of ethics, most are funding little more than what is required for compliance. Today, the challenge for the research community is to promote RCR education in the absence of a regulatory mandate.

Continuing with the status quo is not good enough. Or more precisely, funding only the minimum required to comply with external regulations is inadequate. However, although an increased focus on ethics is an admirable goal, resources are scarce. If we hope to do more to promote ethics, then the inevitable question is what it will cost. We could begin with a prescriptive list of what must be done and then ask how much those programs would cost, but general implementation of this approach is impractical if only because circumstances at each institution vary so greatly.

A better formula would be to make ethics support commensurate with the size of the research program. A similar approach was carried out with the allocation of 3% of the Human Genome Project research budget to study its ethical, legal, and social implications. Given the necessary resources, each institution could then implement the kinds of programs most appropriate to its culture and needs. Unfortunately, it is unlikely that today’s research institutions can realistically consider a 3% allocation in the face of declining research budgets. So if not 3%, how much?

In health care policy, a “decent minimum” is often discussed as a standard for judging what should be in place for everyone. Given the need for an increased focus on the ethical dimensions of research, it is reasonable to ask what would be a decent minimum above what is currently allocated for compliance. Using the principles that funding should be proportional to the research budget and that formal programs are critical for addressing the ethical dimensions of research, I propose that we begin with a requirement of spending just 0.1% of an institution’s direct research funding for RCR education.
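As a rough illustration of how such a proportional rule would scale, the following Python sketch applies the proposed 0.1% rate to a few hypothetical institutional research budgets and, for comparison, shows what the Human Genome Project’s 3% figure would imply; the budget amounts are invented for illustration only.

    # Minimal sketch of the proposed 0.1% "decent minimum" for RCR education.
    # The 0.1% and 3% rates come from the text; the budgets are hypothetical.

    RCR_RATE = 0.001       # proposed: 0.1% of direct research funding
    HGP_ELSI_RATE = 0.03   # Human Genome Project ELSI allocation, for comparison

    for budget in (20_000_000, 200_000_000, 1_000_000_000):
        print(f"Direct research funding ${budget:>13,}: "
              f"0.1% yields ${budget * RCR_RATE:>11,.0f}; "
              f"3% would yield ${budget * HGP_ELSI_RATE:>13,.0f}")

Even at the largest of these hypothetical budgets, the 0.1% allocation comes to $1 million a year, underscoring how modest the proposed minimum is relative to overall research spending.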

What could be done with such a modest allocation for research ethics? Mid-sized and large research institutions would have dedicated resources to create and carry out a variety of programs to train researchers, to raise awareness of ethical issues and resources, and to engage the public in a shared examination of the ethical and scientific foundations for ongoing and proposed research. Smaller institutions could use their more limited resources to develop partnerships with other institutions and to attend train-the-trainer programs rather than develop programs de novo. In addition, smaller institutions could obtain help with program creation through organizations such as the Association for Practical and Professional Ethics (http://www.indiana.edu/~appe), the Collaborative IRB Training Initiative (https://www.citiprogram.org), the Responsible Conduct of Research Education Consortium (http://rcrec.org), and the Society of Research Administrators International (http://www.srainternational.org).

This year marks the fifth anniversary of the suspension of the PHS requirement for RCR training for the researchers it funds. Rather than continuing to wait for federal action, the research community should take the high ground and exhibit the necessary leadership to ensure that ethics is an integral part of science. The cost of 0.1% is low, and the potential for gain is high. Experience will determine whether the amount is adequate, but it should be possible to win wide agreement that it is a good starting point for a decent minimum.

From the Hill – Fall 2006

Bush vetoes stem cell research bill

Within 24 hours of a Senate vote of 63 to 37 to approve the Stem Cell Research Enhancement Act, President Bush issued the first veto of his presidency, closing a chapter in a long and complex debate over the use of federal funds for human embryonic stem cell research. A vote to overturn the veto failed in the House.

The bill (H.R. 810), which was approved 238 to 194 by the House in May 2005, would have loosened restrictions set by the president in August 2001, when he allowed federal funding of research only on stem cell lines derived from embryos by that date. Proponents of H.R. 810 argued that many of those original cell lines are unsuitable for research and that the number originally expected to be available was overestimated. The Stem Cell Research Enhancement Act would have allowed the government to fund research on cell lines created after August 2001 that met specific ethical standards. Only cell lines derived from embryos left over from fertility treatments and donated with the consent of the progenitors, without financial incentives, would have been eligible.

Human embryonic stem cells are derived from several-day-old embryos and can theoretically differentiate into virtually any type of human cell, from blood cells to skin cells. Proponents of federal support for embryonic stem cell research argue that excess embryos left over from in vitro fertilization that are slated to be destroyed could instead be donated for research. Opponents, however, argue that such research would still condone the destruction of a human embryo and that federal dollars should not be used for it.

Despite the president’s veto, congressional support for reducing restrictions on federal funding of stem cell research is growing. The Senate vote included 43 Democrats, 1 Independent, and 19 Republicans. More members of the House voted to overturn the veto than had voted for the bill in 2005.

After the presidential veto, Senate Majority Leader Bill Frist (R-TN), who stunned the research community in 2005 by announcing his support for the House bill, stated, “I am pro-life, but I disagree with the president’s decision to veto the Stem Cell Research Enhancement Act. Given the potential of this research and the limitations of the existing lines eligible for federally funded research, I think additional lines should be made available.”

House challenge to climate change research fizzles

Challenges by climate change skeptics in the House, led by Rep. Joe Barton (R-TX), chairman of the House Energy and Commerce Committee, to studies by climate scientist Michael Mann and colleagues appear to have fizzled after the release of a June 22 National Research Council (NRC) report that supported Mann’s conclusions. Barton, however, has vowed to keep his committee actively involved in the climate change debate and has requested two new studies on research practices in the field.

Despite the NRC report, which concluded that Mann’s statistical procedures, although not optimal, did not unduly distort his conclusions, the Energy and Commerce Committee’s Subcommittee on Oversight and Investigations held two hearings in July, each lasting more than four hours. They focused on the statistical methods used in the 1998 and 1999 studies by Mann, Raymond Bradley, and Malcolm Hughes. Barton argued that the use of the studies in the 2001 Intergovernmental Panel on Climate Change report justified a detailed examination of the methods involved. “A lot of people basically used that report to come to the conclusion that global warming was a fact,” he said.

Mann, Bradley, and Hughes reconstructed temperatures of the past 1,000 years. Because direct temperature measurements date back only 150 years, the researchers used proxy measurements, including tree ring growth, coral reef growth, and ice core samples. They produced a graph that looked like a hockey stick: a long period of relatively stable temperatures, then a dramatic spike upward in recent decades. Critics of the research, however, argued that the hockey stick shape could simply be the artifact of incorrect statistical techniques, and climate change skeptics seized on the graph as a proxy for everything they believe is wrong about climate change research.

During the July 19 hearing, Edward Wegman, a George Mason University statistician, testified on behalf of the mathematicians who reviewed the Mann papers at the request of Rep. Barton. He stated, “The controversy of the [Mann] methods lies in that the proxies are incorrectly centered on the mean of the period 1902-1995, rather than on the whole time period.” He explained that these statistical procedures were capable of incorrectly creating a hockey-stick-shaped graph.
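The centering criticism can be illustrated with a simple simulation: when principal component analysis is applied to trendless red-noise series that are each centered only on the mean of a recent sub-period, the leading component tends to show a spurious offset in that sub-period, that is, a hockey-stick-like shape. The sketch below is only an illustration of that general statistical point; it is not Mann’s data, code, or exact procedure, and the series lengths, proxy count, noise model, and summary statistic are arbitrary choices made for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)
n_years, n_proxies = 600, 70   # arbitrary illustrative sizes
recent = 95                    # length of the recent "calibration" sub-period

# Red-noise "proxies" with no underlying climate signal (AR(1) series).
noise = rng.standard_normal((n_years, n_proxies))
proxies = np.empty_like(noise)
proxies[0] = noise[0]
for t in range(1, n_years):
    proxies[t] = 0.9 * proxies[t - 1] + noise[t]

def leading_pc(data, center_rows):
    """First principal component after centering each series on the
    mean of the rows selected by center_rows."""
    centered = data - data[center_rows].mean(axis=0)
    # SVD of the centered matrix gives its principal components.
    u, s, vt = np.linalg.svd(centered, full_matrices=False)
    return u[:, 0] * s[0]

pc_full = leading_pc(proxies, slice(None))             # center on the full period
pc_recent = leading_pc(proxies, slice(-recent, None))  # center on the recent sub-period

# How far PC1's recent-period mean departs from its overall mean, in units
# of its overall variability; sub-period centering tends to inflate this,
# i.e., to manufacture a hockey-stick-like offset from pure noise.
def recent_excursion(pc):
    return abs(pc[-recent:].mean() - pc.mean()) / pc.std()

print("full-period centering:   ", round(recent_excursion(pc_full), 2))
print("recent-period centering: ", round(recent_excursion(pc_recent), 2))
```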

Gerald North, chair of the NRC committee, testified at the hearing that he agreed with Wegman’s statistical criticisms, but said that those considerations did not alter the substance of Mann’s findings. North said that large-scale surface temperature reconstructions “are only one of multiple lines of evidence supporting the conclusion that climatic warming is occurring in response to human activities.”

At a July 27 hearing of the committee, Mann, referring to his statistical techniques, said that “knowing what I know today, a decade later, I would not do the same.” But he noted that multiple studies by other scientists have reached similar conclusions: Temperatures in recent decades have been far higher than in previous centuries.

The July hearings were only the latest round of attacks on Mann’s research. In July 2005, Barton and Subcommittee on Oversight and Investigations Chairman Ed Whitfield (R-KY) solicited not just the climate papers in question, but large volumes of material from Mann and his coauthors, including every paper they had ever published, all baseline data, and funding sources. These requests were fiercely resisted by the scientific research community. Perhaps the harshest rebukes came from House Science Committee Chairman Sherwood Boehlert (R-NY), who called the investigation “misguided and illegitimate.”

Democrats on the Energy and Commerce Committee expressed frustration at the exclusive focus of the July hearings on just the two papers. They stressed that the scientific consensus on human-induced climate change would remain unaltered even if Mann had never written the papers in question. Rep. Bart Stupak (D-MI) said he was “stupefied” at the narrow scope of the hearings and said that Congress was “particularly ill-suited to decide scientific debates.” Rep. Jay Inslee (D-WA) called the hearing “an exercise in doubt.”

Barton said at the July 27 hearing that he had requested a study from the Government Accountability Office on federal data-sharing practices, particularly in climate science research, and that he planned to request a study from the National Research Council’s Division on Engineering and Physical Sciences on how to involve more disciplines in climate change research.

Bills target attacks by animal rights activists

Bills have been introduced in the House and Senate to address the growing issue of attacks, particularly on laboratories, by extremist animal rights groups, which Rep. Howard Coble (R-NC) said are having a “chilling effect” on research.

The House Judiciary Subcommittee on Crime, Terrorism and Homeland Security held a hearing in May on H.R. 4239, the Animal Enterprise Terrorism Act (AETA), sponsored by Rep. Thomas Petri (R-WI). The bill would make it a crime to harass, threaten, or intimidate individuals, or their immediate family members, whose work is related to an animal enterprise (including academic institutions and companies that conduct research or testing with animals). It would also make it a crime to cause economic disruption to an animal enterprise or to those who do business with animal enterprises, an intimidation technique called tertiary targeting. The bill adds penalties and allows victims to seek restitution for economic disruption, including the reasonable cost of repeating any experiment that was interrupted or invalidated as a result of the offense.

Chairman Coble, a cosponsor of the legislation, outlined the key issue of the hearing: the need to balance the enforcement of laws against these crimes with the protection of First Amendment rights. He announced that an amendment would be introduced at markup to ensure that the bill does not prohibit constitutionally protected activities, even though he believes the bill already contains such language.

Michele Basso, an assistant professor of physiology at the University of Wisconsin, Madison, testified about the harassment she has received as a result of her research with primates. She said that animal rights activists protest regularly at her home, have signed her up for subscriptions to 50 magazines, and have made numerous threatening phone calls. She said that university officials do not provide sufficient security and that she and some colleagues have thought about leaving the field and pursuing other research. She added that some colleagues in the United Kingdom are spending so much time on security measures that their research is suffering.

William Trundley, director and vice president of corporate security and investigations at GlaxoSmithKline, said that his company has been under attack in both the United States and United Kingdom. He said that many employees, whom he described as “traumatized,” have had their property vandalized, and researchers’ families have been harassed.

The Animal Enterprise Protection Act (AEPA) of 1992 protects animal enterprises against physical disruption or damage, but says nothing about tertiary targeting of people or institutions that conduct business with an animal enterprise. Brent McIntosh, deputy assistant attorney general, testified that AEPA is not sufficient to address the more sophisticated tactics used by animal rights extremists today. “The bill under consideration by the subcommittee would fill the gaps in the current law and enable federal law enforcement to investigate and prosecute these violent felonies,” he said.

Rep. William Delahunt (D-MA) argued that these activities are already well covered by local and state laws and should be prosecuted at that level. However, both McIntosh and Basso testified that local law enforcement authorities see these activities as minor crimes (spray painting, trespass, etc.) and generally have little inclination to pursue the perpetrators. Further, those who commit these crimes often receive at most minimal fines or short jail sentences. Rep. Bobby Scott (D-VA), a cosponsor of AETA, concurred with the witnesses that local laws cannot address the national scope of this activity.

A companion bill was introduced by Sen. James Inhofe (R-OK) in October but has not advanced out of committee. Committee staff said they are hopeful that the bills will advance in the fall of 2006.

Bills to boost competitiveness advance

In a vote demonstrating a bipartisan commitment to boosting U.S. economic competitiveness, the House Science Committee on June 7 approved the Science and Mathematics Education for Competitiveness Act (H.R. 5358) and the Early Career Research Act (H.R. 5356). The bills were originally introduced with only Republican sponsorship, but enough changes were made to bring all Democrats on board.

Although Committee Chair Rep. Sherwood Boehlert (R-NY) characterized the bills as complementing President Bush’s American Competitiveness Initiative (ACI), White House science advisor John Marburger sent a letter to Boehlert stating that the bills contain “very high authorizations” of spending and would diminish the impact of the ACI.

The bills strengthen existing programs at the National Science Foundation (NSF) and Department of Energy’s (DOE’s) Office of Science. The Science and Mathematics Education for Competitiveness Act would expand NSF math, science, and engineering education programs, including the

  • Robert Noyce Teacher Scholarship Program, which provides scholarships to math and science majors in return for a commitment to teaching. The bill includes more specifics on the programs that grant recipients must provide to prepare students for teaching, including field teaching experience. It also allows those programs to serve students during all four years of college, although scholarships would still be available only to juniors and seniors, and raises the authorization levels for fiscal years 2010 and 2011. NSF would be required to gather information on whether students who receive the scholarships continue teaching after their service requirements are completed.
  • Math and Science Partnership Program, which would be renamed the School and University Partnerships for Math and Science. In addition to teacher training, the bill would allow grants for other activities, including developing master’s degree programs for science and math teachers.
  • Science, Technology, Engineering, and Mathematics Talent Expansion Program (STEP), which provides grants to colleges and universities to improve undergraduate science, math, and engineering programs. The bill would allow the creation of centers on undergraduate education.

The legislation also requires NSF to assess its programs in ways that allow them to be compared with education programs run by other federal agencies.

The Early Career Research Act was amended to include provisions from H.R. 5357, the Research for Competitiveness Act, and passed unanimously. The bill authorizes programs at NSF and DOE’s Office of Science to provide grants to early-career researchers to conduct high-risk, high-return research. The bill also expands an NSF program that helps universities acquire high-tech equipment that is shared by researchers and students from various fields.

The amended bill also includes several provisions concerning the National Aeronautics and Space Administration (NASA). A new section expresses the sense of the Congress that NASA should participate in competitiveness initiatives within the spending levels authorized in the NASA Authorization Act of 2005 and allows NASA to establish a virtual academy to train its employees.

Committee backs funding for new energy technologies

The House Science Committee voted on June 27 to approve the Energy Research, Development, Demonstration, and Commercial Application Act of 2006 (H.R. 5656), which authorizes $4.7 billion over six years for the development and promotion of new energy-related technologies. The committee, however, did not approve the creation of a new agency within DOE to accelerate research on targeted energy technologies.

The bill brings together multiple bills, including one introduced by Energy Subcommittee Chairman Judy Biggert (R-IL), to authorize and specify the implementation of the president’s Advanced Energy Initiative (AEI), which provides for a 22% increase in clean-energy research at DOE.

Biggert noted the difficulties involved in altering U.S. energy-use practices and regulations. “To make significant progress down this path requires a steadfast commitment from Congress and the federal government to support the development of advanced energy technologies and alternative fuels that will help end our addiction to oil and gasoline,” she said. “The bill we are considering today includes provisions that do just that, by building on the excellent research and development provisions this committee included in the Energy Policy Act of 2005.”

The legislation funds research in photovoltaic technologies, wind energy, hydrogen storage, and plug-in hybrid electric vehicles. It also makes grant money available for the design and construction of energy-efficient buildings, as well as for further educational opportunities for engineers and architects related to high-performance buildings. In addition, the bill gives what Committee Chairman Rep. Sherwood Boehlert (R-NY) called an “amber light” to the Global Nuclear Energy Partnership (GNEP), financing the program but requiring further analysis before large-scale demonstration projects can proceed. Under the GNEP, the United States would work with other countries to develop and deploy advanced reactors and new methods to recycle spent nuclear fuel, which would reduce waste and eliminate many of the nuclear byproducts that could be used to make weapons. The bill also supports FutureGen, which is aimed at developing an emissions-free coal plant with the capacity for carbon capture and sequestration.

The committee decided to seek further input from the National Academies on a proposal to create an Advanced Research Projects Agency for Energy (ARPA-E), patterned after the successful Defense Advanced Research Projects Agency (DARPA) at the Department of Defense. The National Academies recommended creating an ARPA-E in its 2005 report Rising Above the Gathering Storm. Biggert questioned whether this “new bureaucracy” would really help, and Boehlert worried that “a lot of unanswered questions” remained about the details of ARPA-E and expressed concerns about its funding. Rep. Bart Gordon (D-TN), who had introduced legislation in December 2005 to create an ARPA-E, offered an amendment to establish the agency within DOE, maintaining that the language was sufficiently clear and that the provision represented a finished product from the National Academies. His amendment was defeated, and the committee kept language instructing the National Academies to create a panel to further study and make recommendations on the ARPA-E concept. The Senate Energy and Natural Resources Committee supported the concept of ARPA-E when it passed S. 2197 in April 2006.

Energy research was also supported in H.R. 4761, the Deep Ocean Energy Resource Act, passed by the House on June 29. The bill authorizes two new DOE research and education programs at a combined total of $37.5 million a year for each of the next 10 years. The new programs would provide funding for grants to colleges and universities for “research on advanced energy technologies.” Specifically, the grants could be used for research on energy efficiency, renewable energy, nuclear energy, and hydrogen. The new programs would provide graduate traineeships at universities and colleges for research work in those same areas.


“From the Hill” is prepared by the Center for Science, Technology, and Congress at the American Association for the Advancement of Science (www.aaas.org/spp) in Washington, D.C., and is based on articles from the center’s bulletin Science & Technology in Congress.