The Drug War’s Perverse Toll

Bob Dole remarked in his acceptance speech at the 1996 Republican National Convention that “the root cause of crime is criminals.” The tautology and its implications (more prisons, longer sentences, tougher judges) made for a good applause line, though not the wisest course of action. Dole and his fellow Republicans were closer to the truth when they talked about the superiority of the family to the state as a means of moral development and social control.

The root cause of crime, or at any rate violent crime, is the failure of families to shape and restrain the behavior of young men, who are responsible for much more than their share of murders, robberies, and other serious offenses. Yet not all young men are criminally inclined. Those who are raised in intact families and who then marry and become parents themselves, thereby acquiring a familial stake and the responsibilities that go with it, do not require expensive legal deterrence (or medical treatment) as often as those who do not. Societies that attempt to control behavior by relying on police and prisons rather than families have more crime and heavier taxes, plus ever larger outlays for private security. The great fiscal virtue of attentive parenting and matrimonial stability is that they are by far the cheapest way to maintain social order.

Heavy reliance on the criminal justice system can also reach the point where it undermines family life. Imprisoning a large number of men distorts the marriage market, increasing the likelihood of illegitimacy and discouraging the formation of families. This is especially true for groups with low gender ratios, most importantly blacks in inner cities. Criminal justice reform, in particular reform of drug laws and drug enforcement tactics, can help to restore balance to the black marriage market. Making the number of marriageable black men and women more nearly equal is a necessary, though not a sufficient, condition for a long-term reduction in the high levels of criminal violence and social disorder that plague U.S. inner cities.

High gender ratios and violence

The age and gender mix of any group or population tells us a good deal about its potential for violence and disorder. The most dangerous behavior, such as street fighting or drunken driving, occurs in the teens and twenties. A population with an unusually large number of young people, such as that of the United States in the late 1960s and 1970s, is bound to have higher rates of violence and disorder. So will populations that have too many men. Economic activities such as prospecting and ranching attract youthful male workers, producing demographic anomalies such as mining camps and cattle towns. The relative absence of women, children, and old people and the ensuing surplus of young bachelors create a pathological tangle of drunkenness, violence, gambling, prostitution, disease, neglect, and early death.

Consider the nineteenth-century U.S. frontier. Religious colonies and family farms were placid enough, but frontier boom towns were very violent places. Nevada County, California, which was full of boisterous mining camps where at least 9 out of 10 residents were male, had an average annual homicide rate of 83 per 100,000 population in the early 1850s. Leadville, a Colorado mining town, had a rate of 105 per 100,000 in 1880. Fort Griffin, a Texas frontier town frequented by cowboys, buffalo hunters, and soldiers, had an even higher rate of 229 per 100,000 during its boom years in the 1870s.

Nonfrontier or postfrontier regions with more normal gender ratios experienced much less homicidal violence. Henderson County, a rural backwater in western Illinois, had an average annual rate of 4.3 homicides per 100,000 population during 1859-1900, or just 19 murders in over 40 years. Two eastern cities, Boston and Philadelphia, had criminal homicide rates of 5.8 and 3.2 per 100,000 in the two decades after 1860. By comparison, the average rates for Boston and Philadelphia in the early 1990s were 19.1 and 28.6 per 100,000. Frontier towns with abnormally high gender ratios and a surfeit of young single men thus had exceptionally high murder rates in comparison to cities in their era as well as to cities in our own.

New South Wales, Australia, was founded by the British in 1788 as a penal colony to replace their lost American dumping grounds. Immigration gave New South Wales a heavily masculine population. Initially four-to-one male, it fluctuated between three- and two-to-one male over the next half century. The results were widespread drunkenness, prostitution, corporal punishment, and lethal violence, both among the colonists and against the Aborigines. Peter Grabosky, who compared criminal statistics from New South Wales between 1826 and 1893 to economic conditions, gender distribution, urbanization, police manpower, and police expenditures, concluded that rates of serious crimes against persons and property were almost solely a function of the male surplus in the population. The other variables mattered hardly at all. In New South Wales, as in nonfarming areas of the U.S. frontier, social problems grew out of a skewed masculine population. When that population became more balanced, so did the rates of violence and disorder.

Low ratios and illegitimacy

It does not follow, however, that the rate of murder or any other crime is simply a linear function of the ratio of males to females. Low gender ratios, in which women are substantially more numerous than men, can also give rise to social disorder. This is mainly because low gender ratios encourage illegitimacy and divorce.

Most illegitimate children lack not only fathers and decent family incomes but the whole array of socially obligated kin, such as paternal grandparents, aunts, or uncles, who come with an acknowledged marriage and who ordinarily provide encouragement, advice, and support for young people. Illegitimate children lack adult supervision and are more likely to run in gangs. Most critically, illegitimate children miss the opportunity to experience discipline and modeling from both biological parents, including what it is like to grow up in a two-parent family. They are thus less likely to marry and stay married than those raised in intact families. They are more prone to irresponsible and criminal behavior, because their moral horizons, habits of self-control, and economic opportunities have been truncated at both ends of the reproductive cycle.

Many causes have been suggested for the rise of illegitimacy, including welfare dependency, an entrenched culture of poverty, the loss of industrial jobs, and economic frustration. But there is yet another explanation, one that is both surprising and surprisingly powerful: Illegitimacy and female-headed households are common wherever, as in the black inner city, a chronically low gender ratio exists.

The notion that sexual and marital behavior are connected to the balance of men and women dates back to the work of the sociologist Willard Waller, who studied U.S. courtship during the 1930s. Waller thought sexual relationships were governed by the principle of “least interest.” The person who had less to lose, who was less in love and less dependent, exercised power over the other person, who was more willing to sacrifice to keep the relationship alive. The gender ratio figured in the least-interest equation because whichever sex was in the minority had more alternative partners available. The minority party had less to lose if the relationship broke up and hence could make more demands.

The principle of least interest, rechristened “dyadic power,” turned up again in Marcia Guttentag and Paul Secord’s 1983 study Too Many Women? They argued that in high-gender-ratio situations, most women would prize their virginity and expect to marry up, marry young, stay home, and bear large numbers of legitimate children, at least in cultures where contraception was not yet common. Low-gender-ratio situations produced the opposite pattern: more premarital sex and illegitimacy; more female-headed households and female labor-force participation; later marriages for women; and more divorce.

Historical studies have turned up interesting examples of dyadic power. Italian women in nineteenth-century Rochester, New York, married sooner than their counterparts in Southern Italy and were more successful in resisting premarital sexual advances. A firm “no” did not hurt their chances of marrying well because they were greatly outnumbered by Italian immigrant males who were denied the alternative of WASP brides by nativist prejudice. The reverse was true in impoverished Southern Italy, where men were scarcer than women because of the overseas exodus. Hence they exercised greater dyadic power.

Sociologists Scott South and Katherine Trent’s systematic study of late-twentieth-century data from 117 countries found much the same thing. Controlling for differences in socioeconomic development, women in low-gender-ratio nations consistently had lower rates of marriage and fertility and higher rates of divorce and illegitimacy. Given favorable sexual odds, it seems that men everywhere act like, and produce, bastards.

Black America, which has had the lowest gender ratio of any of the major U.S. ethnic groups for the past century and a half, is no exception. The difference begins at birth. The gender ratio for black newborns is typically about 102 or 103 males for every 100 females, as compared with 105 or 106 for whites. The higher mortality of black male children and young men causes the gap to widen with age. At ages 20 to 24, the black gender ratio is 97, the white 105. By ages 40 to 44, the black ratio is 86, the white 100.
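
To make the arithmetic of this widening gap concrete, here is a minimal sketch using only the ratios cited above; the “share of the birth ratio remaining” figure is an illustrative back-of-the-envelope quantity, not a published statistic.

```python
# Back-of-the-envelope arithmetic from the gender ratios cited above
# (black males per 100 females, by age). "Share of the birth ratio remaining"
# is illustrative only: it compares each age group's ratio to the ratio at birth.

ratios = {"at birth": 103, "ages 20-24": 97, "ages 40-44": 86}
birth_ratio = ratios["at birth"]

for age_group, ratio in ratios.items():
    share_remaining = ratio / birth_ratio
    print(f"{age_group}: {ratio} men per 100 women "
          f"(~{share_remaining:.0%} of the birth ratio remains)")
```

By ages 40 to 44, in other words, roughly one black man in six has disappeared from the count relative to women of the same age, before unemployment, imprisonment, or intermarriage even enter the picture.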

The gender ratio alone understates the extent of the problem. Young black urban men are far more likely than whites of comparable age to be unemployed, imprisoned, institutionalized, crippled, addicted, or otherwise bad bets as potential husbands. The post-civil rights era increase in interracial marriages has further contributed to the unavailability of black men, who take white wives twice as often as black women take white husbands.

Dyadic power equals sexual leverage. Black women unwilling to engage in premarital sex are at a huge disadvantage in an already tight market. Black men know this and can easily exploit the situation. But such sexual opportunism increases the prospect of illegitimacy, and illegitimacy feeds the problems of poverty, unemployment, and violence that make the inner cities so dangerous.

It seems counterintuitive that low gender ratios are fraught with social peril. Shouldn’t fewer men translate into less crime? Yes, but as sociologists Steven Messner and Robert Sampson have shown, the effect of fewer men is, over time, more than canceled out by the effects of increased illegitimacy and family disruption. There may be fewer males in the ghetto, but because they are less often socialized in intact families or less likely to marry and stay married, they more often get into trouble.

So the relationship between the gender ratio and violence turns out to be paradoxical. Too many or too few women can both lead to problems. The smaller the disparity, the smaller the problems. Historically, communities with just a few “extra” men or women could assign them to existing households, as was required by law of bachelors in Puritan New England. But large disparities have proved more disruptive and harder to handle. The best case clearly is a marriage market in equilibrium. When a population’s structure is conducive to marriage and family stability, it is conducive to social order.

Mass incarceration and its costs

But marriage and family formation are not simply a function of the raw gender ratio. To be eligible for marriage, a young man has to be in circulation, not locked away somewhere. Yet by 1995, one of every three black American men in their twenties, the prime age for marriage, was in prison, on probation, or on parole. By comparison, only about one black woman in 20 was in similar straits.

Let’s look more closely at the numbers. On any given day in 1994, more than 787,000 black men in their twenties were under some form of criminal justice control. Of these, 306,000 were behind bars; 351,000 on probation; and 130,000 on parole. An unknown but not inconsiderable number were hiding from arrest warrants. The cost to taxpayers for the criminal justice control of these black men is more than $6 billion per year. Of course, those among them who are behind bars are not committing street crimes, which is the point Bob Dole was making. Indeed, some observers think that the mass incarceration of young black men lies behind the decline in violent crime rates of the past five years. Other theorists have stressed the progressive aging of the baby boomers; a temporary (and soon-to-be-reversed) decline in the relative number of teenagers; a healthier economy; the stabilization of urban drug markets; more aggressive police tactics; the proliferation of trauma centers (which, by saving more gunshot victims, lowers the homicide rate); and the notion that the number of violent crimes has, in some neighborhoods, fallen below an epidemic “tipping point.” None of these theories excludes the others.
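
As a quick check, the three components do sum to the stated total, and dividing the annual cost by that total gives an average on the order of $7,600 per man per year. The sketch below is simple arithmetic on the figures just cited; the per-man average is crude, since it lumps incarceration together with the far cheaper probation and parole.

```python
# Arithmetic check on the 1994 figures cited above. The per-man figure is a
# crude average: actual costs differ sharply between incarceration,
# probation, and parole.

behind_bars = 306_000
on_probation = 351_000
on_parole = 130_000
annual_cost_dollars = 6_000_000_000  # "more than $6 billion per year"

total_under_control = behind_bars + on_probation + on_parole
print(f"total under criminal justice control: {total_under_control:,}")  # 787,000
print(f"implied average cost per man: "
      f"${annual_cost_dollars / total_under_control:,.0f}")              # ~$7,624
```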

Yet even if mass incarceration turns out to be causally related to the recent decline in violent crime rates, we need to consider its long-term social costs. The doubling of the inmate population since 1985 has diverted dollars from education, particularly state-supported higher education. Inflation-adjusted funding per credit hour has eroded as penal outlays have increased, thereby diminishing young people’s future employment (and hence marital) prospects.

Children whose parents are in jail have suffered. More than 60 percent of male inmates have children, legitimate or otherwise, and most of those children are under 18. The absence of their fathers and whatever financial and emotional support they might have provided does not improve their life prospects. Neither do their parents’ criminal records. Marc Mauer and Tracy Huling, who assembled the black prison numbers, have argued that young men who have done time are at an economic and marital disadvantage when released. In a sense, they take their bars with them. A prior criminal record reduces their chance of finding gainful employment, making them less attractive as marriage partners and less able to provide for their children.

The most subtle effect of the prison boom, however, has been the unintended lowering of the ratio of marriageable men to women, particularly, as we have seen, in the black community, where young men are less numerous to begin with. The smaller the ratio, the greater men’s sexual bargaining power and hence the likelihood of illegitimacy and single-parent families, which are the root causes of violence and disorder in the inner city. The solution makes the problem circular.

This doesn’t mean that we should tear down the prisons, but it does mean that we should think carefully about how and why these resources are used. The drug war, the single most important reason for the increasing rate of imprisonment among young black men, is the obvious place to begin.

The drug war’s toll

First, there is the problem of racial bias in drug arrests, prosecutions, and sentencing. In 1992, blacks made up about 12 percent of the U.S. population and about 13 to 14 percent of those who used any illicit drug on a monthly basis. Yet more than a third of all drug possession arrests, more than half of all possession convictions, and three-quarters of state prison sentences for possession involved blacks.

In Georgia, where the organization Human Rights Watch has made a detailed study, more whites than blacks were arrested for drug offenses before the drug war began in the mid-1980s. By the end of the decade, blacks were arrested for drug offenses more than twice as often as whites, even though blacks made up less than a third of the population. From 1990 through 1995, the black drug arrest rate per 100,000 was more than five times that of whites.

For sale or possession of marijuana, blacks were arrested at roughly twice the rate of white Georgians. For cocaine possession, blacks were arrested at a rate 16 times that of whites; for cocaine sale, 21 times. Marijuana and cocaine use and sale were more widespread in the black community, but the arrest rates were well beyond anything suggested by national prevalence data. Between 1991 and 1994, blacks made up from 13 to 18 percent of marijuana users and 18 to 38 percent of cocaine users.

The arrest-rate disparity was partly a matter of convenience. Black users and dealers, who often sell outdoors and to strangers, were more visible and easier to arrest in street sweeps. “When you want to catch fish,” explained one Georgia official, requesting anonymity, “you go where the fishing is easiest.” The pond, however, is so well stocked that the fishing has little effect: Lower-level black cocaine dealers are easily and quickly replaced. Indeed, they have been known to “drop a dime” on their rivals simply to eliminate competition and expand their own turf.

Though wholesale black arrests for cocaine dealing have not stopped street trafficking, they most assuredly have had an impact on the black male inmate population, beginning with the local lockup. A study of 150,000 criminal cases in Connecticut found that bail for black and Hispanic men averaged twice that of whites for the same offense. Those who could not afford the higher rates stayed in jail. Nor did their chances improve in the pretrial phase. A California study of 700,000 cases found that blacks and Hispanics were less likely than whites to have their charges dropped or cases dismissed, to plead out cheaply, or otherwise benefit from prosecutorial or judicial discretion.

But the most conspicuous (and correctable) problem is that federal and many state laws make even small-time drug dealing a big-time offense carrying a stiff sentence. This is especially true of crack, the drug war’s bête noire. In 1986, Congress enacted a sentencing provision under which only one-hundredth the amount of crack cocaine triggers the same penalty as powder cocaine. Deal 500 grams of powder cocaine, get five years; deal five grams of crack, ditto. The result was that, by 1993, federal prison sentences for blacks averaged 41 percent longer than those of whites, with the crack/powder distinction being the major reason for the difference. Pharmacologically absurd and racially unjust on the face of it, the 100-to-1 ratio has been a policy fiasco of the first magnitude, compounded by Congress’s politically motivated refusal to heed the U.S. Sentencing Commission’s advice to drop it.
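
The sentencing arithmetic just described reduces to a two-line rule. The sketch below encodes only the five-year trigger amounts named in the text (500 grams of powder, 5 grams of crack); it is an illustration of the quantity thresholds, not a full model of the statute.

```python
# Five-year mandatory-minimum trigger quantities described above (1986 federal
# law): 500 grams of powder cocaine or 5 grams of crack carry the same penalty.

FIVE_YEAR_TRIGGER_GRAMS = {"powder cocaine": 500, "crack cocaine": 5}

def triggers_five_year_minimum(drug: str, grams: float) -> bool:
    """True if the quantity meets or exceeds the drug's trigger amount."""
    return grams >= FIVE_YEAR_TRIGGER_GRAMS[drug]

ratio = FIVE_YEAR_TRIGGER_GRAMS["powder cocaine"] / FIVE_YEAR_TRIGGER_GRAMS["crack cocaine"]
print(f"powder-to-crack trigger ratio: {ratio:.0f} to 1")  # 100 to 1
print(triggers_five_year_minimum("crack cocaine", 5))       # True
print(triggers_five_year_minimum("powder cocaine", 499))    # False
```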

This criticism should not be confused with a call for legalization. The original point of the drug war, which began in 1986, was to mount a high-visibility campaign, orchestrated by the White House, to restrict and stigmatize drug use through increased education, prevention, treatment, and enforcement efforts. This was a reasonable goal and met with some success, notably among young people and adults who use drugs only occasionally. In the 1990s, the drug war’s leadership and moral purpose have disappeared, and education, prevention, and treatment dollars are harder to come by. That has left as the drug war’s principal enduring legacy the harshest penal aspects of the 1986 and 1988 federal antidrug laws. Intentionally or not, these laws and their state equivalents have functioned as a kind of giant vacuum cleaner hovering over the nation’s inner cities, sucking young black men off the street and into prison. More rational, flexible, and fiscally prudent state and federal sentencing policies within the context of a balanced drug war (the prescription of a majority of the nation’s police chiefs) would help to redress the scarcity of marriageable black men and other long-term problems associated with mass incarceration.

Beyond criminal-justice reform

Note the assumption here: Young black men who were thus kept out of prison would work at legitimate jobs and use their wages to marry and raise families. Social scientists such as Michael Gottfredson and Travis Hirschi, who emphasize the defective socialization of criminals, have expressed skepticism on this point. “People with low self-control will have difficulty meeting the obligations of structured employment,” they wrote in their 1990 book A General Theory of Crime, “just as they have difficulty meeting the obligations of school and family.” Scratch a dealer, find a sociopath.

But not a lazy sociopath. Practically every ethnographer or economist who has studied young minority drug dealers has been struck by their willingness to work hard. Drug dealing is an entrepreneurial activity, and it is easy to imagine this sort of energy and risk-taking succeeding in a legal enterprise. In fact, many young dealers keep one foot in the legitimate sector. Peter Reuter and his associates found in a 1985-87 study that 63 percent of the dealers in Washington, D.C., sold on less than a daily basis and that three-quarters earned money from legitimate jobs.

But if legitimate jobs were available, why did they sell drugs? The ethnographer Philippe Bourgois argues that crack’s appeal to young minority dealers is as much cultural as economic. The crack business is truly their own. There are no white or “A-rab” bosses to placate; no strictures on language, dress, or demeanor; no forms to fill out; no taxes to pay. Though Bourgois studied Latino dealers, the same applies to many young blacks. “Why should I work in some old dirty noisy car factory?” demanded one 17-year-old Detroit gang member. “Who needs a job, when all you got to do is get with a crew that’s rolling? . . . Me, I got it all worked out, later for all that school, job, marrying shit.”

The black community has long been split between what sociologist Elijah Anderson calls “the culture of decency” and “the culture of the street.” The former values the same things as most middle- and working-class Americans: close if not invariably traditional nuclear families, financial stability, religion, the work ethic, and getting ahead. It disapproves of what middle America disapproves of: crime, drug abuse, and teenage pregnancy.

The culture of the street is disdainful of conventional values such as marriage and sexual restraint. Within the peer group, often a neighborhood gang, sexual conquest provides impoverished young black men with a libidinous outlet, reinforces masculine identity, and enhances their standing. Marriage and good providership do not. The object, writes Anderson, is to smoothly “play” the girls but never to “play house.” They hit on vulnerable, naive, and often fatherless young women and then run, maintaining personal freedom and independence from matrimonial ties. When such ties exist, they are on the young man’s terms.

Sending fewer black men to prison, in short, is not going to solve the problem by itself. Black families are in trouble for many reasons, among them labor-market changes, a legacy of welfare dependency, racial and class segregation, and the inversion of traditional values, both within the street culture of the ghetto and the larger, eroticized, commercial culture of the mass media. It isn’t enough to keep young men eligible for marriage by keeping them out of jail. They also need jobs and the will to keep at those jobs and to base family life on them.

Yet there is good statistical evidence that the low ratio of marriageable black males to females, exacerbated by the drug war and the sentencing revolution of the 1980s and 1990s, has encouraged illegitimacy and family disruption. Robert Sampson, who analyzed census data from 171 cities, found that “the strongest predictor by far” of black family disruption was the gender ratio, followed by black male employment. Family disruption, Sampson hypothesized, gives rise to violence, which reduces the effective gender ratio directly (census takers don’t count dead men) and indirectly through imprisonment, which simultaneously hurts male job prospects.

The crime and violence stemming from family disruption have not only landed more young black men in prison or the morgue, they have landed more law-and-order politicians in office: another, and by no means the least, way in which the problem has become circular. Since 1980, if not before, the electoral dividends of appearing tough on crime have been more appealing to U.S. politicians than the long-term social dividends of flexible and reasonable criminal sanctions. Getting rid of the 100-to-1 ratio; revising sentencing guidelines to permit greater judicial discretion, including referral of more drug users into treatment; eliminating mandatory minimum sentences for lower-level trafficking offenses: these and other reforms will require real political courage. This is especially so because criminal justice reform is no panacea. It is best understood as a necessary though not a sufficient condition for the long-term diminishment of violence and disorder in the inner cities, America’s new violent frontiers.

Changes Big and Small

Issues has made very few changes in its format or appearance since a major overhaul in 1987. We created the Real Numbers section in 1990, added art to the cover in 1991, and introduced the From the Hill section in 1995. Cartoons started appearing sporadically in 1995. It doesn’t add up to much.

Beginning with this issue, we plan to pick up the pace. We will be adding more visual interest, more timely information, and more intellectual stimulation. First, we intend to follow through on earlier efforts by making Real Numbers and cartoons a part of every issue. We will also be looking for more provocative articles for the Perspectives section. Perspectives articles were intended to be more speculative and far-reaching than the features. What they lacked in comprehensiveness and detail, they were supposed to make up for in novelty and ambition.

We have sometimes failed to live up to that vision. We want to publish more pieces like the one from Robert Cook-Deegan in this issue, which challenges the hegemony of peer review at the National Institutes of Health (NIH). Pointing to the success of staff-directed funding at the Defense Advanced Research Projects Agency, he proposes that NIH experiment with this approach in some areas of research. The sanctity of peer review is about as close as science gets to a religious principle, but a little heresy is useful to test the rigor of the faith. We hope that others will challenge the conventional wisdom in Perspectives. And remember that brevity is a virtue. Perspectives have tended to be about 2,500 words (or four magazine pages). We’d like to see shorter, sharper pieces. If you have something particularly provocative to say, don’t bury it in words. Be brief and straightforward.

The purpose of Real Numbers is to let the data do the talking. Jesse H. Ausubel does just that in this issue by tracking long-term trends in a number of critical environmental indicators. The numbers tell a story that calls into question the widely held belief that the environment is on the express train to ruin. Anyone can have an opinion. Issues authors have opinions that rest on a foundation of data and expert knowledge. We will be encouraging all authors to include supporting data in tabular or graphic form in their articles.

The Archives is a completely new feature that will tap into the rich history of the National Academy of Sciences. Each issue will include a photograph of a distinguished scientist from the NAS Archives that will be accompanied by a brief description of a significant event from the person’s life. We are delighted to be able to begin with J. Robert Oppenheimer speaking at a celebration of the Academy’s centennial in 1963.

Over the course of the coming year, we will be introducing more changes. Illustrations will be added to the Forum section, and new graphic elements will be introduced elsewhere. One editorial addition that we are particularly excited about will be a section that reports on what has happened after the publication of an article in Issues. We often hear from authors about policy changes and other developments that occur as a result of an article in Issues. We want to share some of this information with you to help you keep up to date with the topics you read about in Issues.

Beneath the surface

We hope that these changes will make Issues livelier and more appealing, but that is not enough. We are also recommitting ourselves to Issues’ central mission of making a significant contribution to the intellectual and political life of the country. That is becoming more difficult because of a growing malaise in the science policy community. One of the repercussions of the end of growth in federal support for science and the prospect of significant reductions in funding is that those who manage and influence science policy are having a lot less fun. After five decades of directing a growing enterprise, the science policy establishment must now confront a shrinking pie.

The past few years have witnessed a number of symposia commemorating the 50th anniversary of the publication of Vannevar Bush’s Science, the Endless Frontier, which marked the beginning of rapid growth in federal research funding. Any number of individuals and organizations have aspired to produce the visionary blueprint that would provide a beacon for the next 50 years. None have succeeded, but that should not be surprising. Bush sat down in front of a virtually blank page, because the government had done so little for science before World War II. And though he couldn’t have predicted it, Bush’s vision was pushed forward by the tailwind of one of the greatest periods of economic growth ever experienced by any country. The existing infrastructure of federal science stands in the way of the visionary imagination, and we are not likely to see a repetition of postwar economic growth. But that is not an excuse for fin de siècle pessimism.

The malaise of science policy is not a malaise of science. John Horgan’s musings about the “end of science” have not struck a chord with scientists. The thirst for knowledge and the fertile ground of human imagination will continue to push science forward. The public continues to hold scientists and the scientific enterprise in high regard. Scientists, however, will lose some of their respect for those of us who have helped distribute the federal bounty. We can no longer be the rich uncle from the Beltway and will more often be the unwanted bearer of bad news.

So let’s give up the Vannevar Bush dreams and the rich-uncle fantasies. Today’s challenges and opportunities are different and in many ways more difficult. During the final years of the Bush administration, it seemed that the bitter debates about technology policy had suddenly dissolved into consensus. That harmony lasted for a year or two before the Clinton administration found many of its technology policy innovations under attack as corporate welfare. More hard work will be required to hammer out a practical national agreement on Washington’s role in promoting technological innovation. For the moment, biomedical research enjoys broad bipartisan support in Congress, but the looming storms over various economic and ethical issues related to health care could soon cloud its prospects.

The easy days are gone. Rereading Vannevar Bush won’t bring them back. Somebody is going to have to step up to make the tough decisions. Those who do will not win any popularity contests, at least not right away. But if you want to have a symposium in your honor 50 years from now, you’ll have to sacrifice a few friends along the way. This is a new era in science policy, and the door is open to creative thinking.

Having witnessed the enormous power that science unleashed in the atomic bomb, the world’s leaders turned to scientists for help at the end of World War II. Today, scientists are less likely to be courted in the corridors of power, but science has become even more important to the fate of humanity. Science policy has become more difficult and more important. That’s not a cause for malaise; it’s a challenge to action. And Issues can be one place to act.

Forum – Winter 1997

The future of the Air Force

Andrew F. Krepinevich, Jr. has compiled a long record of thoughtful, informed analysis of defense issues. He justifies his reputation once again in “The Air Force at a Crossroads” (Issues, Winter 1997). As Krepinevich points out, the U.S. military is facing enormous uncertainties as we build the forces this nation will need in the next century. Although he focuses on the Air Force, I think he would agree that the huge changes and uncertainties he identifies are issues with which every element of our joint team must wrestle. The revolution in military affairs, geopolitical developments across the international landscape, and the growing importance of space-based capabilities will have profound effects on the entire U.S. military.

About 18 months ago, General Ronald Fogleman and I established a long-range planning effort to address those issues and to construct a plan to guide the Air Force into the next century. We focused on creating an effort that would draw on the expertise of the entire Air Force in building an actionable pathway toward the future. Over that year and a half, our long-range planning group spearheaded a study that covered the entire scope of Air Force activity.

Krepinevich mentions the first result of that effort: Global Engagement: A Vision for the 21st Century Air Force. It captures the outcome of our planning effort, capped by the deliberations of a week-long conference of our senior leadership, both military and civilian. It outlines our vision for the Air Force of the next century: how we will fight, how we will acquire and support our forces, how we will ensure that our people have the right training and values.

In defining this vision, our senior leadership looked at all of our activities. This is clearly too much to outline in a brief summary, but four major themes emerged. As we move into the next century, the Air Force will: fully integrate air and space into all its operations as it evolves from an air force, to an air and space force, to a space and air force; develop personnel who understand the doctrine, core values, and core competencies of the Air Force as a whole, in addition to mastering their own specialties; regenerate our heritage of innovation by conducting a vigorous program of experimenting, testing, exercising, and evaluating new operational concepts, and by creating a series of battle labs; and reduce infrastructure costs through the use of best-value practices across the range of our acquisition and infrastructure programs. Together these goals, which must be viewed as a package, provide an actionable, comprehensive vision for the future Air Force.

However, we realized right from the start of this process that it is much easier to define a vision than to execute it. The shelves of libraries all across this city are stacked high with vision statements, many of them profound, some of them right, very few of them acted upon. So we have begun the process of transforming this vision into a plan, subject to rigorous testing and review, that will carry the Air Force along the path we have laid out. And we are beginning to define the programmatic actions necessary to execute our vision.

Certainly we will disagree with Krepinevich on some particulars. That is inevitable, given the complexities we face and the uncertainty of the future. But we are in general agreement on the larger issues: We must move away from traditional approaches and patterns of thought if we are to execute our responsibilities in the future. We are well aware that our plan will change over time. But we are also confident that we have a mechanism and the force-wide involvement necessary to make those adjustments. We will give this nation the air and space force it needs.

SHEILA E. WIDNALL

Secretary of the Air Force


Andrew F. Krepinevich, Jr.’s well-reasoned discussion is quite timely as the nation’s military establishment undergoes a major effort in introspection: The Quadrennial Defense Review. I agree that the Air Force and the other services are at a crossroads of sorts as we approach the new millennium. Accordingly, the Air Force has been engaged over the past year and a half in a far-reaching, long-range planning effort. The initial output of this effort is the white paper Global Engagement: A Vision for the 21st Century Air Force.

I observe with some pleasure the extent to which many of Krepinevich’s observations and suggestions are addressed in Global Engagement. Those who developed our long-term vision resisted the temptation to seize on any one design or discipline for all their answers. The focus is on crafting institutional structures to ensure that the U.S. Air Force remains on the leading edge of the revolution in military affairs. Where Krepinevich urges the Air Force to engage in vigorous experimentation, testing, and evaluation, Global Engagement directs the establishment of six battle laboratories to shepherd developments in key areas such as space operations, information, and uninhabited aircraft. At the same time as it sharpens its focus in specific technical fields, the Air Force will broaden professional understanding by creating a basic course of instruction in air and space operations. Improving both of these areas and their purposeful integration will maintain our leading-edge advantage in capabilities useful to the nation.

The efforts described in Global Engagement were developed to move the Air Force forward responsibly and effectively. Those two concerns, responsibility and effectiveness, will always tend to distance an institutional vision document from even the most expertly drawn thinkpiece. However, such institutional vision documents are less likely to contemplate the truly revolutionary but more risky notions that thinkpieces can embrace. Therefore, the best path forward is often illuminated by lamps both within and without the institution.

Krepinevich proposes that “the Air Force should reduce its current reliance on theater-based, manned tactical air systems . . . this issue is crucial to a successful transformation . . . because success in this area would mean that the Air Force’s dominant culture-its tactical air force-accepts the need for major change.” Krepinevich would have us reduce our emphasis on controlling the air, yet the demand for these capabilities persuades us that this mission is of greater importance than ever. The growing sophistication, availability, and proliferation of defensive and offensive systems on the world arms market provide a diverse set of problems for our field commanders to confront today and in the battlespace of the future. Our forces must be ready to meet and defeat those capabilities.

Joint Vision 2010 has given all the services a focus for achieving success. All the elements of this vision cited by Krepinevich depend on friendly control of the air. Such control is absolutely necessary to the effectiveness of all our forces and to the security of host nations, and the challenges in this area continue to grow. Moreover, tactical air capabilities have great leverage in defense against missiles. Indeed, the most promising way to prevent attack from cruise and ballistic missiles is to destroy them before they launch.

On the issue of manned aircraft, our most flexible, responsive, and effective solutions to many military challenges are currently provided by manned aircraft. Air power is the most liquid of combat assets, offering a unique combination of man, machine, speed, range, and perspective. But even we do not see this as an immutable truth; it’s the best we have been able to do in an imperfect world. Our current theater-based manned aircraft are a result of sound tradeoffs among range, payload, performance, and cost. The rich range of new aircraft possibilities is probably the most valuable harvest of the revolution in military affairs, but each new possibility will have to prove its advantages in the real world of limited dollars, unlimited liability, and a nation that holds its Air Force accountable for success in peace and war.

In spite of the many congruencies between Krepinevich’s article and Global Engagement, there remain some significant differences. Considering Global Engagement on its own merits, I believe readers will agree that it is a solidly reasoned document based on a thorough understanding of the aerospace technology horizon that projects improved ways to apply science and technology to serve the nation. It rests on 18 months of dedicated effort involving the force at large; experts from the scientific, academic, and policy communities; and a winnowing and prioritization process conducted by the accountable senior leadership of every part of our nation’s Air Force. However, no body of thought ever attained its full potential without being burnished by competing views. I look forward to the thoughtful responses of your readers and to more articles like “The Air Force at a Crossroads.” I am certain they will help the Air Force see farther and better.

GENERAL RONALD FOGLEMAN

Chief of Staff

U.S. Air Force


As the Air Force adopts new technologies to meet future challenges, it will be transformed. Force structure, organization, and operational concepts will change. One of our challenges is to manage these changes as we recapitalize our fighter force. In “The Air Force at a Crossroads,” Andrew F. Krepinevich, Jr. states that the F-22 and Joint Strike Fighter (JSF) that are the heart of this recapitalization effort will be of only marginal operational utility in the future. His prediction is based on the premise that tactical ballistic missiles (TBMs) and cruise missiles (CMs) will become so effective that they will deny us the use of airbases within theater. Without bases in theater, we will be unable to employ our tactical aircraft.

This is not a new problem. Airbases have always been vulnerable. It is much easier to destroy aircraft on the ground than when they are airborne, and thus the quickest way to attain air superiority over an opponent is to destroy his aircraft on the ground. This has been accomplished in the past with relatively unsophisticated systems; in 1967, the Israelis devastated the Egyptian Air Force during the opening hours of the Six Day War. The lesson we have drawn from that campaign and others like it is that we must control the airspace over our airbases. Air Force doctrine in this matter is clear: Air superiority is a prerequisite to any successful military operation.

Our response to the emerging TBM and CM threat is consistent with our doctrine; we will acquire the necessary systems and develop the operational concepts to control the airspace over our bases. The air superiority mission has expanded to include TBMs and CMs. We will field an architecture of new systems that will significantly reduce the effectiveness of TBM and CM attacks. This architecture will put the missiles at risk throughout their operational life. We will conduct attack operations against the command and control, garrison, and launch sites. Missiles that are launched will be engaged in their boost phase by airborne lasers. Our next line of defense will be the Army’s Theater High Altitude Area Defense (THAAD) system and the Navy’s Theater Wide system. A similar layered approach will be used against CMs.

The Air Force has consistently applied new tactics and technology to solve difficult operational problems. The long-range escort fighter in World War II and the employment of stealth to neutralize surface-to-air missiles during the Gulf War are just two examples. TBMs and CMs are no more than another operational challenge.

The F-22 and JSF will not be marginalized by these threats. The F-22 and JSF share stealth and an integrated sensor suite to which the F-22 adds supercruise. These capabilities are important enablers that will allow them to dominate an adversary’s airspace. We will deploy into theater, using our bomber force to conduct initial attack operations and, if necessary, our defensive systems as a shield. Then we will take the battle to the enemy. The tempo, accuracy, and flexibility of our operations, directed at his center of gravity, will paralyze him. His systems will be destroyed or neutralized. Instead of being marginally useful, the F-22 and JSF will be the prime offensive element of any future campaign. They will simultaneously attain air superiority, support the ground forces, and conduct strategic attack. Because of their critical importance, the Air Force is committed to acquiring these systems in numbers consistent with our national strategy.

JOHN P. JUMPER

Lt. General, USAF

Deputy Chief of Staff

Air and Space Operations


Science and democracy

In “The Dilemma of Environmental Democracy” (Issues, Fall 1996), Sheila Jasanoff provides a vivid portrait of how democracies struggle to resolve environmental controversies through more science and public participation. I concur with her diagnosis that these two ingredients alone are not a sufficient recipe for success and will often be a prescription for policy stalemate and confusion. Jasanoff sees “trust and community” as ingredients that must accompany science and participation and offers glimpses of examples in which institutions have earned trust, fostered community, and sustained policies.

In order for Jasanoff’s vision to be realized, it may be necessary to address the dysfunctional aspects of U.S. culture that serve to undermine trust and community. First, the potent role of television in our society promotes distrust of precisely those institutions that we need to strengthen: administrative government and science. These institutions are not served well by sound-bite approaches to public communication. Second, the lack of a common liberal arts education in our citizenry breeds ignorance of civic responsibility and a lack of appreciation of the values and traditions that distinguish our culture from others. Third, our adversarial litigation system undermines truth in science.

I’m not exactly sure how to moderate the perverse influences of television and litigation, but we certainly can take steps in educational institutions at all levels to promote liberal arts education. It is intriguing that many European cultures are generally doing better than we are on each of these matters, though perhaps that is only coincidence.

JOHN D. GRAHAM

Director

Harvard Center for Risk Analysis


Sheila Jasanoff has written a profound and provocative article on the relationship between science and public participation. Her essay is a bracing antidote to much of the shallow rhetoric about “more participation” that has become so popular in the risk literature.

The basic issues with which Jasanoff deals are at least as old as Plato and Aristotle: To what extent should public decisions be left in the hands of “experts” or representatives, and to what extent should others (whether interest groups, lay citizens, or street mobs) be involved? However, as Jasanoff discusses, the current setting for these dilemmas is different in important ways from any that has come before. I think there are at least three critical differences.

First, the complexity of decisions has increased. Today’s decisions often must be considered in a global context, involve large numbers of diverse groups and individuals, and are embedded in rapidly changing and very complicated technologies.

Second, the technology of participation has changed and is continuing to evolve rapidly. We seem to be asymptotically approaching a state where everyone can communicate instantaneously with everyone else about everything. Television, the Internet, the fax, and the cellular phone have profoundly changed the ability of people to find information and to communicate views to each other and to the government. We are only beginning to understand the implications of these technologies, and new technologies will appear before we understand the implications of the old.

Third, the sheer number of people on the planet is a major factor in its own right. Participation in so many diverse issues becomes possible in part because of the size and affluence of the citizenry, as well as an unguided division of labor. It is not so much that individuals have more time to participate in decisions or that they are smarter (although in historical perspective both these things are true); it is that there are more of them. In the United States, there are people who spend a large portion of their waking hours worrying about the treatment of fur-bearing animals or the threat of pesticide residues on food. This allows other people to concentrate on education or homeopathic medicine. Every conceivable issue has its devotees.

I agree with Jasanoff on the central importance of trust and community in the modern context of participation. As she says, the task is to design institutions that will promote trust. At least in the United States, we have barely begun to explore how we can do this. We need to start by developing a better understanding of how trust relates to various forms and conditions of participation. It is questionable, for example, whether the standard government public hearing does much to promote trust on anyone’s part. But what are the alternatives, and what are their advantages and disadvantages? We need less rhetoric and more research and thought about these kinds of questions. We should be grateful to Jasanoff for having raised them.

TERRY DAVIES

Resources for the Future

Washington, D.C.


Restructuring the military

David Ochmanek, a distinguished scholar and public servant, has written a sound article laying out ways in which the United States might selectively cut military force structure without significantly degrading its combat capabilities. My only quibble is with his heavy reliance on yet-untested military technologies as a basis for his recommendations. Although I think his policy prescriptions are sound, they pass muster only because of a much broader set of arguments.

To see why heavy dependence on high-tech weapons is an imprudent way to plan on winning future wars similar to the Gulf War, consider all the (now unavailable) capabilities Ochmanek says we need to realize his vision. He insists that we must be able to shoot down ballistic missiles reliably (preferably in their boost phase) and be able to find mobile missile launchers, even though we recently failed almost completely at both these tasks against Saddam’s unsophisticated Scud missiles. He appears to assume that we will be able to discriminate enemy tanks, artillery, and armored fighting vehicles from those of our allies, not to mention from trucks and cars. He also assumes that adversaries will not be able to develop simple countermeasures, such as multitudes of small independently propelled decoys that could mimic the radar or heat signatures of armor, or that if they do, U.S. sensors will improve even more quickly and overcome the countermeasures.

Rather than hinge all on science and technology, we should reexamine current defense strategy. For one thing, today’s two-Desert Storm strategy is too pessimistic about our ability to deter potential adversaries. Contrary to the thinking behind the Bottom-Up Review, neither Saddam nor the North Korean leadership would be likely to attack if most of our forces were engaged elsewhere. As long as the United States keeps a permanent military presence on the ground in Kuwait and South Korea and retains the ability to reinforce substantially in a crisis, Baghdad and Pyongyang will know it would be suicide to undertake hostilities. Second, just as our deterrent is improved by having forces deployed forward, so is our warfighting capability, not so much for high-tech reasons as because we have prepared. The United States now has troops, aircraft, and many supplies in the two regions of the world where we most fear war. Third, should war occur, not only will our capabilities be greater than before, our opponents’ will be worse. North Korea’s forces are atrophying with time. Iraq’s are only two-thirds their 1990 size. No other plausible adversary is as strong.

Instead of keeping the two-major-war requirement, the United States should modernize as Ochmanek suggests while also moving to a “Desert Storm plus Desert Shield plus Bosnia” strategy. That approach would solve the Pentagon’s current funding shortfall and do a little more to help balance the federal budget at the same time.

MICHAEL O’HANLON

Brookings Institution

Washington, D.C.


David Ochmanek argues from the assumption that future defense budgets must either decrease or stay the same. This is hardly inevitable. No one really knows what future circumstances will shape the defense budget. One thing is certain, however: Ochmanek’s contention that the Pentagon can maintain a two-war capability by relying on heavy investment in aerospace modernization at the expense of force structure is highly questionable.

There is merit to Ochmanek’s belief that advances in technology will allow for reduced force structure without the loss of combat capabilities. However, it is too risky to reduce forces now with the expectation that technology will bail us out in the long run. Technology must first prove itself in training operations and then be applied against force structure requirements. Approving Ochmanek’s approach means accepting that, for an unspecified time period, the United States will be making commitments it cannot fulfill.

Ochmanek overstates the case for increased reliance on high-tech firepower, dangerously devaluing maneuver despite numerous examples in which the use of smart weapons alone proved insufficient. He disregards the failure of five weeks of an unimpeded air campaign to dislodge Saddam Hussein from Kuwait in 1991 and the lack of effect of the cruise missile raids against Iraq in 1993 and 1996. The powerful deterrent effect that thousands of U.S. troops sent to Kuwait had in those episodes should not be overlooked. Similarly, remote, long-range, high-tech capabilities such as satellites and cruise missiles do not reassure U.S. allies and ensure U.S. influence in peace and war in the way that visible ground and naval forces do. Ochmanek anticipates future combat in which the United States will see everything, and everything that can be seen can be destroyed or manipulated at long range. Nowhere does he discuss the effects of counter-technology or missions in which a wide and robust array of forces may be needed.

However, in describing the need for much better capabilities to defend against weapons of mass destruction, Ochmanek is right on target, although he places too little emphasis on the need to defend U.S. territory from missile attack. Moreover, his assessment of the potential threats posed by improved conventional weapons in the hands of U.S. adversaries accurately reflects the current situation and the dangers of underestimating these threats. Furthermore, his proposed cut in combat formations of the Army National Guard is both reasonable and practical.

Ochmanek is also correct in asserting that the nature of U.S. international responsibilities requires U.S. military superiority. However, by assuming that static or decreased defense budgets are inevitable, he compels the United States to accept a force structure that is too imbalanced and inflexible to ensure its superiority. Although technological advances offer great promise and potential savings, the military needs to maintain a broad and diverse range of air, land, and sea capabilities in order to be persuasive in peace and decisive in war. This will require a defense budget that Ochmanek seems unwilling to support.

KIM HOLMES

Heritage Foundation

Washington, D.C.


I agree with David Ochmanek (“Time to Restructure U.S. Defense Forces,” Issues, Winter 1997) that with a smaller force structure, the United States could still carry out its strategy of fighting and winning two major wars. I would like to add two points to consider.

First, in planning to fight two major wars, the military forces and capabilities of our allies could be taken into account more seriously than they have been in the past. For example, the Iraq scenario in the Report on the Bottom-Up Review shows U.S. allies limited to Kuwait, Saudi Arabia, and the Gulf Cooperation Council countries-a far cry from the Gulf War coalition of over 30 nations, including NATO allies with very capable forces.

Second, from a deterrence standpoint, the perception of U.S. leadership resolve is more important than definitive proof that U.S. defense resources are adequate to fight and win two major wars. There will always be domestic critics who contend that defense resources are inadequate to execute the national strategy. But our enemies-those who actually decide whether deterrence is effective-focus more on whether the U.S. leadership is willing to commit substantial forces than on whether those forces can actually defeat them. The leader of any aggressor state will still hesitate to attack our allies even if force structure is reduced and the United States can attack with “only” four army divisions, nine air force fighter wings, four navy aircraft carrier battle groups, and one marine expeditionary force.

The reality of the situation is that there is no way to prove that the United States has the resources to execute its two-war strategy without actually fighting two wars. Even within the Department of Defense (DOD), detailed analytic support for the two-war requirement came years after the strategy was announced in October 1993. Analysis of airlift and sealift requirements was not complete until March 1995. War gaming by the Joint Staff and commands was not completed until July 1995. Analysis of supporting force requirements was not completed until January 1996. Finally, analysis of two-war intelligence requirements was not completed until June 1996. And although DOD touted these as proof of our two-war capability, they failed to quell concerns held by many, particularly some in the military services.

STEPHEN L. CALDWELL

Defense Analyst

U.S. General Accounting Office

Washington, D.C.


Energy policy for a warming planet

In “Climate Science and National Interests” (Issues, Fall 1996), Robert M. White sets forth the problem of climate-change policy in all its stark reality. We are moving from substantially stable climatic systems to instability that will continue into the indefinite future unless we take firm steps to reduce the accumulation of heat-trapping gases in the atmosphere.

There are, for all practical purposes, two alternatives to fossil fuels: solar and nuclear energy. Nuclear energy is the most expensive form of energy at present and carries with it all the burdens of nuclear weapons and the persistent challenge of finding a place to store long-lived radioactive wastes. The United States does not at the moment have such a repository.

Although the less-developed world sees possible limits on the use of fossil fuels as an impediment to its technological development, fossil fuels may in fact not be essential. There is every reason to consider jumping over the fossil fuel stage directly into renewable sources of energy. Solar energy is immediately available, and the combination of improved efficiency in the use of energy and the possibility of capturing solar energy in photovoltaic panels may be able to displace much of the fossil fuel demand.

The development of oil has been heavily subsidized by the federal government throughout the history of its use. Now there is every reason for the federal government to subsidize a shift to solar energy. We can afford to develop that technology and then to give it away through a massive foreign aid program 2 to 10 times larger than the current minuscule effort. It will come back to us in goodwill, in markets, and most of all in a global reduction in the use of fossil fuels, and will give us a real possibility of meeting the terms of the Framework Convention on Climate Change.

GEORGE M. WOODWELL

Woods Hole Research Center

Woods Hole, Massachusetts


Streamlining the defense industry

In “Eliminating Excess Defense Production” (Issues, Winter 1997), Harvey M. Sapolsky and Eugene Gholz suggest that the Department of Defense (DOD) should help pay industry restructuring costs to buy out excess production capacity, and they propose to fund a greater level of investment in R&D through further reductions in procurement accounts. The first recommendation is not new and is, in fact, being implemented today. The second recommendation would slow the modernization of our forces and is counter to our planned program to increase modernization funding.

The authors state that “defense policy is back to the Bush administration’s practice of verbally encouraging mergers but letting the market decide the ultimate configuration of the industry.” It is true that DOD is not directing the restructuring of the defense industry. Our role has been to provide U.S. industry with honest and detailed information about the size of the market so industry can plan intelligently and then do what is necessary to become more efficient.

DOD has always permitted contractors to include the costs associated with restructuring within a single company in the price of defense goods. In 1993, the Clinton administration extended that policy to include the costs associated with restructuring after a merger or acquisition, when it can be shown that the savings generated in the first five years exceed the costs. In 1994, a law was enacted requiring certification by an assistant secretary of defense that projected savings are based on audited cost data and should result in overall reduced costs for DOD. In 1996, another law was enacted requiring that the audited savings be at least twice the costs allowed.

During the past three years, DOD has agreed to permit $720 million in restructuring costs to be included in the price of defense goods in order to generate an estimated $3.95 billion in savings through more efficient operations. This policy is more than “verbal encouragement.” U.S. defense companies are now more efficient and are saving taxpayers billions of dollars, and the productivity of the average defense industry employee has risen about 10 percent over the same period of time.
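
As a rough illustration of the arithmetic behind the two-for-one test described above, the aggregate figures cited here can be checked directly. This is back-of-the-envelope arithmetic only, not DOD’s certification method, and the statutory test applies to individual business combinations rather than to multiyear totals:

```python
# Back-of-the-envelope check of the savings-to-cost ratio, using only the
# aggregate figures cited in the letter; not DOD's certification method.
ALLOWED_COSTS = 0.72e9        # $720 million in allowed restructuring costs
PROJECTED_SAVINGS = 3.95e9    # $3.95 billion in estimated savings

ratio = PROJECTED_SAVINGS / ALLOWED_COSTS
print(f"savings-to-cost ratio: {ratio:.1f} to 1")        # about 5.5 to 1
print(f"meets the two-for-one threshold: {ratio >= 2}")  # True
```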

Total employment-active duty military, DOD civilians, defense industry employees-is down from a 1987 Cold War peak of slightly more than seven million to about 4.7 million, or about 100,000 less than the 1976 Cold War era valley of 4.8 million. Defense industry employment has come down the most-38 percent compared with 31 percent for active duty military personnel and 27 percent for DOD civilians.

The authors are correct that there were roughly 570,000 more defense industry employees in 1996 than in 1976. But in 1997, that number will drop to about 390,000, and over the coming years employment will continue to shift to the private sector as DOD becomes more efficient by outsourcing noncore support functions in areas such as inventory management, accounting and finance, facility management, and benefits administration.

I am particularly shocked and dismayed by the authors’ assertion that “acquisition reform will only make matters worse” and “neither the military nor the contractors will be long-term advocates for these reforms” because the savings will lead to budget cuts. This argument does not make sense for at least three reasons. First, the budgets have already been cut-procurement is down two-thirds since the mid-1980s. Second, budget levels-past and future-are driven by fiscal forces that are quite independent of savings projected from acquisition reforms. In this environment, the services have supported and continue to strongly support acquisition reform as a way to make ends meet. Finally, acquisition reforms make our defense industry more efficient and more competitive, which is why industry strongly supports them.

Sapolsky and Gholz’s second major recommendation is to fund increased investment in R&D through further reductions in procurement accounts. This is not a prudent course to follow in 1997. Since 1985, the DOD budget is down by one-third; force structure is down by one-third; and procurement is down by two-thirds. Further, the average age and remaining service life of major systems have not changed significantly. This was accomplished by retiring older equipment as the force was drawn down, but it is not a state of affairs that can be sustained over the long term; either force structure must come down further or the equipment will wear out with continuing use.

The historical norm for the ratio of expenditures on procurement to those on R&D has been about 2.3 during the past 35 years. Today, the procurement-to-R&D ratio is 1.1-an all-time low. To invest for long-term readiness and capability, we must begin spending more money on procuring new systems. Consistent with that goal, procurement spending is projected to increase to $60 billion by 2001. Although this goal is being re-examined in the current Quadrennial Defense Review, I very much doubt the review will recommend reducing procurement budgets from their current level.

PAUL G. KAMINSKI

Under Secretary of Defense (Acquisition and Technology)

U.S. Department of Defense


Harvey M. Sapolsky and Eugene Gholz get it half right. Their diagnosis of the problem hits the mark but their solutions won’t fix it. Mergers seem to occur on a weekly basis in today’s defense industry. Yet, as Sapolsky and Gholz note, the excess capacity “overhang” remains. Much of this excess results from a delay in consolidation among merged firms as well as the need to complete work on previously awarded contracts. As the consolidation process proceeds, we should expect additional painful defense downsizing. However, the market alone will not rationalize defense capacity. The authors’ support for restructuring subsidies and for expanded support for affected communities and workers makes sense. Long-term defense savings are impossible unless we act now to rationalize production capacity.

In terms of solutions, Sapolsky and Gholz do not go far enough. Whether we like it or not, the defense industry of the 21st century will look something like the “private arsenal” the authors describe. Unfortunately, a private arsenal cannot run on R&D alone. Experimentation should be encouraged and heavily funded, but such a system will not sustain the defense industrial base or effectively support military requirements.

Dual-use items can be supplied by existing civilian firms, and the Pentagon’s support for dual use should continue. These initiatives can help maintain competition on both price and technology grounds at the subtier or supplier levels of the industrial base. But systems integration and defense-unique items will most likely continue to be produced by a handful of firms with a predominant focus on defense.

Sustaining the private arsenal will require that we treat it like an arsenal, through significant public subsidy and, when necessary, tight regulation to ensure competitive prices. Such a solution is far from ideal and actually runs counter to the Pentagon’s current emphasis on acquisition reform. Unfortunately, the economics of defense production may leave us no other choice, barring an even more undesirable (and more expensive) return to global tensions that requires Cold War levels of defense production.

ERIK R. PAGES

Deputy Vice President

Business Executives for National Security

Washington, D.C.


Dual-use difficulties

In “The Dual-Use Dilemma” (Issues, Winter 1997), Jay Stowsky does a thoughtful job of laying out the conflicting political, economic, and national security objectives of the Technology Reinvestment Project (TRP). What he says about TRP in microcosm is also true for the Department of Defense (DOD) as a whole. In my opinion, the following are the most significant dual-use challenges that face DOD today.

Despite ongoing efforts to achieve a common definition, dual use continues to be interpreted in very different ways according to constituency, context, and agenda. The creation of a DOD-wide dual-use investment portfolio with clearly stated rate-of-return criteria seems to be an elusive goal. How to maximize benefits from dual use has not been adequately addressed.

DOD continues to be unprepared to deal directly with the critical business and economic issues encountered when engaging commercial industry. Most government employees outside of the senior leadership have little or no experience with the commercial world. To be successful, DOD must become market-savvy. However, evaluating commercial potential is difficult enough for private entrepreneurs; evaluating dual-use potential will be even more so for bureaucrats.

Successful acquisition reform is critical to success, as Stowsky points out. To achieve it, DOD needs to find better ways to select and educate program managers. It is not enough to teach the formalities; those chosen must be prepared to be creative and exercise the new flexibilities built into the recently reformed DOD acquisition regulations. Program managers will need to understand how to evaluate a broad range of risks and be capable of prudently weighing them against anticipated benefits.

Despite its attractiveness, dual use must not be seen as a panacea. Although it is an important component of any future defense investment strategy and can considerably improve the affordability and technological capabilities of our military systems, the differences between commercial and military operating environments remain significant. National defense is expensive, and overly optimistic notions that some day everything will be cheap and bought off the shelf are unrealistic.

The future of dual use must be bright, for as we are reminded by our leaders, we cannot afford to continue the Cold War legacy of maintaining a separate defense industrial base. TRP was a learning experience and must be taken as such. The difficulties it encountered can be overcome through vigorous, intelligent leadership. We really have no other choice.

RICHARD H. WHITE

Institute for Defense Analyses

Alexandria, Virginia


Although Jay Stowsky is sympathetic to the objectives of TRP, his article makes it clear that the Pentagon’s dual-use efforts are not an effective way of promoting civilian technology. It is not a matter of fine-tuning the mechanism; rather, we must abandon the basic notion that government is good at helping industry develop commercially useful technology.

Of course there is a key role for government in this important area of public policy. It is to carefully reexamine the many obstacles that the public sector, usually unwittingly, has imposed on the innovation process. These range from a tax structure that discourages saving and investment to a regulatory system that places special burdens on new undertakings and new products. Surely, a simpler and more effective patent system would encourage the creation and diffusion of new technology.

Contrary to the hopes of the conversion enthusiasts, in adjusting to defense cutbacks after the end of the Cold War the typical defense contractor reduced its work force substantially rather than diversifying into new commercial markets. However, the aggregate results have been quite positive. Employment in major defense centers (southern California and St. Louis are good examples) is now higher than at the Cold War peak. A growing macroeconomy has encouraged and made possible the formation and expansion of civilian-oriented companies that have more than offset the reductions in defense employment.

There is little in the history of federal support for technology to justify the notion that government is good at choosing which areas of civilian technology to support and which organizations to do the work. The results are far superior when private enterprises risk their own capital in selecting the ventures they undertake.

MURRAY WEIDENBAUM

Center for the Study of American Business

Washington University

St. Louis, Missouri


Improving U.S. ports

Charles Bookman’s “U.S. Seaports: At the Crossroads of the Global Economy” (Issues, Fall 1996) recognizes the importance of ports and articulates the magnitude of the challenges facing them and the entire U.S. freight transportation system in the next decade. He puts ports into the context of the global economy and outlines the need for investment so that ports can meet the demands of future trade.

In fact, U.S. public ports have invested hundreds of millions of dollars in new facilities over the past five years and will continue to invest about $1 billion each year through the turn of the century. Much of this investment has gone into modernizing facilities for more efficient intermodal transportation. It is commonly assumed that intermodalism is something new and that intermodalism equals containerization. The fact is that all cargo is intermodal (that is, involving the exchange of goods between two or more types of transportation). The majority of the cargo handled by U.S. ports is bulk and breakbulk; 10 percent or less of total trade tonnage is containerized.

The view that many “inefficient” ports may disappear in favor of huge megaports does not take into full account local interest and investment in deep-draft ports. Within the national transportation system there is room for a diverse array of ports to serve niche cargo and economic development needs in local communities.

Bookman correctly identifies future challenges for ports as those dealing with environmental regulation, particularly the need to resolve dredging and disposal issues. In recent years, ports have made significant progress in expediting project approvals and in working with the U.S. Army Corps of Engineers to streamline the process. We continue to work for further improvements that will encourage consensus-building among stakeholders on dredging issues and to implement technological advances to help resolve some of the problems.

Bookman also encourages coordinated efforts in port planning and suggests several approaches, including a federal-state partnership in the funding of projects. We agree that a barrier to “such regional planning . . . is that state and local government officials have tended to be more interested in highway and mass transit improvements than in port access.” Ports hope that reauthorization of the Intermodal Surface Transportation Efficiency Act will give greater recognition to intermodal access and freight projects as an integral part of the transportation system.

The American Association of Port Authorities endorses the idea of a National Trade and Transportation Policy that would direct the federal government to match its commitment to growth in world trade with a commitment to help improve the nation’s infrastructure and build on the transportation system now in place.

Bookman’s article gives thought-provoking attention to a system in need of continued investment and planning. We look forward to the challenge of developing better partnerships with local, state, and federal stakeholders to further enhance U.S. public ports.

KURT J. NAGLE

President

American Association of Port Authorities

Alexandria, Virginia


Rethinking the car of the future

Daniel Sperling’s critique of the industry-government Partnership for a New Generation of Vehicles (PNGV) highlights a troubling divergence between national goals and federal research priorities (“Rethinking the Car of the Future,” Issues, Winter 1997). A well-designed PNGV could be a valuable component of a broader strategy to lower the social costs of our transportation system. In its current form, however, PNGV amounts to little more than an inefficient use of public funds.

From its objectives to its execution, PNGV is inadequately designed to meet the transportation challenges of the 21st century. The next generation of automotive technology must make substantial inroads in dealing with the problems of climate change, air quality, and dependence on foreign energy. In an era of shrinking public funds, policymakers must maximize investments by pursuing technologies that can simultaneously address these problems.

The leading technology on the PNGV drawing board today-a diesel-powered hybrid vehicle-offers only moderate technological progress at best and a step backward in air pollution control at worst. Diesel combustion generates high levels of ozone-forming pollutants and harmful particulate matter, two categories of pollutants that the Environmental Protection Agency has recently determined must be further reduced to protect human health. Several states are also considering classifying diesel exhaust as a toxic air contaminant because of its potential carcinogenic effects. There are other, more advanced technological options (such as fuel cells) that would deliver substantial air quality benefits along with larger gains in energy security and mitigation of global warming.

As Sperling suggests, now is the time to reform PNGV. In particular, the clean-air goals of the program must be redefined to drive down emissions of key air pollutants. The PNGV process also deserves more public scrutiny to ensure that it continues to meet public interest goals and delivers adequate returns on public investment. Finally, even a well-designed PNGV is no silver bullet; policies that pull improved technologies into the market will continue to be a necessary complement to the technology push of PNGV.

JASON MARK

Union of Concerned Scientists

Washington, D.C.

The Market for Spies

The Cold War may be over, but espionage apparently is still thriving. Now, however, it’s economic espionage. Former FBI and CIA officials have stated that “we’re finding intelligence organizations from countries we’ve never looked at before” and that foreign intelligence agencies of traditionally friendly countries “are trying to plant moles in American high-tech companies and search briefcases of American businessmen traveling overseas.” Suggested U.S. responses include such drastic measures as high import tariffs or economic sanctions against countries whose governments spy on U.S. businesses. Before taking such heavy-handed actions, the U.S. government needs to acquire more information about what other governments are doing and develop a better understanding of how these activities affect the well-being of the United States.

Wall Street Journal reporter John Fialka’s War By Other Means is an attempt to establish a baseline of information about economic espionage against the United States. The book concentrates on the efforts of the Soviet Union (and later Russia), Japan, and the People’s Republic of China, and also looks at the activities of Taiwan, South Korea, Israel, France, and Germany. The Soviet Union is credited with the most systematic economic espionage campaign in history, spending nearly $1.5 billion annually in the 1980s to obtain sensitive civilian technology. According to the CIA, most of this technology was stolen from the United States.

France has increased its intelligence budget by nearly 10 percent since the end of the Cold War. Fialka effectively tracks French espionage against IBM, IBM’s request for assistance from the FBI, and the unprecedented joint efforts of the FBI and CIA to track down French moles in IBM’s headquarters. In this case, the FBI delivered protests to Paris after French intelligence was found to be operating against IBM, Corning, and Texas Instruments. Although it is clear that France is engaged in espionage against U.S. companies, it is difficult to determine just how much the French have gained from their “intelligence-for-profit” activities.

The book devotes a great deal of attention to China’s activities in the United States, detailing its efforts to obtain the night-vision scopes used by U.S. M-1 tanks as well as more sophisticated missile guidance systems. China is procuring advanced military technology and equipment from countries such as Israel and Russia, but Fialka provides no evidence that China is coordinating its economic espionage with other countries in order to improve its access to high-technology equipment.

Fialka has put Japan on his list of leaders in economic espionage even though Japan has no large government intelligence bureaucracy. He reports that the Japanese do most of their economic spying with private funds and with the help of the Ministry of International Trade and Industry. Noting that there are more than 200 Japanese R&D companies in the United States (more than twice the number of any other country) and nearly 30,000 Japanese students attending U.S. universities, Fialka describes this presence as a “military-style campus intelligence system” without providing evidence that such a system exists. This is typical of the book’s glib overstatements.

Fialka also tends to obscure or exaggerate the value of economic espionage to those countries that rely on it, particularly in the case of the former Soviet Union. He agrees with the CIA that as a result of Soviet economic espionage, the United States and other Western nations were in effect “subsidizing the Soviet military buildup.” The Kremlin certainly had a long history of successful spying on Western industries, but Fialka does not explain why the Soviets were not able to keep up with the pace of Western technology and why the country ultimately dissolved because of its economic backwardness. He also fails to explain why China’s military backwardness has not been corrected by economic espionage. Similarly, the value of French, German, and Japanese intelligence operations is far from clear in Fialka’s book, and the author makes no attempt to examine just how much any of these countries has gained from its clandestine activities.

Overreacting

Fialka is correct in calling attention to the problem of economic espionage, but his recommendations for solving the problem are diffuse. He would like to introduce more powerful encryption systems to protect U.S. banks and corporations, comparing the situation to protecting vital secrets in wartime. Fialka even favors limits on the openness that exists in this country, which has been the key to much of our economic success. He wants to repeal the Freedom of Information Act, which he believes is used “primarily as a window on U.S. businesses by their competitors.” He would clamp down on immigration and find ways to limit the contacts of U.S. CEOs with companies in China until the United States “can sort out which companies are part of China’s military and gulag system and which are not.” If U.S. intelligence agencies cannot figure it out, Fialka recommends imposing punitive tariffs on all imports from China. Not only are these suggestions excessive, but Fialka never explains what gulags have to do with economic espionage.

In referring to the need to mobilize U.S. intelligence agencies to protect the U.S. economy, Fialka cites their poor track record in this area. The greatest concentration of analytical experts on international economic issues in the federal government resides not in any of the executive departments but in the CIA. In fact, the ranks of CIA analysts contain about as much economic expertise on international problems as can be found in all the executive departments of the government put together. Nevertheless, CIA analysts completely missed the economic collapse of the Soviet Union in 1991 and that of the communist states of Eastern Europe in 1989. Fialka correctly observes that the intelligence agencies “have not covered themselves with glory” in the economic area.

Fialka makes no attempt to analyze the serious debate within the intelligence community about the roles and missions of intelligence organizations in coping with spying on U.S. companies. Admiral Stansfield Turner, CIA director in the Carter administration, believes in emulating the active intelligence programs that other countries direct against our companies; Robert Gates and R. James Woolsey, more recent directors of the agency, recognize a role for the CIA in responding to the economic challenge but oppose providing U.S. businesses with intelligence that would give them a competitive advantage. The legal implications of CIA spying on commercial organizations would have to be explored in any event, particularly clandestine collection against a foreign-based division of a U.S. company.

Cold War mindset

Finally, Fialka’s book suffers from trying to turn the problem of economic espionage into a major issue for U.S. national security. One of the major public policy problems in the Cold War era was the tendency to label issues such as communism, terrorism, and Islamic fundamentalism as vital national security threats. Fialka’s call for laws to deter espionage, curb immigration, and increase encryption by banks and utilities is reminiscent of a Cold War mindset.

The current competitiveness of the U.S. economy suggests that the thesis of this book is grossly overstated. The U.S. economy is showing greater stability, with low inflation and low unemployment, than at any time in nearly three decades. The success of high-technology industries, the increased globalization of business and finance, and the deregulation of many industries have produced a flexible and competitive economy, with no sign of the inflationary, speculation-driven boom that has preceded almost every U.S. recession since the end of the Second World War. Last year’s federal deficit was the smallest recorded since Ronald Reagan’s first year in the White House. Restricting U.S. openness to the world economy, as Fialka advocates, would put this success at risk. And if economic espionage is not the serious problem that Fialka believes it to be, his cure could do more harm than the disease itself.

There is no mention of international agreements as an antidote to government-backed commercial spying or economic espionage. The United States could pursue such agreements in a number of ways, such as a single treaty that any country could sign, with each signatory pledging not to use its intelligence services to spy on any of the others for commercial gain. Other approaches could include multilateral agreements, particularly between the United States and its strongest allies. These agreements would limit the potential for conflict by providing a formal and predetermined means of response if one of the signatory countries is suspected of spying. This would eliminate the need for unexpected antagonistic economic reprisals. These agreements could contain provisions for cooperative measures against nonsignatories that use their intelligence services against the businesses of signatory countries. Such provisions could include substantial intelligence sharing and retaliatory actions such as trade restrictions and diplomatic protests.

The policy community needs a balanced book on the subject of economic espionage, not one that refers to the so-called “attack on the American economy” as a “time-lapse Pearl Harbor.”

Flawed Policy

In the late 1980s, Kenneth Flamm, an economist at the Brookings Institution, published two highly influential books on government’s role in the development of the computer industry. In Targeting the Computer (1987) and Creating the Computer (1988), Flamm made a persuasive case that—contrary to the arguments of authors such as George Gilder and business executives such as T. J. Rodgers of Cypress Semiconductor—government had played a significant role in creating the computer industry as well as other high-technology industries in the United States.

Now, in this long but ultimately rewarding book, Flamm examines the role of government in the contentious semiconductor trade disputes of the 1980s and doesn’t like what he finds. Indeed, he believes that the 1986 Semiconductor Trade Arrangement (STA) between the United States and Japan need never have happened and that it imposed costs that were greater than the benefits derived from the opening up of Japan’s semiconductor markets to foreign trade.

Flamm writes that it was only after the development of large-scale integrated circuits in the United States took the fledgling Japanese computer industry by surprise in the early 1970s that the Japanese government and industry focused on developing indigenous technologies for semiconductors. A series of joint government-industry projects succeeded by the end of the 1970s in enabling the Japanese semiconductor industry, or at least the part of it that produced DRAMs (dynamic random access memories), to become fully competitive with the U.S. industry.

Since demand for integrated circuits was expanding rapidly in the late 1970s and early 1980s, the U.S. semiconductor industry was willing to share the market with Japanese producers. Beginning in 1984, however, a sharp downturn in demand for semiconductors resulted in a shakeout in the industry, but only U.S. firms left the market while Japanese producers kept producing and selling at substantially lower prices. This led to the filing of antidumping petitions on the part of U.S. producers, upon which the Department of Commerce and the International Trade Commission ruled favorably in 1985 and 1986.

Flamm claims, however, that Japanese pricing during this period could be explained as “a predictable outcome of normal market forces” and that U.S. antidumping laws were not designed—as they should have been—to take forward pricing behavior into account. Forward pricing means setting a price below current average cost so that demand grows, cumulative production rises, and average costs eventually fall below the market price. It is rational in industries with steep learning curves—that is, where average costs decline rapidly with cumulative production. For example, Texas Instruments priced its scientific calculators below average cost in the early 1970s to gain market share vis-a-vis its main competitor, Hewlett-Packard, but still made money later on when its costs declined and prices stabilized.
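
To make the logic concrete, here is a minimal numerical sketch of forward pricing on a learning curve. The 80 percent curve, the $100 starting cost, and the $45 price are invented for illustration; none of these figures comes from Flamm’s book.

```python
# Illustrative forward-pricing sketch; hypothetical numbers only.
import math

def unit_cost(cumulative_units, first_unit_cost=100.0, learning_rate=0.8):
    """Unit cost on a classic experience curve: each doubling of cumulative
    output multiplies unit cost by the learning rate (here, 80 percent)."""
    exponent = math.log(learning_rate, 2)  # negative for learning rates below 1
    return first_unit_cost * cumulative_units ** exponent

PRICE = 45.0  # a "forward" price set below today's unit cost

for volume in (1, 10, 100, 1_000, 10_000):
    cost = unit_cost(volume)
    outcome = "loss" if cost > PRICE else "profit"
    print(f"cumulative units {volume:>6,}: unit cost ${cost:7.2f} -> {outcome}")
```

On this toy curve the price begins to cover unit cost after roughly a dozen units; in a real memory-chip business the same crossover plays out over millions of devices, which is why early losses can be a rational bet rather than evidence of predation.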

The successful antidumping petitions filed by U.S. firms against Japanese firms led to a major trade dispute and eventually to the STA of 1986. Under the STA, Japanese firms agreed to a system of floor prices for sales of their semiconductors in the United States and third-country markets, the Japanese government agreed to collect statistics on semiconductor production costs and prices, and the Japanese pledged to increase the sale of foreign-made semiconductors in Japan from 10 percent to 20 percent of the market.

A cartel emerges

The main argument of Flamm’s book is that U.S. trade policy in the dispute was flawed, that it confused rational forward pricing with dumping (with a predatory intent) and that, importantly, it unintentionally encouraged the formation of a Japanese semiconductor cartel, first under the administrative guidance of the Japanese government’s Ministry of International Trade and Industry (MITI) and later as a purely private affair among Japanese semiconductor firms.

Flamm does an excellent job of proving that this is indeed what happened by analyzing a variety of data series on prices and costs and juxtaposing this with summaries of press reports and interview data. He shows, in particular, that there were wide regional differences in spot market prices in North America, Western Europe, and Asia that probably had their origins in the reduced investments in Japanese productive capacity engineered first by MITI and later by the industry itself to defuse the trade dispute.

The cartel imposed major costs on U.S. and Japanese consumers and on U.S. firms that were heavily dependent on Japanese components for finished products by raising the prices they had to pay for DRAMs. Although Japanese semiconductor firms enjoyed higher profits, especially after demand revived in 1988, Flamm argues that the net benefits to Japanese semiconductor producers that came from higher prices were much less than the net costs to final equipment producers and consumers of that equipment. In short, Flamm says, this did not have to happen and would not have happened if the U.S. government had not pushed for the STA, which gave MITI the chance to promote a cartel.

Flamm acknowledges that U.S. semiconductor firms increased their market share in Japan after 1986, so this part of the STA was a success. He shows, however, that greater U.S. access to the Japanese market was not due to a shift in Japanese demand toward products that U.S. firms specialized in, as some critics of the STA argued, but rather that there was an across-the-board improvement in U.S. sales of all types of devices. If the increase in U.S. exports to Japan had been purely a result of increased demand for products such as microprocessors, where U.S. firms had a clear competitive advantage, then it could be argued that the STA had nothing to do with increased exports. Nevertheless, it is still possible that other factors, such as the creation of Sematech (a U.S. research-and-development consortium funded jointly by the government and industry to support the development of state-of-the-art semiconductor production technologies), were primarily responsible for improved export performance.

Prescription for change

On the basis of his analysis of the semiconductor dispute, Flamm recommends three main policy changes: (1) using marginal costs rather than average costs as the basis for antidumping rulings; (2) encouraging stricter enforcement of antitrust laws in foreign countries; and (3) increasing the number of countries involved in future, similar negotiations as a means of developing multilateral rules for high technology more generally. All of these recommendations are worthy of serious consideration, with the third the most likely to be successfully implemented.

The first recommendation makes sense from the standpoint of economic theory, but Flamm himself acknowledges in his book that it is “always difficult to find data that allow one to say anything reasonable about marginal cost.” In his research for the book, Flamm had to go to considerable lengths to assemble the price data series and production models that he used to measure marginal costs. If a fine economist like Flamm has trouble marshalling credible data on marginal costs, think of the problems the Commerce Department might have. Still, if this recommendation were implemented, it would make it more difficult for the enforcers of antidumping laws to rule in favor of antidumping petitions, especially in high-technology industries and might thereby prevent unnecessary and undesirable trade frictions among the major producing nations. Since antidumping laws and petitions have proliferated in recent years, this recommendation merits careful study.
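
To see why the choice of cost standard matters, here is a second minimal sketch, reusing the illustrative 80 percent learning curve from above (again with invented numbers, not Flamm’s data). When average cost is falling, marginal cost lies below it, so a price can fail an average-cost test while passing a marginal-cost test:

```python
# Hypothetical comparison of average-cost and marginal-cost dumping tests.
import math

FIRST_UNIT_COST = 100.0
LEARNING_RATE = 0.8                # illustrative 80 percent experience curve
B = math.log(LEARNING_RATE, 2)     # negative exponent

def average_cost(q):
    return FIRST_UNIT_COST * q ** B

def marginal_cost(q):
    # Total cost is C(q) = q * average_cost(q) = FIRST_UNIT_COST * q**(1 + B),
    # so marginal cost, its derivative, is (1 + B) * average_cost(q).
    return (1 + B) * average_cost(q)

q, price = 100.0, 18.0
print(f"average cost:  ${average_cost(q):.2f}")    # about $22.71
print(f"marginal cost: ${marginal_cost(q):.2f}")   # about $15.40
print("below average cost (current standard):", price < average_cost(q))    # True
print("below marginal cost (Flamm's standard):", price < marginal_cost(q))  # False
```

Under these made-up numbers, an $18 price would look like dumping by the average-cost standard yet would pass the marginal-cost standard Flamm recommends, which is why adopting it would make affirmative rulings harder to reach.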

The problem with pressuring foreign governments to enforce antitrust laws—to prevent the formation of cartels—is that there is no multilateral forum for such efforts. Thus, bilateral disputes inevitably occur. According to Flamm, “Foreign companies can go to national authorities with complaints, but if anticompetitive behavior is tolerated by custom or law, or if national laws are selectively enforced by national authorities, or if bureaucrats issue undocumented guidance to manufacturers, there is no framework for resolving grievances except government-to-government negotiation.” Still, it is quite likely that U.S. pressure on Japan and Western Europe to enforce antitrust laws that are already on their books has had the desirable effect of increasing the bargaining power of local supporters of stronger enforcement. It is always helpful when battling for domestic reforms if one can point to some form of international pressure or support.

Flamm’s third recommendation—that U.S.-Japanese semiconductor negotiations be multilateralized—should be urgently heeded. The General Agreement on Tariffs and Trade and its successor, the World Trade Organization (WTO), have not begun to adequately address problems posed by trade in high-technology products. Even after the Uruguay Round of trade negotiations, WTO has remained silent on issues involving antidumping laws and the relationship between trade and antitrust enforcement.

Going overboard

The book does have a few flaws. First, Flamm tries too hard to score points against other scholars—most notably Laura D’Andrea Tyson, chair of the President’s National Economic Council—sometimes stretching his arguments too far in the process. The title of the book suggests that he is going to present an argument against “managed trade” or what Tyson calls “cautious activism” in her book, Who’s Bashing Whom? However, a lot of evidence presented in Flamm’s book vindicates important parts of Tyson’s argument—for example, the importance of bargaining hard to open up foreign markets to U.S. exports and of pressuring foreign governments to beef up enforcement of domestic antitrust laws.

Flamm also dismisses too easily the idea that the semiconductor industry should be considered strategic—and thus more worthy of government support—because of its technological linkages to other important industries. In a somewhat self-contradictory manner, Flamm acknowledges the potential importance of technological spillovers or externalities and favors policies to promote domestic industries that generate such externalities independently of the strategies of foreign firms and governments. But he continues to oppose any serious effort to identify strategic industries or to institutionalize programs that provide public support to those industries on the basis of technological linkages.

Mismanaged Trade is a provocative book that will help to promote a more meaningful debate about the politics and economics of high-technology industries. The reader may find the book a bit long-winded and tiring in parts—thoroughness being sometimes the enemy of readability—but will emerge at the end with a better understanding of some of the key issues that governments have been grappling with in recent years. No future discussions of the semiconductor industry and its relation to the politics of competition in high technology can ignore this book.

The Air Force at a Crossroads

On the night of January 16-17, 1991, the United States launched an air war against Iraq after diplomatic efforts to end that country’s invasion of Kuwait had failed. U.S. air and naval forces employed stealth aircraft, long-range cruise missiles, and precision-guided “smart” munitions (PGMs) for the first time together in substantial numbers. The results were devastating. The Iraqi air defense network was quickly disabled, and the Iraqi leadership’s command and control of its forces was ruptured. Iraqi aircraft could not survive in the air or even in hardened shelters on the ground; many simply abandoned the fight and flew to safety in Iran. Although the effectiveness of U.S. PGMs was not as great as originally believed, the overall accuracy of the weapons was a vast improvement over their “dumb” ancestors.

This lopsided air war led some experts to conclude that a military revolution had occurred and that air power had led the way. Italian military theorist Giulio Douhet’s 70-year-old vision of air power’s ability to win wars seemed a reality at last. Other experts, however, including some U.S. Air Force leaders, viewed the war’s outcome in an entirely different way: Instead of the culmination of a military revolution, the Gulf War represented only the beginning of a period of increasingly rapid technological and geopolitical change that will confront the Air Force with challenges far different from, and far more formidable than, those that were faced in the skies over Iraq. If this latter vision is correct, as I will argue it is, in the relatively short span of 20 years, the U.S. Air Force will need to dramatically transform itself from its current reliance on manned aircraft to a new emphasis on, among other things, space operations and unmanned aerial vehicles (UAVs).

What kind of Air Force the United States will require a generation from now is a critical question that needs to be examined today. With the Cold War’s end, the Air Force is facing greater uncertainty than it has ever known. Although the world is a far less threatening place than it was during the Cold War, the challenges to national security will almost certainly increase substantially over the next 10 to 20 years as new and improving technologies make possible dramatic changes in all aspects of military planning and operations. In addition, the Air Force is now entering a period of modernization and will need to invest increasingly scarce defense resources wisely. It takes years to develop and field new military systems. If the Air Force chooses poorly now, it may be difficult, if not impossible (and certainly very expensive), to create a different kind of force on short notice later. Thus, the Air Force needs to examine whether its planned purchases of up to $133 billion in new combat aircraft will displace investments in military equipment and systems that may be equally or even more crucial for future needs.

Forging a vision

To prepare for a world 20 years hence, the Air Force first needs a vision of its future operating environment and the challenges it will pose. Unfortunately, the Clinton administration has thus far succumbed to the temptation to view future conflicts simply as linear extensions of more recent ones. Its 1993 bottom-up review (BUR) assumes future enemies with forces and operations similar to Iraq’s in 1991. Yet the Pentagon’s toughest future competitors are not likely to be updated versions of Saddam Hussein’s Iraq. Rather, the greatest challenges that could emerge would result from the erosion of great-power relationships, the proliferation of weapons of mass destruction, and the diffusion of information-based military technologies. Indeed, for potential competitors, the cardinal lesson of the Gulf War is to avoid confronting the Air Force as the Iraqis did. Competitors will probably be unable to match the U.S. military by pursuing a symmetrical competitive path-by copying the U.S. Air Force, for example-but they may not need to, given asymmetries in security objectives, mission requirements, geography, and strategic culture.

There are indications that the Pentagon is beginning to come to grips with its vision problem. A congressionally mandated Quadrennial Defense Review has been initiated to assess a broader array of challenges than were addressed in the BUR; and General John Shalikashvili, chairman of the Joint Chiefs of Staff, has published Joint Vision 2010, his view of the long-term challenges facing the U.S. military. In Global Engagement, a report released in November 1996, Air Force leaders state that a major transformation will be needed if the service is to retain its current relative advantages. The report says that the Air Force will have to transform itself from an air force to an air and space force and finally to a space and air force. But will the Air Force act on its vision? History is replete with examples of military organizations that witnessed discontinuous change yet continued to rely on the tried-and-true methods that had brought them success in the past, even as the effectiveness of those methods rapidly declined.

During the next 20 years, the Air Force may well find itself confronting challenges with a far greater scale and level of diversity than those envisioned in the BUR. Strong historical patterns suggest that, without inspired diplomacy supported in part by a well-crafted defense program, a resumption of military competition among great powers will occur. Put another way, it seems unlikely that we will enjoy a Pax Americana. Periods of protracted military dominance and peace, such as the Pax Britannica or Pax Romana, generally coincided with a single power’s economic dominance, and even those relatively peaceful eras saw periods of large-scale conflict. Yet the United States’ economic edge is nowhere near as great as was Britain’s during its period of dominance; indeed, in some important regions, especially East Asia, the U.S. advantage is progressively eroding.

Moreover, neither the Pax Romana nor the Pax Britannica lasted indefinitely. Today, new great powers such as China seem poised to emerge. Russia will likely recover from recent setbacks. Added to the mix is the growing danger to the United States that is posed by the proliferation of weapons of mass destruction. The Pentagon estimates that more than 25 countries, including North Korea, Iran, and Libya, either have chemical, biological, or nuclear weapons or are actively attempting to develop them. Although a strong U.S. military can help avoid a resumption of great-power competition and stem the proliferation of weapons of mass destruction, the military services must also hedge against the possibility of failure.

Geopolitical change is occurring at the same time as a military revolution is emerging. Military revolutions are characterized by major, discontinuous leaps in the effectiveness of military organizations within a relatively short period of time, typically a few decades, and usually comprise four elements. First, rapidly emerging technologies make new military systems possible. When accumulated in sufficient numbers, these systems then provide a commander with new tools to solve strategic and operational problems. Next, new concepts are created for applying these tools. Finally, military organizations are restructured, creating new organizations to execute the new concepts using the new tools. The combination of these elements can provide an explosive growth in military capabilities.

The emerging military revolution appears to be driven by rapid advances in information and information-related technologies. These technologies have already triggered revolutionary changes in business corporations and seem poised to have a comparable effect on military organizations. The emerging military revolution will likely see the Air Force rethinking basic concepts in at least the following five areas:

Information superiority. Joint Vision 2010 notes that “the emerging importance of information superiority will dramatically impact on how well our Armed Forces can perform its [sic] duties in 2010.” Correspondingly, the Air Force has declared information superiority to be one of its core competencies. Information warfare is concerned with attacking, defending, and exploiting information and information systems to establish information superiority; that is, a pronounced gap or advantage over one’s adversary in terms of information pertaining to friendly and enemy military forces, political leadership, and social and economic structures. Information superiority is likely to play a crucial role in determining the effectiveness of military forces by reducing the fog of war for friendly forces while increasing it for the enemy.

The struggle for information superiority is likely to produce two principal areas of competition. One will be a competition between “hiders” and “finders.” Information technologies are fueling a rapidly growing potential to detect, identify, and track a far greater number of targets over a far greater area and for much longer periods of time, as well as to order and move that information far more effectively than ever before. Military leaders are now talking about the development of reconnaissance architectures that link numerous systems-satellites, unmanned aerial vehicles, remote sensors, and individual soldiers-into an information web that makes it possible to “see” all aspects of a battle. At least this is the goal of Joint Vision 2010, which discusses “full-spectrum dominance” and “dominant battlespace awareness.”

The Air Force should vigorously experiment with, test, and evaluate a wide range of new systems including methods for controlling space.

Of the four military services, the Air Force will likely be first among equals in the effort to establish information superiority and reap its benefits. However, this will probably not be a one-sided competition. In many instances, the “hiders” among potential adversaries are likely to make information superiority an illusory goal. To avoid detection, they will disperse forces and equipment; build more facilities deep underground and make them more difficult to penetrate; and rely on greater mobility, deception, and stealth. In short, the future battle for information superiority may resemble the long-term hide-and-seek competition between convoys and submarines that characterized the Battle of the Atlantic during World War II.

Indeed, although the U.S. military seemed to have a decisive information advantage during the Gulf War, the fog of war persisted. Despite prodigious efforts by U.S. forces to locate Iraqi Scud mobile missile launchers, the great Scud hunt failed to produce a single confirmed kill. Attempts to locate Iraqi facilities housing weapons of mass destruction were only partially successful. And the Iraqi leadership itself proved to be a fleeting target; the U.S. military was never able to bring the war directly home to Saddam Hussein. Finally, efforts to assess the damage inflicted on Iraq from the air were far from precise, leading to spirited debates on the eve of the ground offensive over how many Iraqi tanks had been disabled by air strikes. If the Air Force is to play a dominant role in the U.S. military’s efforts to achieve information superiority, it will have to improve substantially on its Gulf War performance.

Strategic strike. The second major area of competition will be between long-range precision-strike forces-strike platforms armed with PGMs and ballistic and cruise missiles incorporating precision-guidance accuracy-and active and passive defenses, including dispersion, stealth, and air and missile defenses. The emerging military revolution offers the potential to engage a far greater number of targets, over a far greater area, in far less time, and with far greater precision, lethality, and discrimination than ever before. Moreover, military revolutions typically give an advantage to the offense, at least initially. For the Air Force, this competition offers opportunities and challenges.

First, the Air Force will have the opportunity to exploit information superiority (assuming it can be achieved) by conducting a long-range precision-strike campaign against an adversary. The potential of such a campaign to provide an early low-cost victory could prove irresistible to military organizations capable of developing and integrating reconnaissance and strike systems architectures. Rapid strikes against an adversary could be mounted by using airborne and space information systems to provide real-time targeting information to long-range, precision-guided conventional munitions from land-, sea- and air-based sources. If the attacker can create an information gap (he knows more than the adversary does about the battle space), he may be able to destroy or disable the adversary’s center of gravity (the set of targets whose disabling will break the enemy’s ability or will to block friendly forces from achieving their military objectives) without engaging and defeating his military forces. However, it may not be possible to execute decisive strategic strikes, particularly if the defender can sustain enough information infrastructure to support an integrated defense.

A greater challenge may be the long shadow cast by nuclear weapons over strategic strike operations. The mere possession of even a modest nuclear arsenal may insulate a state against potential strategic strikes. Indeed, less competitive military organizations may be attracted to nuclear capabilities as a deterrent to nonnuclear precision-strike weaponry.

The Air Force needs to reconsider at least some of its planned $133 billion investment in new combat aircraft.

Power projection and air superiority. Joint Vision 2010 declares that “power projection, enabled by overseas presence, will likely remain the fundamental strategic concept of our future force.” Yet power projection in the traditional sense may no longer apply as this military revolution matures. The U.S. military’s long-range precision-strike monopoly has not been deeded to it in perpetuity. Other competitors are almost certain to try to exploit this new capability, given that they will have increasing access to space platforms for communications, imagery, and guidance purposes, as well as to ballistic and cruise missile technology and stealth technology. General Ronald Fogleman, the Air Force chief of staff, succinctly stated the challenge when he observed that in the not-too-distant future, “Saturation ballistic missile attacks against littoral forces, ports, airfields, storage facilities, and staging areas could make it extremely costly to project U.S. forces into a disputed . . . [region], much less carry out operations to defeat a well-armed aggressor. Simply the threat of such enemy missile attacks might deter the United States and coalition partners from responding to aggression in the first instance.”

How would the Air Force respond to an enemy that chose to field a missile force in lieu of an air force? Denied unfettered use of ports and airfields by enemy ballistic- and cruise-missile forces, the U.S. military would have to restructure itself to maintain an effective power projection capability. The “spear tip” of this capability might be centered on submersible strike and amphibious assault ships, long-range stealth bombers or weaponized unmanned aerial vehicles (UAVs), and stealth cargo aircraft capable of landing on less sophisticated airfields (so as to increase the number of potential targets an enemy would have to consider) or of conducting precision air drops of supplies instead of landing at all.

In this environment, future theater air operations will likely be characterized by an increased emphasis on long-range precision strikes, UAVs and weaponized UAVs, and electronic and information strikes. Moreover, establishing command of the air will be less a matter of clearing the skies of enemy manned aircraft and more of denying the enemy the use of his long-range precision assets, such as ballistic and cruise missiles, and suppressing his information, air, and missile defenses. A priority might be placed on achieving information superiority, in part through long-range precision strikes against the enemy’s information systems. Similar attacks could be initiated against the enemy’s long-range precision-strike architecture and air and missile defense networks. If U.S. air forces must be deployed forward before the enemy’s missile forces can be neutralized, they will have to find ways to offset their vulnerability. These might include a combination of increased alert levels, hardened shelters for aircraft, decreased reliance on traditional tactical aviation, and tactical aircraft based on a theater’s periphery. In sum, the U.S. military’s long-term heavy reliance on tactical aviation may experience a profound transformation.

Space control. Space-based systems are essential to the Air Force’s future effectiveness, particularly for reconnaissance, surveillance and intelligence gathering, battle management, communications, position location, terminal guidance, and battle damage assessment. Indeed, space is becoming inextricably linked to war on land, at sea, and in the air. If historical patterns hold, advanced military organizations will try to establish control over space as an essential element for prevailing in the “hider-finder” and “offense-defense” competitions. Lesser competitors may settle for a space-denial capability. For example, they might employ a direct-ascent antisatellite weapon or perhaps detonate a nuclear weapon in space. A race to control space may well be followed by the use of weapons to deny access to space (such as antisatellite weaponry) and, thereby, information flows. This in turn could lead to putting weapons on satellites and other space vehicles for use against satellites and targets on land and at sea.

Commercial or neutral-country satellites could acquire special significance in a battle for control of space. Lesser powers might be able to move their precious cargoes of information along the electronic highways of space in these “neutral bottoms.” If warring powers choose to wage unrestricted space warfare or establish an information blockade in space, they may risk provoking neutral powers into a state of belligerence. These challenges may be particularly profound for the Air Force, which expects to rely on commercial satellites, many of which will be operated by multinational consortia, for the bulk of its space communications.

The future Air Force

Although the Air Force has successfully proven its dominance of the current war-fighting regime, it must begin now to make the changes necessary to produce the very different kind of Air Force that will be needed 20 years from now. Given the reality of modest near-term threats, the long-term potential for a major threat, an emerging military revolution, and tight funding, the Air Force’s best bet would be to adopt a smaller force structure and, for the moment, buy fewer new weapon systems than currently planned in order to free up funding for developing the capabilities emerging from the military revolution.

Modernizing the Air Force should not just be about producing a few new systems in large numbers to do current missions better. Rather, the modernization process should reflect the Air Force’s emphasis on preparing for long-term threats, hedging against the real possibility that the Air Force’s future vision could be wide of the mark, and developing a flexible organization that is ready to react to unforeseen challenges. Specifically, the Air Force should strongly consider the following:

Adopt a “hedging” approach. At a time when defense budgets are tight and the danger to U.S. security relatively low, a strategy that would better prepare the Air Force to face the future will require accepting some increase in near-term risks in order to hedge against the emergence of far greater longer-run challenges. This will require a new approach to managing the Air Force’s “capital stock.” The Air Force should focus on buying strategic “options” for new capabilities that support its future vision and can be exercised if future needs warrant. This implies minimizing serial production runs of new systems, except when the system solves what is perceived as a longer-run major operational or strategic problem or when the new system replaces old systems with capabilities that are essential to the Air Force of 20 years hence. A greater emphasis should be given to experimenting with limited numbers of emerging systems. The goal should be to avoid producing systems whose value may depreciate precipitously, while facilitating experimentation with capabilities that exploit rapidly emerging technologies.

Make selective divestments. The Air Force should shed assets that perform functions whose relative importance 20 years from now will be substantially lower than it is today or that can be performed by other services or allies. Above all, the Air Force should reduce its current heavy reliance on theater-based, manned tactical air systems. Indeed, the Air Force’s ability to confront this issue is crucial to a successful transformation, not only because the bulk of the resources that could be freed up for new investments are here but also because success in this area would mean that the Air Force’s dominant culture-its tactical air force-accepts the need for major change.

The Pentagon is now embarking on an ambitious, multidecade, tactical aircraft modernization program. In FY1997, the Navy will begin procurement of the F/A-18E/F fighter for its carrier fleet. Next year, the Air Force will start buying the F-22 fighter, the successor to the F-15. Finally, around FY2005, the Navy, Marine Corps, and Air Force plan to start procuring the Joint Strike Fighter. Between now and 2013, the services plan to buy as many as 4,416 of these three aircraft at a cost that may exceed $350 billion (in FY1997 dollars).
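
To put these figures in perspective, a simple back-of-the-envelope calculation can be sketched in a few lines of Python. Only the 4,416-aircraft and roughly $350-billion totals come from the plan described above; the result is an average across all three programs, not a per-type cost.

```python
# Rough arithmetic on the tactical aircraft modernization plan described above.
# Only the totals (4,416 aircraft, up to ~$350 billion in FY1997 dollars) come
# from the text; the result is an average across all three aircraft programs.

planned_aircraft = 4416          # F/A-18E/F, F-22, and Joint Strike Fighter buys through 2013
program_cost_dollars = 350e9     # upper-bound program cost, FY1997 dollars

average_cost = program_cost_dollars / planned_aircraft
print(f"Average program cost per aircraft: ${average_cost / 1e6:.0f} million (FY1997 dollars)")
# Prints roughly $79 million per aircraft, averaged across the three programs.
```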

Whether all of these tactical aircraft investments are the best way to invest the Pentagon’s modernization funds is doubtful. As suggested in Joint Vision 2010, future asymmetric aggressors can be expected to shift away from combat aircraft and toward ballistic and cruise missiles of ever-increasing range, accuracy, and lethality. The effectiveness of this strategy will rest on its ability to deny U.S. tactical aircraft access to the forward bases they require to conduct operations. Although U.S. missile defenses may improve, they are unlikely to be able to withstand large-scale missile attacks against critical targets such as major air bases, particularly as stealthy cruise missile technology improves and becomes available to more countries.

The loitering and stealth capabilities of unmanned aerial vehicles make them ideal strike platforms.

To meet the challenges of the emerging military-technological revolution and live within its increasingly tight budgets, the Air Force should strongly consider reducing its force structure while pursuing a selective modernization program that balances the need to maintain military capability today with the need to develop new systems for the future. For example, the Air Force could buy its planned force of 438 advanced F-22 fighters for about $10 billion less than a force of the same size divided evenly between F-22s and the less capable Joint Strike Fighter. A selective modernization program, especially with air platforms exploiting improved capabilities such as precision-guided munitions, offers a hedge against short-term risks.

Engage in vigorous experimentation, testing, and evaluation. As funds are freed up, the Air Force needs to experiment with new systems and operational concepts to determine those that it will require in 20 years. The initial priority should go to systems such as long-endurance stealthy reconnaissance and weaponized UAVs. The loitering and stealth capabilities of UAVs could make them ideal strike platforms, and their range may substantially reduce the Air Force’s dependency on vulnerable theater bases. Moreover, removing the pilot would also enable aircraft designers to build far smaller, more maneuverable, and cheaper fighters (at perhaps 10 to 20 percent of the cost of a next-generation manned fighter). The Air Force should also increase funding for development of stealthy, long-range cargo aircraft for resupplying forward-deployed forces in instances where major bases are at risk of a missile attack by enemy forces.

Other candidates for development and experimentation would include small satellites, a rapid satellite-launch capability, and active and passive methods for controlling space in the event that an adversary tries to introduce weapons into space. The Air Force should also place increased emphasis on winning the battle for information superiority through improved information-based capabilities, such as enhanced stealth, electronic-strike, and defense capabilities. In addition, the Air Force should move toward integrating long-range precision-strike, command, control, communications, and intelligence systems into a systems architecture, and it should increase its emphasis on advanced precision-guided munitions.

This process of experimentation and innovation will be critically important, as military effectiveness is likely to depend on the ability to integrate, at high levels of proficiency, systems architectures comprising military systems from all services as well as allied forces. Moreover, it needs to begin now, since large-scale transformations typically take a decade or more to complete.

The experiments can also be done relatively cheaply, because simply demonstrating a capability does not have to lead to large-scale procurement of a major new system. In any event, unless a substantial threat to the United States emerges, no large buildup will likely be necessary. Further, because technology is changing so rapidly, it would be unwise to buy new systems that could quickly become obsolete. What is most important is that we have key new capabilities on hand that can be quickly fielded if the security environment changes for the worse.

Given its present budget difficulties, the Air Force will probably not be able to sustain its existing force structure and its recapitalization plans over the long term. But as its current program seems ill-suited for the competitive environment it will likely face in 20 years, this is not necessarily a problem. Fortunately, in its Global Engagement report, the Air Force appears to have recognized the scale of the changes that will be needed. But enunciating a vision is not the same as seizing it and undertaking an actual transformation. Moving forward will require the Air Force to resist a temptation common to highly successful organizations that have dominated their competition: the belief that future success resides in uncritically applying the means and methods that ensured past success. Domination of the skies has been essential to the U.S. way of warfare for the past 50 years. The prospects for maintaining that dominance in the next 50 could well rest on the Air Force’s embrace of a new age of air power, one that is likely to require a very different kind of Air Force. The challenge is as formidable as any the Air Force has ever faced.

In Defense of Environmentalism

The Betrayal of Science and Reason is the most important rejoinder to date to the “brownlash” (as the Ehrlichs call it) of anti-environmental writing. The bulk of the book is devoted to a systematic refutation of the main theses of the anti-environmental crusade. As such, it is indispensable reading for everyone concerned with the environmental debate, especially those who believe that the global ecological crisis is a mirage. However, although the book is successful in advancing the green position and discrediting the brown, it is disingenuous on the subject of environmentalism’s own streak of antiscientific bias. If the strengths of the book lie in what the Ehrlichs say, its problems arise from what they leave out.

The book’s primary aim is to document the fact that a widespread scientific consensus now exists on a series of ecological issues, including the dangers of global warming, stratospheric ozone depletion, population growth, and the worldwide extinction crisis. In this, the authors are strikingly successful. While acknowledging progress on some fronts, the Ehrlichs argue compellingly that the overall outlook remains grim. Some of their opponents’ cases are thoroughly demolished. For instance, Julian Simon’s credulous contention that demographic expansion can safely continue for the “next 7 billion years” is soundly rebutted. The anti-environmentalists have a stronger case in regard to climate change, but here too the authors assemble compelling evidence that the problem is real. On the subject of toxic wastes and their effects on human health, however, the outcome is less clear. Such reputable scholars as Bruce Ames and the late Aaron Wildavsky have made a strong case that trace amounts of toxic substances are not necessarily harmful. Although there are good reasons to side with the Ehrlichs and advocate a precautionary approach, a scientific consensus on this issue has yet to be reached.

Why the brownlash works

The authors also undertake a secondary project: to explain why the brownlash has been so successful. Although they deny the existence of a grand conspiracy, the Ehrlichs do see vested interests at work. Chemical, oil, paper, mining, and forestry companies, fearing expensive environmental regulation, have sought to discredit the scientific foundation on which such reforms rest. Their typical strategy is to deplore environmental emotionalism and champion dispassionate science in the abstract, while promoting selected scientific findings that support their own anti-environmental stance. In the worst cases, the Ehrlichs imply, phony findings are produced by dishonest researchers seeking fat contracts and consulting fees. More often, however, the brownlash operates by promoting the views of a few legitimate scientists without mentioning that their positions contradict a broad scientific consensus. Contrarian environmental science must be taken seriously, the Ehrlichs argue, but it cannot stand alone as the basis of reasonable environmental policy.

The Ehrlichs also maintain that journalists often act, wittingly or not, as accomplices in the brownlash. In story after story, contrarian views are portrayed out of context, with no explanation given of their lack of credibility within the larger scientific community. Journalists often push anti-environmental positions for their novelty value. “Controversy, exaggeration, and scandal sell,” the Ehrlichs say, while “stories about the gradual deterioration of our environment do not.”

Readers of this calmly argued book are likely to conclude that the environmental movement, unlike its opposition, rests squarely on science and reason. Such a portrayal, however, is based on a highly selective representation of contemporary U.S. environmentalism. There is no discussion, for example, of how pro-environmental journalism itself is often guilty of controversy, exaggeration, and scandal. The Ehrlichs ought to know better; they themselves have not always been above promoting eco-alarmism by resorting to exaggeration. Even in the present work, which is admirably restrained overall, there are traces of this rhetorical strategy. Consider the treatment of Gregg Easterbrook’s A Moment on the Earth-hardly a hard-core anti-environmental work. In devoting most of a detailed appendix to cataloging Easterbrook’s errors, the Ehrlichs simply go overboard.

Most of The Betrayal of Science and Reason strikes a deliberately reasonable tone, attempting to make sweeping environmental reforms palatable to a broad spectrum of the electorate. The authors praise the market system, uphold capitalism, and support market-oriented approaches to pollution reduction. They also take corporate environmentalism seriously, lauding such an unlikely candidate as Monsanto Corporation. Advancing a surprisingly moderate political agenda, the Ehrlichs imply that the common depiction of environmentalists as anticapitalist radicals is little more than a brown delusion. The market must be restrained-as Adam Smith himself recognized-but it is an indispensable foundation for a sustainable economic order. The authors also praise economists who consider environmental issues and criticize those who attempt to drive a wedge between economists and ecologists. Sensible environmental regulation, they repeatedly argue, in no way threatens the U.S. economy.

Half-hearted embrace

The Ehrlichs’ embrace of capitalism, however, remains half-hearted. They are still suspicious of economic growth and harshly criticize conventional economic analysis. More important, they never fundamentally disavow their own well-established antigrowth positions. Although they admit a few previous errors (for instance, they acknowledge underestimating the potential for technological innovation and substitution and say that they have been “remiss in not emphasizing sufficiently what good news there is”), in general they refuse to retract. As recently as 1990, these authors contended that “economic growth is the disease, not the cure” (emphasis in original) and that the United States ought to “return to . . . handwork, [de-emphasizing] mass production in both industry and agriculture.” With the Ehrlichs attacking the very foundations of the modern economy, it is hardly surprising that brownlash writers have concluded that environmentalism, of the Ehrlich variety, is incompatible with economic prosperity.

Remarkably, the authors stand by even the famous opening lines of their early book The Population Bomb: “The battle to feed all of humanity is over. In the 1970s and 1980s hundreds of millions of people will starve to death in spite of any crash programs embarked upon now . . .” This prediction, they claim, was substantially correct; hundreds of millions remain hungry and many continue to starve. Hunger is indeed still with us; moreover, the “population bomb” is still ticking, and the fact that increasing human numbers are destroying biological diversity is virtually unassailable. But The Population Bomb predicted an imminent ecological apocalypse that simply did not occur. Food production has in fact kept pace with population growth over the past three decades, making the Ehrlichs’ stubborn defense puzzling. Surely they would be better off admitting that their earlier warnings were grossly overstated.

Crying wolf too often?

Instead, they argue that alarmism has direct benefits. Their own dire predictions, the Ehrlichs contend, may have helped reduce the intensity of famine by encouraging the development of relief efforts. This point is important. Mythology tells us that Cassandra’s fate was to be always right but never believed; in the modern world, it seems to some that Cassandras are always believed but never right. But if doomsayers are heeded, their messages can help avert the very disasters that they foresee. The apocalyptic environmental writing of the 1960s and 1970s inspired the legislation that has brought us the (limited) good news that we can now celebrate. The problem is that this tactic becomes self-canceling when used to excess. Foretelling a population bomb made sense in 1968, but writing of an already detonated bomb, as the Ehrlichs did in 1990, was clearly an exercise in crying wolf. Additional credibility has since been lost in ill-advised eco-crusades against such minor dangers as electromagnetic fields. The Ehrlichs now argue, with good evidence, that hormone-mimicking chemicals present a massive environmental threat, but how many will listen this time?

For the most part, the Ehrlichs’ present tactic is to sidestep alarmism and instead defend a sober environmentalism founded on science and reason. What the Ehrlichs do not acknowledge is the unflattering evidence that antiscientific attitudes have infected large segments of the environmental movement. A number of highly influential environmental philosophers and activists actually equate the pursuit of science with the “death of nature” and contend that reason itself has estranged us from the natural world, setting us on a path to sure destruction. Although such views have not infiltrated the laboratories of environmental scientists, they have spread widely through the green political community.

Remarkably, one finds nothing of this in the present book; the environmental movement depicted here is based entirely on science and reason. Why have the Ehrlichs ignored the pointedly antiscientific rhetoric of much eco-radical writing? Being extremely well-read, they cannot possibly be unaware of this literature. It is also doubtful that they find it too trivial to merit attention, for the brownlash writings that the Ehrlichs take on derive much of their strength from arraying themselves against the irrationalist and neo-Luddite sentiments of the more radical greens. Only in this way can extreme anti-environmentalists assume, however inappropriately, the mantle of science.

My only guess is that the authors chose to overlook antiscientific sentiments within the environmental movement out of a desire not to offend its more radical members. The Ehrlichs have long demonstrated great courage in combating anti-environmentalists, but it is another thing altogether to take on one’s partners in the struggle for change. Were they to have condemned environmentalists’ antiscience rhetoric, they would quickly have been branded as traitors to the movement. I know from experience how unpleasant that can be. Regrettably, however, the resulting lacuna prevents them from fully explaining the success of the brownlash movement. Environmentalism can persuasively be portrayed as wildly out of touch with U.S. life precisely because scores of recent books argue that science lies at the root of the global crisis, that high technology must be abandoned, that the market is inimical to nature, and that capitalism by its own immutable logic will destroy the earth.

Polls consistently show that environmentalism still commands broad support in the United States. But if the movement is to go beyond symbolic acts and rearguard actions, it will have to reconsider the political strategies and rhetorical modes that have been its mainstays. Specifically, it must get in step with the public’s race to the political center. For example, the Ehrlichs’ main policy prescription is the imposition of steep carbon taxes on fossil fuels, which they must know is a nonstarter in today’s political climate. Yet they never discuss how to make this strategy palatable-say by pairing it with an income tax cut, especially for the middle class. Staking out such a centrist position, which can appeal to moderate conservatives as well as traditional liberals, may strike many committed environmentalists as weak-kneed defeatism, but it offers the only real hope for enacting the deep reforms that are needed. Reaffirming environmentalism’s historical alliance with science and reason is necessary for the development of a genuine eco-realism, even if it involves alienating the self-proclaimed deep ecologists who see the scientific revolution as humanity’s original sin. The Ehrlichs have taken a major step in this direction. I only wish that they would finish the journey. In doing so, they might lose a few friends and suffer some nasty insults, but they would greatly help the cause to which they have courageously devoted their careers.

Time to Restructure U.S. Defense Forces

With the election behind us, the Pentagon is gearing up to conduct a congressionally required top-to-bottom review of its six-year, $1.5-trillion program. In doing so, it will confront a number of difficult decisions about the size and structure of the armed forces. The United States must continue to field large, diverse forces to carry out today’s missions. Yet at the same time, we need to spend a substantially greater amount than we do today on new systems, both to meet the emerging challenges posed by potential adversaries and to renew the military’s existing capital stock. In an era of tight resources and competing priorities, it is unlikely that future budgets will be sufficient to support the force structure we field today.

The structure of today’s forces leaves little margin for error: “less of the same”–a smaller version of today’s force–will not be enough to support the ambitious national security strategy that this nation must pursue if the world is to evolve in ways favorable to our interests. Investments in new technology and operational concepts can, however, pave the way for a smaller, yet more effective fighting force. If defense planners are willing to restructure the armed forces–rather than simply cutting them across the board–they can free up resources that are badly needed to modernize the U.S. military and equip it to meet the most serious challenges it is likely to face in the future. To do so, however, will require foresight, leadership, and a willingness to challenge powerful bureaucratic and political interests.

Maintaining a “win-win” strategy

One can argue that the United States’ most important export is not aerospace products, computers, or agricultural commodities but security. Following World War II, this country emerged as the sole power that could extend credible security guarantees to other nations threatened by large-scale aggression. And in the post-Cold War world, the importance of American alliances has not diminished. For better or worse, only the United States today has the capacity to organize effective responses to regional threats such as those posed by Iraq, Iran, North Korea, China, and the nations of the former Yugoslavia.

To maintain our role as a credible security partner, U.S. forces must be capable of conducting a wide range of operations, from deterring or defeating overt aggression to intervening in local disputes and fighting international terrorism. As a nation with important interests at stake in multiple regions, the United States needs the capacity to handle more than one conflict at once. As a result, U.S. strategy in the post-Cold War era has called for maintaining a force capable of fighting and winning two major regional conflicts nearly simultaneously.

Among defense professionals, the conventional wisdom holds that today’s forces are already very close to the margin in terms of this capability, and some conservative critics have questioned whether they are actually adequate to the task. If this is correct, then any cuts in U.S. forces would mean abandoning the “two-MRC” posture. Alternative goals might be maintaining the capability to fight “one and a half wars” (a major war and a small-scale conflict) or pursuing a “win-hold-win” strategy (maintaining the capability to fight a war in one region while holding off aggression in another until the first operation is complete).

Adopting a more modest approach to U.S. force structure, however, could inflict serious damage on the United States’ international standing. None of our allies would want to depend on an alliance with a country that, once engaged in a conflict elsewhere, could no longer defend their interests. This perceived weakness would risk inviting aggression and undermine the credibility of U.S. security assurances, which in turn could prompt friendly regimes to adopt a more independent posture, thus weakening the network of alliances needed to support U.S. interests and operations abroad.

The principal dilemma facing the next secretary of defense, then, is how to maintain a two-war capability in the face of declining or, at best, static budgets. The problem is made worse by the fact that each of the armed services is faced with the need to renew its capital stock. Most of the fighter aircraft in the current U.S. fleet are based on designs that became operational in the 1970s and were built in the 1970s and 1980s. The same is true for the Navy’s ships and the Army’s tanks and armored fighting vehicles. By 2015, the bulk of these services’ major weapons platforms will be 30 years old or more. A growing share of the DOD budget, then, will need to be devoted to capital investment, leaving even less to sustain the force structure now in place.

Contrary to the conventional wisdom, it is possible for DOD to reconcile the need to reduce some forces with the objective of maintaining the capability to fight two major, nearly simultaneous, wars. Even if today’s forces are only marginally capable of undertaking this mission, there are strong reasons to believe that a smaller force–if it is the right one–can accomplish the same goals. Its success, however, will depend on how well it is positioned operationally and technologically to meet the emerging challenges posed by potential adversaries around the world.

Constraints and challenges

The capabilities and structure of U.S. forces must reflect the unusual constraints imposed by our role as an exporter of international security. To begin with, U.S. forces must be prepared to deploy and fight far from home. Many of our most important interests–and most of the powers that threaten them–lie in Eurasia. The United States must therefore maintain an expensive supply infrastructure that permits a rapid expeditionary operation.

Conflicts involving less-than-vital U.S. interests pose a particular challenge. Americans have had the luxury of rarely having to fight for truly vital interests–that is, those in which the future shape or governance of the nation is at stake. By contrast, our adversaries frequently fight for such high stakes. This asymmetry means that we will very often find that our adversaries are prepared to withstand a great deal of punishment in wartime–a fact that will test our resolve and staying power.

Together, these conditions–expeditionary operations and asymmetric stakes–call for military capabilities that are clearly superior to those of any other nation. It is as if our team always has to play “away” games and is expected to win consistently by lopsided margins. Such capabilities do not come cheaply. Retaining them in the future will be costly as potential adversaries improve their military capabilities.

Not surprisingly, the most serious future challenges are those that exploit U.S. sensitivities to casualties and the need to deploy troops and supplies rapidly across great distances. One key challenge is posed by weapons of mass destruction. Any determined, mid-sized state has the wherewithal to create nuclear warheads, lethal chemical agents, or biological weapons. Such states will also find it increasingly easy to purchase or develop ballistic and cruise missiles to deliver these weapons. In the hands of an adversary, these weapons have the potential to call into question the viability of U.S. military strategy and operations.

Replay the Gulf War imagining that Saddam Hussein had access to such weapons. Would the Saudi royal family have permitted U.S. forces to operate from their territory, knowing that Iraq could retaliate by killing tens of thousands of civilians in Dhahran or Riyadh? Would the United States even have been willing to put its own forces at risk? It is not hard to imagine that the international response to Iraq’s invasion of Kuwait might have gone another way.

Of course, the United States and its allies can (and should) threaten to retaliate massively against those who use weapons of mass destruction. But that may be little comfort to those threatened most directly by such weapons and, given the asymmetries in stakes mentioned above, may not be sufficient to deter their use. We must therefore develop far better capabilities to defend against these weapons and to detect and destroy them before they can be used.

Potential adversaries are also acquiring new conventional weapons, some of which have the potential to create serious problems for U.S. and allied forces. For example, antiship cruise missiles launched in large numbers from aircraft or mobile launchers on the ground could pose a serious threat to shipping and naval forces in critical areas such as the Persian Gulf and the Taiwan Straits. Modern antiship mines and submarines also represent a growing challenge. If U.S. and allied forces are not able to neutralize such threats promptly in a future crisis or conflict, they could, at a minimum, impede the flow of forces and supplies to a threatened region for critical days or weeks, opening a window of opportunity for aggression. Similarly, increasingly capable surface-to-air missiles, such as the Russian-made SA-10, can threaten U.S. and allied air operations, such as transporting troops or supplies, conducting reconnaissance, patrolling the skies, or delivering firepower.

Emerging threats cannot be met by deploying more soldiers or buying more of the current generation of tanks, planes, and ships.

With access to these advanced weapons, our enemies can threaten modern combat ships or aircraft without having to replicate the enormous U.S. investment in these platforms–investments that are clearly beyond the reach of most. Nor do they have to defeat U.S. forces decisively in order to achieve important objectives: It may be sufficient to threaten U.S. forces with heavy casualties or to delay their arrival while the attacking forces move to seize key objectives, confronting us with a fait accompli.

The emerging operational challenges posed by weapons of mass destruction and advanced conventional weapons cannot be met by deploying more soldiers or buying more of the current generation of tanks, planes, or ships. Indeed, populating the battlefield with more Americans is, in many circumstances, exactly the opposite of what is called for. Instead, the answer lies in developing new technological capabilities and the operational concepts for employing them. Exploiting the full potential of these advances will have a profound effect on how we structure forces and conduct military operations.

Concepts and capabilities

Following is a list of the most important new capabilities for a restructured military. Together, they offer the potential to counter effectively the most serious challenges that potential adversaries might pose to U.S. and allied forces in a future conflict.

Layered defenses against intermediate-range (“theater”) missiles. Preventing the effective use of theater ballistic missiles will require a multidimensional approach. First, we will want to develop and deploy more effective missile defenses. The bulk of our investment in this area is in systems such as Patriot, Theater High Altitude Area Defense (THAAD), and Aegis Upper Tier that intercept missiles in the terminal and mid-course phases of flight. These defenses can be overwhelmed by equipping each offensive missile with a number of canisters containing chemical or biological weapons payloads. When released from the missile in mid-course, these canisters create multiple targets, each of which must be intercepted separately.
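
The arithmetic behind this concern is simple, and a short sketch makes it concrete. All of the numbers below are hypothetical illustrations (the text cites no specific figures); the point is only how quickly the number of required intercepts grows once each missile dispenses its canisters.

```python
# Illustrative arithmetic for why submunition canisters stress terminal and
# mid-course defenses: each canister released in mid-course becomes a separate
# target that must be intercepted on its own. All numbers are hypothetical.

attacking_missiles = 20
canisters_per_missile = 8        # assumed chemical/biological submunition canisters
interceptors_per_target = 2      # assumed shoot-look-shoot engagement doctrine

separate_targets = attacking_missiles * canisters_per_missile
interceptors_needed = separate_targets * interceptors_per_target
print(f"{separate_targets} separate targets, {interceptors_needed} interceptors required")

# Engaging each missile before canister release would leave only the 20
# missiles themselves to intercept.
print(f"Pre-release intercept: {attacking_missiles * interceptors_per_target} interceptors required")
```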

To overcome this problem, we need systems that can intercept ballistic missiles earlier, during the boost phase. Two approaches appear feasible: an airborne laser, now being developed by the Air Force, and high-speed missiles, also launched from an airborne platform. One or both of these approaches should be funded as a high priority.

Second, we must improve our abilities to locate and destroy enemy weapons before they are launched. This involves enhancing reconnaissance capabilities for finding mobile missile launchers (“Scud hunting”) as well as stocks of nuclear, chemical, and biological warheads. We also need to develop better warheads for penetrating earth and concrete, in order to destroy the underground shelters in which weapons are often stored to evade airborne detection and attack.

Finally, to counter the threat posed by weapons of mass destruction, U.S. forces must be able to deliver effective firepower from bases or platforms that are located far beyond enemy territory or are immune from enemy attack. Examples of the former are long-range attack aircraft, often supported by aerial refueling aircraft; examples of the latter are stealthy aircraft and submarines equipped with cruise missiles.

Advanced reconnaissance platforms and sensors. Timely, high-quality information on the enemy’s capabilities, intentions, position, and movements provides tremendous leverage to a military force: Commanders with “information dominance” can concentrate their forces where they are most needed, confident that they will not be surprised by enemy maneuvers. A new generation of reconnaissance platforms and sensors will greatly enhance the quality of information available to U.S. commanders. Smaller and more affordable synthetic aperture radars will allow us to observe phenomena on the ground during day or night and under all weather conditions. Satellite and airborne data links will allow the data acquired by these and other sensors to be passed at once to assessment centers anywhere in the world. And unmanned aerial vehicles (UAVs) equipped with sensors that “stare” at the battlefield for extended periods of time (as opposed to sweeping by once or twice a day, as satellites do) can help ensure that even small changes in the disposition of enemy forces are detected.

Cuts in Army National Guard combat units could save upwards of $2 billion annually with no loss of useful military capabilities.

“Smart” weapons. Advanced reconnaissance capabilities also allow us to capture the synergy between information and munitions. Accurate information about targets allows a force to make the best possible use of precision (“smart”) weapons. Together these capabilities allow us to achieve most important military objectives without inflicting unwanted damage on civilians. In addition, air-delivered firepower and long-range artillery can destroy enemy maneuver forces before they get within striking range of our forces. Recent and incipient advances in precision guidance, sensing, and decisionmaking algorithms have multiplied the effectiveness of our munitions by an order of magnitude or more.

Suppressing enemy air defenses. Essential to all of these advances is the freedom to operate in the air. Aircraft and spacecraft allow sensors to look deep into enemy territory; afford opportunities to quickly deliver firepower against fleeting targets; and make it possible to operate throughout the area under enemy control without first having to defeat opposing ground forces. Accordingly, gaining and maintaining freedom of action in the air and space–and denying it to the enemy–will constitute one of a commander’s highest priorities.

The chief threat to U.S. air superiority in most future conflicts is posed by surface-to-air missiles (SAMs). It is essential, therefore, that U.S. forces have the capability to suppress and destroy these quickly. For decades to come, the bulk of our military aircraft will not possess the radar-evading qualities of the F-117 or B-2 “stealth” bombers. So defeating enemy surface-to-air threats will involve a combination of some stealth; standoff attack weapons that can be launched from beyond the range of surface-to-air missiles; and weapons that are optimized to suppress and destroy those missiles, their tracking and guidance radars, and the facilities devoted to controlling their operations.

One promising concept now emerging for this latter function is a high-powered microwave weapon. Using nonnuclear means, such weapons may be capable of generating sufficient flux to permanently disable the circuitry of nearby SAM tracking and guidance gear, as well as computers and communications equipment.

This emphasis on surface-to-air threats is not intended to minimize the problems posed by enemy air-to-air fighters and missiles. These are considerable and growing, but current U.S. modernization plans–notably the development of the Air Force’s F-22 fighter aircraft–seem sufficient for dealing with such threats.

Firepower and maneuver

These new capabilities and operational concepts should beget commensurate changes in our notion of what constitutes a balanced force structure. In particular, we need to rethink the relationship between firepower and maneuver. In traditional concepts of land warfare, it is said that “firepower enables maneuver.” Long-range firepower is employed to suppress the activities of the enemy force while maneuvering one’s forces into position to destroy the enemy at close range. The “indirect” fire provided by long-range artillery and aircraft was regarded as useful chiefly to prevent enemy forces from maneuvering, complicate their resupply efforts, or induce shock that might temporarily reduce their effectiveness. But only close, “direct” fires–such as those from rifles, tanks, or, more recently, guided antitank missiles–possessed the accuracy and lethality needed to defeat the enemy decisively. The ability to maneuver these forces into position was the essential precondition for victory. Maneuver forces have also been useful as a source of information about the enemy. By standing on a piece of ground, one could be fairly sure that it was not occupied by the enemy. From any greater distance, one could be fooled.

All of that is changing. Today, advances in technology enable us to locate and identify enemy ground forces at long range and with high confidence, at least in some types of terrain. Having located enemy forces, it is also possible to attack them from long range with levels of lethality approaching or exceeding those of earlier close-fire systems.

For example, the crews of fighter-bombers in the 1970s were trained to go looking for enemy tank columns. Their primary means of surveillance was the human eye, an approach that was not generally effective at night or in conditions of poor visibility. Even during the day, it was a dangerous activity: Air defense systems frequently accompanying the tank columns were fairly effective against aircraft flying higher than a few hundred feet above the ground. When crews were lucky enough to find an enemy force, they dropped bombs that were largely ineffective in destroying tanks. It might take 10 or more canisters of unguided antitank munitions–the payload of several aircraft–to have high confidence of destroying a single tank.

Today, every aspect of this mission has changed significantly, and mostly to the detriment of the tank. Systems such as the Joint STARS surveillance aircraft and UAVs with multi-spectral imaging capabilities can locate enemy vehicles 100 or more kilometers away, day or night, and under all atmospheric conditions. Increasing numbers of fighter-bombers have similarly effective target detection and engagement systems on board. And improvements in capabilities to suppress, confuse, and destroy surface-to-air defenses allow U.S. and allied aircraft to operate with greater confidence over the battlefield.

Finally, today’s air-delivered ordnance is far more effective than the bombs used 10 or 20 years ago. Self-guiding antitank weapons, for example, can achieve multiple kills per pass. It is now possible to think in terms of kills per sortie rather than sorties per kill. Soon, artillery and missiles will also be able to deliver smart submunitions with comparable effectiveness.
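
A bit of probability arithmetic makes the shift from sorties per kill to kills per sortie concrete. In the sketch below, the kill probabilities and weapon loadouts are hypothetical illustrations, chosen only to be roughly consistent with the magnitudes mentioned above (ten or more unguided canisters per tank kill then, multiple kills per pass now).

```python
# Sketch of the "sorties per kill" versus "kills per sortie" arithmetic.
# The kill probabilities and loadouts below are hypothetical illustrations.

# 1970s case: unguided antitank canisters. How many are needed for roughly
# 90 percent confidence of destroying a single tank?
p_kill_unguided = 0.2            # assumed kill probability per unguided canister
canisters = 0
p_tank_survives = 1.0
while p_tank_survives > 0.10:
    p_tank_survives *= (1 - p_kill_unguided)
    canisters += 1
print(f"Unguided: about {canisters} canisters per tank kill (several aircraft loads)")

# Today's case: self-guiding submunitions, each assumed to seek its own vehicle.
p_kill_guided = 0.5              # assumed kill probability per guided submunition
submunitions_per_sortie = 8      # assumed loadout
expected_kills = p_kill_guided * submunitions_per_sortie
print(f"Guided: about {expected_kills:.0f} expected tank kills per sortie")
```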

In light of these breakthroughs, it should not be surprising that the traditional division of labor between fire and maneuver is changing. If our forces can see, identify, attack, and destroy enemy forces at long range, the necessity to maneuver–to close and fight at short range–is greatly diminished, at least under some conditions. The advantage is that battles, campaigns, and wars may be fought more quickly and with far less risk of casualties. Lighter and more mobile U.S. forces can accomplish more than before, and fewer Americans need be exposed to the most lethal forms of enemy firepower.

The long view

In order to achieve these advances in firepower and to field other critical capabilities, including ballistic missile defense, information dominance, and effective suppression of enemy air defenses, we must find a way to pay for them. As is so often the case, the budgetary and political obstacles to progress are more daunting than the technological and physical ones. Put simply, there is not enough money to sustain the Pentagon’s current force structure, to recapitalize all of it, and to invest in the development and procurement of the most important new hardware.

DOD leaders estimate that, in order to finance an adequate modernization, the department must increase spending on new equipment by at least 50 percent, from $40 billion per year to about $60 billion. We have gotten by over the past few years because the defense buildup of the 1980s financed extensive modernization and because the post-Cold War drawdown has allowed the services to retire their oldest pieces of equipment while retaining the newest. Obviously, this is not a process that can go on forever.

The three most obvious places within the defense budget to find money for new investments are reform in the department’s procurement practices, cuts in defense infrastructure, and the elimination of military units with marginal missions. Indeed, Secretary of Defense William Perry has claimed that the military can save enough money in these three areas to support new investment without requiring force cuts. The problem, however, is that none of these savings can be realized in the short term.

The Pentagon, working with Congress, has made a concerted effort over the past four years to reduce the complexity of its byzantine procurement regulations and the staffs needed to implement them. As more of the military’s goods and services are purchased via more efficient mechanisms, it is reasonable to expect that DoD could save $5 to $10 billion or more annually. But it will take years to achieve these efficiencies.

Similarly, cuts in defense infrastructure–military bases, depots, arsenals, medical facilities and staffs, and other support assets–have proven notoriously difficult to achieve. Such assets are politically popular, and the military services themselves sometimes have only a vague notion of the relationship between expenditures on infrastructure and the performance of their forces. Estimates of overcapacity vary depending on the criteria applied: For instance, the Air Force has far more ramp space and runways than it needs for the number of planes it flies, but its access to air space and bombing ranges is far more constrained. By any measure, however, more rounds of base closure are surely justified for all of the services.

Closing a major base can save roughly $50 million per year. But squeezing money out of infrastructure will be a long process: Often, up-front costs must be paid before savings are realized; environmental cleanup and disposal requirements and the need to relocate facilities and personnel mean that it can take two to three years or more before closing a base begins to yield net savings; and costly incentive packages may be needed to get personnel out of the workforce early.
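
The cash-flow profile of a base closure can be sketched in a few lines. In the example below, the roughly $50 million in annual recurring savings is the figure cited above; the up-front cost for cleanup, disposal, and relocation is a hypothetical assumption, used only to show why net savings take several years to appear.

```python
# Sketch of base-closure cash flow. The ~$50 million annual recurring saving is
# the figure cited in the text; the up-front closure cost is a hypothetical assumption.

annual_saving = 50    # $ millions per year once the base is closed (from the text)
upfront_cost = 130    # $ millions for cleanup, disposal, and relocation (assumed)

cumulative = -upfront_cost
for year in range(1, 6):
    cumulative += annual_saving
    status = "net savings" if cumulative > 0 else "still in the red"
    print(f"Year {year}: cumulative {cumulative:+d} $M ({status})")
# Under these assumptions the closure does not begin to yield net savings until
# the third year, consistent with the two-to-three-year (or longer) lag noted above.
```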

Finally, a determined effort to cut the combat formations of the Army National Guard could save upwards of $2 billion annually with no loss of useful military capabilities. There are more than 40 combat brigades in the Army National Guard–10 more than the entire active-duty Army–and they comprise well over half of the Guard’s 370,000 personnel. These units have no compelling wartime mission; it would take many months for them to become proficient in combat maneuver operations. At the same time, since most of them are equipped with tanks and armored fighting vehicles, they are poorly suited to conducting potentially useful domestic operations, such as providing disaster relief or maintaining law and order in emergencies. Army National Guard leaders have recently agreed to reconfigure 12 of these brigades from combat units into much-needed support and logistics functions, but there remain far more units than are likely to be needed. At best, one could rationalize retaining a handful of combat brigades in the Guard as a hedge against the remote possibility of a future large-scale war or an occupation of long duration.

Despite these obvious inefficiencies, cutting the Army National Guard is likely to be a politically difficult and therefore protracted process. The National Guard has a strong base of support in Congress. In fact, the conventional wisdom has been that no administration can win this fight. However, at a time when serious new threats loom and pressures to cut the deficit have led Congress to take billions from such programs as Aid to Families with Dependent Children, one hopes that the conventional wisdom is wrong.

With sustained high-level attention and the expenditure of considerable political capital, it may be possible to divert tens of billions of dollars annually into R&D and modernization accounts. In many cases, DoD has begun to move in these directions. But even with accelerated progress in these areas, the bulk of the savings is years away, while the challenges facing our forces are growing. Hence, as is so often the case in times of budgetary reductions, the defense leadership needs to find “fast money.” In light of this, some cuts in military forces appear unavoidable.

Against inertia

The law of organizational inertia states that unless acted upon by an outside force, bureaucracies will always take the path of least resistance. With respect to managing a declining defense budget, the path of least resistance is to cut everything proportionately. This time-honored approach implicitly assumes a static conception of the conduct of military operations. Indeed, the overall balance of air, ground, naval, and amphibious forces today is much the same as it was a generation ago. The trouble is that emerging threats to U.S. forces and emerging concepts for addressing such threats affect our forces unevenly. An across-the-board cut risks reducing those forces and systems that contribute the most to defeating future challenges while retaining forces that contribute less.

The brunt of short-term restructuring should occur in the Army’s armored and mechanized divisions.

A fresh appraisal of the demands of future theater warfare, emerging threats, and the opportunities presented by new technologies suggests that a U.S. military posture that exploited advances in firepower more fully could, within limits, trade off technology for some mass. In particular, it seems reasonable to expect that, under many conditions, U.S. commanders in future conflicts will be able to achieve their objectives with fewer heavily mechanized divisions. Fully six of the Army’s 10 active divisions fall into this category (the remainder are light infantry, airborne, and air mobile divisions) and they account for a disproportionate share of the service’s expenditures. This is where the brunt of the restructuring should occur.

Of course, a force posture that includes modern, highly trained, and ready maneuver forces will always be needed. Confronting an enemy with a combined force that includes tanks and other armored vehicles compels the enemy to counterattack with heavy forces. This, in turn, imposes on the enemy costs in terms of time, logistics requirements, and speed, all of which can give U.S. forces time to reinforce and prepare a stronger defense. And if an enemy leader, such as Saddam Hussein, refuses to withdraw his forces from a battlefield on which they are being pummeled, heavy maneuver forces may be needed to compel that withdrawal. One can also conceive of situations in which terrain, cover, or clever enemy tactics might blunt the effectiveness of modern surveillance and firepower systems. Nevertheless, if resource constraints necessitate some cuts in active duty forces, cutting some heavy maneuver forces and their associated overhead and support elements would seem to be one of the least risky options.

Judicious cuts in this area could yield savings of several billion dollars per year in the near term, which could be applied to high-priority modernization programs. If, over the longer term, commensurate progress is made in reforming procurement practices, streamlining infrastructure, and disbanding unneeded reserve formations, we can provide U.S. forces with the capabilities they need without cutting heavily into force structure. In this way, a force somewhat smaller than today’s could remain capable of defeating two major regional adversaries in concurrent operations, even as those adversaries field new capabilities aimed at blunting many of the advantages our forces of today enjoy. The alternatives–cutting budgets and forces across the board, settling for a scaled-back strategy, or continuing to starve modernization accounts–are surely riskier.

In Numbers We Trust

Our scientific culture, and much of our public life, is based on trust in numbers. They are commonly accepted as the means to achieving objectivity in analysis, certainty in conclusions, and truth. Numbers tell us about the health of our society (as in the rates of occurrence of unwanted behavior), and they provide a demarcation between what is accepted as safe and what is believed to be dangerous. In Trust in Numbers, Theodore Porter, an associate professor of history at UCLA and the author of The Rise of Statistical Thinking, 1820-1900 (Princeton, 1986), unpacks this assumption and uses history to show how such a trust may sometimes be based less on the solidity of the numbers themselves than on the needs of expert and client communities. The pursuit of objectivity through numbers defines modern public policy rather like the pursuit of happiness defines modern private life; and neither pursuit is guaranteed to lead to simple success.

In looking critically at the rigor of quantitative analysis, the book treads on ground that has recently become very delicate. One initial reaction to this study could be to dismiss (or denounce) it as yet another attempt to “demystify” quantification, thus undermining public faith in scientific objectivity. But the historical approach protects the book against such simplistic interpretations. In the examples, we witness a wide variation among the quantitative arguments and their motivations. Also, we are clearly shown the complexity and nuance in the drives for this sort of objectivity, as well as the different degrees of success that they can achieve.

A great benefit of the historical style is that the book is a very good read. The more extended accounts are like novellas. We learn how in Victorian England the accountants and actuaries skillfully deflected government attempts to introduce standardized methods of accounting for insurance firms. There had already occurred some serious scandals, which were reflected in the fiction of Charles Dickens. But the professionals advocated reliance on their skill and judgment, rather than on public regulation with explicit standards of practice. One of their main arguments was that the public should be spared the unnecessary alarm that could arise from excessive openness; hence they proposed that regulation should be accomplished by informal, private understandings. This British style of regulation (closed, informal, and paternalistic), which still persists, differs starkly from the American system of open, explicit, quantitative, and adversarial regulation.

Neither approach is without flaws. One weakness of the British strategy was apparent in the recent "mad cow" epidemic, which was largely a result of government complacency, including the failure to create a database of affected cattle and herds. The legendary U.S. Army Corps of Engineers was an early user of quantitative cost-benefit analysis in its assessment of navigation and water-control projects. But it was applied with a delicate interplay of the objective and subjective, because the Corps had to maintain its reputation for scientific impartiality while recognizing the folkways of the U.S. Congress. The limits of quantification (and of proclaimed objectivity) could be seen when a powerful interest group such as the railways expressed its opposition to waterway improvements. On such occasions discussions took a turn that is now familiar, with the quality of expert testimony becoming more contested than the details of the numerical arguments the experts put forward.

The increasing complexities of cost-benefit analysis were dramatically revealed during the New Deal period, in the great struggles between the Bureau of Reclamation and the Corps. Each had cost-benefit analyses corresponding to its separate mandate (irrigation and flood control, respectively), and these were manipulated by the competing interests (small versus large farmers). Such unseemly battles within the federal bureaucracy provided an impetus for the development of ever more refined and elaborate methods of cost-benefit analysis, in which "objectivity" is protected by a multitude of standardized numerical assessment routines.

Reading these stories could be very illuminating for someone whose professional training has provided no preparation for the real problems of quantification in practice. For such practitioners (and there are many), Porter's historical accounts convey a sort of insiders' knowledge, full of "dirty truths" about errors and pitfalls. This private awareness coexists in a living contradiction with the discipline's public face of perfect, impersonal objectivity as guaranteed by its numbers. In some fields, such as environmental risk assessment, there is now a vigorous public debate about the numbers, and no one is in any doubt that values and assumptions influence the risk calculations. Yet the field flourishes and is actually strengthened by this openness, because it is not covering up hidden weaknesses.

Even physicists are subjective

For research scientists, the most important chapter of Trust in Numbers is the very last, in which Porter shows that in the most highly developed and leading research communities, the ordinary sort of "objectivity," secured by open, refereed publication along with quantitative techniques, is really quite secondary. Among high-energy physicists there is a community of trust, not blind trust but a highly nuanced evaluation of researchers by one another, which assures the reliability of information. Of course, all the research depends on highly objective and disciplined technical work by lower-status personnel, but the creative researchers themselves employ a "personal knowledge" in making and assessing communications. Indeed, we must ask ourselves how scientific creativity could flourish in a regime dominated by standardized routines.

A broad, balanced, and philosophically informed history such as this not only provides a supplement to the narrowly technical education that so many scientists and experts receive but also furnishes background materials for a general debate on the quality of the numbers that we use in developing public policy. Every society needs its totems. In premodern times the enchanted arts, including the quantitative disciplines of gematria (divination by numbers) and astrology, offered security and demanded trust. We now know that they are but pseudo-sciences, although they do retain a peculiar popularity even among the literate. Their social and cultural background made them plausible; they depended on trust in the gods. Now it is in the numbers that we trust, and scientific credibility is vested in apparent objectivity, achieved through quantification. Of course our modern trust is better founded; but trust, like liberty, needs constant renewal.

I find it disturbing that pseudo-science can exist within our modern culture of scientific objectivity, but the possibility needs to be confronted. The phenomenon of garbage-in/garbage-out is alive and well in various fields of environmental, military, social, and political engineering. When the uncertainties in inputs are not revealed, the outputs of a quantitative analysis become meaningless. We have then entered the realm of pseudo-science, where people put faith in numbers just because they are numbers. The numerical calculations used to support the U.S. Strategic Defense Initiative were an example of numerical mystification with no foundation in reality. The old saying that "figures can't lie, but liars can figure" can now be extended from statistics to a variety of fields.
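To make the garbage-in/garbage-out point concrete, here is a minimal sketch (in Python, with entirely hypothetical input ranges) that propagates hidden input uncertainty through a toy cost model. The single point estimate looks precise, but the spread of plausible outputs shows how little it means once the uncertainty in the inputs is acknowledged.

```python
import random

# Hypothetical cost model: total = unit_cost * quantity * overhead_factor.
# A single "best guess" point estimate looks reassuringly precise.
point_estimate = 1.2 * 10_000 * 1.5
print(f"Point estimate: {point_estimate:,.0f}")

# Suppose each input is actually known only to within a wide range
# (ranges assumed here purely for illustration).
def sample_total():
    unit_cost = random.uniform(0.8, 2.0)      # true unit cost poorly known
    quantity = random.uniform(6_000, 15_000)  # demand uncertain
    overhead = random.uniform(1.2, 2.5)       # overhead factor uncertain
    return unit_cost * quantity * overhead

samples = sorted(sample_total() for _ in range(100_000))
low, high = samples[2_500], samples[97_500]   # central 95% of outcomes
print(f"95% of outcomes fall between {low:,.0f} and {high:,.0f}")
# The interval spans several-fold, so reporting only the single point
# estimate conceals the spread: garbage in, garbage out.
```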

Porter is to be congratulated for showing how intimate can be the mixture of objectivity and subjectivity, real and pseudo-quantification, awareness and self-deception, and vision and fantasy, in the invocation of trust in numbers. His historical insights can provide the materials we need for a debate on quality in quantities, a debate which is long overdue.

Eliminating Excess Defense Production

Throughout history, each time our nation has ended a war we have cut back our weapons arsenals and smartly propped up R&D to prepare for future enemies. But since the Cold War’s end, we have done the exact opposite. We have propped up commercial contractors’ production lines and risked shortchanging experimentation.

The long Cold War did something no previous engagement had: It shifted the balance between public and private power in the defense establishment overwhelmingly to companies. These companies now lobby effectively to protect their economic interests, and politicians, in the face of declining defense budgets, work to protect employment in their districts. Defense has become a jobs program.

As a result, we are wasting billions of dollars each year to keep unneeded production lines open. We have, for example, thousands more tanks and war planes than we need. Continuing this production steals precious resources from R&D, the real lever to future battle readiness, as well as other government programs. It is time to restore a balanced system for research, development, and production. As a nation, we must bite the bullet: buy out the defense industry’s excess capacity and get on with preparing for the future.

You want tanks?

At the end of its wars, the United States has customarily demobilized. Civilian contractors, brought in to help produce desperately needed military equipment, shifted back to commercial production. The nurturing of unique military technologies, to the extent that it has occurred between wars, was mostly done in government-owned arsenals and shipyards. The pattern was short, sharp spikes in funding for wars followed by long periods of small budgets.

The Cold War was different. Although there were cycles in defense budgets, with peaks during Korea, Vietnam, and the Reagan buildup, the budget range was narrow, roughly between $250 and $400 billion (in FY 1995 dollars). With plenty of business, hardly any contractors were compelled to return to civilian production. Instead, during the cyclic downturns, the Pentagon closed arsenals, shifting more business to the politically more influential contractors.

A variety of rationales were offered for shifting weapons design and production to the private sector. Industry was said to be more responsive than the arsenals to the armed services; industrial workforces were believed to be more flexible than public workforces; and contractors could pay higher salaries than the civil service and thus attract top scientists and engineers. A number of government facilities were closed before the recent rounds of base closures. The current debate over what to do with government facilities really concerns only the residual of the government’s arsenal system: a few large laboratories, five or six aircraft repair depots, and a handful of public shipyards.

The established acquisition pattern that keeps private contractors willing to maintain the U.S. edge in military technology is the large production run of a weapon system. Contractors have never made much money on R&D work, and in some cases have lost a great deal. But even losing money on the fixed-price development contracts of the 1980s did not hurt the majority of firms, because they were making profits on the booming production side.

Now, however, weapons inventories are bulging. The United States has 7,500 frontline fighters, even though the Air Force fields only about 2,000. The country has 16,000 main battle tanks, including 8,000 M-1s; but the Army’s six heavy divisions each need no more than 300 tanks, and the Marines need another division’s worth, for a total of 2,100. Obviously, plenty of tanks are left over to supply the National Guard and reserves, to conduct various predeployment activities, and to replace those lost in battle. It makes no strategic sense to support U.S. armored vehicle manufacturers with additional production contracts.
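As a quick check on the arithmetic in the preceding paragraph, the sketch below (Python) uses only the figures cited in the text; treating the Marines' "division's worth" as roughly the 300 tanks of an Army heavy division is an assumption of the illustration.

```python
# Figures as cited in the text; the Marines' "division's worth" is assumed
# to be about the same ~300 tanks as an Army heavy division.
tanks_per_heavy_division = 300
army_heavy_divisions = 6
marine_divisions_worth = 1

frontline_requirement = tanks_per_heavy_division * (army_heavy_divisions + marine_divisions_worth)
print(frontline_requirement)   # 2,100 tanks for the active frontline force

total_inventory = 16_000       # including 8,000 M-1s
surplus = total_inventory - frontline_requirement
print(surplus)                 # ~13,900 left for the Guard, reserves, training, and attrition
```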

The production-capacity overhang is huge for many types of military hardware. Seven lines are producing military aircraft, six private yards are building large warships, five helicopter companies are totally dependent on military purchases, and four companies are making missiles. What is most striking, however, is how little contractor employment has fallen. Despite major reductions in service personnel and civilian defense jobs since the Cold War’s end, contractor employment is almost 600,000 more than its 1976 Cold War low. This is where the restructuring most needs to occur.

Bogus argument

Some argue that the widely discussed mergers now taking place among defense firms have already reduced excess capacity and that the problem can be left to market forces. But the evidence is plain that this argument is bogus. Lockheed and Martin Marietta recently merged and have since absorbed Loral, but the new firm still has all three original aircraft lines open in California, Texas, and Georgia. The number of employees related to its defense work has hardly changed. Lockheed Martin’s consolidation has come in the space business, where commercial buyers supplement government demand. On the defense side, the company’s strong position has relieved pressure to rationalize production. Furthermore, the much-discussed synergies between the electronics and platform components of the new conglomerate are blocked by an antitrust agreement that prevents full communication among its divisions.

The agglomeration of Northrop, Grumman, and Vought into one company also did not lead to much change in employment or production capacity. Before the merger, the three companies produced aerostructures in nine plants; all nine are still open. There have been layoffs, but most had been scheduled before the merger. Although Northrop Grumman had another opportunity to consolidate capacity in its electronics businesses when it subsequently acquired Westinghouse’s defense electronics business, little consolidation has been achieved.

Another recent merger offers additional insight. General Dynamics bought Bath Iron Works, but neither Bath nor General Dynamics’ Electric Boat division is slated to close. Because of the post-Cold War politics of warship production, Bath has essentially been guaranteed at least one destroyer contract to work on every year for years to come–an assured income stream. The acquisition price for the Bath yard was less than the present value of that income stream, so it was a low-risk, profitable venture for General Dynamics. Two weak shipyards, each dependent on a political subsidy to stay in business, have joined forces.

No matter how it is described, the defense business is not a market-based enterprise, even when commercial companies own the production capacity. Congress buys weapons in response to defense firms’ lobbying; unnecessary production facilities receive support in order to prop up district employment.

The defense conversion myth

Converting defense plants to commercial production is not the answer either. An important distinction must be made between the big final assembly plants of the prime contractors and the component plants of smaller defense companies. Many Cold War subcontractors always integrated defense with commercial business, or at least have redeployed their assets on that model in the past five years. These firms are taking care of themselves. Too small and too weak politically to remain dependent on the government, they have already moved into the post-Cold War era.

The prime contractors, on the other hand, followed a different Cold War business model, which has led them to a different post-Cold War response to reduced international tension. They dealt directly with the government on big projects that were long lived and very visible politically. Their influence was compounded by the concentration of employment in final assembly operations, often with 10,000 or more workers in a single facility. This was true in the Soviet Union, too. But since the end of the Cold War, large Russian defense facilities converted to production of milking machines, diapers, and other products needed to fill pent-up commercial demand. There are no comparable areas of shortage in the U.S. economy.

The commercial market potential for the largest defense companies looks truly bleak today. For example, in the military shipbuilding market that defense firms are accustomed to, unit prices are extremely high; nearly every ship costs a billion dollars or more. Although a market for commercial ships exists, it is for ships such as tankers, which cost $40 million each. Newport News is making a few commercial ships with a purchase-loan subsidy, but not nearly enough to replace the two $4-billion aircraft carriers now in its yard. Not surprisingly, then, Newport News continues to covet military contracts. In 1995, when the Navy decided to consolidate all submarine building at Electric Boat in Groton, Conn., Newport News lobbied hard to preserve its submarine production. In 1996, Congress blessed Newport News with a new contract to build a nuclear attack submarine–over Navy objections.

The government should buy out unneeded companies, their employees, and their surrounding communities.

Acquisition reform is the Clinton administration’s favorite alternative to defense-sector restructuring, and some acquisition changes are desirable. In fact, many acquisition rules were created to undermine Pentagon efficiency. During the 1980s, the Democrat-controlled Congress could not confront President Reagan directly over his popular defense buildup. Instead, it attempted to hobble the buildup through regulation that ostensibly would reduce waste, fraud, and abuse in defense contracting. This became the justification for dozens of laws requiring contract reviews, rewards for whistle-blowing, social engineering through contracting, financial audits, and then even more audits. It is appropriate for today’s Democratic administration and Republican Congress to recognize the burden these laws place on the government and to seek reform.

The big thrust in acquisition reform is the administration’s strong promotion of dual-use technology and the elimination of unique production standards that make defense purchases costly. But cutting cost at the margin will not change the overall defense budget much, if at all. The political will is not there. Support for acquisition reform from the military services and from defense contractors is premised on the expectation that lower unit costs for weapons will lead not to a reduction in the budget, but rather to an expansion of demand for weapons. At the very least, procurement supporters hope that the budget cutters will split the windfall with them. Because this is unlikely to happen, neither the military nor the contractors will be long-term advocates for these reforms.

Even in the short term, the acquisition-reform rhetoric has a pernicious outcome; it allows politicians to maintain the illusion that they are making a cost-effective investment in our future national security. The participation of the F-22 program office in the Air Force’s “Lean Aircraft Initiative,” which claims that acquisition reform and new manufacturing techniques will substantially reduce unit costs, is a leading example of the political cover that acquisition reform provides to some very expensive programs. It diverts attention from the real question, which is not how to cut costs for the F-22, but whether we need to build F-22s when the existing F-15s, F-16s, and F-18s are already better than anyone else’s fighters. Furthermore, we have learned that the promised savings of more efficient production are often lost when Congress, confronted with political uncertainty, reduces production rates and thus increases unit costs.

The other purported benefit of acquisition reform, speeding up the development cycle, makes even less sense. Some think that once politicians show an interest in a new weapons system, we need to build it before they change their minds. Unfortunately, it is physically impossible to build weapons systems faster than politicians can manipulate budget priorities. Worse, accelerating projects by compressing development times, trimming test schedules, and taking other short cuts is a formula for guaranteeing performance shortfalls and cost overruns. Instead of accelerating development cycles, we need to slow them down. Why rush when there is no imminent threat? The freedom to proceed more deliberately is one of the benefits of the end of the Cold War.

From pork to spam

If mergers are not helping, if conversion offers little hope, and if acquisition reform will only make matters worse, what are we to do? To begin with, we need to recognize the real source of the problem: the ongoing lobbying of contractors to keep production lines running, if only slowly. Extended production runs of mature systems are the cash cows of the defense industry. Politically, the lobbying efforts resonate, because the lines represent jobs. Indeed, in 1995 retired general Robert Gard Jr. told Financial World, “We’re not buying some of these major weapons systems because we need them. We’re buying them to keep up employment in states with influential members of Congress.”

With the Cold War over, it is easier and easier for members of Congress, Republicans or Democrats, to ignore the plans and preferences of DOD and the armed services. With each political maneuver to protect current production lines, the opportunity to do new things, to prepare for the wars of the future, and to keep our technological edge disappears. Production funding threatens to crowd R&D out of the defense budget. Adding up the already promised spending on production of major weapons systems in the next decade leaves no room for research under the DOD’s projected budget.

The defense drawdown following the Cold War has been gentler than any drawdown this century. And yet, even as Congress and the president debate what social program to cut to help balance the federal budget, no one in Washington has found the courage to point to defense cuts as a significant potential contributor. Instead, we are debating increasing the procurement budget to $60 billion by FY1998.

The United States needs to recognize and accept the full implications of the end of the Cold War. It needs a plan for future force structure and research. What is needed is the kind of planning that Gary Weir, a historian at the U.S. Naval Historical Center, describes the Navy’s submarine program doing after World War I. The submariners built no new boats for 10 years, but they kept developing technology. They forced the closure of the Lake Boat Company in order to keep Electric Boat alive, because Electric Boat had the better facility. Then, in order to keep Electric Boat focused, they developed submarine construction capacity at Portsmouth, a public yard. During the 1920s and early 1930s, submariners brought in foreign component technology and worked on their offensive doctrine and new submarine designs. What came out of this work was the effective fleet boat of World War II and the strategy that helped defeat Japan.

Time to pay the bill

The United States should devote resources to an ambitious R&D program. In the absence of a strong overseas threat, the services face political obstacles that work against them, but they still bear the responsibility for maintaining national security. The new missions being offered the military won’t generate sufficient support to sustain vital technologies. To preserve resources for design teams, the Pentagon will have to eliminate unneeded production capacity.

The United States should restore a balanced system for research, development, and production by adopting a two-step strategy. First, pay the bill. It is time to buy out the excess capacity and get on with the task of preparing for the future. Demand for military products is down, so producers should downsize. Not even free-market proponents can deny this simple logic. It happens in commercial industries every day. The government should buy out unneeded companies, their employees, and their surrounding communities. Sure, this would be expensive. But Congress will spend the same amount of money keeping plants open. The difference is that a buyout is a one-time charge. Artificially sustaining companies goes on and on.

Unfortunately, this proposal faces formidable political opposition. Congress narrowly rejected language attached to the FY1997 appropriations bill that would have ended the DOD’s ability to pay restructuring costs for merged companies that close plants. The present political environment would certainly block a major government restructuring payment. Defense policy is back to the Bush administration’s practice of verbally encouraging mergers but “letting the market decide” the ultimate configuration of the industry. The fallacy of this policy is that the defense industry is not governed by normal, competitive market forces. Plants that would otherwise be forced to close, whether through bankruptcy or postmerger consolidation, can be kept open by aggressive lobbying.

A simple, properly designed subsidy for plant-level restructuring would provide a ready solution to this market failure. Somehow, though, the idea of paying an exit subsidy to defense contractors has been politically branded as a cash handout to influential companies–a form of corporate welfare. The real welfare, however, is the continuation of unnecessary production contracts, which are far more expensive in the long run because production requires buying materials and sustaining high overhead on substantial overcapacity.

The system that makes defense work profitable only with long production runs should be replaced by one in which technological experimentation is financially worthwhile.

The political demise of the Clinton administration’s merger policy is due to its failure to do enough for workers and communities. The few payments for restructuring charges have thus far gone mostly to company coffers, leaving workers and local officials with an incentive to lobby against plant closings and against the policy. The government already pays military personnel and civilian DOD workers to leave the federal payroll; it is time to pay civilian defense workers and their communities to leave as well.

Fortunately, our experience with the payments needed to encourage DOD’s own civilian employees to leave government service suggests that the bill for buying out contractor workers need not be large. We also have recent experience in negotiating the value of defense-related property through the Base Realignment and Closure Commission process. Civilian DOD personnel have been given $25,000 to leave, and some officers have agreed to early separation for $30,000. Even if the government had to agree to pay workers their full salaries for a long or indefinite period, a true worst-case scenario, savings would still accrue to the defense budget because of reductions in materials, manufacturing, and overhead costs.
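A rough illustration of the scale of such a buyout, using the per-person separation payments cited above; the headcount is a purely hypothetical assumption (the text gives no buyout target, and the 600,000 figure quoted earlier measures contractor employment above its 1976 low, not the number of workers to be bought out):

```python
# Illustrative only: the headcount below is an assumption, not a figure from the text.
hypothetical_workers_bought_out = 200_000   # assumed for illustration

# Per-person payments cited in the text for comparable federal separations.
low_payment, high_payment = 25_000, 30_000

low_bill = hypothetical_workers_bought_out * low_payment
high_bill = hypothetical_workers_bought_out * high_payment
print(f"One-time buyout bill: ${low_bill/1e9:.1f}-{high_bill/1e9:.1f} billion")
# Even at the high end this is a one-time charge, to be weighed against the
# recurring cost of keeping unneeded production lines open year after year.
```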

The second major step that we propose for defense procurement is to try to build the equivalent of a public arsenal system, even while defense firms remain nominally private. The system that makes defense work profitable only with long production runs should be replaced by one in which technological experimentation is financially worthwhile for private firms. There is no need for a continuous re-outfitting of the entire U.S. military, but there is a need for continuous research and prototyping. A new institutional design, appropriate for a “private arsenal system” in the post-Cold War world, would award contractors fair rates of return on R&D alone. Follow-on, large-scale production contracts would be the exception rather than the expectation.

Rethinking the Car of the Future

On September 29, 1993, President Clinton and the chief executive officers of Ford, Chrysler, and General Motors (the “Big Three”) announced the creation of what was to become known as the Partnership for a New Generation of Vehicles (PNGV). The primary goal of the partnership was to develop a vehicle that achieves up to three times the fuel economy of today’s cars–about 80 miles per gallon (mpg)–with no sacrifice in performance, size, cost, emissions, or safety. The project would cost a billion dollars or more, split fifty-fifty between government and industry over a 10-year period. Engineers were to select the most promising technologies by 1997, create a concept prototype by 2000, and build a production prototype by 2004.

As the first deadline approaches, PNGV shows signs of falling short of its ambitious goals. Little new funding has been devoted to the project. More important, the organizational structure that seemed appropriate in 1993–its design goals, deadlines, and funding strategies–may prove to be counterproductive. The program designed to accelerate the commercialization of revolutionary new technologies has focused instead on incremental refinement of technologies that are relatively familiar and not particularly beneficial for the environment.

Major adjustments are needed in order to realize the full potential of this partnership. A reformed PNGV would be capable of efficiently directing funds toward the most promising technologies, the most aggressive companies, and the most innovative research centers. Now is the time to update the program by incorporating the lessons learned during its first few years.

The politics of partnership

A confluence of circumstances drew government and industry together into this historic partnership. In addition to the political benefits of forging a closer relationship with the automotive industry, the Clinton administration saw an opportunity to provide a new mission for the nation’s energy and weapons laboratories and sagging defense industry. And, at Vice President Gore’s instigation, it saw a means to strengthen its public commitment to environmentalism.

The auto industry was motivated in part by the promise of financial support for long-term and basic research. In addition, according to press reports, the three major automakers hoped that by embracing the ambitious fuel economy goal, they might avoid more stringent and (in their view) overly intrusive government mandates: in particular, the national Corporate Average Fuel Economy (CAFE) standards and the Zero Emission Vehicle (ZEV) mandate that had recently been adopted in California, New York, and Massachusetts. They looked to PNGV to spur the development of so-called leapfrog technologies that would make incremental fuel economy standards and battery-powered electric vehicles superfluous.

An overarching objective for both parties was to forge a more positive relationship. Inspired by the Japanese model, they sought the opportunity to transform a contentious regulatory relationship into a productive partnership. In the words of a senior government official, “We’re trying to replace lawyers with engineers.”

Both parties were also aware that the U.S. automobile industry risks ceding global leadership if it fails to meet the anticipated demand for efficient, environmentally benign vehicles. Automobile ownership has escalated worldwide from 50 million vehicles in 1950 to 500 million vehicles in 1990 and is expected to continue increasing at this rate into the foreseeable future. At the same time, growing concern about air quality and greenhouse gas emissions has led a number of cities to take measures such as restricting automobile use. In response, a number of automakers have begun to develop cleaner, more efficient vehicles. Hybrid vehicles combining internal combustion engines with electric drive lines have been developed by a handful of foreign automakers, and Toyota and Daimler-Benz have unveiled prototypes of fuel cell cars in the past year.

The automotive industry appears to be on the threshold of a technological revolution that promises rapid improvements in energy efficiency as well as reductions in greenhouse gas emissions and pollution. U.S. companies will have to make major changes if they expect to gain a piece of the potentially huge international market for environmentally benign vehicles. This transformation can be accomplished only with government involvement, in part because individual consumers are perceived as unwilling to pay higher prices for cleaner, more efficient cars. In a joint statement to Congress in July 1996, the Big Three said, “Although the market does not presently demand high fuel-efficiency vehicles, we believe that PNGV research goals are clearly in the public’s broad interest and should be developed as part of a mutual industry-government commitment to environmental stewardship.”

Despite such lofty proclamations, the government’s anticipated financial commitment to PNGV never materialized–a casualty of the growing federal budget deficit and the election of a Republican Congress in 1994. In the partnership’s first year, the federal government awarded only about $30 million in new PNGV-related funds. Indeed, only aggressive behind-the-scenes lobbying by the Big Three automakers managed to save PNGV funding. Instead, PNGV has become an umbrella for a variety of existing programs, including about $250 million in hybrid-vehicle research already in place at Ford and General Motors (GM). Most of the government support consists of basic research grants, only indirectly related to vehicles, that were awarded before the advent of PNGV and are administered by the National Science Foundation, the National Aeronautics and Space Administration, and other agencies.

With modest funding have come modest accomplishments. PNGV has eased somewhat the adversarial relationship between automakers and regulators, it may have helped the Big Three close a gap with European companies in advanced diesel technology, and it stimulated some advances in fuel cell technologies. For the most part, however, the accomplishments attributed to PNGV, such as those featured in a glossy brochure it published in July of 1996, appear to be the results of prior efforts by the Big Three and their suppliers. For instance, the brochure features GM’s EV1 electric car, unveiled as the Impact prototype in 1990, and hybrid vehicle designs that were also funded before PNGV.

Problematic goals

PNGV has three fundamental problems. First are the project’s design goals: to build an affordable, family-style car with performance equivalent to that of today’s vehicles and emissions levels that meet the standards planned for 2004. Each of these goals–affordability, performance, and reduced emissions–is defined and pursued in a way that effectively pushes the most environmentally promising and energy-efficient technologies aside.

Take affordability. New technologies are almost never introduced in mainstream products such as family cars; they nearly always appear first in products at the upper end of the market, such as luxury cars. By pegging affordability to the middle of the market, PNGV managers are, intentionally or unintentionally, discouraging investment in technologies that are not already approaching commercial viability.

Similarly, PNGV defines equivalent performance in terms of driving range per tank of fuel. This requirement is intended to ensure that the vehicle is suitable for the mass market. Recent evidence indicates, however, that for a substantial segment of the U.S. car-buying public, limited driving range might be a minor factor in the decision to purchase a vehicle. More than 70 percent of new light-duty vehicles in the United States are purchased by households owning two or more vehicles. A limited-range vehicle can be readily incorporated into many of these household fleets. Market research at the University of California–Davis estimates that limited-range (less than 180 kilometers per tank) vehicles could make up perhaps a third of all light-duty vehicles sold in the United States, even if they cost somewhat more than comparable gasoline cars.

PNGV’s range requirement directs R&D away from some innovative technologies and designs that are highly promising from an energy and environmental perspective. These include pure electric cars that use ultracapacitors and batteries; certain hybridized combinations of internal combustion engines and electric drivelines; and environmentally friendly versions of small, safe vehicles such as the Smart “Swatchmobile” of Mercedes-Benz.

The emissions goal is equally problematic, but the problem is a different one: The standard is too lax. The national vehicle emissions standards planned for 2004 (known as “tier 2”) are less stringent than those already being implemented in California and far less stringent than California’s proposed “equivalent zero-emission vehicle” standards. If history provides any lesson, it is that the California standards will soon be adopted nationwide: the Environmental Protection Agency has consistently followed California’s lead.

Taking advantage of PNGV’s unambitious emissions requirement, automotive managers and engineers have indicated that they almost certainly will select the most-polluting technology in the PNGV tool box as the platform for the concept prototype. This is a diesel-electric hybrid: a direct-injected diesel engine, combined with an electric driveline and a small battery pack.

Diesel-electric hybrid technology represents only a modest technological step. The automotive industry is already well along in developing advanced diesel engines, similar to what PNGV envisions, for the European market. Production prototypes using hybridized diesel and gasoline engines have already been unveiled by several foreign automakers, including Audi, Daihatsu, Isuzu, Mitsubishi, and Toyota. In fact, Toyota reportedly intends to start selling tens of thousands of hybrid vehicles to the U.S. market in late 1997.

Because this hybrid-vehicle technology is relatively well developed, it would be easy to build a concept prototype within the PNGV time frame. In addition, these engines achieve relatively high fuel economy (though probably far short of a tripling). However, diesel engines inherently produce high levels of nitrogen oxide and particulate emissions, the most troublesome air pollutants plaguing our cities. Because lax emissions goals permit this choice, other more environmentally promising technologies, such as fuel cells, compact hydrogen storage, ultracapacitors, and electric drivelines hybridized with innovative low-emitting engines, run the risk of being pushed aside.

Big Three automotive engineers argue that the advanced direct-injection diesel engines they are contemplating are far different from today’s diesels and that significant emissions improvements are possible. But it is uncertain whether such engines could ever meet today’s national emission standards, much less the tier 2 standards or California’s tighter “ultra-low” standards, and they will never match the emissions of fuel cells and advanced hybrid vehicles that use nondiesel engines. Given the ground rules established in 1993, PNGV managers are behaving rationally. But are the rules rational, given that this program is the centerpiece of advanced U.S. automotive R&D?

Deadline pressures

The second major problem with PNGV is the procedural requirement that the technology to be used in the 2004 production prototypes must be selected by the end of 1997. At first glance this requirement seems reasonable: It ensures that industry will stay on track to meet subsequent deadlines. But the actual effect may be to thwart the development of more advanced technology. Because the deadline is approaching rapidly, PNGV managers are put in the awkward position of having to favor incrementalism over leapfrogging. They find it safer to choose a prototype they know can be built but that falls short of the 80 mpg goal (that is, the diesel-electric hybrid) than to pursue technologies such as fuel cells that are less developed but environmentally superior.

PNGV managers insist that the Big Three will select more than one technology in 1997 and that they will not abandon fuel cells and other potentially revolutionary technologies. The reality, though, is that the limited funds and the looming requirement for a concept prototype in 2000 will most likely cause automakers and government agencies to concentrate their efforts on a single powertrain design, diesel-electric.

The third fundamental problem with PNGV is its funding strategy. Rob Chapman, the government’s technical chairman of PNGV, testified to Congress on July 30, 1996, that of the approximately $293 million per year that the government is spending on PNGV-related research, about a third goes to the federal labs, a third directly to automotive suppliers, and a third to the Big Three.

This breakdown greatly understates the real role of the Big Three. Most of that $293 million is administered through a variety of programs that have only indirect relevance to automotive applications. Only about $70 million is targeted directly at PNGV’s primary goal of achieving a highly fuel-efficient vehicle. The vast majority of this $70 million has gone to the Big Three. The Big Three also control, directly and indirectly, a substantial share of lab funding. For instance, until mid-1996, government funding of fuel cell research at Los Alamos National Laboratory was administered through a subcontract from GM.

At first glance, it seems logical to let the Big Three play a leading role in designing the R&D agenda. After all, they are likely to be the ultimate users of PNGV-type technologies. But for a variety of reasons, it is in the public interest to downplay their role in government R&D programs.

First of all, most innovation in advanced technologies is now being conducted outside the Big Three, who increasingly rely on suppliers to develop and manufacture components. The leading designer of vehicular fuel cells, for instance, is Ballard Power Systems, a tiny $20-million company located in Vancouver. The shift toward new technologies (batteries, fuel cells, electric drivelines, flywheels, and ultracapacitors), with which today’s automakers have little expertise, will accelerate the trend toward outsourcing technology development and supply. It is not surprising that three-fourths of all PNGV funding sent to the Big Three is being subcontracted to suppliers.

Not only do the Big Three lack expertise in advanced PNGV-type technologies, they also have little incentive to bring significantly cleaner and more efficient technology to market. Fuel prices are low and CAFE standards frozen: there are no carrots and only a politically uncertain ZEV mandate as a stick. Indeed, companies routinely delay commercialization of significant emissions and energy improvements for fear that regulators will codify those improvements in more aggressive technology-forcing rules. (This attitude is exemplified by GM’s former CEO, Roger Smith, who rhetorically asked at the end of his 1990 press conference announcing the Impact electric car prototype, “You guys aren’t going to make us build that car, are you?”)

Understandably, the leading companies in this mature industry are reluctant to aggressively pursue the very technologies that will render much of their physical and human capital obsolete. The automobile manufacturers of the future will need to work with an entirely new set of high-technology supplier companies; as they shift to composite materials, the absence of economies of scale will cause them to forgo mass production in favor of smaller-scale, decentralized manufacturing; and as vehicles become both more reliable and more specialized, they will need to overhaul their marketing and distribution systems. Because the $70 million or so in annual PNGV funding amounts to only 0.5 percent of the Big Three’s $15-billion annual R&D budget, it is unlikely to provide sufficient motivation for them to embrace these changes.
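The proportions in this paragraph follow directly from the figures cited in the article (roughly $293 million per year in PNGV-labeled federal spending, about $70 million of it aimed squarely at the fuel-economy goal, and a combined Big Three R&D budget of about $15 billion); a minimal sketch:

```python
# Figures as cited in the text (annual, approximate).
pngv_related_federal = 293e6   # all PNGV-labeled federal spending
pngv_targeted = 70e6           # portion aimed directly at the fuel-economy goal
big_three_rd_budget = 15e9     # combined Big Three annual R&D

print(f"Targeted share of PNGV-labeled spending: {pngv_targeted / pngv_related_federal:.0%}")
print(f"PNGV funds as share of Big Three R&D:    {pngv_targeted / big_three_rd_budget:.1%}")
# Roughly 24% and 0.5% respectively: too small a stake to change the
# Big Three's incentives, which is the article's point.
```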

A more effective strategy would be to provide government R&D funds for advanced technology directly to technology-supplier companies, with smaller amounts awarded to universities and independent research centers. In fact, this is the approach PNGV is beginning to pursue with its fuel cell program. Although the Department of Energy (DOE) initially awarded multiyear contracts for fuel cell research to each of the Big Three companies, it soon became apparent that this was an inefficient use of funds. Nearly all of the research in each of the three separate programs was carried out by subcontractors; meanwhile, the extra layer of management consumed a large share of the funds. As a result, DOE and the Big Three jointly agreed that when the current contracts expire in 1997, DOE will open the bidding to fuel cell developers. The Big Three will monitor the activities of the fuel cell developers but will neither serve as prime contractors nor receive any government funds. The fuel cell companies will then be able to sell to any or all of the Big Three or any other automaker. By funding the fuel cell companies directly, DOE hopes to spur competition, speed innovation, and improve efficiency as those companies achieve greater economies of scale. The fuel cell program demonstrates the kind of partnership that can efficiently accelerate technology development and should serve as a model for PNGV as a whole.

More productive partnerships

The fundamental flaw in PNGV is that it was designed to pursue long-term technologies in a near-term time frame. This has forced it to focus on technologies that are already close to commercialization. But the technologies that are closest to commercialization are least suited to government-industry partnerships, because companies do not want to share innovations that might be central to their future prospects. This near-term technology focus is especially problematic for partnerships involving huge industrial corporations, whose aggressive political agenda is driven by the interests of their shareholders. In cases where there are large market externalities, such as the costs and benefits of cleaner, more efficient technologies, shareholder interests probably do not match the public interest.

The fundamental flaw in PNGV is that it was designed to pursue long-term technologies in a near-term time frame.

If PNGV continues along its current path, it will likely direct funds toward neither the right technologies nor the right organizations. Major changes are needed if it is to foster the rapid commercialization of clean and efficient vehicle technologies. More government funding would certainly help. But equally important are fundamental changes in the design and organization of PNGV and how government uses and awards its funds. Here are four recommendations for making PNGV more effective.

Impose more stringent emissions requirements and less stringent performance requirements. Renew the program’s emphasis on cleaner and more promising long-term technologies by aiming for emissions levels more stringent than California’s current “ultra-low” standard and by encouraging engineers to design very efficient, clean, limited-range vehicles.

Remove the 1997 deadline but preserve the 2004 deadline. Engineers need more time to explore, test, and design the most promising technologies. If forced to choose in 1997, they will likely discard the riskier but more promising options. Relaxing the 1997 deadline should not preclude meeting the 2004 deadline.

An industry-government partnership will function most effectively only if the technologies being developed are far from commercialization.

Direct all PNGV funding to independent technology companies and research centers. Eliminating management and contracting oversight from the Big Three will leave suppliers with more funds and allow them to determine the best way to disseminate and commercialize new technologies, whether through joint ventures, licensing, or go-it-alone manufacturing. Government funds are not needed to elicit Big Three participation; they will surely be willing to monitor the research and provide vehicle-integration advice in order to benefit from early access to new technology. Foreign automakers with a significant domestic presence could also be involved in this process if they make the commitment to manufacture the technology in the United States.

Funding of independent research centers and universities would provide a benchmark that regulators and funders can use to evaluate the major automotive companies’ progress in adopting new technologies. In addition, university research can help to train tomorrow’s automotive industry workforce.

Eliminate all but the most advanced technologies from PNGV. An industry-government partnership will function most effectively only if the technologies being developed are far from commercialization. The federal government should create an independent expert panel to determine which technologies should be included in PNGV. Fuel cells, for example, should be included; incremental improvements in gasoline and diesel engines, or even in electric hybrid vehicles, should not. The panel can decide whether to include technologies such as lightweight materials, flywheels, ultracapacitors, and hybrid vehicles with nonconventional engines (such as gas turbines and Stirling engines).

It is with some reluctance that I criticize PNGV, for I am firmly convinced that advanced vehicle technologies can and will play a leading role in preserving the environment. Moreover, I believe that the country would benefit from considerably greater public support of advanced automotive R&D. But if PNGV cannot be reformed in accord with the kinds of changes suggested here, perhaps it should be allowed to die a peaceful death. On the other hand, if changes are made, then the argument for substantial increases in PNGV funding becomes more compelling.

The Dual-Use Dilemma

The Clinton administration began with high hopes for its plan to forge a stronger link between military and commercial technologies. Observers inside as well as outside the administration argued that the Technology Reinvestment Project (TRP), designed to propel the development of commercial technologies critical to the military, could spur innovation throughout industry, give U.S. firms a new edge in international competition, and help ease the post-Cold War conversion of the defense industrial base to commercial production. These high expectations paved the way for rapid congressional approval of the effort. At the same time, they saddled the program with the burden of meeting objectives that it was never designed to achieve.

Approved by Congress near the end of the Bush administration (which resisted its implementation), TRP was launched by the Clinton administration in early 1993 as an experiment to improve the outcome of dual-use technology programs. (It was replaced by a new program at the beginning of fiscal 1997.) The Pentagon had begun to pursue a dual-use investment strategy in the early 1980s, funding innovative R&D for technologies with both commercial and military applications. The strategy was based on the recognition that maintaining a separate defense industrial base is expensive and unwieldy and fails to capitalize on innovation in the commercial sector, where cutting-edge technologies most often originate. In addition, policymakers were concerned that if the United States let foreign firms take the lead in the commercial development of dual-use technologies, the Pentagon could ultimately become dependent on foreign suppliers for key military components.

The first round of dual-use programs, however, produced technologies that served only military needs. The emphasis in TRP was on developing technologies that could result in viable commercial products as well as military uses. Although its life was short, TRP’s innovative programmatic features-including government-industry cost sharing and the awarding of competitive grants to industry-led teams-have earned a positive assessment. It is too soon to judge whether it has contributed in a consistently positive way to the development of viable commercial technologies.

Moreover, in satisfying the expectations of some of its proponents, the program has frustrated the hopes of others. Although TRP was created as a military-sponsored R&D program, it came to be considered the centerpiece of both the administration’s technology policy and its defense conversion efforts, a burden that is simply too heavy for this program to bear. A review of TRP illustrates the political dynamics underlying the dual-use strategy and illuminates the need to disentangle the goals of technology policy and military procurement.

Great expectations

TRP was designed to escape the fate suffered by earlier dual-use initiatives, such as the Very High Speed Integrated Circuits (VHSIC) project and the Strategic Computing Program (SCP). These programs were moderately successful in developing military applications for commercial technologies that were already under development, but they did so by gravitating toward dedicated military production, even prompting manufacturers to build entirely separate production lines for the military versions of the technology. The result was to create separate trajectories for the development of commercial and military technologies.

To avoid creating new defense-dedicated industrial niches, TRP’s designers set out to fund projects that would advance the general technological state of the art. Where a technology development trajectory had already been established commercially and confirmed by a pattern of private-sector investment, TRP projects were supposed to accept the market-driven trajectory, not reshape the line of technological development to achieve a particular military objective.

TRP sought to harness the power of the commercial marketplace for defense needs without distorting market forces. But it was designed to serve the military, not the commercial sector. Although the program was intended to accelerate the development of commercial technologies, its purpose in doing so was to make them available to the military more quickly and cheaply, not necessarily to improve the competitive position or performance of commercial firms.

TRP’s management structure reflected its priorities. Competitions for TRP grants were conducted by the Defense Technology Coordinating Council, made up of nonmilitary people from the National Science Foundation, the Departments of Energy and Transportation, the National Aeronautics and Space Administration (NASA), and the Commerce Department’s National Institute of Standards and Technology. The diversity of the council’s membership was meant to enhance the program’s technical expertise and to ensure that TRP managers were cognizant of recent technological developments in commercial markets and civilian federal agencies. Management and control of the program, however, was assigned to the Pentagon’s Advanced Research Projects Agency (ARPA). Putting TRP under ARPA created a fundamental political imperative: TRP projects had to be justified primarily on the basis of their anticipated value for national defense. This mission was much narrower than the goals of many of TRP’s early backers.

TRP was plagued from the start by the demands of disparate political factions with sometimes contradictory goals. The coalition that mobilized to support the program combined elements of U.S. business bruised by foreign competition, members of the defense research establishment concerned about growing dependence on foreign sources for military technology, and neoliberal Democrats who believed that traditional Democratic economic policies wrongly emphasized consumption and distribution over investment and growth. In addition, the labor and peace movements focused on TRP as a way to stem the loss of domestic manufacturing jobs and win a share of the evanescent “peace dividend” that the United States was supposed to reap after the end of the Cold War. Finally, various agency officials and White House staff who were charged with designing, publicizing, and implementing the program also wanted to demonstrate that a federal dual-use R&D program could actually work.

To cement support for the program, the Clinton administration packaged it as the centerpiece of the national defense conversion effort. The young administration was still reeling from the unexpected February 1993 defeat of President Clinton’s modest economic stimulus package. TRP was intended to be part of the administration’s strategy for addressing long-term structural problems in the economy, whereas most of the administration’s other conversion initiatives emphasized short-term economic adjustment assistance of the type included in the failed stimulus package.

In the chaos of the Clinton administration’s early months, however, the message was seriously muddled. Most Americans who were aware of TRP were convinced that it was primarily aimed at solving the short-term adjustment problems created by the end of the Cold War. Although officials directly responsible for the design and implementation of TRP never saw it as a defense conversion or jobs program, administration officials reinforced this impression in speeches and public documents describing the program.

In a White House press release dated April 12, 1993, President Clinton called TRP “a key component of my conversion plan . . . [that] will play a vital role in helping defense companies adjust and compete.” He added: “I’ve given it another name-Operation Restore Jobs-to signify its ultimate mission, to expand employment opportunities and enhance demonstrably our nation’s competitiveness.”

Ironically, the broad support and high hopes that TRP generated became the source of its greatest political vulnerability. Despite the program’s genuine accomplishments, various constituencies became disillusioned with its failure to live up to the administration’s rhetoric. Moreover, the very features that have protected the program against replicating past failures have prevented it from attracting stronger political support.

Learning from experience

Let’s look briefly at how TRP was supposed to work.

Closing the pork barrel. To ensure that TRP awards would provide government support without dampening market signals, TRP’s designers built in several valuable programmatic features:

The success of TRP’s projects will depend on the extent to which DOD aggressively implements acquisition reform.
  • TRP invited industry input in designing its research agenda. This ensured that TRP awards would target militarily relevant technologies that were also likely to be commercially viable. Investments were made in an array of fields to ensure that the champions of any particular technology or industry did not exercise undue influence.
  • Grant applicants were required to compete in teams combining defense and commercial companies or universities and national labs. The competitions were judged according to technical and economic criteria by a panel of government experts or other independent peer reviewers. Applicants were also required to provide evidence that the technology could be commercially sustained within five years without further federal funding.
  • To ensure that program participants did not become lazy due to an overreliance on the new government subsidy, TRP required grantees to risk their own money as well as the taxpayers’: They had to cover at least 50 percent of the project’s costs. (This commitment was also designed to encourage them to abandon technological approaches if it became clear that they were not working.) Over time, each TRP dollar was matched by an average of $1.33 of nonfederal funds; a brief arithmetic sketch after this list shows what that match ratio implies for cost shares.
  • To prevent the creation of pork-barrel projects that the government could not shed, TRP’s managers made only time-limited grants. Technical milestones and other benchmarks were established up front.
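
To make the cost-sharing arithmetic concrete, the minimal sketch below converts the reported match ratio into federal and nonfederal shares of total project cost. Only the $1.33-per-federal-dollar figure and the 50 percent minimum come from the program figures cited above; the example project budget is hypothetical.

```python
# Minimal sketch: relate TRP's reported match ratio to cost shares.
# The $1.33-per-federal-dollar figure comes from the text above;
# the example project budget below is hypothetical.

def cost_shares(match_ratio: float) -> tuple[float, float]:
    """Return (federal_share, nonfederal_share) of total project cost
    when each federal dollar is matched by `match_ratio` nonfederal dollars."""
    total = 1.0 + match_ratio
    return 1.0 / total, match_ratio / total

fed, nonfed = cost_shares(1.33)
print(f"Federal share:    {fed:.1%}")     # about 42.9%
print(f"Nonfederal share: {nonfed:.1%}")  # about 57.1%, above the 50% minimum

# Applied to a hypothetical $10 million project:
total_cost = 10_000_000
print(f"Federal: ${total_cost * fed:,.0f}, Nonfederal: ${total_cost * nonfed:,.0f}")
```

On these figures, nonfederal partners were carrying roughly 57 percent of total project costs, comfortably above the 50 percent floor.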

Promoting spin-offs. TRP’s managers looked for technology development projects in which military performance requirements would clearly complement commercial market requirements. In other words, the same technology could be used in different products that would meet military and civilian needs.

For example, TRP funded a project to develop a turbo alternator for electric hybrid vehicles. The turbo alternator would serve a military need for tank engines that would be harder to detect with infrared sensors, because hybrid engines emit less heat. It would simultaneously serve a clear commercial need for small, energy-efficient, low-emission engines that can be used in hybrid (diesel/electric) city buses.

Promoting spin-ons. The purpose of TRP spin-on development projects was to provide the military with leading-edge technology that was expected to become affordable over time as the evolution of the technology led to the creation of a self-sustaining commercial industry. By adapting a new technology for military use, the project could provide an important market, contributing to the industry’s growth and the rapid advancement of the technology. The trick was to do this without disrupting the market signals that would guide the trajectory of the technology’s commercial development. This was thought to be possible only if the military-sponsored project generated equipment and tools that would in turn expand the commercial market.

For example, one TRP project was aimed at enabling military surgeons to learn surgical procedures on a computer simulator. The project would spin on to military use technologies already developed for commercial purposes. The goal was to ensure that commercial producers developed these systems further with military needs in mind; for instance, treating injuries such as shrapnel wounds that are found frequently on the battlefield but rarely in civilian life. The software and hardware packages developed for these specific military purposes could then be adapted for use in civilian medical training, disaster response, emergency room medicine, and commercial telemedicine. Indeed, the project was expected to facilitate the commercial development of many minimally invasive surgical techniques for which there is now a growing demand among civilian medical professionals.

Civilian-military integration. TRP’s most important long-term goal was to promote commercial-military integration-the creation of a single unified industrial base for commercial and military technology development. Most TRP officials believed that this would be a more viable strategy than direct defense conversion. They argued that even if traditional defense contractors converted to new products, they would be unlikely to compete successfully in commercial markets, given their high-overhead operations and cost-plus-contract culture. Instead, they stressed the need to help specialized defense companies find commercial partners to teach them what they need to learn about marketing and high-quality low-cost manufacturing and thus overcome the barriers that separate the defense industrial base from the rest of the economy.

An early assessment

After less than four years, TRP made significant progress toward meeting its major goals. It fostered industry-led dual-use R&D in a number of fields, and it encouraged defense and commercial firms to work together on the commercial development of several militarily relevant technologies. The features that TRP incorporated to avoid government failure proved to be even more successful. It is too early to judge the effect of time-limited grants, but other traits-government-industry cost sharing, competitive selection, and the requirement that applicants be made up of industry-led teams-combined with the vigilance of TRP’s directors to make the program nearly free of political pork.

To judge whether TRP will actually prove to have accelerated the development of commercial technology, one will have to examine the results of the program project by project after several more years. However, it appears that by playing midwife to research consortia, TRP has likely facilitated more rapid technology transfer and innovation. With TRP funds, for example, the defense contractor Aerojet partnered with commercial companies such as General Motors, Admiral, and Boeing that were potential users of a class of new materials called aerogels, which have superb heat-insulating properties.

Many TRP projects also boosted or accelerated the efforts of organizations that were already working together. For example, TRP made awards to California’s CALSTART electric vehicle consortium, originally funded by the state government as well as corporate funders, including Lockheed, Allied Signal, and Hughes. In addition, several teams that were created specifically to apply for TRP grants but did not win any TRP money in the first round of competition recognized the potential leverage that they had acquired as a group, decided to stick together, and competed successfully for TRP funds in the second and third rounds.

By encouraging this kind of teamwork, TRP also contributed to the cause of defense conversion even though conversion was never a specific objective of the program. It sponsored technology development projects that sought new civilian uses for military technology, as well as technology deployment projects that attempted to modernize engineering education and promote the diffusion of best-practice manufacturing throughout the economy. One example was the somewhat mystically named “realization consortium,” an effort by several leading U.S. engineering schools (including MIT, Cornell, and Tuskegee) to revolutionize undergraduate engineering education by forming a virtual design studio of the future over the Internet. Finally, TRP helped to promote economic adjustment in defense-dependent regions by supporting teams of dual-use manufacturers who were inevitably and disproportionately clustered in heavily defense-dependent areas of the country.

Unresolved issues

Despite its successes, TRP clearly suffered from some of the dilemmas that had plagued other recent dual-use initiatives. Because the program received all of its money and most of its technical expertise from the Pentagon, the panels that awarded TRP grants were compelled to justify the projects first and foremost in terms of their military value. This raised a number of key issues that, if unresolved, will continue to keep dual-use R&D programs from reaching their full potential.

Defending the program. It was always easiest to justify the value of TRP’s investments to the Department of Defense (DOD) on the grounds that they met the “but for” test-that is, but for federal funding, industry would not have undertaken these investments on its own. But this rationale put pressure on the program to invest in projects that emphasized military-specific attributes of emerging technologies that commercial manufacturers would not otherwise have developed-the same trap that VHSIC and SCP fell into.

Moreover, justifying broader commercial investments on this basis left the program vulnerable to the strongest argument of the dual-use strategy’s detractors: that the R&D in question would be conducted by private industry regardless of TRP’s involvement. Thus, critics have argued, TRP was merely substituting public funds for private; it was not increasing the total amount of militarily useful research done by commercial firms.

It is almost impossible to defend against this argument, for one cannot prove that a company would not have made the same investment in the absence of the TRP subsidy. In fact, many recipients of TRP awards volunteer that the TRP grant simply accelerated an investment they were planning to make anyway.

A stronger justification, then, may be that the government’s investment speeds the development of new technology, enabling U.S. systems to incorporate it sooner and at lower cost. By establishing an early, dominant position in markets for an advanced technology, this argument goes, U.S. companies can lock in control of a long-term stream of follow-on product and process innovations, making market entry much harder for companies in other countries. Thus a temporary market advantage can turn into a more enduring technological and perhaps economic and military advantage.

In some cases, however, it might actually be to the economic advantage of U.S. manufacturers to be second rather than first-to reap the windfall of investments that foreign governments and companies have made and to start production further along the technological learning curve. According to this more conventional line of economic reasoning, an economy that is prepared to absorb and capitalize on innovation, whatever the source, will be better able to establish a dominant position internationally. This argument implies that DOD should have encouraged TRP to make more grants for technology deployment-upgrading contractors’ manufacturing capabilities so that they could adapt innovations more easily-and fewer for technology development.

Either justification, whether stressing the need to accelerate innovation or to improve our ability to absorb it, would have provided a politically more defensible rationale for the program.

Supply-side or demand-side? TRP took a supply-side approach to the spin-off of new technologies: It facilitated cooperation among companies in order to help them take advantage of technological and economic opportunities. This may have been sufficient in cases where the commercial market for a new application was already well established. In the past, however, demand-side strategies have achieved the greatest success in speeding the commercialization of military technologies. The classic example is the spin-off of integrated circuits, a process that DOD and NASA launched by agreeing to provide a guaranteed market for the new technology at premium prices.

The political justification for this type of strategy is that the social benefit (in this case, a national security benefit) justifies the cost. In fact, many TRP spin-off projects promote social goals that the Clinton administration views favorably: energy efficiency, job retention, improved public health, environmental remediation, and pollution prevention. A demand-side strategy could still give a real boost to many TRP technologies. But politically it is much easier to justify demand-side intervention on the grounds of national defense than to mobilize support for a civilian mission. Thus, for TRP-backed technologies, there has been no publicly subsidized civilian demand analogous to that provided by DOD, which is committed to purchase TRP-developed technologies that are expected to strengthen the nation’s defenses. DOD will surely buy military versions of hybrid electric vehicles, but there is no plan to require or even encourage the procurement of civilian versions by federal, state, or local government agencies.

The Clinton administration has never attempted to construct a full-blown demand-side strategy in which nonmilitary government agencies-say, the Department of Transportation or the Department of Energy-guarantee procurement of emerging dual-use technologies at premium prices in order to promote a clearly articulated social goal. To be sure, any attempt to do this would run counter to the administration’s procurement-reform efforts, which are aimed at removing government-mandated restrictions and inducements and replacing them with market signals. But this may be a case in which policies that harness the power of government to create a healthy market should take precedence over policies aimed at unleashing the power of the marketplace.

Pressure to protect investments. Finally, it is not clear whether TRP and its successors will be able to avoid becoming trapped by their own technological choices as a result of political pressures from the defense establishment. Such pressures have already threatened to channel TRP projects into emphasizing military-specific rather than dual-use technologies.

Learning from past experience, TRP’s managers consciously attempted to fund multiple technological approaches to meeting critical military needs. For example, TRP funded two approaches for developing rechargeable lithium ion batteries (which would be widely deployed on the battlefield as well as in commercial portable phones and laptop computers); three approaches for developing uncooled infrared sensors (for military and police and firefighter night vision systems); and three approaches for developing low-cost manufacturing processes for advanced display technologies (for a host of defense and civilian products). The rationale for this strategy was that making a commitment to a single approach in the first round of TRP competitions might give it an unfair advantage in subsequent rounds. After all, TRP’s managers were investing taxpayer dollars in dynamic areas of technology that could follow any one of a number of paths of evolution; they understood that it was critical that they have the flexibility to transfer money into the paths that become more promising as they evolve.

Because the money comes from the defense budget, however, the awards have to be justified to Congress in terms of their supposedly unique value for national defense. As a consequence, DOD planners are likely to find it politically difficult to abandon the paths that they establish in the initial funding rounds. In fact, many DARPA managers were wary of becoming involved in the high-profile TRP because they feared–correctly in my opinion–that it would subject them to unwelcome congressional scrutiny and threaten their flexibility and independence.

The Clinton administration’s single most controversial effort to develop dual-use technology-the National Flat Panel Display Initiative-illustrates this problem. Although it was not part of TRP, some of its development projects were being run through TRP, and its rationale was fundamentally the same as TRP’s. The goal of the flat-panel initiative is to build a globally competitive U.S. industry in a sector already dominated by Japanese producers.

The initiative makes a portion of DOD’s R&D investment in future display technologies available to U.S. companies that commit to producing current-generation products domestically in high volumes and to meeting DOD’s specialized display requirements. However, because the effort initially targets military-specific markets, the TRP-backed flat panel projects emphasize attributes that have no obvious commercial value. The displays must be able to operate in desert and Arctic temperatures; be readable in sunlight as well as in night combat; have extremely high resolution; integrate specialized information-processing capabilities; and come in nonstandard sizes. In addition, each TRP-backed flat panel project is specifically designed so that the demonstration phase of new displays is done using military hardware produced for different branches of the military, in order to ensure broad military support for the technology.

The danger of this approach is that U.S. suppliers could end up with flat panel displays that are of interest only to military users. If this happens, civilian firms will continue to buy their displays from foreign producers, and the initiative will have created exactly what it was designed to avoid: a specialized, government-subsidized arsenal for military flat panel displays.

Designers of the initiative contend that it should be viewed as an insurance policy that can be canceled before the premiums get too high. But if the three or four production plants that they envision can provide the military with gadgets that cannot be obtained anywhere else, the Pentagon’s temptation to protect the investment will be strong, particularly because the investments have been justified to Congress as uniquely necessary to promote the nation’s defenses.

The political pressures to protect investments justified on the basis of military need are relentless, and it is not clear that TRP and its successors will be able to resist them in every case. So far, the bulk of TRP’s projects appear to be adhering to their dual-use objective. In the long run, however, all the Pentagon’s dual-use projects would be better able to resist the pressure to emphasize military applications if they could make a strong public policy case for the nondefense applications of the technologies they sponsor.

In addition, the success of TRP’s projects will depend ultimately on the extent to which DOD aggressively implements federal acquisition reform. Spurred by Secretary William Perry, the Department of Defense recently adopted new internal policies intended to reduce the use of military specifications in procurement. In addition, the Clinton administration worked successfully with Congress to pass the far-reaching Federal Acquisition Streamlining Act of 1994. If the Pentagon bureaucracy nevertheless continues to prevent military customers from buying commercially created technologies off the shelf, commercial and military applications of dual-use technology will continue to follow irreparably divergent paths of development.

Policy, politics, and priorities

Pressure on TRP’s program managers to place even more emphasis on military needs has increased significantly since the congressional elections of 1994. Even before the elections, Congress mandated that fiscal 1995 TRP funds could not be obligated until the secretary of defense ensured that representatives of the military services were full members of the Defense Technology Coordinating Council. The new Republican Congress added further restrictions aimed at ensuring military relevance: The undersecretary of defense for acquisition and technology must now certify to Congress that representatives of the military services constitute a majority of the membership on TRP project-selection panels. And before any funds can be obligated to a new project, Congress must receive a report describing its military objective.

In response, ARPA has made a number of changes. TRP was discontinued at the end of fiscal 1996 and replaced with a similar dual-use technology development program more clearly designed to fulfill military needs. The new program will not solicit technology deployment projects, regional alliance projects, or manufacturing education and training projects, because these efforts are considered less relevant to defense. Technology development efforts driven primarily by competitiveness concerns will be left to the Commerce Department’s Advanced Technology Program. ARPA officials have been paired with representatives of the military services to jointly manage previously funded TRP projects. In the end, TRP projects-even if they succeed in generating commercial as well as military applications-will be judged almost entirely on their value for national defense.

The Clinton administration never successfully cemented a political coalition in support of TRP that could counter these pressures from the defense establishment. Because TRP focused mainly on technology development, it was a poor vehicle for quickly replacing lost defense jobs, a fact that all but eliminated support for it among labor and peace groups. At the same time, business support for the program remained weak. The requirement that companies compete for awards in teams limited its appeal among defense contractors, who did not necessarily welcome the new competition from commercially oriented firms. Cost-sharing requirements, meanwhile, limited the size of the program’s cash grants to individual companies. The awards were also relatively small-they ranged from $130,000 to $80 million and averaged about $5.8 million-and they were awarded competitively and remain subject to regular, rigorous review; as a result, many companies find the much larger and easier-to-obtain tax incentives that the Republicans now offer as an alternative for promoting technology development more attractive than TRP’s matching grants. In addition, congressionally mandated restrictions on foreign participation in TRP continue to buck the market trend toward international strategic alliances for high-risk technology development.

Finally, the program’s success in making sure that its projects were market-driven has reduced opportunities for congressional earmarking. This in turn has limited the ability of TRP’s proponents to cultivate stable political support for continuing the program. Compare the political fortunes of TRP with those of two other Clinton-era technology initiatives: the National Flat Panel Display Initiative, which has survived congressional scrutiny downsized but intact because of the efforts of a specific, and therefore readily organized, industrial constituency; and the Commerce Department’s Manufacturing Extension Partnership, which has built broad support by placing more than 40 manufacturing extension centers in more than 30 states and which the Republican Congress has actually chosen to expand.

TRP has retained a good deal of support among “competitiveness” advocates, but this issue has lost political currency since the program was launched. In the mid-1990s, as changing economic conditions at home and abroad conspired to strengthen the U.S. position in global markets, the focus of the average American’s economic anxieties shifted away from declining international competitiveness and toward the continued stagnation and unequal distribution of U.S. wages and family incomes. Advocates of competitiveness policies remain convinced that efforts to help companies raise their long-term productivity are essential to addressing both sets of concerns. The Clinton administration’s technology policies, however, have often appeared to reward only those groups that are already benefiting from the new high-tech economy, and so have failed to gain support from a larger potential constituency.

To the extent that Americans are wary of spending taxpayers’ money to boost the profits of private firms, they are likely to remain ambivalent about government’s proper role in technology development. Programs such as TRP will attract support from a broader coalition of political forces only if they are embedded in a broader set of public purposes, such as widely perceived threats to health, safety, or the environment, and linked to a set of more rapidly demonstrable results. In the absence of broader public support for federal investments in technology and science, the utility of programs such as TRP for accelerating the commercialization of innovations, whether for defense or “competitiveness” objectives, is limited.

In the brief period since its inception, TRP proved to be a useful and innovative tool for advancing the development of commercial technologies critical to defense, encouraging the more rapid adoption of commercial technologies by the military, and promoting the integration of defense and commercial production within firms. But for believers who thought that DOD, or at least ARPA, would altruistically promote competitiveness or the technical competence of U.S. industry, the lesson of the past three years is clear. Commercial technology spin-offs or the creation of viable commercial industries can never be anything more than byproducts of a defense-sponsored technology program, because its ultimate goals are not economic or scientific but political. The capacity of a program such as TRP to promote defense conversion, or commercial-military integration, or national industrial performance will remain hostage to the program’s overriding military goals.

U.S. Seaports: At the Crossroads of the Global Economy

U.S. seaports are showing signs of neglect, a disturbing prospect as the nation competes in an increasingly dynamic global economy. Many aspects of port infrastructure and management are relics from mid-century. And while ports throughout Europe and Asia are becoming more modern and productive, many U.S. ports will soon become obsolete in the absence of significant upgrading and investment. Since the movement of freight by sea is expected to triple by the year 2020, this is indeed a troubling scenario.

Many U.S. ports are cramped for space, with narrow navigation channels, shallow harbors, and congested truck and rail access routes. For example, the harbor at Newark, New Jersey, is so shallow outside the dredged channel and so prone to siltation that very large ships must unload part of their cargo in Nova Scotia or elsewhere, thereby raising shipping costs and putting the port at a competitive disadvantage. Maintenance and expansion of navigation channels often is impeded by delays in the granting of permits, a complex web of environmental regulations, and disagreements about how to dispose of dredged material. As a case in point, it took the Port of Oakland 20 years to begin the first phase of a channel-deepening project.

Meanwhile, the size of cargo ships has increased considerably. Large oil tankers long ago outstripped the capacity of all ports on the East Coast and along the Gulf of Mexico, and many shipping terminals are finding it increasingly difficult to handle large container ships in competitive time. Port development is slow, frustrated by high costs and budget cutbacks at all levels of government, and the waterways-management infrastructure generally lags available technology.

Contrast these conditions with those in Rotterdam, the Netherlands, one of the most sophisticated seaports in the world. The channel is 50 feet deep, large enough for megacarriers that have yet to be built. An elaborate waterways-management system tracks vessels and provides a running commentary on traffic, reducing disruptions significantly. Cargo handling is highly automated–state-of-the-art pierside cranes load or unload 30 containers per hour, robot stacking cranes dot the container yard, and truckers move containers in and out of the terminal quickly. Although the U.S. average of 28.5 containers per hour may seem roughly equivalent, fractional differences in productivity can have major economic consequences over time. Or consider the new container facility planned for Japan’s Port of Yokohama. To be built on 535 acres reclaimed from Tokyo Bay, the port will include vessel berths that are more than 49 feet deep, a modern container terminal allotted 138 acres, and ample storage and distribution areas and access roads.
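
To see why even fractional productivity differences matter, the minimal sketch below estimates the extra berth time implied by the slower handling rate. Only the per-crane rates of 30 and 28.5 containers per hour come from the comparison above; the vessel size, crane count, and call frequency are hypothetical assumptions chosen for illustration.

```python
# Rough illustration (hypothetical vessel and schedule assumptions) of how a
# small per-crane productivity gap compounds into extra berth time.
# Per-crane rates (30 vs. 28.5 containers per hour) are taken from the text.

ROTTERDAM_RATE = 30.0   # containers per crane-hour (from the text)
US_AVG_RATE = 28.5      # containers per crane-hour (from the text)

moves_per_call = 3000   # hypothetical container moves per ship call
cranes_per_ship = 4     # hypothetical number of cranes working the ship
calls_per_year = 300    # hypothetical annual ship calls at a busy terminal

def berth_hours(rate: float) -> float:
    """Hours a ship occupies the berth, given a per-crane handling rate."""
    return moves_per_call / (rate * cranes_per_ship)

extra_per_call = berth_hours(US_AVG_RATE) - berth_hours(ROTTERDAM_RATE)
print(f"Extra berth time per call: {extra_per_call:.2f} hours")
print(f"Extra berth time per year: {extra_per_call * calls_per_year:.0f} hours")
```

Under these illustrative assumptions, the 5 percent slower rate adds roughly 1.3 hours of berth time per ship call, or nearly 400 berth-hours per year at a busy terminal, time that translates into higher vessel operating costs and reduced berth capacity.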

Although specific projections of economic losses have not been made, failure to modernize U.S. ports is likely to result in substantial losses of American jobs and in increases in the price of goods transported. Some industry officials predict dire consequences, such as the development of huge ports in neighboring nations that draw shipping away from the United States and impose significant time and cost penalties on U.S. imports and exports. The impact of losing such critical transportation routes might be imagined by visualizing American life without the interstate highway system.

The importance of ports

Foreign trade helps drive economic growth. This is particularly true in the United States, which leads all nations in value of imports and exports. The value of U.S. imports and exports of commodities was almost $1.2 trillion in 1994, and exports constitute a growing proportion of the gross domestic product (GDP). Both foreign and domestic commerce rely on ports and waterways, which handle almost all U.S. trade by weight and about half by value. Waterborne transportation of all commodities totals over 2 billion metric tons, about half domestic trade and half international.

The U.S. waterways transportation system includes about 145 ports that each handle more than 1 million metric tons of cargo annually. The top ten ports handle a total of more than 900 million metric tons annually. The burden on ports is expected to mount in the near future as foreign trade soars. Beyond supporting commerce, major ports and their smaller counterparts provide many tangible benefits to their communities. The port industry and port users generate more than 15 million jobs and add some $780 billion to the GDP annually. Port-generated economic activities include shipping and related enterprises, trade services, inland transportation, and cargo and vessel services.

These benefits, although large, may be overlooked by those who usually pay for port improvements. Much of the economic value generated is national in scope, whereas ports traditionally have depended on state and local governments for financial and planning support. This mismatch has become a growing problem amid budget shortfalls confronting state and local officials, some of whom are cutting port subsidies and, at the extreme, even asking for a share of port revenues.

Other benefits provided by ports are intangible. Because of their waterfront locations, ports sustain biological diversity, create economic vitality, and enhance the general quality of life. Newark Bay, for example, is home to wildlife, a fishery nursery, and numerous water-dependent industries. Other ports, such as Baltimore and Boston, have provided their cities with significant revenues from waterfront developments that attract tourists.

Thus, stakeholders in the future of U.S. ports are diverse. A number of federal agencies have relevant responsibilities, including such activities as ensuring maritime safety and enforcing environmental regulations. Commercial stakeholders are legion, and include waterfront industries, manufacturers of all types, and commodities brokers. State and local governments traditionally have depended on ports to stimulate regional development. The public enjoys boating and various waterfront activities and is interested in protecting the coastal environment.

This diversity in port activities and stakeholders means that decisions about a port and its future must be made in the context of competing uses. Allot waterfront property to condominiums, and industrial sites will be lost. Allow industrial wastes to be discharged into the water, and the port’s capability to maintain dredged channels is limited by the difficulty of disposing of contaminated sediments. Site a marina in the industrial port, and the waterway may become overcrowded with recreational boats, precipitating congestion and safety problems. These considerations complicate an increasingly urgent need for strategic port planning.

The need to modernize

Two primary factors demand attention to ports at this time. The first is the rapidly changing intermodal freight transportation market, which moves increasing amounts of cargo on ever-more-demanding schedules. This market has fueled a trend toward larger and faster ships that make precisely timed and efficient port connections in order to achieve maximum cost-effectiveness and competitiveness. The survival of a general cargo port therefore depends on its capability to receive and transfer goods as quickly as possible. The second factor demanding attention is the increasing number and complexity of environmental regulations that pertain to ports. Although enacted for important reasons, such regulations are nonetheless inhibiting maintenance and growth of ports at a time when modernization is needed. These factors are examined in more detail below:

Intermodal transportation. Freight often is transferred among sea, land, and air carriers. The expansion of this “intermodalism” in the 1980s can be traced to the emergence of railcars double-stacked with containers, which carry the greatest number of revenue loads for a given train length and therefore save fuel and labor costs. These containers are often moved between rail yards and the waterfront by truck. The challenge for marine terminals is to handle these heavy loads quickly, allowing both the seagoing and land-based modes of transportation to maintain their tight schedules. This means that if a ship, train, or truck is running even a few hours late, the connection at the marine terminal may be missed.

Achieving high throughput of freight is no simple matter. Unbelievable as it may seem, the queuing and unloading of container ships carrying goods from Asia through Puget Sound is influenced by rush-hour train schedules in Chicago. (Double-stack trains cannot pass through that city at rush hour, when commuter trains have priority.) Road and rail access to ports is a major problem nationally. A 1991 survey found that half of public ports, and nearly two-thirds of container ports, faced growing traffic congestion on local truck routes. For about one-third of container ports, bridges and tunnels lacked sufficient clearance for double-stack trains. And the problems can cut both ways: Almost half of ports reported that feeder rail lines crossed local streets, meaning that the long trains prized for efficiency can frequently tie up traffic.

Throughput can be enhanced by improving road and rail connections to ports and installing high-capacity container cranes and various types of automated cargo handling systems. Such improvements have been implemented or are planned at many U.S. ports, particularly on the West Coast where navigation channels generally are deeper than those in other areas. The Port of Tacoma, Washington, a leading container port, already has on-dock train depots, and similar facilities are under development in Los Angeles. Private investment is contributing significantly to modernization efforts. Major shipping companies, for example, can be counted on to install the latest automated cranes and electronic cargo-tracking systems at terminals they own or lease. One private terminal in New Orleans can move 37 containers an hour. Indeed, private investment in shoreside terminals is estimated to be twice the amount of public investment.

There is also a need to deepen and widen channels to accommodate the new generations of larger cargo ships. Ships that can carry 6,000 container units are already plying the seas, and 8,000-unit ships are planned. The draft of these ships is about 13 to 14 meters (about 43 to 46 feet), and their maximum beam is about 40 meters. Water depths in most Eastern and Southern ports currently prohibit passage by ships that draw about 40 feet. Although this limitation can be overcome by unloading part of the cargo offshore, operating vessels at reduced speeds, or entering port only at high tide, these are partial and short-term solutions at best.

Similarly, ultrafast moderate-size cargo ships are being designed, creating a need for new docking facilities. Ships that can maintain 40 knots even in heavy seas will cut transit times across the Atlantic Ocean from eight days to four, and by connecting through special terminals, they will cut door-to-door cargo transit times from several weeks to one week. Such high-value transportation options aid business inventory control and customer service and will create market advantages in the automotive, chemical, and other industries.

Yet another factor that influences a port’s productivity is its information infrastructure. The private sector is implementing many of its own cargo-tracking and other information technologies, and companies are working together to develop communal systems to automate customs, process commercial transactions, and improve document collection. The federal government, however, is finding it more difficult to carry out its traditional responsibility for managing traffic on waterways. Shipping efficiency can be enhanced through the use of technologies that track vessels and manage traffic flow and by improving systems for collecting and disseminating real-time data on tides, currents, and other environmental conditions. Indeed, ample technology is available that can help avert accidents and optimize traffic flow, but capital and operating funds have proved scarce. European ports have invested heavily in these shore-based surveillance and communications systems, but U.S. investments have lagged. For example, there is a pressing need to resurvey an estimated 43,000 square nautical miles of harbor areas outside navigation channels in order to provide accurate hydrographic data for updated charts. However, the National Oceanic and Atmospheric Administration (NOAA) lacks the funds to tackle this job, and funds are limited for port-specific installation and operation of a NOAA system to provide environmental data in a timely manner.

In summary, efficient intermodal transfer capabilities are seen as vital to the continuing success–perhaps even the existence–of U.S. ports. And competition for this business is intense, with shippers often choosing their routes based on a port’s intermodal transfer capabilities. As some ports modernize, they attract more business. One result has been concentration of cargo at the most competitive facilities, known as “load center” ports, and an uncertain future for lesser rivals. Of course, the decline of weaker ports is inevitable and not necessarily bad for the nation as a whole (even though there may be severe repercussions on regional economies), because a reduced number of highly competitive ports is preferable to retaining many inefficient ones. However, strategic planning is needed to ensure that the right capabilities are developed and maintained at appropriate locations. This can enable some ports to thrive in spite of inherent limitations.

The Port of Baltimore, for example, has taken a realistic and workable approach to strategic planning. Port officials noted that the port’s relatively shallow approaches may put it at a competitive disadvantage in the container market, given current trends in the industry and the dominance of its nearest competitors, New York and Norfolk, Va. Therefore, in addition to making efforts to sustain container trade, port officials plan to focus on building existing niche markets in noncontainerized goods, including “roll on/roll off” commodities, such as cars. The port’s goals for the next three to five years include becoming the largest roll on/roll off and automobile port on the Eastern seaboard.

Environmental regulation. The recent increase in environmental regulations, from all levels of government, stems from the recognized and growing need for coastal protection. At least ten major federal environmental laws as well as myriad amendments and other requirements affect the port industry. To comply, ports must expend considerable staff time and resources.

One major issue is dredging. Most U.S. ports require periodic and sometimes constant dredging to maintain shipping channels at the authorized depths. All the materials dredged, about 280 million cubic yards annually, must be disposed of or reused in some way. Clean material may be deposited in the ocean or put to beneficial uses, such as for the creation of offshore islands. But roughly 5 to 10 percent of dredged material is considered contaminated and requires special handling, such as permanent containment or costly treatment. It can take years to resolve all dredging and disposal issues and obtain the requisite permits, with the result that channel maintenance and deepening are seldom timely or cost-effective. Dredging permits for the Port of Newark, for example, were delayed for three years amid controversy over disposal of sediments that contained traces of dioxin, a chemical that has been linked to cancer and other health problems. Improving processes for making decisions, undertaking consensus building among stakeholders, and implementing technological advances will help resolve some of these issues. But in the meantime, steps must be taken to upgrade ports to a level that will sustain U.S. economic growth in the coming years.

The need for ports to deal with cargo spills and other shipping-related pollution creates additional environmental concerns. These challenges can be considerable. As but one example, the petrochemical industry relies heavily on waterborne trade, and the industry has expanded greatly in recent decades. The United States now imports more than half of the oil it consumes, and U.S. ports receive by water about 1.4 million metric tons of crude oil and petroleum products each day (some from Alaska, but most from foreign sources). Accompanying this growth in the petroleum trade has been continuous waterfront development; for example, the largest petrochemical complex in the world is now located along the Gulf of Mexico. With this expansion have come increases in pollution, sometimes in dramatic fashion, as with massive oil spills, but more often in subtler yet no less threatening ways.

Faced with these and numerous other environmental concerns, many ports are finding it increasingly expensive and difficult to dredge channels, manage wastes, respond to spills of oil and hazardous substances, control air emissions, and comply with wetlands and endangered-species legislation. Unfortunately, these growing demands come at a time when port resources are dwindling due to cutbacks in subsidies from state and local governments. Many smaller ports lack the resources and technical expertise to address environmental issues adequately, further evidence of the need for port-specific strategic planning as well as expanded attention to these problems on the national scale.

Toward a coordinated plan

Modernizing U.S. ports will require a coordinated effort on many fronts. The first step is to ensure that all parties recognize the scope of the problems. This can be accomplished by instituting a national dialogue on the future of the nation’s ports. While the decentralized nature of U.S. port management virtually precludes conducting a comprehensive upgrade program on a nationwide scale, this national dialogue can help all parties learn together and perhaps identify some common goals and approaches. The timing for such a dialogue may be appropriate, given the spirited political attention being focused on the nation’s economy and international competitiveness, issues that are intertwined with the status of U.S. ports.

Once the dialogue is under way, the real challenge will be to begin implementing solutions. Two possible approaches, conceived as a package, are identified here. These proposals are described in concept only, recognizing that more detailed plans will need to be developed on a port-specific basis.

New ports. The need for new, deeper ports was recognized more than 10 years ago in a National Research Council (NRC) study, which concluded that the United States should develop the capability on all three of its coasts to handle large ships efficiently. The authors determined that given the length of the nation’s coastlines, there was a need to have deep-water “superports” at two locations on the East Coast, two on the West Coast, and one on the Gulf of Mexico. The study also concluded that the nation’s lack of flexibility to respond to new developments in maritime trade was placing a critical limitation on future competitiveness. This problem remains; indeed, it is becoming even more pressing. Strategically planned port construction and upgrades would provide the needed flexibility. Such projects will have to be implemented soon, because expanding port capacity is a long-term job, and numerous other nations are far ahead in this quest.

Although the cost of building a new port will be steep–in some cases, the price tag may run into the billions of dollars–there are now ways to make such projects more affordable. For example, it is no longer necessary to locate ports in densely populated areas, where land is the most expensive and there are many competing demands for space, access, or even the view. All that is needed is adequate water depth, easy sea access, ample shoreside space for marshaling cargo, and efficient access to land transportation. Perhaps deep-water berms could be built a short distance from shore, and these berms could be served by innovative rapid transit systems to move containers from ships to shoreside marshaling areas. (Japan, which has developed varied uses for reclaimed coastal land and for artificial islands, has been constructing port facilities offshore for decades.) A strategy especially suited to petroleum and other bulk liquid cargoes would be to construct deep-water ports many miles off the coast. This approach is exemplified by the Louisiana Offshore Oil Port, which was built some 18 miles away from land; huge tankers can easily dock at the port and unload their petroleum, which is then transported by pipeline to onshore facilities.

Construction of offshore artificial islands may be the only solution for the Ports of Los Angeles and Long Beach, which are running out of land but are faced with a projected doubling in cargo shipping (by weight) by the year 2020. The ports plan to use landfill to construct 2,500 acres of artificial islands in San Pedro Bay. The new islands will be large enough to accommodate 38 state-of-the-art shipping terminals. The ports also plan to build 50 new berths and miles of new navigation channels up to 85 feet deep. Project 2020, as this effort is called, is expected to cost $4.8 billion. But this cost will be less than one year’s projected revenues from the two ports in 2020, according to trade forecasts. This project may serve as a model of sorts for other ports, in that it reflects innovative planning tailored to local port conditions and needs.

Just as costs will vary according to each project, the funding sources for building and improving ports and terminals will likely vary. Some deep-water ports, such as offshore terminals for oil tankers, might be profit-making ventures privately owned by shipping companies. Other facilities, such as new industrial islands built at the entrances to port complexes, might be publicly financed under the auspices of existing port authorities.

Regional planning for port needs. Although building new ports would improve shipping access to the United States, different tactics are needed to improve road and rail access to ports in order to enhance the efficiency of vessel movements. Making the necessary improvements will require regional planning that takes port needs into account. One barrier that has typically stood in the way of such regional planning, however, is that state and local government officials have tended to be more interested in highway and mass transit improvements than in port access.

In addition to addressing road and rail access problems, regional planners will need to look for ways to improve waterways management capabilities. There is a barrier here, too. Although waterways management has traditionally been a federal responsibility, federal funds for this task are tight and becoming tighter. This budget shortfall is likely to frustrate the Coast Guard and NOAA in their efforts to undertake the necessary improvement projects. Furthermore, political pressure to transfer federal responsibilities to state and local levels suggests that new approaches to technology selection, purchasing, and operation are needed. Because few ports can afford to install and operate waterways-management systems on their own, regional planning groups might be better positioned to undertake the scale of solutions required for modernization. Of course, these groups should work in cooperation with the appropriate federal agencies, particularly the Coast Guard, which is responsible for port safety and maritime law enforcement, among its other missions.

Hopes for improving road and rail access to ports nationwide were raised in 1991 when Congress passed the Intermodal Surface Transportation Efficiency Act (ISTEA). The ISTEA marked the first time that federal transportation policy explicitly recognized intermodal connections as an important topic for planning and infrastructure investments. This act did catalyze some progress. For example, some metropolitan planning organizations did begin to recognize the importance of intermodal access and freight transport. By and large, however, funding and the development of strategic guidelines continued to be focused on traditional highway and transit projects, which has made it difficult for ports to compete for investments.

The ISTEA is up for reauthorization by Congress in 1997, which will provide an opportunity to draw attention to port needs. More regional planning and transportation officials need to be convinced of the importance of intermodal freight transport and of the corresponding need to improve road and rail access to ports. A recent NRC study recommended that incentives be provided to state and local governments to ensure that port access needs are considered fairly and thoroughly along with competing demands and to encourage long-range planning.

Such incentives might be provided through an initiative modeled on the federal Coastal Zone Management (CZM) Program, which was authorized by Congress in 1972. This is a voluntary, incentive-based program. It rests on the premise that there is a national interest in managing the nation’s coastal zones but that states have jurisdiction over their land and therefore should assume the lead as coastal zone managers. The program satisfies federal and state interests: The federal government provides matching funds to assist states in developing and implementing coastal zone management programs while at the same time promising that federal activities will be consistent with a state program once it has been federally approved.

This effort has been successful to the extent that most eligible states and territories have completed CZM programs that combine to cover 94 percent of the U.S. coastline. A 1994 NOAA report found that the federal-state partnership is producing measurable beneficial changes in the management of coastal resources. Some reviews, however, have been less enthusiastic, describing the results of state programs as uneven and identifying problems with implementation, enforcement, and conflict resolution.

But even if the CZM program has proved imperfect, a voluntary, incentive-based approach to strategic planning is considered by many observers to be an important means of fostering port development. Now, Congress should establish a formal national interest in maintaining safe and efficient ports and waterways, and then buttress this interest by offering funding incentives to state and local governments that develop and implement realistic port upgrade plans. This approach would be consistent with the decentralized management structure that characterizes U.S. ports while also encouraging attention to the national economic interest in port modernization.

As regional planning groups come to focus on ports, it is hoped that the groups will be granted increased access to established forms of federal and state funding as they develop coordinated efforts to modernize ports. Still, new funding sources will probably have to be established as well. As one possible model, in 1993 California created the Maritime Infrastructure Bank as a funding mechanism for the development of port facilities. The bank can issue bonds, guarantee loans, and make and accept grants. Other states, such as Maryland and Louisiana, have set up special trust funds to increase the ability of state agencies to respond to intermodal transportation problems. And in some areas, such as the Los Angeles-Long Beach port complex, public-private partnerships have emerged to provide funding and operating solutions to problems facing private waterways-management systems (which operate in cooperation with the Coast Guard). This partnership concept is a promising way of fostering consensus and distributing costs. The development of such partnerships will require strong leadership at the local level, which might be provided by a port authority, harbor safety committee, or some other group that seizes the opportunity. Indeed, the emergence of local groups interested in port issues may be encouraged by the hoped-for increase in availability of federal planning grants.

Avoiding a stormy future

Following this general blueprint, filling in the necessary details and putting the plans into practice, will require efforts on many levels–public and private, spread across many agencies and organizations. Indeed, port modernization is too important to the national interest to be left to the whims of any single agency or group.

Of course, not every port can become a superport. But smaller ports in key locations will also benefit greatly from modernization, becoming efficient “feeder” ports by developing superior terminals and inland access routes. The challenge is to develop incentives for all types of port improvements nationwide without sacrificing the tradition of state and local control. The key–stated previously but deserving repetition–will be realistic strategic planning. Failing such planning, the nation’s ports face stormy weather ahead.

Ideally, port modernization will also help address a number of environmental concerns and protect the interests of other stakeholders. For example, the construction of offshore oil ports would keep large tankers, and potential oil spills, away from the coasts. The construction of ports on offshore artificial islands would help reduce concerns about how to manage dredged material, since huge quantities of sediment could be used in building the islands themselves and the islands’ offshore location would reduce the need for maintenance dredging. For municipalities, the development of dedicated truck and rail corridors to serve ports would help to relieve traffic congestion, and the increased business attracted by more efficient ports would create numerous permanent jobs locally. Balancing these various issues can be achieved as port stakeholders organize to resolve mutual concerns and ensure that maritime issues are considered in regional planning and economic development efforts. In this way, U.S. ports could be transformed into the transportation superhighways needed in the 21st century. The process needs to begin now.

Can Science Get Any Respect?

This has been an up-and-down year for the public image of science. Those smart-ass humanist critics of science got their comeuppance when Social Text, one of their trendy journals, published an article by Alan Sokal, a physicist at New York University, that turned out to be a parody of their incomprehensible critiques of science. When Sokal wrote about his experience in the journal Lingua Franca, the New York Times picked up the story. Congressman John Dingell and his insufferable fraud police got theirs when David Baltimore and Thereza Imanishi-Kari were cleared of any scientific wrongdoing.

On the other hand, Congress was making it clear that constantly increasing federal support for research was not a birthright and that significant reductions loomed on the horizon. But the cruelest blow came when John Horgan, a writer for Scientific American, published a book with the provocative title The End of Science, which argued that all the big discoveries have already been made and that the leading lights of science are really engaged in nonempirical flights of imagination that are little better than philosophy. The book was widely, and quite favorably, reviewed, and Horgan was prominent on the talk-show circuit. Can’t science get any respect? Are these congressional budget battles and popular critiques of science (not to mention the spread of quack science among a scientifically illiterate populace) the early stages of the decline of science?

The attack of cultural studies

The Sokal article comes after much hand-wringing in the scientific community about attacks from intellectuals who question the objectivity of science and even its ability to learn anything about what is true. These deconstructionists, poststructuralists, postmodernists, and other prefix-ridden thinkers take a radically skeptical view of any human attempts to get at the truth. For many of them, everything is subjective and is a product of social conditioning. Science’s presumption that it can actually arrive at some objective truths does not sit well with them. But the meaning of the Sokal affair does not lie in the strength of their analysis. Questions about what is knowable are worth asking, and we need to better understand how social forces outside of science influence its practice.

What is appalling about the publication of Sokal’s article is that the editors were willing to publish something that they obviously could not understand. They can be forgiven for not knowing that much of the science in the article was wrong, though they should have wondered at the assertion that pi is not a constant. But they should have recognized that Sokal’s argument didn’t even attempt to make sense.

Readers may be relieved to know that the editors of Social Text have not been chastened by this experience. They wish that they had not published the article, but they bristle at the suggestion that “something is rotten in the state of cultural studies.” Instead, they see the incident as indicative of a problem with science: “Its [Sokal’s article] status as parody does not alter substantially our interest in the piece itself as a symptomatic document. Indeed, Sokal’s conduct has quickly become an object of study for those who analyze the behavior of scientists.” Symptomatic document! Get real. You got caught with your pants down. Pull them up before you start yammering again.

Baltimore’s travels

The Baltimore affair also has a complicated lesson. It is more about John Dingell’s high-handedness than it is about Congress. It is also less about the integrity of science than about the rise of self-righteous scolds. The scientific misconduct vigilantes who worked with Dingell began to resemble the zealots of animal rights, antiabortion, antipornography, and environmental campaigns, who lose all perspective in their efforts to point out the moral shortcomings of others.

What was curious from the beginning-and what scientists should have found flattering-was the incredibly high standard of integrity that society assumes is being followed in science. Can one even imagine a public furor over revelations that a member of another professional group-lawyers, accountants, stockbrokers, or journalists-had played fast and loose with the evidence in order to strengthen an argument? We all must be aware that scientists are human beings and as capable as anyone else of deceit. There is no denying the record of scientific misconduct, but society must hold science in very high regard to become so upset about a few transgressions. And the paucity of cases of scientific misconduct that have emerged during the past decade of intense scrutiny suggests that something is right in the state of science. Still, as David Baltimore observed in Issues (“Baltimore’s Travels,” Summer 1989), one reason why he wound up in front of Dingell’s committee was that scientists had not taken active responsibility for protecting the integrity of science.

The end of science?

But what about this arrogant Horgan? Where does he get off claiming that the emperor has no clothes? Besides, he makes some very well-known scientists sound like pompous asses. Couldn’t a popular book such as this undermine public confidence in science and convince Congress to stop funding basic research? In a word, no. All of the members of Congress who read this book could fit in George Brown’s car-and he still might not be able to drive in the HOV lane. But science would actually be better off if they did read it, because readers are likely to come away with an increased appreciation of science.

Horgan provides a lively tour of the major questions in particle physics, cosmology, evolutionary biology, chaos/complexity theory, and several other fields. He takes some cheap shots at individuals, but he also makes it clear that scientists have emotions and personalities as well as intellects. Like it or not, that helps make science interesting to nonscientists.

The bottom line is that Horgan’s book is not a threat to science. It will not change the world’s point of view. Horgan has chosen to propose a provocative thesis, which will help sell the book. He raises the valid issue of the difficulty of finding empirical support for superstring theory or definitive evidence about the origin of the universe, but the argument is far from convincing in other fields, particularly the life sciences. Besides, is the average American more likely to have heard of John Horgan or the possibility of life on Mars and the discovery of a third major form of life? Most readers of this book will come away better informed and more interested in the possibility of life in other parts of the galaxy, the nature of consciousness, and the mystery of how DNA guides the development of an organism. Score one more victory for science.

Why so touchy?

Scientists have curiously brittle egos, considering the vigor of the profession and the respect in which it is held. The scientific community invited John Dingell’s interest by failing to acknowledge the possibility of misconduct when it was first raised as a concern. Instead of circling the wagons, scientists could have simply accepted their own humanity and taken forceful steps to protect against human frailty. Scientists are taking steps now to police themselves, and the Dingells, Stewarts, and Feders are fading into the background.

Many scientists have also overreacted to the cultural criticism of science. We shouldn’t assume that all criticism is destructive. If scientists listen, accept constructive criticism, and engage in more rigorous self-criticism, they won’t feel so beleaguered. Besides, it is often an overdefensive reaction to criticism or questioning that makes an issue news. Just ask Hillary Clinton. If scientists willingly join the cultural debate about science, science can grow in stature.

The response to Horgan is simple. Do good science and make the extra effort to explain it to someone other than colleagues. Among the major professions, scientists rank second only to physicians in public esteem. Journalists and politicians trail far behind. One reason is that deeds speak louder than words. Scientists should participate in discussions of science with the confidence that the public trusts them. But the reason that they are trusted is that their deeds produce results that people can see and appreciate.

Ethical Dilemmas

To be successful, the Human Genome Project, the 15-year-long effort to map the estimated 3 billion nucleotides and 100,000 genes that form the genetic makeup of a human being, will need to achieve major breakthroughs in a host of scientific and technical fields. No less challenging are the philosophical and ethical questions that must be addressed and adequately resolved if this huge expenditure of energy and talent is to result in a net improvement in the quality of human life.

Philip Kitcher’s The Lives to Come is an introduction to some of the most important legal, ethical, and philosophical issues raised by this revolution in molecular biology. Fluently written in a manner accessible to any lay reader and free of either scientific or philosophical technicalities, it provides a series of compelling explorations of key philosophical issues. For all its accessibility, however, this is a sophisticated book. Its commonsense arguments are based on an important series of philosophical discussions that Kitcher is able to navigate through and draw on with ease.

After a good introduction to the basic science and technology of contemporary molecular biology, Kitcher embarks on a series of explorations of key issues raised by recent advances. What principles should guide the movement of increasing numbers of genetic tests out of the lab and into society? When is gene replacement therapy (rather than more standard treatment approaches) a suitable response to medical conditions? To what extent-if ever-may genetic information be used to discriminate among applicants for insurance or employment? Is it wise to rely on DNA evidence in forensic contexts, and should we welcome or oppose widespread DNA typing for identification purposes? Can we develop a set of concepts, particularly a clear distinction between “disease” and “normalcy,” that will facilitate sound medical and prenatal decisionmaking and can help society avoid a dangerous slide into eugenics? Finally, to what extent will our cherished notions of individual freedom and responsibility withstand the influx of new information suggesting that much of our behavior, including such intimate matters as sexual preference, is shaped by our genes?

Again and again, Kitcher brings good judgment and in-depth knowledge of the scientific and philosophical issues involved to bear on these questions. His treatment of genetic discrimination in insurance is one example. Probably no area today is more contested and more in need of resolution. The threat of genetic discrimination hangs over everyone and impedes both research and the deployment of valuable genetic diagnostics or therapies. Nevertheless, insurance underwriters continue to press for the opportunity to include genetic risk factors in their actuarial decisionmaking, as they have traditionally done for many other preexisting or familial medical conditions.

In the face of this controversy, Kitcher invites us to participate in an exercise involving what he calls “the great unveiling.” Tomorrow, the genetic truth about each of us will be known. Roughly 10 percent of us will learn that we carry allelic combinations that place us at significantly high risk for some disease. Today, we can vote for one of two proposals dealing with the way in which health care will be run. The first assigns medical insurance premiums on the basis of risk; the second, independently of risk. Unaware of how you will fare in the genetic lottery but knowing that your choice will affect you, your children, and your grandchildren, how will you vote? On the basis of this experiment, Kitcher argues that it is probably wise to look at this aspect of the health care system as “a cooperative venture in which the fortunate help to support those who have been victims of circumstances.”
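
The force of this exercise becomes clearer with a little decision arithmetic. What follows is a minimal sketch, not Kitcher’s own calculation: let q be the fraction of the population found to be at high genetic risk (roughly 0.1 in his scenario), and let c_H and c_L be the expected lifetime medical costs of high-risk and low-risk individuals.

\[
\text{risk-rated premiums: } p_H = c_H,\; p_L = c_L; \qquad \text{pooled premium: } \bar{p} = q\,c_H + (1-q)\,c_L
\]

Behind the veil of ignorance, the expected payment is the same under either proposal, namely q c_H + (1-q) c_L, but the risk-rated scheme leaves every voter a probability q of drawing the much larger bill c_H. A risk-averse chooser therefore prefers the pooled premium, which is the arithmetic intuition behind the “cooperative venture” Kitcher recommends.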

Rawlsian logic

Philosophically informed readers will recognize that Kitcher’s argument here draws on John Rawls’s contract theory of justice, with its provocative idea that basic social policy be chosen from a hypothetical “original position of equality.” Although it is rare that the formation of major social policies ever offers an actual Rawlsian moment of choice, the issue of genetic discrimination is one such opportunity. We all know in a general sense that we will soon confront genetic information of fateful importance for our well-being in society, but none of us knows whether this information will benefit or burden us in particular. Kitcher’s skillful application of Rawlsian methodology at this juncture makes a real contribution to the public debate. This is especially true when he turns from the relatively easy areas of health or disability insurance to the more vexing area of employment discrimination. Recognizing the perils of allowing employment discrimination on genetic grounds, Kitcher nevertheless entertains a series of circumstances in which genetic discrimination might be permissible as long as it is accompanied by counseling, retraining, and the provision of alternative job opportunities. These include instances of genetic conditions in which symptoms express themselves suddenly with grave consequences for others or conditions whose manifestation and prevention would require enormously expensive environmental modifications in the workplace.

This same combination of good philosophical footing, deep familiarity with the issues, and sound judgment characterizes Kitcher’s treatment of the complex issue of prenatal testing. Fully acknowledging the bad science and the vicious forms of social repression that were prevalent in the early eugenics movement, Kitcher nevertheless argues that we are inescapably involved in making decisions of a eugenic nature. Whether we like it or not, our decisions today will affect the genetic inheritance of future generations. Reacting to earlier genetic abuses, we have developed what he calls a “laissez-faire” form of eugenics in which parents, assisted by nondirective counseling, frequently use available testing technologies to select features of their child’s genetic makeup. For all its value, however, this approach is severely limited by the lack of resources available to provide counseling and other genetic and medical support services to all members of society, as well as by the resulting uninformed and often harmful decisions (or lack of decisions) made by individual parents and couples.

Utopian eugenics

In place of this laissez-faire eugenics, Kitcher proposes what he calls “utopian eugenics.” Its goals are to bring into the world children whose lives are unrestricted by serious genetic disease and who are assisted to realize the highest possible degree of development. To achieve these goals, utopian eugenics seeks to employ reliable genetic information in prenatal tests that are equally available to all citizens. Widespread public discussion of the value and consequences of individual decisions would proceed without any societally imposed restrictions on individual choice. The emphasis would be on education, not coercion. Just health care arrangements would help reduce the pressures on individuals to bend reproductive decisionmaking to economic necessity. To prevent even public opinion from becoming unduly coercive, “there would be universally shared respect for difference coupled with a public commitment to realizing the potential of all who are born.”

In this connection, Kitcher addresses one of the recurrent concerns of those who object to the growing emphasis on prenatal testing: that the willingness to avoid the birth of children with disabilities will somehow foster increased discrimination against the disabled. Yet on Cyprus, where the Greek Orthodox Church has administered a testing program for thalassemia, Kitcher points out that as the incidence of this disease has diminished, help for those afflicted with it has actually increased. Indeed, Kitcher believes that there is no reason to think that decisions to avoid lives burdened with severe genetic disease should lead to a devaluation of the lives of those who already have these disorders. It is crucial to recognize the distinction between an actual life and a “life in prospect,” because different standards of evaluation apply. Unless this philosophical distinction is understood and appreciated, needless suffering can result.

Kitcher himself acknowledges the fragility of the utopian enterprise he envisions, and many of his readers will not share his confidence that it can be implemented without drifting across the line into a coercive eugenics. This is especially true if Kitcher’s own appeals for universal access to health care are not heeded and freedom of parental choice becomes eclipsed by a desire to reduce health care outlays for costly genetic diseases. Despite these difficulties, Kitcher’s ideas move in the right direction. Given the increasing power that genetic information will confer on parents, including the power to harm their children, we must begin to think about how we can enhance the quality of parental genetic decisionmaking while preserving the freedom of that decisionmaking.

The kind of popular philosophy that Kitcher endeavors to convey in this book has its perils. Here and there, complex issues do not receive the careful and well-annotated treatment one would expect in a scholarly philosophical article. Kitcher’s treatment of the concept of disease, one of the core issues in genetic ethics today, is an example. An adequate concept of disease could help us distinguish between medically indicated genetic interventions able to command wide support and efforts at enhancement genetics that may prove socially divisive or destructive. Kitcher is right to note the serious problems attending many previous efforts to define disease. The concept of “normal (physiological) functioning” is an idea that founders on closer analysis by exhibiting hidden value assumptions that undermine its promise of scientific objectivity.

Overly broad notion

Despite this, Kitcher is probably too hasty in jettisoning the concept of disease in favor of the notion of “quality of life” as a basis for ethically justifiable genetic decisions. This notion is too broad, as it can legitimize ethically questionable practices such as prenatal sex selection in India, where the anticipated quality of life of females is undeniably poor. Besides, it may not be needed. Some contemporary philosophers have developed definitions of disease that work remarkably well. These definitions, involving a constellation of ideas such as abnormality, endogenous causation, and enhanced risk of suffering death or disability, fit well with many of our established medical judgments and may prove to be reliable guides as we make decisions in the newer realm of genetic disorders.

Experts will always quibble with the details of any effort that moves out of its own specialized field into the broader world of public education and policy formation. Despite this, such education must go on. The Lives to Come is a major effort to bridge a host of fields to provide insight into the promises and challenges of the genetic revolution. The care that has gone into its preparation and the quality of its arguments make it an enduring reference for anyone interested in thinking about where the Genome Project should take us.

The Dilemma of Environmental Democracy

We live in a world of manifest promise and still more manifest fear, both inseparably linked to developments in science and technology. Our faith in technological progress is solidly grounded in a century of historical achievements. In just one generation, we have seen the space program roll back our geographical frontiers, just as the biological sciences have revolutionized our ability to manipulate the basic processes of life. At every turn of contemporary living we see material signs that we human beings have done very well for ourselves as a species and may soon be doing even better: computers and electronic mail, fax machines, bank cards, heart transplants, laser surgery, genetic screening, in vitro fertilization, and, of course, the siren song of Prozac–an ever-growing roster of human ingenuity that suggests that we can overcome, with a little bit of luck and effort, just about any imaginable constraints on our minds and bodies, except death itself.

Increasing knowledge, however, has also reinforced some archetypal fears about science and technology that overshadow the promises of healing, regeneration, material well-being, and unbroken progress. Rachel Carson’s Silent Spring became a global best seller in the 1960s with warnings of a chemically contaminated future in which animals sicken, vegetation withers, and, as in John Keats’s desolate landscape, no birds sing. Genetic engineering, perhaps the greatest technological breakthrough of our age, is etched on our consciousness as the means by which ungovernable science may fatally tamper with the balance of nature or destroy forever the meaning of human dignity. Communication technologies speed up the process of globalization, but they also threaten to dissolve the fragile ties that bind individuals to their local communities.

Opinion polls and the popular media reflect the duality of public expectations concerning science and technology. A 1992 Harris poll showed that 50 percent or more of Americans considered science and medicine to be occupations of “very great prestige,” but these ratings had fallen by 9 and 11 percentage points respectively since 1977. Another survey in 1993 showed that more than 45 percent of the public felt that there would be a nuclear power plant accident and significant environmental deterioration in the next 25 years; slightly smaller percentages expected a cure for cancer and a rise in average life expectancy. And while virtually real screen dinosaurs went on the rampage in Jurassic Park, pulling in record audiences worldwide, a science reporter for the New York Times wryly commented that in the eyes of the popular media, “drug companies, geneticists, and other medical scientists–wonder-workers of yesteryear–[were] now the villains.”

But it is not only these black and white images of alternative technological futures that make modern living seem so dangerously uncertain. While we worry about the global impacts of human depredation–endangered species, encroaching deserts, polluted oceans, climate change, the ozone hole–we are also forced to ask questions about who we are, what places we belong to, and what institutions and communities govern our basic social allegiances. There is a mix-and-match quality to our cultural and political identities in the late 20th century. With ties to everywhere, we risk being connected to nowhere. Benedict Anderson, the noted Cornell University political scientist, alludes to this phenomenon in his acclaimed monograph Imagined Communities. Modern nationhood assumes for Anderson the aspect of the “lonely Peloponnesian Gastarbeiter [guest worker] sitting in his dingy room in, say, Frankfurt. The solitary decoration on his wall is a resplendent Lufthansa travel poster of the Parthenon, which invites him, in German, to take a ‘sun-drenched holiday’ in Greece….[F]ramed by Lufthansa the poster confirms for him…a Greek identity that perhaps only Frankfurt has encouraged him to assume.” On the other side of the Atlantic, the gifted Czech playwright, political leader, and visionary Vaclav Havel gives us his version of the modern, or perhaps more accurately the postmodern, human condition: “a Bedouin mounted on a camel and clad in traditional robes under which he is wearing jeans, with a transistor radio in his hands and an ad for Coca-Cola on the camel’s back.” The Greek and the Bedouin are citizens of a shrinking world, but their identities have been Balkanized and only imperfectly reformed through the forces of the airplane, the transistor, the Coca-Cola can, and the entire global network of technology.

In this futuristic present of ours, there are only two languages, one cognitive and the other political, that aspire to something like universal validity. The first is science. We may not understand many facts about the environment: why frog species are suddenly disappearing around the world; whether tree-cutting or dam-building is causing floods in the plains of northern India; whether five warmer-than-average years in a decade are a statistical blip or a portent of global warming; or why an earthquake’s ragged path throws down a superhighway but leaves a frail wooden structure standing intact by its side. But we do believe that scientists who are asked to examine these problems will agree, eventually, on ways to study them and will come in time to similar answers to the same starting questions. We expect scientists to see the world the same way whether they live in Japan, India, Brazil, or the United States. This is a comfort in an unstable world. As our uncertainties increase in scope and variety, we turn for answers, not surprisingly, to the authoritative voice of science.

In the domain of politics, democratic participation is the principle we have come to regard as a near universal. The end of the Cold War signaled to many the end of the repressive state and a vindication of the idea that no society can survive that systematically closes its doors to the voices and ideas of its citizens. America’s strength has been in plurality. We now see the idea of pluralism taking hold around the globe, with the attendant notion that each culture or voice has an equal right to be heard. Participatory democracy, moreover, seems at first glance to be wholly congenial with the spirit of science, which places its emphasis on free inquiry, open access to information, and informed critical debate. Historians of the scientific revolution in fact have speculated that the overthrow of esoteric and scholastic traditions of medieval knowledge made possible the rise of modern liberal democracies. Public and demonstrable knowledge displaced the authority of secret, closely held expertise. Similarly, states that could publicly display the benefits of collective action to their citizens grew in legitimacy against alternative models of states that could not be held publicly accountable for their activities.

What I want to do here is complicate the notion that the two reassuringly universal principles of science and democratic participation complement each other easily and productively in the management of environmental risks. I will argue that increasing knowledge and increasing participation–in the sense of larger numbers of voices at the table–do not by themselves automatically tell us how to act or how to make good decisions. Participation and science together often produce irreducible discord and confusion. I will suggest that two other ingredients–trust and community–are equally necessary if we are to come to grips with environmental problems of terrifying complexity. Building institutions that foster both knowledge and trust, both participation and community, is one of the greatest challenges confronting today’s human societies.

The many accents of participation

For more than two centuries, we in the United States have tirelessly cultivated the notion that government decisions are best when they are most open to many voices, no matter how technical the subject matter being considered. A commitment to broad public participation remains a core principle of U.S. environmentalism. Most recently, the environmental justice movement has reprised the belief that autocratic government produces ill-considered decisions, with little chance of public satisfaction, even when decisions are made in the name of expertise. An example from California neatly makes this point. A dispute arose over siting a toxic waste incinerator in Kettleman City, a small farming community in California’s Central Valley with a population of 1,100 that is 95 percent Latino and 70 percent Spanish-speaking. The county prepared a 1,000-page environmental impact report on the proposed incinerator, but it refused to translate the document into Spanish, claiming that it had no legal responsibility to do so.

County officials presumably felt that, having conducted such a thorough inquiry, they would gain nothing more by soliciting the views of 1,100 additional citizens possessing no particular technical expertise. But the Kettleman citizens exercised their all-American right to sue, and a court eventually overruled the county’s decision, saying that the failure to translate the document had prevented meaningful participation. In its input to California’s pathbreaking Comparative Risk Project, the state’s Environmental Justice Committee approvingly cited the judge’s decision, saying, “A strategy that sought to maximize, rather than stifle, public participation would lead to the inclusion of more voices in environmental policymaking.”

But not everybody shares our conviction that more voices necessarily make for more sense in decisions involving science and technology. At about the same time as the Kettleman dispute, public authorities in Germany were concluding that the inclusion of many voices most definitely would not lead to better regulation of the risks of environmental biotechnology. Citizen protests and strong leadership from the Green Party led Germany in 1990 to enact the Genetic Engineering Law, which provided a framework for controlling previously unregulated industrial activity in biotechnology. Responding to citizen pressure, it also opened up participation on the government’s key biotechnology advisory committee and created a new public hearing process for releasing genetically engineered organisms into the environment. These procedural innovations seemed consistent with the European public’s growing interest in participation. They were taken by some as a sign that all liberal societies were converging toward common standards of legitimacy in making decisions about environmental risk.

When put in operation, however, the biotechnology hearing requirement set up far different political resonances in Germany than in the United States. German scientists and bureaucrats were appalled when citizen participants at the first-ever deliberate-release hearing in Cologne demanded that scientific papers be translated into German to facilitate review and comment. Many of these papers were in English, the nearly universal language of science. Officials could not believe that the citizens meant their demand for translation in good faith. No court stepped in, as at Kettleman City, California, to declare that the public had a right to be addressed in its language of choice. Instead, critics of citizen-group involvement denounced the request for German translation as a diversionary tactic that emphasized “procedure” and “administration” at the expense of “substance.” The German government concluded that the hearing requirement could not advance the goal of informed and rational risk management. An amendment to the law in 1993 eliminated the hearing requirement just three years after its original enactment.

It would be easy at this point to draw the conclusion that the Germans were wrong and that full participation, in the sense of including more citizens on their own terms in technical debate, would simply have been the right answer. Indeed, this was the position taken by a thoughtful German ecologist I met in Berlin. A member of the state’s biotechnology advisory committee and a participant in the unprecedented Cologne hearing, he was not much worried by the unruly character of those proceedings. He commented that in matters of democracy Germany was still a novice, with ten years of lessons to learn from the United States. In time, he suggested, people would lose their discomfort with the untidiness of democracy, and more public hearings with multiple voices would prevail on the German regulatory scene.

But questions about science and governance seldom conform to straightforward linear ideas of progress toward common social and cultural goals. A different point of view guides participatory traditions in the Netherlands, a country that does not need to apologize for either its lack of environmental leadership or its lack of democratic participation. In the Netherlands, as in most of Europe, the concept of public information concerning environmental risks has evolved quite differently from the way we know it in the United States. Americans demand full disclosure of all the facts, whereas Europeans are often contented with more targeted access to information. The contrast is quite striking in the context of providing information about hazardous facilities. We in the United States have opted for a right-to-know model, which declares that all relevant information should be made available in the public domain, regardless of its complexity and accessibility to lay people. The Europeans, by contrast, prefer a so-called “need to know” approach, under which it is the government’s responsibility to provide as much information as citizens really need in order to make prudent decisions concerning their health and safety.

To an American observer, the European approach looks paternalistic and excessively state-centered. It delegates to an impersonal and possibly uncaring state the responsibility for deciding which and how much information to disclose. To a European, the American approach seems both costly and unpredictable, because it assumes (wrongly, many Europeans think) that people will have the resources and capacity to interpret complex information. Empirically, the American assumption is clearly not always consistent with reality. The Spanish speakers in Kettleman City were fortunate in receiving help and support from California Rural Legal Assistance, but others, less well situated, could well have failed to access the information in the environmental impact report. Europeans also see the U.S. position as privatizing the response to risk by making individuals responsible for acquiring and acting on information. This approach, to them, threatens the ideal of community-based approaches to solving communal problems. It also potentially overestimates the individual’s capacity to respond to highly technical information.

The idea of participation, then, comes in many flavors and accents. What passes as legitimate and inclusive in one country may look destabilizing and anarchical in another, especially when the subject matter is extremely technical. Different models of participation entail different costs and benefits. Inclusion in the American mode, for example, is expensive, not only because resources are needed to make information widely available (the Kettleman environmental report illustrates this point) but also because, as we shall see, including more perspectives can increase rather than decrease the opaqueness of decisions. It can add to our uncertainties about how to proceed in the face of scientific and social disagreement without offering guidance on how to manage those uncertainties.

Openness and transparency

Risk controversies challenge the intuitive notion that the most open decisions–that is, those with the most opportunities for participation–necessarily lead to the greatest transparency–that is, to maximum public access and accountability. Let us take as an example the case of risk assessment of chemical carcinogens as it has developed in the United States. Back in the early 1970s, Congress first adopted the principle that government should regulate risks rather than harms. This approach represented an obvious and appealing change from earlier harm-based approaches to environmental management. It explicitly recognized that government’s job was to protect people against harms that had not yet occurred. In an age of increasing scientific knowledge and heightened capacity to forecast the future, compensating people for past harms no longer seemed sufficient.

The principles that federal agencies originally used to regulate carcinogens in the environment were relatively simple and easy to defend. They were rooted in the proposition that human beings could not ethically be exposed to suspected carcinogens merely to establish whether those substances cause cancer. Hence, indirect evidence was needed, and results from animal tests began to substitute for observations on humans. As initially construed by the Environmental Protection Agency (EPA), only about seven key principles were needed to make the necessary extrapolations from animal to human data. These were easily understandable and could be stated in fairly nonmathematical descriptive language. For example, EPA decided that positive results obtained in animal studies at high doses were reliable and that benign and malignant tumors were to be counted equally in determining risk.

Very quickly, however, EPA had to start refining and further developing these principles as it came under pressure from industry and environmentalists to explain its scientific judgments more completely. The principles grew in number and complexity. Qualitative assessments about whether or not a substance was a carcinogen gradually yielded to more probabilistic statements about the likelihood that something could cause cancer under particular circumstances and in particular populations. Paradoxically, as risk assessment became more responsive to its multiple political constituencies, it became less able to attract and hold their allegiance.
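
To make concrete what such a probabilistic statement involves, the following sketch works through the textbook arithmetic of an excess-lifetime-cancer-risk estimate in Python. It is a deliberate simplification rather than EPA’s actual procedure, and every parameter value in it is hypothetical.

# A simplified sketch of the arithmetic behind a probabilistic cancer-risk
# statement. This is textbook-style shorthand, not EPA's actual procedure,
# and all parameter values are hypothetical.

def chronic_daily_intake(conc_mg_per_l, intake_l_per_day, exposure_days,
                         body_weight_kg, averaging_days):
    """Lifetime average dose in mg per kg of body weight per day."""
    return (conc_mg_per_l * intake_l_per_day * exposure_days) / (
        body_weight_kg * averaging_days)

def excess_lifetime_risk(cdi, slope_factor):
    """Linear low-dose extrapolation: estimated risk is proportional to dose."""
    return cdi * slope_factor

# Hypothetical exposure: 0.002 mg/L of a contaminant in drinking water,
# 2 L/day for 30 years, 70-kg adult, 70-year averaging time,
# slope factor of 0.05 per (mg/kg/day).
cdi = chronic_daily_intake(0.002, 2.0, 30 * 365, 70.0, 70 * 365)
print(f"estimated excess lifetime risk: {excess_lifetime_risk(cdi, 0.05):.1e}")

Each input, from the dose-response slope to the exposure assumptions, is itself uncertain, and it is around such inputs that the refinements, default assumptions, and disputes described below accumulate.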

In the 20 years or so since its widespread introduction as a regulatory tool, cancer risk assessment has evolved into an immensely complex exercise, requiring sensitive calculations of uncertainty at many different points in the analysis. Agencies have committed ever more technical and administrative resources to carrying out risk assessments. Industry and academia have followed suit. New societies and journals of risk analysis have sprung up as the topic has become a focal point of professional debate. But as the methods have grown in sophistication, the process of making decisions on risk has arguably grown both less meaningful to people who lack technical training and less responsive to policy needs.

EPA’s dioxin risk assessment of the mid-1990s is a case in point. Nearly three years in the making, this 2,000-page document incorporating the latest science was published ten years after the agency’s first risk assessment for this compound. Yet public reception of the document was already skeptical, even jaundiced. A cover story in the journal Environmental Science and Technology struck a common note of disbelief: sitting before a crowd of puzzled-looking members of the public, a man held up a document asking, “Dioxin Risk: Are We Sure Yet?” In and around Washington, people began to talk about the dioxin risk assessment as an example of hypersophisticated but politically irrelevant analysis.

Evidence from focus groups and surveys shows that the public increasingly does not understand, and feels little confidence in, the methods used by experts to calculate risk. Sociologists of risk have argued that this gap between what experts do and what makes sense to people accounts for a massive public rejection of technical rationality in modern societies. More prosaically, the growing expenditure and uncertainty associated with risk assessment have led to disenchantment with its conclusions in national and local political settings. The dioxin assessment is a good indication that certainty, in scientific knowledge and political action, may have to be achieved through means other than extensive, methodologically perfected, technical evaluations of risk.

Conflict and closure

We have stumbled, it seems, on a hidden cost of unconstrained participation. Evidence from a wide range of environmental controversies shows that open and highly participatory decisionmaking systems do much better at producing information than at ending disagreements. When issues are contested, neither science nor scientists can be counted on to resolve the uncertainties that they have exposed. Uncertainties about how much we know almost invariably reflect gaps that cannot be filled with existing resources, and this lack of knowledge relates to our understanding of nature as well as of society. At best, then, scientists can work with other social actors to repair uncertainty. Put differently, producing more technical knowledge in response to more public demands does not necessarily lead to good environmental management. We also need mechanisms for deciding which problems are most salient, whose knowledge is most believable, which institutions are most trustworthy, and who has the authority to end debates.

The metaphor of “dueling” between experts is often heard in the world of regulation. Pressure has grown on the scientific community to supply plausible, quantitative estimates of likely impacts under complex scenarios that can never be completely observed or validated. Modeling rather than direct perceptual experience underlies decisions in many fields of environmental management, from the control of carcinogenic pesticides to emissions-trading policies for greenhouse gases. Such modeling often produces a policy stalemate when scientists cannot agree on the correctness of alternative assumptions and the numerical estimates they produce. How are decisions reached under these circumstances?

In one instructive example, a dispute arose between the U.S. Atomic Energy Commission (AEC) and Consolidated Edison (Con Ed), a major New York utility company, concerning the environmental impacts of a planned Con Ed facility at Storm King Mountain. Scientists working for the AEC and Con Ed sought to model the possible effects of water withdrawal for the plant’s cooling system on striped bass populations in the Hudson. The result was a lengthy confrontation between “dueling models.” Scientists refined their assumptions but continued to disagree about which assumptions best corresponded to reality. Each side’s model was seen by the other as an interest-driven, essentially unverifiable surrogate for direct empirical knowledge, which under the circumstances seemed impossible to acquire.

In the end, the parties agreed to a greatly simplified model, shifting their policy concern away from hard-to-measure, long-term biological effects and focusing instead on mitigating short-term detriments. The new model stuck more closely to the observable phenomena than the earlier, more sophisticated but untestable alternatives. It became possible to compile a common data base on the annual abundance and distribution of fish populations. Relying on a technique called “direct impact assessment,” the experts now came to roughly similar conclusions when they modeled specific biological endpoints. In this case, science did eventually contribute to closing the controversy, but only after the underlying policy debate was satisfactorily reframed.

The public controversy over the ill-fated Westway project in New York City provides another example. Contention centered in this case on a plan to construct a $4-billion highway and waterfront development project along the Hudson River, creating prime new residential and commercial real estate but also changing the river’s course, permanently altering the shape of lower Manhattan, and influencing in unknown ways the striped bass population. Two groups of expert agencies attempted to assess Westway’s biological impact. Favoring construction were several state and federal “project agencies”: the U.S. Army Corps of Engineers, the Federal Highway Administration (FHWA), and the New York State Department of Transportation; urging caution were three federal “resource agencies”: EPA, the Fish and Wildlife Service, and the National Marine Fisheries Service. Their differences crystallized around an environmental impact statement (EIS) commissioned by FHWA that declared the proposed project area to be “biologically impoverished” and hence unlikely to be harmed by the proposed landfill. Between 1977 and 1981, the resource agencies repeatedly criticized the EIS, but the project agencies pushed ahead and, in March 1981, acquired a landfill permit for Westway.

It was an unstable victory. A double-barreled review by a hostile federal court and the House Committee on Government Operations uncovered numerous methodological deficiencies in FHWA’s biological sampling methods. What began as an inquiry into the scientific integrity of the EIS turned into a probing critique of the moral and institutional integrity of the project agencies, particularly FHWA and the Army Corps. Congressional investigators concluded that both agencies had violated basic canons of independent review and analysis. Their science was flawed because their methods had not been sufficiently virtuous. The House report accused the project agencies of having defied established norms of scientific peer review and independence. For example, the Army Corps had turned to the New York Department of Transportation, “the very entity seeking the permit,” for critical comment on the controversial EIS.

Doubts about the size, cost, irreversibility, and social value of the proposed development helped leverage the scientific skepticism of the anti-Westway forces into a thoroughgoing attack on the proponents’ credibility. Under assault, and with rifts exposed between their ecology-minded and project-minded experts, the project agencies could defend neither their intellectual nor institutional superiority. Their scientific and moral authority crumbled simultaneously, and their opponents won the day without ever needing to prove their own scientific case definitively.

Responding to uncertainty

Science and technology have given us enormously powerful tools for taking apart the world and examining its components piecemeal in infinite, painstaking detail. Many of these, including the formal methodologies of risk assessment, enhance our ability to predict and control events that were once considered beyond human comprehension. But like the ancient builders of the Tower of Babel, today’s expert risk analysts face the possibility of having their projects halted by a confusion of tongues, with assessments answerable to too many conflicting demands and interpretations. The story of EPA’s dioxin risk assessment highlights the need for more integrative processes of decisionmaking that can accommodate indeterminacy, lack of knowledge, changing perceptions, and fundamental conflicts. If after 15 years and 2,000 pages of analysis, one can still ask “are we sure yet,” then there is reason to wonder if the basic prerequisites for decisionmaking under uncertainty have been correctly recognized.

I have suggested that to be sure of where we stand with respect to complex environmental problems we need not only high-quality technical analysis but also the institutions of community and trust that will help us frame the appropriate questions for science. To serve as a basis for collective action, scientific knowledge has to be produced in tandem with social legitimation. Insulating the experts in closed worlds of formal inquiry and then, under the label of participation, opening up their findings to unlimited critical scrutiny appears to be a recipe for unending debate and spiraling distrust. This is the core dilemma of environmental democracy.

The task ahead then is to design institutions that will promote trust as well as knowledge, community as well as participation–institutions, in short, that can repair uncertainty when it is impossible to resolve it. We know from experience that the scale of organization is not the critical factor in achieving these objectives. Authoritative knowledge can be created by communities as local, self-contained, and technically “pure” as a research laboratory or as widely dispersed and overtly political as an interest group, a social movement, or a nation state. The important thing is the organization’s capacity to define meaningful goals for scientific research, establish discursive and analytic conventions, draw boundaries between what counts and does not count as reliable knowledge, incorporate change, and provide morally acceptable principles for bridging uncertainty. There are many possible instruments for achieving these goals, from ad hoc, broadly participatory hearings to routine but transparent processes of standardization and rule implementation. But these methods will succeed only if scientific knowledge and the shared frames of meaning within which that knowledge is interpreted are cultivated with equal care.

Fortunately for our species, environmental issues seem exceptionally effective at engaging our interpretive as well as inquisitive faculties. Communities of knowledge and trust often arise, for example, through efforts to protect bounded environmental resources, such as rivers, lakes, and seas, which draw experts and lay people together in a mutually supportive enterprise. The formal, universal knowledge of science combines powerfully in these settings with the informal, but no less significant, local knowledge and community practices of those who rely on the resource for recreation, esthetics, livelihood, or commerce. International environmental treaties provide another model of institutional success. Here, the obligation to develop knowledge in the service of a prenegotiated normative understanding keeps uncertainty from proliferating out of control. In the most effective treaty regimes, norms and knowledge evolve together, as participants learn more about how the world is (scientifically) as well as how they would like it to be (socially and politically). Last but not least, environmental policy institutions of many different kinds–from established bodies such as EPA’s Science Advisory Board to one-time initiatives such as California’s Comparative Risk Project–have shown that they can build trust along with knowledge by remaining attentive to multiple viewpoints without compromising analytic rigor.

Science and technology, let us not forget, have supplied us not merely with tools but with gripping symbols to draw upon in making sense of our common destiny. The picture of the planet Earth floating in space affected our awareness of ecological interconnectedness in ways that we have yet to fathom fully. The ozone hole cemented our consciousness of “global” environmental problems and forced the world’s richer countries to tend to the aspirations of poorer societies. These examples illuminate a deeper truth. Scientific inquiry inescapably alters our understanding of who we are and to whom we owe responsibility. It is this dual impact of science–on knowledge and on norms–that environmental institutions must seek to integrate into their decisionmaking. How to accomplish this locally, regionally, and globally is one of the foremost challenges of the next century.

Dangerous Intersections

Over the past two years, several prominent working groups and expert committees have circulated or published position papers attempting to address widespread public anxiety over the potential uses and abuses of genetic information. A number of proposals put forth in these papers–particularly those focusing on the role of informed consent in guiding genetic testing and research–have provoked deep concern among scientists who work with human tissue samples. These researchers are imaginatively employing the new and powerful tools of molecular biology to gain unique insights into human diseases. They fear, however, that the restrictions called for in these proposals would severely burden, if not fatally encumber, this entire class of promising research.

Our nation’s hospitals, especially the academic medical centers, collectively contain an enormous archive of human tissue samples. Originally removed for medical reasons and preserved for future reference and study, these specimens have long provided a rich resource for clinical investigation. Using conventional methods of histopathologic analysis, scientists have gained a wealth of insights into the nature and course of human diseases. Because of the limits of those research methodologies, however, the results of these studies have usually had little relevance for the individual patients (known as “sources”) from whom the tissues had been obtained. Accordingly, the practice and standard of informed consent for this vast body of research have been quite minimal.

Most studies involving human subjects require case-by-case review and approval by local institutional review boards (IRBs) as well as the observance of stringent protocols for obtaining patients’ informed consent. Studies using archival human tissue specimens, however, have historically been reviewed under more relaxed requirements. Thus, the consent forms routinely used for hospital admissions or operative procedures usually contain only a line or two stating that any removed tissue not required for diagnostic purposes may be used for teaching and research. Local IRBs have approved this type of informed consent, which I shall call minimal informed consent, on the grounds that such research poses “minimal risk” to patient sources and does not “adversely affect [their] rights and welfare” and that the research “could not practicably be carried out” if the full consent protocol were observed.

All that promises to change dramatically with the introduction of powerful new research techniques, such as the use of monoclonal antibodies or the polymerase chain reaction. Pathologists can apply these methods to preserved tissue specimens to identify specific abnormalities of gene structure and expression. For instance, they may be able to determine whether specific genetic abnormalities in individual cancers (of the breast or prostate, for example) correlate with different degrees of aggressiveness or responsiveness to therapy. Such unprecedented insights would help clinicians diagnose cancers and other medical conditions far more precisely and tailor their treatments accordingly, and they may lead to new advances in therapy or even prevention.

The advent of these new genetic research techniques, however, has changed the calculus of risk. In the course of their studies, researchers may discover information that could profoundly affect the lives of the tissue sources and even their relatives. For instance, they may discover that a genetic change found in certain tissue samples is a germline mutation (one that is heritable), rather than a somatic mutation (a spontaneous, localized genetic change). Research results that detect the presence of germline mutations may have predictive value not only for individual patients but for their close relatives as well. Moreover, the inappropriate use of genetic information obtained in the course of research could have dire consequences for individuals and families, affecting critical personal matters such as employment, insurance coverage, and the decision whether to have children. The sensitivity and fears of potential abuse of this information have given rise to the present public concern with issues of informed consent.

Already, the Office for Protection from Research Risks, which oversees the federal regulation of research involving human subjects, has issued guidelines emphasizing that because genetic research protocols involve psychosocial rather than physical risk, such protocols–even those involving archival tissue samples–should no longer automatically be considered of “minimal risk” and therefore eligible for expedited IRB review. In addition, a number of private organizations have set out to take a fresh look at the protection of autonomy and privacy in research and the requirements for informed consent in genetic testing and research with human tissue specimens.

The most influential entity studying this issue is likely to be the Ethical, Legal and Social Implications (ELSI) Program of the National Center for Human Genome Research, sponsor of the Human Genome Project, which has established a working group to examine these matters. Although the working group has not yet expressed an official opinion or issued a formal statement, ELSI has supported efforts by a number of other organizations, including the drafting of a document called the Genetic Privacy Act, which proposes sweeping model legislation and has been widely circulated to the Congress and state legislatures. Many of its provisions have already been embedded in legislation enacted in 1995 in Oregon, and some have found their way into proposed legislation introduced last summer by Senator Pete Domenici. In addition, a number of other organizations, including the American College of Medical Genetics, the American Society for Human Genetics (ASHG), a committee working under the auspices of the National Action Plan for Breast Cancer, the Task Force on Genetic Testing of the National Institutes of Health-Department of Energy Working Group, and, of particular importance, a working group convened jointly by the National Institutes of Health (NIH) and the Centers for Disease Control and Prevention, have published position papers.

Given the current climate of anxiety about genetic privacy and the privacy of medical information more broadly, there is likely to be escalating public pressure to resolve these issues legislatively or through the hasty enactment of guidelines and regulations. Yet, proposals such as these may frame the terms of the debate over genetic privacy for years to come and lead to unintended consequences of profound import. It is disappointing, therefore, that each of these proposals is directed more at circumscribing the feasibility and scope of genetic inquiry than at creating more effective protections against the inappropriate use of clinical and research data derived from this vital area of research.

Stringent standards

There seems to be broad agreement among the various documents that the dramatically enhanced power of the new research methodologies demands that informed consent procedures for genetic research on tissue samples be significantly strengthened. Different proposals, however, suggest different criteria for determining how stringent those standards should be. Two of the most widely circulated documents–the Genetic Privacy Act and the Consensus Statement from the NIH-CDC Working Group–have stirred serious discontent among scientists who work with human tissue samples.

The Genetic Privacy Act, the most extreme of the documents, reflects an important school of bioethical and legal thought in stressing that genetic information is qualitatively different from other types of personal information and so requires special protection. Thus, the Act sets forth elaborate consent mechanisms for information derived from DNA analysis. Each person who collects a DNA sample for the purpose of performing genetic analysis must, among other things, provide a notice of rights and assurances prior to the collection of the sample, obtain written authorization, and restrict access to DNA samples to persons authorized by the sample source. The authorization includes a description of all approved uses of the DNA sample and notes whether or not the sample may be stored in an identifiable form, whether or not the sample may be used for research under any circumstances, and if so, what kinds of research are allowed or prohibited. The Act further states that an individually identifiable DNA sample is the property of the sample source, who thereby retains the right to order its destruction at any time. In any event, the sample must be destroyed upon the completion of the specifically authorized genetic analysis or analyses unless retention has been explicitly authorized or all identifying information is destroyed.

The Consensus Report from the NIH-CDC Working Group also adopts a stringent position on most of these issues. It interprets narrowly the existing regulations that govern research involving human subjects and states that all research with human tissue samples should require IRB review, whether the tissue samples are anonymous, anonymized (originally identifiable but stripped irreversibly of all identifiers and impossible to link to sources), identifiable (linkable to sources through the use of a code), or identified. The report also calls for full informed consent for tissue samples that are obtained in the course of clinical care as well as for any existing tissue samples that can be linked to sources. Its stringent standard of informed consent includes many of the provisions contained in the Genetic Privacy Act: for example, giving the subject authority to determine whether or not samples may be anonymized for research, whether they will share in the profits of any commercial products that might arise from the research, whether they are willing to have their samples shared with other investigators, and whether they wish their specimens to be used only in the study of certain disorders.

Other proposals seek to vary standards for informed consent based on whether studies are retrospective or prospective–that is, whether they use pre-existing samples or samples being drawn for future use–and whether the samples to be used are anonymous or not. For example, the report of the Rapid Action Task Force of the ASHG states that all human genetic research protocols must undergo IRB review and that stringent informed consent is necessary for all prospective studies. Moreover, it would not allow researchers to ask subjects to grant blanket consent for future genetic research projects if the tissue specimens will be identifiable. In retrospective studies with identifiable samples, investigators are required to recontact sources and obtain their full informed consent, although the IRB may waive this requirement. Full consent is not required for retrospective studies with anonymous or anonymized samples.

The need for definitions

It is noteworthy that the terms “genetic information,” “genetic research,” and “genetic testing” are often used in the various documents without further definition. Where they are defined, however, the definitions are very broad: for example, the analysis of DNA, RNA, chromosomes, proteins, or other gene products to detect disease-related genotypes, phenotypes, or karyotypes for clinical purposes. Among all of the documents produced to date, only the report of the NIH-DOE Task Force on Genetic Testing circumscribes the definition of genetic testing by stating that the authors do not intend to include in their definition tests conducted purely for research, and that they are most concerned about the predictive uses of genetic tests. The Task Force report also seeks to exclude “tests for somatic cell mutations, unless such tests are capable of detecting germline mutations.”

Even with this exclusion, however, the definition would encompass much of the routine surgical pathology work-up of patient specimens, as well as almost all molecular research with human tissue samples. In most cases, tissue from lesions is juxtaposed with normal residual tissue on microscopic slides, so that any test for a putatively somatic genetic alteration in the lesion could readily detect the presence or absence of the same change in the neighboring normal tissue; its presence there would indicate a germline mutation. Indeed, researchers often do not know at the outset of their research whether or not they will stumble onto a germline mutation, so the distinction between the two types of testing–although important in theory–is difficult to draw in practice.

In the context of contemporary molecular biology, the terms “genetic research,” “genetic test,” and “genetic sample” are exceedingly broad. As used in the various proposals, the terms are so inclusive and imprecise that it is inadvisable–and indeed impractical–to use them as the basis for new research guidelines or regulations. By defining a genetic sample as any human specimen that contains DNA and a genetic test as any test applied to that specimen that can reveal genetic information (a definition that includes not only direct analyses of DNA but also tests of RNA expression or protein products), these proposals would apply new and stringent consent requirements to many highly informative diagnostic tests that are used routinely in pathology laboratories to work up tissue specimens, as well as to procedures that are essential for the conduct of research. As a result, their recommendations would not only conflict with sound medical practice, patient management, and even the law, but could also wipe out a broad swath of vital research.

For instance, it is commonplace in pathology practice that the initial examination of a specimen may indicate the need for additional diagnostic tests. These tests–often by definition “genetic” tests–are typically carried out in an ordered sequence, where each is dictated by the results of the preceding. Under requirements such as those of the Genetic Privacy Act, the pathologist would have to return to the patient to obtain specific informed consent before performing each of the tests in the diagnostic sequence.

To permit patients to require that tissue samples be destroyed not only contradicts sound medical practice but is actually illegal. The authors of these reports may not be aware that state laws as well as medical accreditation bodies require health care providers to maintain tissue archives for purposes of patient management for up to twenty years. The reason is straightforward: archival tissue samples are part of the medical record. For example, if a patient has a cancer excised and a new lesion later appears, it is essential for diagnosis and management that the earlier tissue specimen be retrievable for pathologic review to determine whether the new growth is a metastasis or an entirely new tumor. Patients simply cannot be allowed to mandate the destruction of clinical specimens as soon as particular “authorized” tests are completed.

More seriously, by failing to distinguish between diagnostic or predictive testing and tests conducted for purposes of research, these proposals would impose a tremendous burden on laboratory researchers using genetic techniques. For example, scientists may use genetic analyses on human tissue specimens to try to discover whether a genetic marker is associated with a particular disease or determine whether certain kinds or patterns of mutations are correlated with particularly aggressive or therapeutically unresponsive cancers. These research studies would be subject to the same stringent consent requirements as tests used for clinical or diagnostic purposes.

It is probably true, as the NIH-CDC Working Group points out, that the current language in routine operative and hospital consent forms about the use of tissue samples in research is inadequate and that IRBs should establish a higher threshold for “impracticality” as a criterion for waiving a more stringent consent protocol. Nonetheless, the practicality of some of the complex informed consent procedures that have been proposed is highly dubious. The requirement that researchers seek new consent from former patients or their next of kin for research with archival materials is particularly burdensome. In an era in which the demand for biomedical research funding chronically outruns the supply of funds, such consent requirements would impose new administrative, logistical, and financial burdens of enormous magnitude.

The proposals for obtaining prospective consent for human tissue specimens are also problematic. The draft consent forms that have been circulated are lengthy and complex. They might conceivably be workable in protocols involving studies of healthy populations. However, introducing lengthy and complex speculations about the hypothetical future uses of samples in genetic research would likely be confusing and intimidating to anxious patients awaiting surgery, whose primary concern is their own health, and might well encourage them to forbid any research at all on their specimens–a reaction well known in the epidemiology community that researchers have dubbed “uninformed denial.”

Equally inadvisable would be a consent procedure that offered patients multiple opportunities to limit or prohibit the kinds of research that could be done with individual specimens. To construct tissue archives with detailed instructions and proscriptions attached to each specimen would be a logistical nightmare. More important, it is impossible to foresee the kinds of research questions that might arise in future years, and to cramp future research opportunity in such blanket fashion would be tragically unwise.

In theory, researchers could solve this dilemma by relying only on fully anonymous or anonymized tissue samples. Most proposals would continue to require only minimal consent for the use of these specimens. However, this solution proves unrealistic in practice. The national archive of human tissue specimens, which has historically been the predominant source of tissue samples for research, is composed overwhelmingly of specimens that were removed for medical reasons and must remain identifiable for future access on behalf of the patient, in the same way that the patient’s medical record must remain identifiable. Furthermore, researchers often need to obtain follow-up clinical information in order to determine whether their findings are in fact significant in diagnosing or predicting the course of a disease. Given the impossibility of creating a totally anonymous tissue archive, the imposition of stringent consent requirements would burden the vast majority of research involving archival tissue specimens.

Redressing the balance

The proposals reviewed here reflect the heavy input of bioethical opinion and perspective, but they are insufficiently attentive to the requirements of medical and biomedical research practices. Put simply, they come down too heavily on the side of private interest at the expense of public benefit and thereby distort the delicate equipoise that must always be respected in research involving human subjects.

To help to redress that perceived imbalance, I offer the following principles:

It has been too readily accepted that genetic information is unique and different in kind from other information contained in a patient’s medical record. To the contrary, I would argue that the issue is one not of qualitative difference but of degree; and the distinction is important in devising appropriate and workable mechanisms for protecting confidentiality. Although it is true that genetic information is intensely private and can be sensitive as well as predictive, the same is true for much of the information that may be in the medical record; nor is it the case that only genetic information is susceptible to misuse for discriminatory purposes by insurers, employers or others.

It is interesting to note, for instance, that the authors of the Genetic Privacy Act define “private genetic information” so as to exclude genetic information derived from a family history or from routine biochemical tests, on the grounds that the broader definition would make virtually all medical records subject to the provisions of the Act and so would require the overhaul of well established medical procedures and practices. Instead, they define it as information derived from “an analysis of the DNA of an individual or of a related person.”

This discussion is intriguing, for it indicates that the authors at least glimpsed the artificiality of the distinctions they were creating and recognized the impracticality of trying to embrace all genetic information within their definition. Unfortunately, they seemed unaware of the entirely analogous impracticalities that arise from attempting to apply these strictures to contemporary diagnostic pathology and biomedical research practices.

The protection of sensitive, stigmatizing, and even predictive information is not a new issue for medicine, nor is the compelling need to strengthen the legal framework of protection against inappropriate or forced disclosure or discrimination limited exclusively to genetic information.

Much more careful attention should be paid to the definition of terms. In particular, the definition of a genetic test should be narrowed to focus on the purpose of the study, not on the particular research methods used. The term “genetic test” should be applied only to tests that are carried out on healthy, or presymptomatic, subjects prospectively for the explicit purpose of determining the presence of heritable risk factors whose predictive significance is well established. This definition would include tests performed on a population sample to determine the distribution of such factors for epidemiologic purposes. Appropriately defined, genetic testing should certainly meet a high standard of informed consent, for it entails potentially significant social, psychological, and financial risks.

By contrast, research studies on human tissue specimens, even if they involve tests to examine gene structure and function, ordinarily should not be defined as “genetic tests.” (IRBs would always have the prerogative to decide for cause that particular research protocols warranted exception from the general rule.) They should not be considered diagnostic and should not be entered into the patient record or communicated directly to the tissue source. Genetic data resulting from research are nascent, in the sense that they cannot be fully interpreted until they have been replicated, validated analytically and clinically, and demonstrated to have clinical utility. Only then can the medical community determine whether the results provide the basis of a useful genetic test that should be introduced into clinical practice. In addition, research results are obtained under experimental conditions that ordinarily do not meet the accreditation standards for diagnostic laboratories.

In the event that research yields information that the investigators believe could be of significant consequence to particular individuals, the investigators should bring the matter before the local IRB for determination of whether, how, and to whom to communicate the results.

There appears to be general agreement that the degree of informed consent required for the use of human tissue samples in research depends largely on whether or not the samples can be linked to specific individuals. The problem is that so few samples are anonymous or anonymized. To facilitate tissue-based research, it is essential that research with coded samples continue to be eligible for IRB approval under a minimum informed consent procedure. At the same time, research institutions should take whatever steps are necessary to limit investigators’ access to information about the sources’ identities. The crafting of credible protective mechanisms will not be a simple task and demands thoughtful consideration.

When samples for research are accumulated in a central repository, a feasible approach is for the hospitals or clinics where the samples originate to code them before depositing them in the registry. The coded samples could then be distributed to investigators. If an investigator requires follow-up information pertaining to specific samples, the request would have to be routed back through the registry and then to the originating hospital or clinic. Identifying information would thus remain restricted to points remote from both the investigator and the registry.
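
For concreteness, the sketch below shows one way such a routing scheme could work. It is a hypothetical illustration only: the Hospital and Registry classes, their methods, and the sample identifiers are invented here and do not appear in any of the proposals. The point is simply that the code-to-patient link can live solely at the originating institution, so that neither the central repository nor the investigator ever handles identifying information.

```python
# Hypothetical sketch of coded-sample routing. The originating hospital keeps
# the only link between patient and code; the registry and the investigator
# handle coded samples only. All names are invented for illustration.
import uuid

class Hospital:
    def __init__(self, name):
        self.name = name
        self._code_to_patient = {}          # identifying link stays here

    def deposit(self, patient_id, specimen, registry):
        code = uuid.uuid4().hex             # opaque code replaces identity
        self._code_to_patient[code] = patient_id
        registry.accept(code, specimen, origin=self)
        return code

    def follow_up(self, code):
        # Only the originating hospital can resolve the code; it consults the
        # patient's record internally and releases only de-identified data.
        patient_id = self._code_to_patient[code]      # never leaves the hospital
        _record = self._lookup_record(patient_id)     # placeholder chart review
        return {"code": code, "follow_up": "de-identified clinical outcome"}

    def _lookup_record(self, patient_id):
        return f"medical record of {patient_id}"      # stays inside the hospital

class Registry:
    def __init__(self):
        self._samples = {}                  # code -> (specimen, origin hospital)

    def accept(self, code, specimen, origin):
        self._samples[code] = (specimen, origin)

    def distribute(self, code):
        specimen, _ = self._samples[code]
        return code, specimen               # the investigator never sees identity

    def request_follow_up(self, code):
        _, origin = self._samples[code]
        return origin.follow_up(code)       # request routed back to the source

# Usage: the investigator works only with codes.
hospital = Hospital("Originating Hospital")
registry = Registry()
code = hospital.deposit("patient-042", "paraffin block 17", registry)
print(registry.distribute(code))
print(registry.request_follow_up(code))
```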

This solution, however, would not apply in academic medical centers, where the central tissue repository typically resides within the pathology department and individual clinicians may compile their own collections of tissue samples from known patients. In such cases, plausible fail-safe coding mechanisms are much more difficult to design. Nonetheless, if the creation of such mechanisms becomes an absolute requirement for permitting research on human tissue specimens to continue under a minimum informed consent procedure, it would certainly be worth the effort to protect patient privacy and assuage public anxiety.

Any modified consent procedures proposed for tissue samples obtained in the clinical setting must be clear and simple. They should be crafted on the premise that the national tissue archive has always been and should remain a public resource and not, like a savings bank, a depository of private property. The blanket imposition of stringent and complex consent requirements offering multiple options to limit or prohibit research, in settings of high patient anxiety, would have a stifling effect on archival tissue research, an effect that could be justified only if it served a compelling public interest. In my judgment, such a criterion has not been met.

Federal and state governments should continue their efforts to protect the confidentiality of sensitive medical information resulting from genetic testing and broaden them to include the results of genetic research. Very recently, a number of states have enacted legislation to prohibit employment or insurance discrimination on the basis of genetic testing or information. Similar bills have been introduced in both Houses of Congress, and both the House (H.R. 3103) and Senate (S. 1028) versions of health insurance reform legislation contain provisions that would prevent health insurers from discriminating on the basis of genetic information. Moreover, the U.S. Equal Employment Opportunity Commission recently extended the employment discrimination protections of the Americans with Disabilities Act to cover discrimination on the basis of genetic predisposition. These are important first steps in allaying public anxieties about genetic privacy.

Similar concerted energy should be devoted to strengthening protection of the information obtained in genetic research. One valuable mechanism might be the statutorily authorized Certificate of Confidentiality, issued on a discretionary, project-by-project basis by the Public Health Service (PHS), to protect the identities of research subjects in specific types of research studies. Among the categories of information deemed sensitive are “information that if released could reasonably be damaging to an individual’s financial standing, employability, or reputation within the community,” and “information that normally would be recorded in a patient’s medical record, and the disclosure of which could reasonably lead to social stigmatization or discrimination.” It is striking how closely these criteria mirror those applied to the potential misuse of genetic information.

I suggest that the reach of the Certificate of Confidentiality be broadened to include, as a class, all genetic information created in research. This could probably be accomplished through regulation under the existing statute and might be implemented through an assurance mechanism whereby protection would be extended to research institutions that had put into force an institutional confidentiality policy that met federally specified requirements. For this purpose, I would exploit the imprecision of current definitions of “genetic information” and cast the protective net as widely as possible in the hope of providing maximum alleviation of public anxiety.

Most urgently, we need to establish an effective process to guide the public debate and assure that its resolution strikes the proper balance between the conflicting interests of individual privacy and the public benefits of genetic research. To date, these issues have been discussed extensively by geneticists, ethicists, lawyers, and patient representatives, with a nearly exclusive focus on ethical issues. There has been inadequate participation by the larger biomedical research community, especially those who are engaged in genetic research on human diseases. Many of the flaws and oversights of the proposals reviewed here (particularly those involving clinical diagnosis) almost surely are the inadvertent result of inadequate input into the process.

One approach that could bring order and direction to this debate and increase the odds of a constructive outcome would be for the director of the NIH to convene a committee of experts representing an appropriate range of disciplines to review this problem and propose recommendations through the Director’s Advisory Committee. Although not foolproof, this process is time-honored and has been used successfully in the past to develop positions of broad agreement on controversial or complex issues, such as fetal tissue research or the NIH intramural research program. By soliciting input from a variety of constituencies, such a committee could ensure an orderly and balanced discussion and help frame a debate that is presently rudderless and susceptible to premature political intervention. The committee’s recommendations could set the parameters in an important area of emerging science policy and provide a credible basis for whatever statutory, regulatory, or less formal mechanisms may be deemed necessary.

As biomedical science continues to progress, it will continue to raise new and challenging issues of genetic privacy, informed consent, and the ownership and custodianship of patient data. We will continue to be vexed by troubling questions that lie at the boundary between our society’s commitment to individual autonomy and its compelling interest in the benefits that flow from its generous investment in biomedical research. It is important that the policies that emerge from the current debate respect patients’ rights to privacy and to informed consent; at the same time, however, they must not unduly encumber researchers’ access to the nation’s treasury of tissue samples nor inhibit its future growth. The thoughtful resolution of these issues can set an important precedent that will inform future decisions in this difficult arena.

Transforming the Navy’s Warfighting Capabilities

On June 26, 1897, Great Britain’s Royal Navy conducted a review in honor of Queen Victoria’s diamond jubilee. The review represented the greatest concentration of naval power the world had ever seen. At the heart of that power were the Royal Navy’s battleships, row upon row of them.

In some respects, that review represented the high-water mark of the battleship. The advent of the airplane and the modern submarine would soon consign the battleship to the same fate as the wooden sailing ships of the line that preceded it. By the early days of World War II, the aircraft carrier displaced the battleship as the naval fleet’s capital ship.

Rapid advances in aviation technology had transformed the carrier from a ship that provided aircraft to scout the enemy fleet and provide gunfire adjustment for the battleships into a strike platform in its own right. Carriers could now launch strikes several hundred miles from their targets, while battleships could hurl their shells but a few miles. Although the so-called battlewagons managed to hang on for nearly 50 years after the war, following the Gulf War the battleships were decommissioned, apparently for good.

Today, advances in information technology, coupled with the diffusion of advanced military technology, threaten to make aircraft carriers–and their crews of thousands–increasingly vulnerable to attack. Nonetheless, the Navy plans to spend billions of dollars on a new super carrier and is testing expensive new missile systems to defend its carrier battle groups. To look to the future, the Navy should look to the past: The technological advances that are putting carriers at risk could pave the way for the resurrection, albeit in a more efficient, lethal, and different form, of the battleship.

A growing threat

In the wake of the Cold War, a growing number of would-be U.S. adversaries are developing or procuring military and intelligence capabilities that may soon permit them to mount a serious threat to U.S. aircraft carriers. A brief overview of these capabilities includes the following:

Long-range reconnaissance and strike capabilities. A number of countries have access to surveillance satellites and other intelligence collection capabilities that would allow them to observe a carrier battle group from space, perhaps days before the carrier comes close enough for its aircraft to strike their targets.

To be sure, carriers, despite their size, are not easy to find at sea. But finding them on the open ocean may not be necessary. Cold War-era operations emphasized using the carrier to maintain control over the high seas, thereby keeping sea lanes open. But with the Cold War over, concern over regional threats such as Iran, Iraq, and North Korea has led the Navy to emphasize operations in coastal areas, to better support U.S. military operations ashore should aggression occur. As a result, Third World aggressors could have a much easier time finding their targets: They could focus their reconnaissance efforts exclusively on their littoral area, which the carrier would have to enter in order to launch air strikes against inland targets.

Third World states also are acquiring the means to strike targets at far greater distances and with greater precision than they could just a few years ago. More than 15 nations (including Iran, Iraq, Syria, North Korea, and Libya) have ballistic missiles. It is not clear whether these states could move targeting information quickly enough to conduct successful strikes against the carriers, or whether they would be able to penetrate U.S. fleet missile defenses. But the threat is serious enough that the Navy is expending considerable resources to protect the carriers from such attacks.

Antiship cruise missiles. More than 40 Third World militaries now possess antiship cruise missiles (ASCMs), which can be launched from the shore, aircraft, ships, or submarines. Although they are not cheap, these missiles have been used to good–and sometimes devastating–effect in recent years. During the 1982 Falklands War, Argentine Exocet missiles inflicted substantial damage on the Royal Navy. In 1987, during the U.S. Navy’s escort of reflagged Kuwaiti oil tankers in the Persian Gulf in the midst of the Iran-Iraq War, another Exocet fired by Iraq severely damaged the USS Stark, killing 37 of her crew.

Iran has been particularly enamored of these missiles. Recently, the Iranian Navy test-fired an ASCM with a 60-mile range. The commander of U.S. naval forces in the region has expressed concern that, over time, Iran’s acquisition of an increasingly capable inventory of ASCMs, when combined with its attack submarines, ballistic missiles, and antiship mines, could make the fleet’s job “a lot tougher.”

Antiship mines. Mines have long posed a vexing challenge for the Navy. Of the 18 Navy ships seriously damaged in operations since 1950, 14 were hit by mines. During the Gulf War, the cruiser USS Princeton and a countermine task force flagship, the USS Tripoli, were both damaged by mines. After the Gulf War, General Schwarzkopf described the Navy’s minesweeping force as “old, slow, ineffective, and incapable of doing the job.”

Today there are 48 navies capable of laying mines. Thirty-one nations manufacture mines, nine more than in 1991. More than 20 nations now export mines. The former Chief of Naval Operations, Admiral Jeremy Boorda, made improving the service’s capabilities in this area a high priority. Nonetheless, he admitted that there is no easy way to defend against mines. “Once they’re in the water,” he observed, “you’ve got a big problem.”

What does this mean for the carriers? If carriers are operating several hundred miles out at sea, it may mean little or nothing. On the other hand, the Navy operated carriers in the Persian Gulf during Operation Desert Storm. To enter the gulf, the carriers had to transit the Strait of Hormuz, which can be mined. With the Navy focusing its operations on supporting the battle ashore, the carriers will increasingly find themselves operating close to the coast. The closer they come to shore, the greater the danger they face from mines.

Submarines. Conventional submarine sales are expected to double over the next decade, with a total of 50 to 60 submarines being bought by some 20 countries. It may be more difficult for the Navy to conduct effective antisubmarine warfare operations in littoral areas, where noise levels are considerably greater than in the open ocean. Moreover, by operating close to shore, carriers will make the submarines’ task easier by reducing their search area. These problems may be counterbalanced somewhat by the difficulty Third World navies are encountering as they attempt to become proficient in operating these complex pieces of military equipment.

Ultimately, it will likely be the cumulative effect of these threats, rather than any one of them, that significantly erodes the carrier’s utility. For example, a single mine explosion may not sink a carrier, but it may reduce the carrier’s speed, making it an easier target for missiles. Even if the carrier is not hit by mines, if it enters a mine field it may have to reduce its speed to avoid them. Operating close to shore, with attack warning times reduced, the carrier’s defenses may be stressed not by one type of missile but by a variety of them.

Gulf War 2010

Consider, for example, a Persian Gulf war in the year 2010. This time, assume that Iran is the aggressor. Perhaps Teheran’s objective is not to threaten the Kuwaiti and Saudi oil fields directly, as the Iraqis attempted to do in the Gulf War. Rather, Iran could choose to constrict the oil flow indirectly by closing the Strait of Hormuz at the mouth of the Persian Gulf.

To this end, the Iranians seed the strait with advanced antiship mines that are guarded by submarine patrols. Along the coast, Iran positions batteries of Silkworm and Seersucker antiship missiles. It also disperses mobile launchers armed with long-range cruise and ballistic missiles and gathers intelligence provided by satellite reconnaissance photography, commercial communication services, and precise positioning systems. By investing modestly but prudently in advanced technology, Iran has presented the U.S. military with a dramatically different, and more dangerous, challenge than it faced in the Gulf War.

How would Navy commanders respond to such a challenge? Under present conditions, they have three basic options. One is to keep the carriers out at sea. This would reduce their vulnerability but also limit their ability to strike targets ashore. Another possibility is to bring them in closer to shore, subjecting the carriers–and their crews of 5,000 to 6,000 sailors–to increased risk. Third, they could offset the increased threat from missiles by creating ever-thicker antimissile, antisubmarine, and countermine defenses around the carrier–a large, unwieldy target. Of course, these options are not necessarily mutually exclusive.

The U.S. Navy is, in fact, devoting considerable resources to developing more effective missile defenses for its carrier battle groups. One key element is the creation of the Cooperative Engagement Capability (CEC) battle-management system. The CEC is designed to combine all the combat systems and major sensors on ships into a single, integrated architecture for intelligence, surveillance, reconnaissance, and C4I (command, control, communications, computers, and intelligence). If successful, the CEC would give commanders a dramatically improved picture of the extended battle area and greatly enhance their ability to intercept missiles at extended ranges.
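
The general idea of fusing many ships’ sensor reports into one shared picture can be illustrated with a deliberately simplified sketch. The Python below is a hypothetical toy, not a description of the actual CEC design: the class names, track fields, and the crude averaging rule are all invented for illustration. It shows only the basic notion that reports of the same contact from different ships can be merged into a single fleet-wide track that any ship in the network could act on.

```python
# Toy illustration of a shared air picture: each ship reports contacts it
# detects, and a fleet-level track file merges reports on the same contact.
# All names and numbers are invented; this is not the CEC architecture.
from dataclasses import dataclass, field
from statistics import mean

@dataclass
class Contact:
    contact_id: str
    positions: list = field(default_factory=list)   # (x, y) reports in km
    reporters: set = field(default_factory=set)     # ships that saw the contact

    def fused_position(self):
        # Crude fusion rule for illustration: average all reported positions.
        xs, ys = zip(*self.positions)
        return (mean(xs), mean(ys))

class SharedPicture:
    """Fleet-wide track file built from every ship's sensor reports."""
    def __init__(self):
        self.contacts = {}

    def report(self, ship, contact_id, position):
        # Any ship in the network can contribute a report on any contact.
        track = self.contacts.setdefault(contact_id, Contact(contact_id))
        track.positions.append(position)
        track.reporters.add(ship)

# Usage: two ships detect the same inbound missile; either could engage using
# the fused track, even if its own radar never held the target well.
picture = SharedPicture()
picture.report("cruiser", "missile-01", (100.0, 42.0))
picture.report("destroyer", "missile-01", (101.0, 40.0))
track = picture.contacts["missile-01"]
print(track.reporters, track.fused_position())   # fused estimate: (100.5, 41.0)
```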

However, maintaining and beefing up the traditional carrier battle group represents a high-cost strategy for protecting the Navy’s carrier strike assets. It is not clear whether the Navy, which is experiencing budget shortfalls in other critical areas, such as surface ship recapitalization, countermine capabilities, and the procurement of precision munitions, can (or should) afford it–especially if more effective alternatives are available. Fortunately, such options exist.

The arsenal ship

Advances in missile and precision-guidance technology, combined with long-range reconnaissance and targeting technology, are giving ships the ability to out-range carrier aircraft in conducting strike operations. For a Navy concerned about tight budgets, it is likely that such ships can be procured and operated at substantially lower cost than the carriers, while placing far fewer sailors in harm’s way. The emerging offensive capabilities of these new “battleships” promise to transform the strategic role of the aircraft carrier.

Just as the information revolution has transformed the computer world from its focus on the mainframe computer to a distributed web of mainframes and personal computers, so can it transform the future Navy. Whereas existing fleets center on the carrier as the focus of naval strike capability and devote their resources to defending it, the fleet of the future could distribute its offensive capabilities among a variety of ships networked into a systems architecture, or “system of systems.” A constellation of strike platforms within a web of surveillance, reconnaissance, and battle management systems would comprise the new, distributed capital ship.

A key node in this system would be a new form of battleship. The Navy already has a name for it: the arsenal ship. The concept for the arsenal ship dates back at least to the late 1980s, when Vice Admiral Joseph Metcalf challenged traditional surface combatant designs and laid out the basic characteristics of the “striker,” a stealthy, sturdy warship that could deliver a devastating amount of firepower at long range.

The arsenal ship does not have the imposing profile of traditional battleships or today’s super carriers. In fact, it looks more like a tanker. The ship would be highly automated, with a crew of fewer than 100. For enhanced protection, it would be semi-submersible with a very low profile, and its design would incorporate stealth technologies. It would also be equipped with active point defenses to protect against missile attacks, and its double-hull design would offer good protection against mines and torpedoes.

The arsenal ship’s long, flat deck would incorporate a grid of 500 vertical launch systems, or missile tubes, capable of launching a wide variety of extended-range precision munitions such as the Tomahawk land-attack cruise missile, the Army Tactical Missile System, and the Evolved Sea Sparrow Missile. The tubes could also launch antiballistic missiles and unmanned aerial vehicles (UAVs), remotely piloted drones that gather and relay data.

To its credit, the Navy plans to invest some $500 million, with support from the Advanced Research Projects Agency, to construct a prototype arsenal ship. But it is unclear whether the initial design will be the optimal design. Indeed, in the 1920s and 1930s, the Navy had to experiment with several classes of carriers before settling on the carrier design that proved successful in World War II. According to the Navy’s own figures, for an extra few hundred million dollars, it could commission four or five concept studies for the arsenal ship and produce two prototypes. At present, the Navy has chosen not to do so.

The Navy also has the opportunity to convert four Trident ballistic missile submarines to stealthy, general purpose warships. Trident conversion is made possible, in part, because the Navy is removing four of the ships from Cold War era nuclear service. Options being considered include a stealthy strike Trident, with well over 100 missile tubes; a stealthy troop transport carrying 500 troops for short periods, or 144 troops for an indefinite period; or a stealthy multimission ship with about 100 missile tubes and 100 troops. The estimated costs of these options range from $450 million to $750 million per boat. The converted Tridents could launch long-range precision munitions, UAVs, and unmanned underwater vehicles.

In the newly configured fleet, the arsenal ship and the Tridents, along with carriers, submarines, and surface combatants such as cruisers and destroyers equipped with missile grids, could be linked into a web of reconnaissance systems, including satellites, unmanned aerial vehicles, special operations reconnaissance teams, and remotely emplaced sensors. This could allow the fleet to employ its long-range precision strike systems more effectively. The Navy CEC program could be the first major step in providing this kind of comprehensive linkage.

The fleet in battle

How would such an architecture be employed? Let us return to the future Persian Gulf conflict. A U.S. fleet substantially different from today’s fleet would approach the Persian Gulf. It might comprise four carrier battle groups. The fleet also would include ships capable of providing both an “upper-tier” and a “lower-tier” defense to intercept missiles at high and low altitudes, respectively, as well as three arsenal ships and three Trident stealth battleships.

The fleet would be led by a screen of attack submarines, whose mission is twofold: to conduct antisubmarine warfare against Iranian subs and to begin clearing the Iranian minefields blocking the Strait of Hormuz. Behind the submarine screen are the stealth battleships, followed by the arsenal ships. Two stealth battleships are equipped with 162 precision-guided missiles and UAVs. The third stealth battleship is a multimission boat, equipped with a mix of 117 missiles and UAVs and carrying a dozen Navy SEAL and Army Special Forces reconnaissance teams.

Once the submarines have cleared a lane in the mine belts, the multipurpose Trident boat moves in behind them close to shore and begins to disgorge its special operations forces. Two days later and a few score miles out to sea, the stealth battleships and arsenal ships begin launching UAVs and missiles designed to deploy sensors. Within 18 hours, a U.S. reconnaissance architecture is in place and operating, composed of an upper tier of satellites, a UAV “grid,” remote sensors, and special operations forces.

Even before the architecture is in place, the battleships and arsenal ships, in combination with long-range Air Force bombers, begin an extended-range precision-strike campaign against key Iranian fixed targets. These strikes are quickly supplemented by similar (although less effective) attacks on critical mobile targets that can be identified and tracked by the U.S. reconnaissance and deep-strike architecture.

The Iranians respond by launching a barrage of nearly 400 ballistic and cruise missiles. Thanks to U.S. information dominance, persistent attacks on Iranian missile units, and land- and sea-based ballistic missile defenses, the effects of the attack are mitigated considerably. Other Navy surface combatants with long-range precision-strike capabilities now add their fire support, launching long-range Tomahawk cruise missiles.

After several weeks, the long-range precision strike campaign has substantially weakened the Iranian long-range missile forces and, correspondingly, the threat to regional air bases. U.S. Air Force tactical aircraft are now deployed to bases in the Gulf. At this point, the Navy’s carriers are directed to move much closer to shore. The effect is to increase target coverage dramatically–at the cost of increased risk to the naval forces involved.

Soon the Iranian armed forces no longer possess the capability to deny U.S. naval forces freedom of movement in the Persian Gulf region. The Navy’s carriers, now operating relatively freely outside the mine belts, lend the full weight of their air wings to the Air Force tactical wings operating ashore and the Navy and Air Force long-range strike forces. With U.S. and allied ground forces deploying in large numbers to the region in preparation for a land assault on Iran, the government in Teheran requests a cease-fire through the United Nations.

Whither the Navy?

Will the U.S. Navy exploit rapidly emerging technologies to build this very different–and potentially far more effective–fleet, or will it employ these technologies to improve existing capabilities at the margin? At present, the answer is unclear.

The U.S. Navy is far ahead of any other naval competitor in its thinking about the arsenal ship, the stealth battleship, and the integration of reconnaissance, battle management, strike, and missile defense systems. Yet to meet the demands of a U.S. strategy to wage regional conflicts in a way that leads to quick, decisive victories with minimal loss of American lives, it will have to exploit the potential of the distributed capital ship and the precision-strike systems architecture described above. Given its limited resources, this means that the Navy will have to make some tough choices.

The Navy already has a dozen supercarriers, yet it has requested more than $4 billion from Congress to build another. In addition, it is preparing to spend billions more to develop a new class of carriers. Meanwhile, it does not have a single arsenal ship or stealth battleship, which are each projected to cost about $700 million; the purchase of precision-guided munitions lags behind earlier plans; and the CEC program, which showed encouraging results in initial tests, still requires much additional development. The combination of tight budgets and strong attachments to tradition may crowd out the Navy’s investment in innovation.

The Navy is at a crossroads. It can follow the familiar path into the future, relying primarily on tried-and-true capabilities that have proven their worth over the last half century. Or it can place somewhat less reliance on traditional forces while exploiting rapidly emerging technologies to create a very different fleet to meet what will likely be very different future challenges. In short, if it is to leap into the future of a smaller, but very different kind of fleet, the Navy will likely find itself embracing a familiar symbol of its past–the battleship.

Rethinking Drug Policy

As the 1996 presidential campaign heats up, drug use and control have once again become prominent political issues. Republican presidential candidate Bob Dole has labeled the increased use of marijuana among teenagers, as reported in several recent surveys, a “national disgrace.” In criticizing the White House’s leadership on drug issues, Dole has quipped that instead of adopting the “just say no” position popularized by the Reagan administration, President Clinton has chosen to “just say nothing.” And to demonstrate his commitment to intensifying the war against drugs, Dole has called for the U.S. military to broaden its involvement in interdicting drug shipments, including the use of National Guard units in “rapid response” operations.

General Barry R. McCaffrey, director of the White House Office of National Drug Control Policy, has likened drug control efforts to the war against cancer, implying a more compassionate program centered on treatment and education, but the 1996 National Drug Control Strategy demonstrates that the administration’s primary emphasis is on prohibitive and punitive measures. Although the plan calls for increasing funding for demand-reduction initiatives (treatment, prevention, education, and research) by 8.7 percent, from $4.6 billion in FY 1996 to $5 billion in FY 1997, it also calls for drug-interdiction funding to increase by 7.3 percent, from $1.3 billion to $1.4 billion, and for federal domestic law enforcement funding to grow by 9.3 percent, from $7.6 billion to $8.3 billion, over the same period.

As these figures indicate, no matter who wins in November, prohibition and punishment will continue to be the major focus of the drug war. But is this approach optimal, or even feasible, in today’s increasingly interdependent world? Paul Stares, a senior fellow at the Brookings Institution, makes an impressive effort to redraw the battle lines of the drug war in Global Habit: The Drug Problem in a Borderless World. Stares argues that the prohibitive or negative philosophy on which the United States and most countries of the world have based their drug policies is inappropriate at a time when production, trafficking, and consumption are rapidly increasing worldwide. Instead, Stares proposes emphasizing positive measures. Wisely ignoring the endless and ultimately fruitless debate between supply-reduction and demand-reduction strategies, he would target supply and demand but would emphasize measures that provide positive alternatives to drug use and trafficking. Stares supports his proposal with an expert analysis of the underlying trends and market dynamics that make, and will continue to make, the global drug problem so intractable. When he turns his attention to policy prescriptions, however, he constrains himself by working within the confines of the current debate, and thus his efforts are ultimately disappointing.

A history of failure

Today’s global prohibition regime, based on limiting drug production and trafficking and deterring consumption, has deep roots. In a well-researched if overly long chapter, Stares demonstrates that from the first attempts to control the opium trade at the International Opium Commission in 1909 to recent conventions on money laundering and trade in precursor chemicals, the drug problem has been defined as a law-and-order problem.

But the global prohibition regime, despite its longevity, has not met with many successes. The Organization for Economic Cooperation and Development estimates that consumers in the United States and Europe spend $122 billion per year on heroin, cocaine, and cannabis. Estimates of annual global retail sales range from $180 billion to $500 billion. Fifty to 75 percent of the proceeds of this mammoth “industry” are laundered, resulting in profits larger than the gross national product of three-quarters of the world’s countries.

Stares’s thorough explanation of global drug market trends since the late 1980s shows why there is little reason for optimism that drug flows can be staunched. Although poppy and coca cultivation in the main source areas (the “Golden Triangle” countries of Laos, Thailand, and Burma; the “Golden Crescent” countries of Afghanistan, Pakistan, and Iran; and the Andean countries of Peru, Bolivia, and Colombia) appears to have leveled off, new production sites have sprung up in the countries of the former Soviet Union and Central Asia. Worldwide cannabis production remains undiminished, and the manufacture of illicit synthetic drugs, such as LSD and Ecstasy, is also on the rise.

Despite record seizures by law enforcement authorities, trafficking continues unabated. Higher-volume shipments, using large commercial-grade cargo aircraft, container ships, and cargo trucks, permit the movement of multi-ton loads and reduce the trafficker’s risk of shipment disruption. Trafficking organizations have also become more sophisticated in their operations, diversifying into other drugs and, in some instances, other premium “goods” such as conventional weapons and endangered animals. [Another significant development, which Stares does not discuss, is the now widespread availability of commercially produced Global Positioning System (GPS) technology. GPS permits extraordinarily accurate targeting, allowing the trafficker to drop drug shipments by air to vessels at sea without the need to use high-frequency transmissions that are subject to interception by law enforcement authorities.]

All of these trends are exacerbated by the major change that is the basis of Stares’s thesis: The drug trade has become a transnational phenomenon. Deregulation of trade and the expansion of tourism and transportation networks have not only made the drug “industry” more efficient in reaching long-standing markets but have also made distribution to former communist and developing countries easier.

Drug consumption presents a murkier picture. Although overall consumption in North America and Western Europe has declined, hard-core use has not, and indeed may be accelerating. At the same time, consumption in central and eastern European countries is on the rise.

Accentuate the positive

Stares’s review of current policies finds that negative measures designed to control production, such as crop-eradication programs, chemical controls, and destruction of processing centers, often face violent resistance and can result in the movement of production to other areas or in the substitution of uncontrolled chemicals. Given the increasing opportunities to sell drugs internationally and to launder profits, interdiction opportunities will be increasingly limited. The extraordinary volume of international transactions has greatly reduced the probability that traffickers will be caught. Traffickers themselves are extremely adaptable and can take on new modes of trafficking and new partners easily. Border and customs inspections, the disabling of trafficking networks, and standard law enforcement techniques of surveillance and infiltration will meet with limited success. Noting that informal social norms play the greatest role in encouraging or discouraging drug use and that these norms are constantly evolving, Stares believes that negative measures such as increasing penalties for drug use are likely to be off-target and difficult to sustain.

Although he favors positive approaches, he is aware that previous efforts in this direction have not produced glowing results. Crop-substitution programs have usually been disappointing because they have failed to deal adequately with the market barriers of economies of scale, lengthy gestation periods for establishing alternative crops, and limited transportation networks. To demonstrate that crop substitution can work if properly managed, Stares provides a brief but instructive description of a program in Thailand, where local economic development has been linked to an extensive road program. Stares’s ideal positive approach would be much more comprehensive, including infrastructure improvements, easy credit, marketing advice, direct engagement of local and community leaders, and national macroeconomic policies to create market and employment opportunities.

Stares notes that positive measures designed to constrain trafficking face even greater challenges. Offering legal amnesty or clemency to traffickers in return for cooperation with authorities has had minimal effect on trafficking levels. Stares maintains that targeted investment and employment programs to limit the pool of trafficking recruits could be successful provided that they are of sufficient size and desirability.

He is critical of information-only mass media efforts to dissuade drug use and instead favors an integrated educational program that teaches resistance skills and reinforces the antidrug message. He also recognizes the limitations of drug-free zones (which can displace use to other locations); of “harm-reduction” or “harm-minimization” efforts such as needle-exchange, maintenance, and prescription programs (methadone maintenance is generally accepted as quite successful, whereas Europe’s “prohibition-free” drug-use zones are considered only marginally so); and of legal changes that would permit the commercial sale of “soft drugs.” In other words, Stares’s commitment to the positive approach does not blind him to the reality that half-hearted or ill-designed positive programs will not work.

A frightening future

Stares concludes his argument for policy reorientation by characterizing the future of the global drug market and its implications for policy. The stresses on the developing countries posed by explosive growth in population, urbanization, lack of economic alternatives, environmental degradation, and the spread of infectious diseases will threaten the legitimacy and integrity of public institutions. The same stresses will result in increased demand for drugs in the developing world, forcing already weakened institutions to address the economic and social costs of growing consumption. Criminal organizations in former communist countries are becoming increasingly involved in drug trafficking, as the newly independent states reel under the bewildering transition to capitalism. Increasing economic and social marginalization of minority and immigrant groups in the United States and Europe will also create conditions conducive to increased drug use.

Given his persuasive characterization of the underlying dynamics of the future global drug market, one would hope that Stares’s policy prescriptions would address some of them. Instead, although recognizing that international drug control should be explicitly integrated into larger policy initiatives, he limits himself to prescriptions that attack the symptoms of proliferation and use, not their root causes. And many of his proposals face formidable barriers to implementation.

Stares’s call for the creation of a global drug-monitoring and evaluation network to collect and analyze data makes sense on the surface, but most of the detailed data on production and trafficking is classified or considered sensitive for law enforcement purposes. A regional center in the United States would not be willing or able to share these data with centers in other areas of the world, as Stares would like. Evaluation of individual nations’ drug policies by the network’s centers would also be a nonstarter. The United States, for one, would be unlikely to subject its programs to international review.

Stares’s proposals for global drug-use prevention and treatment programs are attractive and thoughtful, but prospects for implementation would be constrained by funding (which Stares acknowledges) and an uncertain commitment by some of the necessary implementing authorities, such as Congress (which he does not acknowledge). Stares also argues that drug interdiction efforts should be made part of a broader international effort to bring greater regulation and oversight to global trade-an argument that flies in the face of the still-dominant ideal of free trade on a global scale.

Despite the problems and limitations of his policy prescriptions, Stares has made an important contribution to the drug policy literature. His analysis of the failings of the current prohibition regime and the trends that will reinforce those failings in the future should expose the hollow rhetoric of both sides in the drug war debate.

The Greening of U.S. Foreign Policy

For many individuals concerned with the ecological health of the planet, the end of the Cold War presented an unexpected opportunity to harness U.S. foreign policy to a grand strategy of environmental rescue. The 1992 “Earth Summit” in Rio de Janeiro underscored the urgency of the diverse environmental problems confronting humankind; the peace dividend provided the resources that would be necessary; and presidential candidate Bill Clinton expressed his commitment to taking on the challenge.

The lackluster performance of the Clinton administration–its energies focused on taming a scrappy Congress and appeasing a restless public–deflated the environmental movement, which responded with a flood of pessimistic scenarios of the conflict and violence that soon would characterize an environmentally degraded world. Robert Kaplan’s February 1994 article in The Atlantic Monthly, “The Coming Anarchy,” popularized this position through a chilling account of how demographic changes, urbanization, environmental degradation, and easy access to arms can converge to trigger widespread violence, state failure, and migration in West Africa. Kaplan concluded that these destabilizing forces are evident worldwide; West Africa, he argued, is a case study of the planet’s future.

Someone in the White House must have been listening to these dire predictions, because the administration has dusted off its pre-election promises. In a speech given at Stanford University on April 9, 1996, Secretary of State Warren Christopher expressed the administration’s determination “to put environmental issues where they belong: in the mainstream of American foreign policy.”

Christopher’s speech was a shot in the arm for those who have spent years pushing for a more aggressive environmental policy agenda, but it has not restored the enthusiasm of 1992. Concerns have been raised about the administration’s level of commitment, its capacity to manage the political obstacles it would face even if committed, and the extent to which Christopher’s proposals provide clear guidelines for effective policies.

One thing is certain: Environmental issues do belong “in the mainstream of American foreign policy.” Scientists have amply demonstrated that environmental change is transnational, related to human activities, and threatening to human welfare. Regardless of the motives, Christopher is correct in stating that: “The environment has a profound impact on our national interests in two ways: First, environmental forces transcend borders and oceans to threaten directly the health, prosperity and jobs of American citizens. Second, addressing natural resource issues is frequently critical to achieving political and economic stability, and to pursuing our strategic goals around the world.”

The environment is one of the pillars upon which 21st-century America should be built. Our lives, prosperity, and freedom depend upon clean air and water, adequate food and fuel, and robust and healthy ecological systems. These are ends that need to be integrated into our values, beliefs, practices, and institutions in a balanced and realistic manner. Although the problems we face are complex and pervasive, we know a great deal about what should be done; what we lack are leadership, vision, and will. Can we expect them from the current administration?

Christopher’s agenda

Assessments of the Clinton administration vary enormously. Some analysts point to a string of failures. Clinton jogged to the White House fueled by promises to save the environment; invest in human capital, infrastructure, and R&D; reform health care; and restore equity to a society whose upper class was making unprecedented gains, whose middle class was listing, and whose lower class was spiraling downwards. Assessments of his achievements in these areas have ranged from modest to abysmal. Meanwhile, a discontented public has become certain–in spite of much contrary evidence–that crime is escalating, immigrants are plundering the economy, civility is declining, the nuclear family is on the ropes, the nation’s capital is corrupt beyond salvation, and racism is alive and well. To win public approval, Clinton often has allowed initiatives to fall to the vagaries of public opinion.

Administration critics point to broken promises and positions that seem to rest on sand. Supporters note that during a period of tremendous change and uncertainty, progress has been made on many fronts, there have been no major losses, and momentum for real gains has been established. Most important, as Clinton is no doubt aware, the transition to the next millennium offers an unusual opportunity for a second-term leader with vision to find a place in the annals of history. We might therefore be optimistic about Christopher’s speech; it signals that the administration has found its feet on the environment and a focus for the next four years.

If this is true, however, the political obstacles remain a source of concern. The first hurdle is the election, but the bar to the White House is probably too high for the aging legs of the Dole campaign; a Clinton victory is a good bet. Then what? In general, democracies do not favor long-term planning with distant payoffs. Numerous domestic constituencies do not place a high value on environmental policy, and the Republican majority in Congress, despite some recent, election-oriented moderation, has been largely anti-environmentalist.

To further complicate matters, environmental issues lack the muscle that appeals to many members of the security, intelligence, and diplomatic communities. Environmental problems tend to emerge gradually through the complex interaction of economic, political, demographic, and technological variables. Unlike the staples of foreign policy–war and trade–they cannot usually be resolved through superior force or the signing of a treaty, and they rarely offer a quick or tangible payoff to policymakers. Thus, a commitment to environmental issues requires a mind-set guided by an unfamiliar incentive structure, one that accepts hard work today for the sake of long-term benefits.

Moreover, the foreign counterparts of U.S. officials often are uncomfortable with U.S. leadership, even when little can be accomplished without it. Especially in the Third World, aggressive environmental initiatives tend to be perceived as attempts to fix the status quo by burdening the development process with constraints and shifting the costs of the North’s “mistakes” onto the South. China, Indonesia, Brazil, and many other states are wary of proposals that seek to modify the strategies through which they are pursuing economic growth. And although the United States is the only superpower, it is no longer able to control the global agenda as it did after World War II. It now has to persuade other countries that environmental policies are in their interest.

Even if the administration were sincere in its commitment and able to manage the turbulent political landscape it faces at home and abroad, it is not evident that it has a clear sense of what to do. Vice President Gore is virtually alone insofar as vision and expertise are concerned. Much of Christopher’s speech suggests the zeal of a recent convert.

Christopher talks of “the growing demand for finite resources,” perhaps forgetting that environmentalists long ago shifted their attention to the pressures on renewable resources. Although he notes the negative impact of “dangerous chemicals” such as PCBs and DDT that are banned in the United States but still used elsewhere, he fails to mention that many of the suppliers are U.S. companies. He speaks of his commitment “to reconcile the complex tensions between promoting trade and protecting the environment–and to ensure that neither comes at the expense of the other.” An admirable goal, but the interesting cases are those in which trade and the environment do conflict. What happens then? How do we deal with a China committed to burning low-grade coal to maintain economic growth; an Israel, Turkey, or Egypt seeking to monopolize scarce fresh water; a Brazil or Indonesia that builds foreign reserves by cutting down rain forests; or a Spain or Japan willing to harvest fish to the point of extinction? Economic growth and environmental rescue may be compatible in theory, but to get there we have to make some tough decisions.

Many of the problems Christopher tags are well known and have been explored in great detail. We are all aware of water scarcity, deforestation, the erosion of arable land, the loss of biodiversity, and climate change. What we need are solutions, not further confirmation that the problems exist. Christopher’s speech lends itself to the perennial critique–all talk, no action. “We will press Congress to provide the necessary resources to get the job done,” he says, but he provides no indication of how this pressure will be applied. It would be easy to dismiss Christopher’s speech as promising nothing more than another round of studies, negotiating frameworks, and discussion groups.

A satisfactory security policy must involve greening the military.

And yet the speech provides some grounds for optimism. Elements of Christopher’s agenda could be refined into a viable grand strategy for environmental rescue. The preservation and promotion of U.S. interests do depend on a healthy planet, and this could be secured in part through a foreign policy that is realistic and forward-looking.

Fundamentals of a green policy

Nature is like the market: Through some fuzzy logic, a huge number of small, random, self-interested actions combine to produce a robust and efficient totality. Like the market, nature can succeed in the face of a fair amount of intervention. But when the level of intervention crosses critical thresholds, nature–like the market–begins to fail. The goal of any environmental policy should be to correct “market” failure and ensure that future interventions do not reach this point. In other words, insofar as the nature-civilization relationship is concerned, things tend to work themselves out, but some guidelines are nonetheless imperative to manage the human penchant for excess and relieve the pressure points generated by a variety of social and natural forces.

In short, guidelines are needed to promote an equilibrium between the demands social systems place on ecological systems and the capacity of the latter to supply the former. Because so many ecological systems are being exploited beyond their natural capacity, recovering an equilibrium will require managing adjustment costs, taking preemptive actions, and responding to problems that are well-advanced and likely to trigger conflict.

Such a program must integrate two forms of expertise. First, little can be achieved without an adequate understanding of the scientific explanations of environmental change and the functioning of ecological systems. In many cases, adequate knowledge exists, but policymakers (and Americans generally) tend to be weak in the area of natural science. The consequences of this ignorance are enormous. For example, the perception that scientists disagree on everything from the reality of climate change to the rate of species extinction is widespread and provides a rationale for demanding further study of any given problem. In fact, scientific consensus is remarkably high on most issues, and the significance of disagreements often has been misconstrued and overstated. Policymakers need to understand when disagreements warrant a “wait and see” stance or confront the policy community with difficult choices between different standards and objectives, and when they do not.

Equally important, many problems can be attacked at different points. For example, it is scientifically possible to attack the problem of malaria by focusing on eliminating the parasite, the mosquitos that carry it, the breeding grounds of the mosquitos, the vulnerability of people to the parasite or the mosquitos, or the adverse effects of the parasite on its human host. In many cases, the chain of interactions through which environmental change eventually threatens socioeconomic welfare is long and complex. Choosing one or more points of attack generally requires a reasonable comprehension of the relevant science.

A firmer grounding in science, however, will not by itself produce effective policies. Social systems vary enormously in their impact on the environment, their vulnerability to different forms of environmental change, and their capacity to respond to environmental scarcity and degradation. Effective policies have to be extremely sensitive to social variables. In the case of malaria, economic reliance on agriculture and forestry, the state of health and education services, waste management practices, and technological capacity all affect policy choices. Developing effective models of the complex interactions within and between diverse social and ecological systems clarifies policy options, but the task can be a social scientist’s nightmare.

Consider one illustrative case. Tensions over access to the fresh water of the Jordan River system have increased steadily since 1948. The 1967 Arab-Israeli War, in which Israel took control of the Jordan’s headwaters, has been described as a water war. Science provides an accurate estimate of the amount of renewable fresh water that is available. It also offers technologies to maximize the utility of this good. But Lebanon, Syria, Jordan, and Israel differ in their capacity to obtain and use efficient technologies. The economic and security goals of each state further complicate matters, as does the longstanding hostility among the countries. Under these conditions, where can scarce policy resources be applied to the greatest advantage? Clearly, a satisfactory solution must address both social and ecological variables. In this case, the problems may be too far advanced to bring peace to the region, but in many other situations they are potentially solvable and should be addressed before they become crises. What, then, should guide U.S. foreign policy?

Low-cost, high-impact tools

According to Christopher, “Environmental initiatives can be important, low-cost, high-impact tools in promoting our national security interests.” This may be true, but what exactly does it mean? Christopher describes a patchwork of policy initiatives, many of which are promising. The proposed Annual Report on Global Environmental Challenges promises to assess global environmental trends and identify U.S. priorities, beginning in 1997. Christopher’s “Environmental Opportunity Hubs” promise to involve U.S. embassies in assessing and addressing environmental issues worldwide. The International Conference on Treaty Compliance and Enforcement, to be hosted by the United States within two years, promises to give teeth to existing environmental treaties, many of which suffer from egregious monitoring and compliance problems. His multitiered approach of forging partnerships with business and promoting bilateral, regional, and global initiatives promises to channel environmental problems into social settings in which the resources and willpower necessary to solve them are available. But none of this really makes clear the relationship between low-cost, high-impact environmental policies and national security.

National security involves three things: identifying the core values to be protected and advanced; assessing foreign threats and U.S. vulnerabilities; and formulating appropriate policies–packages of goals, resources, and strategies. On this basis, the promise of low-cost, high-impact tools is somewhat misleading. In the security world there are a number of low-cost threats to U.S. core values, but few–if any–low-cost responses. Anyone linking environmental change and security is signaling that the problems are fairly big. But is this linkage accurate?

The relationship between environment and security has received considerable attention in recent years. In the 1970s and early 1980s, writers such as Lester Brown of the Worldwatch Institute, Arthur Westing of Westing Associates in Environment, Security, and Education, and Richard Ullman of Princeton University argued that forms of environmental change posed a new type of threat to U.S. national security by undermining socioeconomic welfare. These claims became the basis of a lively debate. The basic controversy in this debate concerns the meaning of the term “environmental security.” On one side are those I call the environmental security “maximalists,” such as Norman Myers. They seek to harness the rhetorically powerful language of security, and the vast resources available in this arena, to the promotion of an ecological world view that focuses on the well-being and security of the individual. As admirable as it may be to strive for a fundamental restructuring of existing practices and beliefs, this approach involves a rethinking of security that is too extreme for most of those involved in its provision. Environmentalism is ill-suited to the task of radically redefining security, if only because we continue to face a wide array of traditional security threats such as nuclear proliferation. In a contest between maximalists and conventional thinkers to control the discourse, the latter have a decisive edge.

In response to the maximalists, “rejectionist” thinkers such as Daniel Deudney, a professor at the University of Pennsylvania, have argued against linking environmental issues to security issues. According to Deudney, environmental change is an unconventional threat that rarely leads to interstate war, military tools are not of much value in addressing environmental issues, and the military penchant for secrecy and “we versus they” thinking is antithetical to the interdependent nature of most environmental problems, which require information sharing and cooperation to be resolved. The environmental problems we face are real and urgent, but they are not, according to Deudney, national security problems.

Between these extreme positions lie a large number of writers who seek to integrate specific environmental concerns into security thinking. This middle ground, a logical site for U.S. security policy, emphasizes several things, none of which are picked up by Christopher. First, many of the research, training, testing, and combat activities related to national security cause environmental degradation. For example, nuclear weapons tests contaminate air, water, and soil; land mines cripple agriculture; Iraq’s torching of oil wells and diversion of oil into the ocean resulted in enormous environmental damage. A satisfactory security policy needs to involve greening the military.

Defense and intelligence agencies possess vast resources that could be used for environmental ends without compromising traditional missions.

In this regard, the past five years have been promising. According to Kent Butts, a professor at the U.S. Army War College’s Center for Strategic Leadership, the Department of Defense (DOD) has, for example, reduced toxic and hazardous waste disposal by half, cooperated with the Environmental Protection Agency (EPA) and Department of Energy (DOE) to develop cleanup technologies, supported efforts to find alternatives to ozone depleting substances, and worked with Norway and Russia to manage radioactive contamination in the Arctic. Further concrete goals need to be set by the administration to ensure that this process does not stall.

Second, defense and intelligence agencies possess vast resources that might be used for environmental ends without compromising combat readiness or other security missions. For example, Butts has argued that the U.S. military can help train foreign militaries in environmentally sensitive practices through its Military-to-Military Contact and Security Assistance Programs. These programs enable the military to transfer environmental assessment technologies and help restore or improve foreign military sites. Further, the U.S. military manages millions of acres of land at home and abroad and could be compelled to comply more fully with environmental regulations than has been the case in the past. In this regard, its restoration and preservation activities in the Chesapeake Bay area may become a model for future military land use.

Similarly, the intelligence community has state-of-the-art data collection and analysis assets that might be harnessed to environmental ends. The shroud of secrecy under which it operates poses certain problems, but at the very least it can contribute to tracking global environmental trends and providing some of this information to organizations that can use it. In 1993, the National Intelligence Officer for Global and Multilateral Affairs (NIOGMA) was established and has begun to determine how the intelligence community can support environmental policy. Richard Smith, the Deputy NIOGMA, has identified ongoing analysis, negotiations support, treaty monitoring and compliance, support for military operations, and support for scientific enterprise as principal areas of concern. The key problem that needs to be addressed today is that of establishing effective and appropriate principles for classifying intelligence. The intelligence community’s penchant for secrecy is unlikely to be relaxed unless the administration acts forcefully.

Third, environmental degradation, especially scarcity, functions as an underlying as well as triggering cause of conflict in certain regions of the world. The work of Thomas Homer-Dixon, a professor at the University of Toronto who has directed three major projects on environmental change and conflict, has been useful in clarifying these connections. A satisfactory security policy needs to respond to this work.

Homer-Dixon has developed useful models showing how environmental factors interact with social, political, and economic variables to create, trigger, or intensify instability and violence in developing countries. These situations can have important implications for U.S. national interests. For example, with its vast quantity of oil, the Middle East remains vital to U.S. national security, and scarcity-related conflict in this region in the near future is all but inevitable. Nine countries depend on water from the Nile, but weaker upstream states such as Sudan and Ethiopia have had their use of the river constrained by threats from a far more powerful Egypt downstream. A similar imbalance exists on the Euphrates, where upstream Turkey uses its might to limit the water available to Syria and Iraq. In all of these cases, natural limits have more or less been reached, but the demand for water is rising steadily.

Conflicts have erupted elsewhere in the world over the exploitation of fisheries (Canada versus Spain) and because of pressures such as migration associated with the decline of arable land (Honduras versus El Salvador). The potential for further conflict is high. With this in mind, U.S. security policy needs to spell out what will be done in those areas deemed vital to our interests in which scarcity-related tensions are evident and likely to worsen.

A logical first step is to try to reduce the pressure in these regions by promoting cooperative resource management schemes. Often this will require a process of education and negotiation to establish shared interests and temper existing hostilities, economic and technical support to assist in policy development and implementation, and military assistance to offset lingering security concerns or provide temporary stability so that other initiatives have a reasonable chance to take root.

Managing resources in regions plagued by scarcity and other forms of degradation will rarely be an easy task because the priorities, needs, fears, and capabilities of states often vary enormously. Where it is not possible, a policy of damage control should be in place. For example, the United States can reduce vulnerability to the adverse spillover effects of regional instability through domestic policies, such as promoting energy efficiency and alternative energy forms, and multilateral policies, such as strengthening international guidelines for treating environmentally displaced people or restricting light arms sales.

Finally, after the impact of environmental change has generated a security crisis that has required the threat or use of force, “low-cost, high-impact tools” may become useful. In these cases, emphasis needs to be placed on integrating environmental issues into conflict resolution processes. Negotiating teams need the expertise and foresight to do this. For example, negotiators may well discover that long sessions spent partitioning the former Yugoslavia along ethnic lines will achieve little if the various parties are not also ensured fair and reliable access to essential environmental goods. Strengthening the environmental component of our conflict resolution teams should be a fourth element of an environmental security policy.

In short, we need to foster a military that is green as well as lean; to tap into security assets without compromising their traditional roles; to develop a strategy for tracking and responding to those areas where scarcity is likely to trigger conflict; and to incorporate environmental expertise into conflict resolution capabilities. These are modest–although not really inexpensive–ambitions that draw upon existing knowledge and skills; however, they have the potential for making real gains.

Environmental diplomacy

Security, however, is only one dimension of foreign policy, and although much could be achieved by integrating environmental concerns into the security community, much more could be achieved through the careful deployment of other assets. To this end, Christopher’s speech lays out a viable framework of partnerships with nonstate actors and bilateral, regional, and global initiatives. Such an approach might be termed “concentric multilateralism,” which is based on two premises.

The U.S. must develop a strategy for dealing with areas deemed vital to our interests in which environmental scarcity could help trigger conflict.

First, the nature and impact of environmental changes vary significantly, and this fact needs to be reflected in foreign policy thinking. During the Cold War, foreign policy was generally assessed in terms of a single objective–containing Soviet influence. Today no such baseline exists; in many ways environmental change is the very antithesis of the Soviet threat in that it has not one, but many centers. Consequently, it is essential that these centers be identified, so that different problems can be addressed in the domestic, bilateral, regional, or global settings that are likely to yield the greatest returns.

Second, in addressing environmental problems governments have to form partnerships with nonstate actors. Modern technologies have empowered elements of the private sector, enabling them to roam the globe evading state regulation, but also making them important sources of expertise and capability. This can be tremendously frustrating for a government trying to thwart drug trafficking, terrorism, money laundering, or computer espionage. The perpetrators of these threats to U.S. national interests can often use technology to escape detection or repression. At the same time, the fact that private actors cooperate freely across borders, are not as constrained by diplomatic protocols, and are often at the cutting edge of R&D can be directed towards beneficial ends. For example, scientists worldwide played a key role in shaping the global response to ozone depletion; environmental groups such as the World Wildlife Fund and the Nature Conservancy have successfully introduced sustainable economic practices into countries in Africa, Asia, and Latin America; the Cousteau Society has developed technologies to monitor and restore the health of the world’s oceans; and multinational corporations have forged profitable associations with governments to protect rain forests. These are small acts that together produce large results. The U.S. government should act to encourage, facilitate, and, where appropriate, coordinate activity in the nonstate sector.

Christopher’s “concentric multilateralism” demonstrates a fine grasp of what will be required to restore equilibrium between social and ecological systems. The former have evolved through the steady centralization of power. Historically, this was the key to providing the high level of order necessary for economic, social, and political progress. In nature, however, power is diffuse, and this natural democracy unrelentingly evades strategies of domination. We have witnessed growing tensions between civilization’s compulsion for hierarchy and nature’s indomitable anarchy. Christopher’s approach is an important step towards reconciling these two interactive modes of power.

But a viable framework for problem-solving is of little use until it is filled in with goals and resources. Here the Christopher speech is somewhat less inspiring. The ultimate objective of U.S. foreign policy is to protect and promote the health, prosperity, security, and freedom of Americans. The status of these things depends on the ways in which patterns of production, consumption, waste management, and population growth here and abroad affect ecological systems, and on the ways in which the environmental changes we have caused or enabled feed back into our social systems in such forms as water, food, and fuel shortages, flooding, or disease and other health problems. The feedback may affect us directly, as in the increase in the incidence of skin cancer, or indirectly, by placing stresses on other countries that result in reduced access to environmental goods, immigration pressures, or conflict.

With this in mind, environmental diplomacy should be guided by four interrelated objectives: (1) reducing our vulnerability to environmental scarcity by, for example, advocating domestic measures to increase energy efficiency and reduce emissions that contribute to climate change; (2) strengthening weak states so that they become competent to manage environmental problems by, primarily, promoting the regulation of the light arms trade, fostering economic openness, cautiously supporting democracy, and funding international health programs; (3) cooperating with other states and nonstate actors to manage the key environmental problems likely to generate conflict–water, food and fuel scarcity, rapid urbanization, and population growth; and (4) supporting efforts to manage environmental crises plaguing the former Soviet Union and those on the horizon in China, which is experiencing a growing food deficit and an expanding population.

This is a big agenda, but environmental problems are pervasive and escalating–they require a big agenda. It can, however, be made manageable in three ways. First, a substantial investment in education is essential. There is no silver bullet that will restore the balance between civilization and nature. Instead, we have to build the foundations for a better relationship by investing in the future through education here and abroad. At home this means strengthening the science component of public education, supporting environmental research, and fostering an ecological sensibility–an awareness of how ecological systems function and sustain us, how we affect them, and how we can modify our behavior to ensure their health and our well-being. Internationally, the quality of environmental data and education varies enormously. The United States should target pivotal states such as Russia and China and assist in the development of their environmental awareness. Cooperative solutions to transnational problems are more likely when the parties involved see shared interests at stake.

Second, just as we once measured every foreign policy decision against the grand strategy of containment, we should now ensure that every foreign policy decision takes heed of the environment. But instead of thinking in grandiose terms, as policymakers looking for big wins are prone to do, we should commit ourselves to a program of incremental gains. Incorporating reasonable environmental standards into trade agreements, as was done in the North American Free Trade Agreement, greening aid packages, and pressuring the International Monetary Fund to follow the World Bank by developing serious environmental impact assessment procedures are examples of policies that will lay the groundwork for long-term success. They do not attract television cameras today, but they will attract historians tomorrow.

Finally, we have to encourage and support environmental initiatives undertaken by other states and nonstate actors. To this end, the United States should continue to support UN programs, facilitate the commercialization and transfer of green technologies, and ensure that environmental experts are included in international negotiations.

All of this, however, takes a back seat to the real problem that threatens us. The wealthiest portion of humankind is, on most issues, able to insulate itself from the adverse effects of many forms of environmental change and thus feels little incentive to modify its behavior. The poorest portion of humankind, its numbers swelling daily, is being ravaged by urban squalor, unsanitary water, malnutrition, and disease–it can do little but struggle to survive, even though this places tremendous stress on many ecological systems. In the big picture, environmentalists are probably correct in saying that everything is connected, but it may be decades before countries like the United States experience the full force of environmental change. And there is always the hope that before then human ingenuity will lead to the discovery of low-cost, high-impact solutions.

Framing our choice in the context of the world our grandchildren will inherit might provide some basis for acting today, but this type of long-term thinking rarely influences the policy world. Attacking the global equity problem fueling many environmental problems through some major redistribution of wealth is politically unfeasible. The bottom line is that environmental rescue can only be a long, slow, incremental process–it will rarely win Nobel Peace Prizes. Committing U.S. foreign policy to such a grinding process requires vision, leadership, and an enlightened understanding of what our long-term national interests are. Christopher’s speech represents a tentative step in this direction, but it is far too early to say that the baby is walking.

A Better Home for Undergraduate Science

A renaissance is beginning in undergraduate education in science, mathematics, engineering, and technology. New discoveries and emerging technologies are changing the face of science–and the way that science is learned. In the wake of a series of reports underscoring the inadequacy of existing curricula and recommending new approaches, educators are coming to a deeper understanding of the kinds of programs that stimulate student interest and convey a useful understanding of how science is done. They now face a new challenge: fitting new programs into old spaces.

At the vast majority of colleges and universities across the country, facilities for undergraduate science are inadequate to this challenge. Critical to sustaining the current momentum in curriculum reform are spaces and structures that can accommodate programs designed to attract and sustain student interest in science, engineering, mathematics, and technology (SEMT). If the promise of the renaissance is to be fulfilled, a nationwide renewal of undergraduate science facilities is needed.

This renewal will be a costly endeavor in both time and dollars. National Science Foundation (NSF) data indicate that the 500 largest colleges and universities alone need at least $10 billion to $12 billion to renovate or construct facilities for undergraduate research and classroom instruction. Planning, financing, and constructing or renovating SEMT facilities require a predictable framework in which institutions can make long-term decisions–a framework that includes financing options that encompass the needs of a variety of institutions. To support the renaissance now under way, we need a national collaborative effort to formulate a comprehensive multiyear agenda focused on the renovation and construction of the spaces and structures in which undergraduates are learning science.

An emerging consensus

Throughout U.S. society there is a deepening awareness that strong undergraduate SEMT education serves the national purpose in many ways–creating a scientifically literate populace, preparing a technologically sophisticated workforce, training the next generation of K-12 teachers, and educating professionals in fields that are crucial to maintaining U.S. preeminence in the world economy. In recent years, groups such as the Council for Undergraduate Research and numerous private foundations have helped to support curriculum reforms in undergraduate SEMT programs. Major reports, such as Science in the Nation’s Interest from the Office of Science and Technology Policy, describe the why and how of strong undergraduate SEMT programs. The National Science Foundation has just conducted a review to develop recommendations for future action by each sector of SEMT education, and the National Academy of Sciences has established a Center for Science Education, within which the Committee on Undergraduate Science Education focuses on scientific literacy.

One catalyst for these efforts was the realization that it is during the early years in college that the largest numbers of students become discouraged and make the decision to abandon further study in science. Faculty at institutions across the country began exploring ways of engaging students’ interest in science and math early in their college careers. Their goal was to enable more students to experience the excitement of doing science, gain an understanding of the scientific way of thinking about their world, and then persist and succeed in science.

A broad consensus has emerged about what works in strong undergraduate programs: Students learn best in a community in which learning is experiential and investigative from introductory courses on up through advanced courses, for nonmajors as well as majors. In addition, students are most likely to succeed when what they are learning is personally meaningful to them and their teachers, makes connections to other fields of inquiry, and suggests practical applications related to their lives.

Many of these reforms have incorporated assessment measures designed to document their effectiveness. What we know from experience and from available data is that the number of students receiving degrees in science and engineering is increasing and that, in particular, the number of women and minority students is growing. In addition, many colleges report that a higher proportion of nonscience majors are continuing to take science courses beyond those required for graduation.

Despite the growing understanding of what works in undergraduate SEMT programs, there remain significant barriers to continued reform. One critical barrier is the design and quality of many undergraduate classroom and laboratory spaces. This is an issue that we have been investigating over the past few years as part of Project Kaleidoscope (PKAL), an informal, independent alliance of colleges and universities working toward the goal of strengthening undergraduate science. Since 1992, PKAL has hosted more than 30 meetings on a variety of issues critical to building and sustaining strong undergraduate SEMT programs; 10 of these have been workshops focused on planning new and renovated facilities for undergraduate science. More than 200 colleges and universities have participated in one or more PKAL facilities-planning workshops.

Reports from these institutions make it very clear that inadequate space is a significant barrier to strengthening the SEMT programs on their campuses. They document the unsafe labs, the lack of research space for students and faculty, the intractability of spaces for discovery-based, investigative learning, the lack of building systems to accommodate new technologies, and the inhospitality of spaces for collaborative work–that is, for doing science as scientists do science.

Facilities now being used for undergraduate learning are deficient on several counts. As undergraduate enrollment in science-related courses has risen, buildings have become overcrowded. Many do not meet present-day standards for safety, accessibility, or cost-effectiveness. Chemistry labs lack adequate ventilation or storage and handling facilities for hazardous chemicals. Few labs are configured to meet requirements for accessibility to handicapped students. Classroom and laboratory buildings may not be energy efficient.

Moreover, many facilities are simply deteriorating. Electrical service and heating and air conditioning systems are outdated and often unreliable, creating problems for maintaining delicate instruments. Excessive humidity and leaky roofs damage lab equipment. And many buildings lack the electrical capacity to support the sophisticated computer workstations and networks that have become an integral part of contemporary science.

If we understand that the usable life span of a research and research-training facility is about 30 years, one explanation for the current state of facilities becomes clear. A large percentage of the spaces and structures now being used for science-related research, research-training, and instruction on college and university campuses across the country is approximately three decades old, constructed as part of the national effort to improve U.S. science education in response to the shock of Sputnik. But perhaps more important than their age and structural condition, the design of these spaces reflects the science and the practices in science education prevalent at that time.

Sputnik-era facilities, with large lecture halls and relatively cramped laboratories, were designed for passive learning. They reflected the then-current strategy of targeting science education to the “cream of the crop” and screening out the rest. Today, however, undergraduate programs are designed to attract all students, rather than weed them out, and to give them the opportunity to do science the way real scientists do. Such programs need spaces organized differently from those built in the 1950s and 1960s.

Linking program and space

Current facilities do not support (nor were they designed to support) interdisciplinary approaches, sophisticated technologies, or the kinds of collaboration central to modern science. Colleges and universities now need spaces that can support these uses. At Dickinson College, for instance, changes in the way physics is taught spurred the redesign of a physics classroom. Previously, long tables were fixed in a series of rows, like lunch counters, making it hard for students to engage in group discussions or for instructors to squeeze between the rows to check students’ work. Computers on the work tables blocked students’ views of demonstrations, and there was no space for using video cameras to capture motion experiments for later analysis. This design impeded faculty’s efforts to shift from lectures to Socratic-style dialogue and discussions and to incorporate computer and video technology into the classroom.

The revised design included replacing the lunch counter-style tables with T-shaped workstations arranged around the perimeter of the room, making it easy for instructors to circulate, monitor discussions, and check computer screens. Each workstation consists of a hexagonal table for group discussions, comprising the stem of the T, and a long table with a computer on each end, where students can work in pairs. A raceway was installed at the center of the room, with a video camera mounted on the ceiling above it; a removable demonstration table was placed at the front of the room. In addition, the room has been rewired for a faster computer network.

Faculty in a variety of fields are eager to incorporate computers into their classrooms. In chemistry, computers can allow students to visualize molecular structures and simulate chemical interactions. In physics, students can use computers to collect, analyze, and graph data. At Grinnell College, for instance, a new science lecture hall is designed as a series of tiered tables, arranged so that students can work in groups, and wired to allow the use of networked laptop computers during class sessions.

Grinnell is also seeking to promote a research-rich environment by blurring the distinction between teaching and research. To this end, it is reconfiguring laboratory space in its science building to link instructional and research labs. This allows students supervised access to specialized equipment housed in the research labs, while permitting research activity to spill over into the instructional lab space in the summertime.

The placement of laboratories, faculty offices, and social spaces can help foster a sense of community and encourage faculty-student interaction. At Carleton College, faculty offices in the mathematics and computer sciences departments were scattered in several different locations far from the math center; there was no place for informal interaction among students and faculty. In the new center for mathematics and computing, faculty offices are centralized, with lounges and conference rooms nearby. Most important, the center of the building is a two-story, glass-walled drop-in math help room, easily accessible to faculty and students alike.

Making math and science visible and appealing is also an important part of strengthening undergraduate participation. It is hard to attract students to the pursuit of science when research is being done in the bowels of a building far off the beaten path. Many colleges are determined to put science on display. At Boston College, the nuclear magnetic resonance lab in the chemistry building is equipped with floor-to-ceiling windows so that passersby can share the excitement of seeing how real science is done.

Also, the location of science-related buildings and departments can play an important role in fostering interdisciplinary learning. At Colby College, a new walkway between the chemistry and biology buildings houses the biochemistry labs. At the University of Oregon, a new science complex links individual departments with interdisciplinary “institutes” in molecular biology, chemical physics, materials science, theoretical science, and neuroscience. Stairways link shared administrative offices and conference rooms, while a central atrium provides what one administrator calls “an agora for science.”

As a nation we are at another “Sputnik” juncture; the challenge we face is as urgent as that faced by the country in the 1960s. We must make the same collective commitment to strengthening education that was made 30 years ago.

Resources for facilities

Data on federal obligations for science and engineering (S&E) to universities and colleges indicate that the nation’s response to the challenge of Sputnik addressed the physical as well as the human infrastructure. In 1963, total federal expenditures for general research and development (R&D) were $829.5 million; total federal expenditures for R&D plant were $105.9 million.

In 1971, the first time that data were separately identified for instructional facilities, obligations for R&D plant totaled $29.9 million and for instructional facilities $28.7 million; general R&D obligations totaled $1.5 billion and those for fellowships, traineeships, and training grants totaled $421 million. In the years that followed, federal obligations for R&D plant and instructional facilities steadily decreased, in part because the physical infrastructure was still relatively new. The nearly $30 million targeted in 1971 for instructional S&E facilities represented the high-water mark for that purpose for 20 years.

Over the coming decade, most of the nation’s 3,500 colleges and universities will be facing facilities renewal bills averaging at least $5 million.

In 1988, as a result of years of pressure from colleges and universities, Congress passed the Academic Research Facilities Modernization Act. Although legislators’ primary concern was improving facilities at the research-intensive universities, they were aware of equally alarming conditions in liberal arts colleges and comprehensive universities. The Act required NSF to design, establish, and maintain a data collection and analysis capability in order to identify and assess universities’ and colleges’ needs for research facilities. Since then, NSF has conducted a biennial survey to document the need for construction and modernization of research laboratories, including fixed equipment and major research equipment, in each major field of science and engineering. It also collects and analyzes data on university expenditures for the construction and modernization of research facilities as well as the sources of funds used. Although the survey is limited by its focus on facilities for externally funded research, it has been of significant value in helping set current policies and programs. NSF’s most recent survey indicates that meeting the facilities needs of the approximately 500 institutions that receive at least $50,000 in outside research funding would require an investment of close to $12 billion.

NSF has used the survey data to develop and implement the Academic Research Infrastructure (ARI) program, which provides competitive grants for the renewal of spaces used for research and research-training in science, mathematics, and engineering. The program is intended to address the needs of all types of institutions, categorized by levels of NSF funding as well as by the student populations served. Approximately 515 institutions have applied to the program in the past six years even though the funds available are limited (ranging from a low of $20 million in 1991 to a high of $116.5 million in 1993). Since 1991, when the first ARI awards were made, the program has provided a total of $136.5 million to 317 institutions. The response to the ARI program is one indication of the national scope of the facilities problem. Nonetheless, the program has never been funded at its full authorized level of $250 million a year.

There is important anecdotal evidence as well of the overwhelming need to renew the spaces in which undergraduates learn and do science. At one small liberal-arts college, for example, overall enrollment has remained steady over the past five years, but the proportion of students majoring in science or mathematics has jumped from 23 percent to 33 percent; the college projects that 47 percent of its incoming freshmen will major in math or science. In addition, more nonscience majors are taking upper-level math and science courses. In the past five years, the college has spent $23.5 million on new and renovated spaces, advanced computer workstations, and a campus-wide network, and administrators plan to spend $5 million more. That’s $28.5 million for an institution with an annual operating budget of $46 million. PKAL files include similar reports from many institutions.

Surveys of the more than 200 institutions that have participated in PKAL facilities-planning workshops indicate that the pressure for renovation is widespread and the cost daunting. Among the participating institutions, 91 were far enough along in their planning, or had set a budget limit for their project, to estimate its cost. Together, they plan to spend a total of nearly $1 billion on new, expanded, or renovated spaces for undergraduate science. Budgets for specific projects range from $320,000 for a renovated space to $58 million for a new science building; major projects average about $12 million. Over the coming decade, most of the nation’s 3,500 colleges and universities will be facing facilities-renewal bills averaging at least $5 million.

Many institutions are only beginning the difficult task of identifying the range of financing options and opportunities available. For private institutions–a science resource that the nation can ill afford to neglect–raising money for science facilities is especially challenging. Fewer than five private national foundations and only a handful of regional foundations support bricks-and-mortar projects. At the same time, most private colleges do not have access to state capital funds. These institutions desperately need to bring together the right package of debt financing, tax incentives, gifts, and grants to implement their facilities plans.

Congress can consolidate the various agency programs through which it funds research-related infrastructure.

From the information gathered from institutions participating in the PKAL workshops, it is our conviction that the scope of the problem is larger and more complex than is commonly recognized, both from the perspective of financing issues and from the perspective of what will happen if the problem is not addressed. It is the premise of PKAL that the problems of SEMT education cannot be addressed piecemeal. Instead, they require the perspectives of all those who will participate in the solution–students, teachers, researchers, administrators, design professionals, and representatives of federal and state governments and private foundations. Nowhere is the need for such kaleidoscopic vision more evident than in the need for teaching and research spaces that will truly accommodate the renaissance in SEMT education. These needs must be addressed in a national plan of action.

An agenda for action

The National Research Council (NRC) should begin this process by convening a blue-ribbon committee, including leaders from business and industry, academe, and government, as well as experienced design professionals, to outline and implement a 10-year plan to address SEMT infrastructure needs in the nation’s colleges and universities. The charge to the committee is to recommend a coordinated set of policies, programs, and funding mechanisms for infrastructure renewal, based on a recognition of the spatial requirements for a quality undergraduate SEMT program.

The initial work of the committee should be to gather and disseminate information about the cost of the new construction and remodeling necessary to achieve adequate facilities for research, research-training, and instruction in quality undergraduate SEMT programs. If the momentum of current reforms is not to be lost, we need better data. The NSF survey does not include the many four-year colleges that receive less than $50,000 in external support for research or any of the two-year colleges where 40 percent of college students are enrolled. It focuses on spaces for research and research training, but does not cover the need for better classroom spaces and student labs–even though recent curriculum reforms have focused on the importance of introductory-level classroom instruction. It may be that much of the data needed to establish national policies and programs are already available in various agencies and associations, but the information needs to be assembled and analyzed in order to be useful for planning and action by the larger community.

The committee should also catalogue sources of funding and financing available to institutions. No single funding source, public or private, can be expected to provide all the funds needed for an individual project. Instead, the committee must seek to broaden the range of funding and financing opportunities, including low-cost loans, loan guarantees, tax-exempt bonds, and tax credits.

There is a great deal that the federal government can do to support this effort. To begin with, Congress can consolidate the various agency programs through which it funds research-related infrastructure. Right now, agencies such as NSF, the Department of Agriculture, and the National Institutes of Health have their own programs. A formal, collaborative interagency set of programs would be more efficient. The ARI program can serve as a template for a larger federal effort, since it is competitively funded and requires grant-seekers to formulate a multiyear plan that extends from fundraising to facilities maintenance and program evaluation.

Overall, federal funding should better balance R&D funding with funding for the facilities that will support academic research and training. This year, NSF is spending about $3.5 billion on R&D and only $50 million on research and research-training facilities. A step in the right direction would be to appropriate the full $250 million that has been authorized for the ARI program.
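
To put these figures in rough proportion (a back-of-the-envelope calculation using only the numbers cited above):

\[
\frac{\$50\ \text{million}}{\$3.5\ \text{billion}} \approx 1.4\%,
\qquad
\frac{\$250\ \text{million}}{\$3.5\ \text{billion}} \approx 7\%.
\]

Even full funding of the ARI authorization, in other words, would bring facilities support to only about seven cents for every dollar of R&D support.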

Colleges and universities should develop ways to share expensive instrumentation and facilities.

In addition, Congress should increase the breadth of financing and funding options and opportunities available to institutions of higher education for infrastructure renewal. One option would be to explore tax incentives or other mechanisms, such as preference in grant allocation, to encourage collaboration among academic institutions or between academic institutions and businesses in developing or renovating facilities.

State governments should make a commitment to improving science education spaces and programs in public and private higher education. Both kinds of institutions serve as an important resource for a state’s economic development by educating a highly skilled workforce. They also play an important role in training teachers, particularly for K-12 programs. Improving the quality of elementary and high school education in general, and math and science education in particular, can help to attract business to the region. In many cases it may make more sense to spend state money on modernizing existing buildings on private as well as public campuses than to undertake major new construction projects at public institutions.

Colleges and universities can assess the need for new facilities and technologies across their campuses and determine the funding requirements for meeting them. They will need to reallocate resources away from short-term expenditures in order to ensure long-range funding for renovation or construction and maintenance of critically needed facilities. It is particularly important that they identify ways that emerging technology, particularly communications and computers, will change the nature of teaching and research so that new facilities will not quickly become obsolete.

One way that colleges or universities can ease the financial burden of infrastructure improvements is to develop ways to share expensive instrumentation and facilities. Such arrangements can be made either with other educational or scientific institutions or with private industry. In addition, educational institutions can help one another by sharing information about innovative designs and fundraising strategies.

Business and industry can play a constructive role as well. In addition to contributing to infrastructure projects that serve the mutual interests of industry and society, they can take inventory of their own R&D facilities and explore ways to share research space with academic institutions. They can also press for tax and other incentives to encourage the development of such shared spaces for research and research-training.

Finally, private foundations should broaden their range of support. By far the largest share of private funding is devoted to science programs rather than to the spaces in which those programs are housed. Foundations should recognize that programmatic enhancement is often short-lived or futile unless it is accompanied by appropriate infrastructure renewal. In addition, funders should supplement facilities grants with a variety of other instruments, such as loans and planning grants. Loans can play a particularly important role in ensuring the stable progress of major, long-term projects despite the unpredictable cash flows that accompany lengthy fundraising campaigns. Foundations can also spur universities and colleges to engage in careful, critical, and collaborative planning and can use their leverage to require that their grantees build endowments to maintain their investment in new facilities.

Over the past 20 years, chronic underfunding has led to a considerable backlog of infrastructure projects for institutions of higher education, which must now deal with the problem in an era in which money is tight and public confidence low. To the extent that we as a nation neglect undergraduate education, we lose the service to society of a great pool of talent. To the extent that we challenge today’s undergraduate students to make sense of their world by understanding the scientific process and ways of thinking, we succeed in preparing them for productive careers in a world in which science and technology affect all aspects of life.

Crunch Time for Control of Advanced Arms Exports

In the wake of the Cold War, the proliferation of conventional weapons is emerging as a critical international issue. New economic pressures–the result of shrinking international arms sales combined with cutbacks in domestic defense procurement in many countries–are forcing arms producers at home and abroad to jostle for position in an overcrowded market. This fierce competition is matched by buyers’ growing interest in the high-end weapons whose effectiveness was demonstrated so dramatically in the Gulf War. Meanwhile, the demise of the Coordinating Committee for Multilateral Export Controls (CoCom) has left a major gap in the international coordination of national arms export policies.

In February 1995, the Clinton administration established the Presidential Advisory Board on Arms Proliferation Policy to study the factors that contribute to the proliferation of strategic and advanced conventional military weapons and technologies and to identify policy options for restraint. Members of the panel (whose views are reflected in this article) included Edward Randolph Jayne II, Ronald F. Lehman, David E. McGiffert, and Paul C. Warnke. Together, we spent more than a year hearing presentations by representatives of government agencies, industry, and nongovernmental organizations. Our conclusion, released in a formal report in July 1996: If the United States’ overall nonproliferation goals are to be achieved, the control of conventional arms exports must become a significantly more important and more integral element of U.S. foreign and defense policy. Right now, however, we have neither the international nor the domestic mechanisms we need to deal effectively with this problem.

New dangers, new pressures

The control of conventional arms has always been a lower priority than the control of weapons considered more dangerous or repugnant, such as nuclear, chemical, and biological weapons. Yet the line between conventional and unconventional weapons is growing ever finer. Some so-called conventional weapons–those with destructive mechanisms that are not nuclear, chemical, or biological–have achieved degrees of military effectiveness previously associated only with nuclear weapons. In addition, certain advanced systems can be used to deliver weapons of mass destruction. In fact, the principal formal international conventional arms transfer restraint arrangement, the Missile Technology Control Regime, restricts the sale of ballistic and cruise missiles largely because they are capable of delivering nuclear, chemical, and biological weapons.

Unregulated proliferation of conventional arms and technologies, particularly in their more advanced forms, can drastically undermine regional stability, posing a threat to U.S. security and interests. By putting ever more powerful weapons in the hands of potential problem states, questionable arms exports could ultimately cost American lives. And the threat of facing more sophisticated weapons abroad could compel exporting states to develop even more advanced weapons, setting in motion a vicious circle.

The pressure to sell advanced conventional weapons is accelerating in the depressed arms market of the post-Cold War era. Since 1989, the constant dollar value of conventional weapons exported by the six leading suppliers has dropped by more than half, mostly because of a sharp decline in exports from the former Soviet Union. Accompanying this overall decline in exports, domestic arms procurement in supplier countries also has dropped precipitously as governments downsize their military forces.

In the United States, military procurement dropped more than 50 percent between 1987 and 1995, from $104 billion to $47 billion. U.S. exports of conventional arms have remained steady over this period, averaging about $10 billion per year, though they now account for a much larger share of the international market–nearly three-fifths, compared to about a quarter in 1987. Faced with excess capacity in weapons production, both national governments and arms suppliers have become much more aggressive in seeking to sell arms abroad. Like any other merchants, they are cutting prices and negotiating special deals with buyers. Sensing their advantage, buyers are demanding access to front-line, state-of-the-art equipment and technologies that suppliers previously reserved for their own national forces. In addition, they are pressing for more generous terms: contracts are more likely to include so-called direct offset agreements that allow buyers to undertake licensed production of the weapons systems or technologies they purchase. These provisions can further the dissemination of military technology or know-how. Purchasers who lack the requisite capabilities may negotiate indirect offsets that require sellers to import other goods from the buyer, transfer commercial technology, or invest in the purchasing country.
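
A rough check, using only the approximate figures above, shows how much of that growing share reflects a shrinking market rather than growing U.S. sales:

\[
\text{world market (1987)} \approx \frac{\$10\ \text{billion}}{0.25} = \$40\ \text{billion},
\qquad
\text{world market (mid-1990s)} \approx \frac{\$10\ \text{billion}}{0.60} \approx \$17\ \text{billion},
\]

a decline of roughly 60 percent, consistent with the drop of more than half cited above.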

Arms sales that would be rejected for national security reasons should not be approved simply to preserve jobs or keep a production line open.

The diffusion of technology plays an important role in the proliferation of advanced conventional weapons. As the world’s economies develop technologically, the number of current and potential producers steadily expands beyond the handful of nations that once designed and built these systems. More than 35 countries now export conventional weapons (admittedly of varying degrees of capability). And as developing countries establish their own weapons industries, they become more capable of tapping into new sources of commercial and dual-use technologies (those that have both commercial and military applications) that are not subject to national or international export constraints.

This trend poses important challenges to the control of international transfers. For one thing, critical technologies that are vital to defense, from supercomputers to biotechnologies to fiber optics, are more and more likely to have commercial origins. As a result, an ever-shrinking proportion of military-related technologies are subject to direct governmental controls. For another, the rising number of potential suppliers of weapons and technology makes the creation of a self-regulating cartel difficult, if not impossible. A number of suppliers have indicated that they will not support any restraint regime until they have a more equal share of the arms market.

The history of past arms control efforts teaches us that restraints depending solely on supplier cartels are weak at best. A broader and more effective solution is to push for international consensus and control mechanisms to limit selected conventional weapons and technologies. Economic competition may be the greatest remaining obstacle to this effort. Although the end of the Cold War has made possible increased international cooperation, it has also removed the perception of a common threat. In the face of growing economic pressure, the will to accept restraints is weak. Alliances and individual nations that might in the past have been counted upon to take conservative, restrictive approaches to sales of state-of-the-art conventional weaponry show much less, if any, inclination to do so today. For this reason, U.S. leadership on this issue is essential; nothing will happen without it.

Toward a new regime

The Wassenaar Arrangement on Export Controls for Conventional Arms and Dual-Use Goods and Technologies represents a practical and potentially promising forum in which to address the dangers of proliferation of conventional weapons and related technologies. First proposed by the Clinton administration two years ago and formalized in December 1995 as a successor to CoCom, the Arrangement is intended to establish a formal process of transparency–information-sharing–among its 28 members and to adopt common policies of restraint. It is still a work in progress, however, and the outcome of future negotiations will determine its effectiveness.

Because arms and technology transfers are so controversial, the most promising strategy for an international restraint regime is to begin with modest, noncontroversial objectives that can be expanded over time. One such approach might be to emphasize restraint only on certain highly effective advanced conventional weapons.

Highly effective weapons share certain key capabilities: autonomous functioning, which permits them to be operated by military forces with limited sophistication; precision; long range; and stealth. Examples include advanced sea and land mines, advanced missiles, stealth aircraft, and submarines. Many of these weapons have few or no substitutes; and since considerable technical prowess is needed to develop autonomous capability–a critical factor in making these weapons useful for less-advanced militaries–most states are not likely to be able to develop their own versions of these weapons in the near future. Moreover, some advanced weapons, like advanced munitions and missiles, account for only a small share of international arms sales, so that the economic losses associated with restraint would not be high. Weapons systems that meet these three criteria–high effectiveness, low substitutability, and low opportunity costs–would be good candidates for restraint.

The National Security Council should be given top authority for formulating arms export policies.

A second approach would be to emphasize restraint on the sale of especially repugnant weapons. These might include certain incendiary and fragmentation weapons, blinding lasers, or antipersonnel mines. No government has a significant stake in these weapons. The discussion of a global ban on the export of such weapons could be a reasonable starting point for a multinational dialogue on technology transfer restraint.

Controlling technology transfers is more complicated than controlling the flow of weapons and requires a multipronged approach. Key technologies with purely military applications, such as fuse or warhead technologies, may be addressed effectively through supplier restrictions, for there are few suppliers and their commerce can be segregated from routine trade. But more and more technologies with military applications have valuable civilian applications as well. They may be essential to economic growth, environmental sustainability, health, or education, and may move in international trade in what appears to be a nonmilitary and therefore nonproblematic way. The commercialization of military technologies argues for a control system that shifts the focus from controls on exports alone to controls on actual end use. In other words, technology transfers with commercial applications should be permitted only if the seller states can be confident that these technologies will not be used for proscribed military applications.

Creating a credible system of end-use assurances is essential. But it will require profoundly greater levels of transparency in the international trading system as well as a more effective system of enforcement. Industry is also likely to complain about the imposition of an added regulatory burden. However, if a transparency regime could reduce intrusions on legitimate trade while protecting the goal of nonproliferation, it might well be welcomed by participants.

For both weapons and technology, transparency is the key principle on which international efforts should focus. Again, the Wassenaar Arrangement provides a valuable framework within which to establish procedures for monitoring and anticipating technological developments and to create mechanisms for routine consultation among countries that both sell and buy arms and technology.

Finally, efforts to monitor and restrain the proliferation of weapons and technology would benefit from streamlining existing national and multinational enforcement mechanisms. The United States alone currently participates in at least six distinct control arrangements. Although they have different histories, in practice they face similar administrative and regulatory challenges; violations within the various regimes often involve the same arms traders, and pose common intelligence and enforcement challenges as well. This effort, too, could be implemented through the Wassenaar Arrangement.

Restraint starts at home

If the United States is to take the lead in encouraging international restraint in weapons transfers, it must resist domestic political pressure to approve arms exports on economic grounds. In the United States, no less than in other arms-exporting countries, industry has responded to dramatic cuts in domestic procurement by arguing that arms and technology exports are vital to maintaining the defense industrial base. Indeed, the Clinton administration’s conventional arms transfer policy, finalized in February 1995 in a Presidential Decision Directive, accords a more explicit level of recognition to the preservation of the defense industrial base and to domestic economic issues associated with arms exports than has been the case in the past. Later in 1995, the President signed into law two new arms export subsidy programs: a government-backed $15-billion loan-guarantee fund and a $200-million-a-year tax break for foreign arms purchasers.

For both weapons and technology, transparency is the key principle on which international efforts should focus.

The sharp decline in domestic weapons procurement has proved devastating for some communities in which military-related industries were located. These communities, along with organized labor, have put pressure on political leaders to expand arms exports in order to preserve jobs. Industry representatives, meanwhile, have argued that production for export helps to cut the costs of domestic procurement by contributing to overhead costs, improving economies of scale, and keeping production lines operating during periods when they would otherwise be shut down, thus avoiding the costs of restarting the line.

None of these arguments provides a rational justification for stepping back from well-conceived arms restraint policies. For one thing, arms exports account for only about 300,000 jobs–far too few to make up for the 1.8 million jobs lost as a result of military downsizing. But more important, arms sales that would be rejected on the basis of foreign policy and national security considerations should not be approved simply to preserve jobs or keep a production line open. Unwise arms sales remain unwise no matter how many jobs are involved.

A policy approving arms transfers solely for industrial base reasons would undercut the very sort of international regime that is so desperately needed. If any participating country were allowed to use its independent judgment to transfer a weapon or technology, the whole purpose and nature of a restraint regime would be subverted. It is not only appropriate but mandatory that the United States and other nations agree to handle legitimate domestic economic and defense industrial base issues through other policies and actions.

By the same token, however, the U.S. government should not yield to political pressure to intervene further in arms exports once they have been approved. Excessive government involvement in arms transfers may distort prices or foster an unhealthy, special-interest relationship between government and industry. In recent months, debate has erupted around a number of issues related to the government’s role in arms sales, particularly the negotiation of offset agreements. Organized labor has argued strongly that the U.S. government should prohibit, or at least significantly restrict, offset agreements on the grounds that they divert needed jobs and wages overseas.

To the extent that offset agreements involve the potentially destabilizing transfer of arms and related technology, they warrant careful government review. But once arms transfers are approved on foreign policy or national security grounds, the economic aspects of each sale should be left to the producer and purchaser. The long and successful history of U.S. commercial trade in high technology is full of direct and indirect offset arrangements, and the net benefit to U.S. employment and the domestic economy has been substantial.

Duplication and inefficiency

Good policy and good process go hand in hand. There is no doubt that the way we make policy and the way we make individual arms or technology transfer decisions are absolutely critical to achieving U.S. arms control goals. Right now, however, the U.S. arms export control process is beset by duplication, fragmentation, and inefficiency. Weapons exports are handled separately from technology exports; in each case, decisionmaking is dispersed among a variety of federal agencies. Moreover, the process of reviewing export requests is cumbersome and outdated, particularly because of the inability to get broad interagency agreement on information system requirements. Bureaucratic warfare, rather than analysis, characterizes a process whose outcome is more likely to reflect short-term political compromise than coherent, long-term policy goals.

Any effort to restrain arms and technology transfers must balance competing foreign policy, national security, and economic interests. The current U.S. system of export controls reflects this tension, as it must and should. However, a stronger hand is needed at the helm. The National Security Council (NSC) is the natural candidate to take the lead. Its role should be more than that of a mediator, however. Drawing on its longstanding interagency process, it should take responsibility for formulating arms and technology export control policy and issuing procedural guidelines.

An important first step is to develop an integrated management information system for use by all agencies involved in the export control process. This will save time and money and will make for more consistent and intelligent application of policy in the long run. A more far-reaching reform is to consolidate the application, review, and approval process for arms and technology exports into a single organization. At a minimum, a uniform application process could be established and, in clear-cut cases, the approval process could be expedited so as not to require interagency review. These steps would vastly improve the efficiency of the process, cutting costs for both government and the companies seeking to export.

The world struggles today with the implications of advanced conventional weapons. It will, in the not-too-distant future, be confronted with yet another generation of weapons whose destructive power, size, cost, and availability will raise even more problems than their predecessors have. These challenges will require a new culture among nations, one that accepts increased responsibility for control and restraint at the price of short-term economic gain. This kind of change cannot happen overnight, but strong U.S. leadership can do much to guide the international community toward it.

The Politics of Space

In Can Democracies Fly in Space?, W. D. Kay, a political scientist at Northeastern University, argues that “something is terribly wrong” with the U.S. civil space program. It is in trouble, he believes, because the U.S. political system is ill-suited to sustaining large-scale technological enterprises. He is right that the space program is in trouble and that the United States supports such undertakings poorly. It is not as clear, however, that the woes of the National Aeronautics and Space Administration (NASA) can be laid at the doorstep of political dysfunction.

Few would argue that the space program is healthy. Kay succinctly records the catalog of adversity that has befallen NASA since the halcyon days of Apollo. Programs come in late and over cost. Projects costing over $1 billion, such as the Hubble Space Telescope and the Mars Observer, produce spacecraft that disappoint or even disappear. Other countries take over leadership in areas the United States once dominated, such as commercial satellite launching. The space shuttle program fails its supporters by denying them the cheap and reliable access to space that it promised; the space shuttle Challenger fails its crew. Like the shuttle before it, the nascent space station is redesigned into irrelevance. Kay believes that previous attempts to explain this sorry record have proved inadequate because they ascribe the problems to a single cause: NASA is an ossified bureaucracy. Presidents since Kennedy have failed to provide leadership. Congress micromanages or underfunds the space program. Vision is lacking. The country needs a cabinet-level Department of Space. All of these imply that some silver bullet can set everything right.

Not so, says Kay. He seeks the roots of the problem in the political environment. Although he does not ignore or excuse NASA’s own mistakes, Kay concludes that “the space program’s failures, like the earlier successes, have multiple causes, all of them ultimately traceable to the way the American political system operates.” To get at these multiple causes, Kay adopts a “metatheoretical perspective” and explores the political context of the space program in nine categories or “arenas,” all of which, he claims, shape the space program. These arenas are corporate-managerial, legislative, executive, judicial, regulatory, academic-professional, labor, popular mobilization, and international. His analysis portrays NASA as a victim of forces beyond its control. Presidents voice a rhetoric of expectation but fail to lead. George Bush, for example, proposed a “Human Exploration Initiative” but failed to build popular or congressional support. Other executive agencies support NASA only when it suits them. The Air Force, for instance, endorsed the space shuttle only after NASA redesigned it to carry spy satellites. Congress puts NASA officials through a gauntlet of committee hearings that test patience and endurance but otherwise achieve little in the way of coherent, long-term policy formulation. The aerospace industry, a powerful force in shaping national policy, cares more about an expensive program than a productive one.

The list goes on. In the absence of a clear and compelling national goal in space, our space program succumbs to interest-group politics. Presidents want to appear bold and forward looking. Congressmen want contracts for their states and districts. Corporate executives want large, stable projects of long duration. Space scientists want their experiments funded and launched. Space enthusiasts want their vision of the future embraced. Other federal agencies want their missions facilitated. Even foreign countries lobby to have their projects sustained. Small wonder then that space policy bears little resemblance to NASA’s recommendations. Kay admits that NASA has contributed to its own problems through mismanagement and misdirection, but, he says, “many of its ‘mistaken’ decisions, procedures, and policies do not necessarily reflect the agency’s own preferences, but are rather an attempt on its part to accommodate forces over which it has no control.”

Kay’s decidedly apologetic point of view stems, perhaps, from his close ties to NASA and his admitted enthusiasm for spaceflight. He researched this book in 1993 during a term as scholar-in-residence at NASA headquarters. By his own account, he is “a devoted follower of the Star Trek television programs.” In spite of these sympathies, however, he insists that he is agnostic about space policy and unsure whether NASA should even proceed with its current centerpiece program, the space station. Rather, Kay wants to frame the problem constructively. He believes that we should either reform the way space policy is developed or “rethink our original policy decision” to be a spacefaring nation.

No single cause

In many ways, Kay succeeds admirably. He makes a persuasive case for the pluralistic nature of space policy formulation and demonstrates how political forces led some of our largest projects astray. For example, the space shuttle was reshaped by the Air Force, Congress, and the White House. The constrained budget for the Hubble Space Telescope had no room for the testing procedures that would have revealed the flawed mirror. The space station has been reduced to a life sciences laboratory. In developing this useful insight, however, Kay falls into the very pattern that he finds objectionable in the other literature on the space program, indulging in a single-cause explanation of NASA’s problems. For him, all is politics. The political arenas that he investigates explain to his satisfaction all of NASA’s successes and failures. When the politics were right, as they were in the early 1960s, Apollo was possible and the moon was within reach. When they are wrong, as they have been since Apollo, you get the Mars Observer, Hubble Space Telescope, and Challenger. Though he does not say so directly, Kay implies that good politics generate adequate resources, and adequate resources yield success in space.

This misconception might be called the Apollo myth: If we had a president who was visionary enough and a Congress that was generous enough, we could do anything in space we set our minds to. In this view, politics is the single determinant of successful space policy. Would that it were so. But in fact, the laws of economics and the laws of nature limit the space program just as surely as politics does. For example, it is not just that our current space shuttle disappoints. Any space shuttle built with existing technology would fail to achieve the reliability and economy that NASA promised. There is no technology on the horizon that is going to change that, no matter what kind of presidential leadership the country has and no matter how wide Congress opens the public purse. No reform of our political system is going to change the laws of nature or make manned spaceflight commercially viable.

How did NASA get itself in the position of pursuing programs that defy the laws of nature and economics? One answer is “buying in.” This is the now-venerable Washington technique of intentionally underestimating a program’s cost and overestimating its payoff. The real cost and benefits surface only when the project has absorbed so much funding that there is no turning back. Kay says he finds no evidence of NASA consciously playing buy-in, but he was looking in the wrong places. His study is based entirely on published sources. Though he researched this book as a scholar-in-residence at NASA, he apparently sought no primary documents and conducted no interviews. Had he done so, he might well have found ample evidence of buy-in. He surely would have found that NASA repeatedly promised Congress results that were physically and economically impossible to achieve. The space station currently under way is one such project.

What, then, should NASA do? How do you sell the Hubble telescope in a democracy? For that matter, how do you sell a breeder reactor or fusion energy, the human genome or a superconducting supercollider? NASA is not alone in wrestling with the problem posed by Kay. Large-scale technological enterprises take years, even decades, to complete. Seldom can democracies sustain political consensus that long. Sooner or later, the pluralistic political environment ties up Gulliverian dreams in a web of Lilliputian special interests. The simplest answer is to tell the truth. Propose what is physically and economically possible. If the political system will not fund it, then propose something it will fund. Good ideas do not go away; they can be funded another day. But a bad idea, especially an underfunded bad idea, will hang around the neck of NASA or any other federal agency and taint all future proposals. There are more examples of highly touted projects that proved disappointing than there are of worthy, feasible undertakings that went unfunded. We lost the commercial space market not because Congress underfunded NASA; we spend more on space than the rest of the world combined. We lost it because Congress bought a bad idea, the space shuttle. The sponsor of that idea was a space agency that sacrificed technical and economic judgment on the altar of politics.

Roundtable: The Politics of Genetic Testing

This discussion took place in March 1996 at “The Genetics Revolution: A Catalyst for Education and Public Policy,” a conference sponsored by North Lake College and others in Dallas, Texas. The participants were all Green Center fellows at the University of Texas at Dallas at the time. The discussion began with the specific example of a man with Huntington’s disease and the repercussions for his family. The panelists were asked to represent the views of the characters in the case study, but it should be noted that when they are speaking for the character, they are presenting that character’s view, not their own. When the discussion takes off from the case study, the panelists are speaking for themselves.

The panelists are R. Alta Charo, associate professor of Law and Medical Ethics at the University of Wisconsin Schools of Law and Medicine; Robert M. Cook-Deegan, senior program officer at the National Academy of Sciences; Rebecca S. Eisenberg, professor of law at the University of Michigan Law School; and Gail Geller, associate professor at the Johns Hopkins University School of Medicine. Issues editor Kevin Finneran is the moderator.

Finneran: Adam Stewart is diagnosed with Huntington’s disease in 1982 at the age of 45. Mr. Stewart was adopted, so he does not know anything about his ancestors’ genetic histories. His family consists of a wife and their 18-year-old son John. To get us started, I’ll have Bob Cook-Deegan give us a briefing on the nature of Huntington’s disease and what was known about it in 1982.

Cook-Deegan: Basically, Huntington’s disease is a neurologic disease. It results from the death of certain populations of brain cells, and it leads to movement disorders and often psychiatric conditions such as depression and loss of cognitive function. Onset typically occurs between the ages of 40 and 50. Although treatment exists for some of the symptoms, there is no treatment for the disorder itself. For the purposes of our discussion there are only a few things that you need to understand. One is that if you have a parent with Huntington’s disease, you have a 50/50 chance of inheriting the disease. It is about the simplest situation that one can have in genetics. Huntington’s disease was the very first human disease that was mapped to the chromosomes with the new methods that became available in the early 1980s. In 1983, researchers identified the chromosome on which the mutation was located. In 1993, the gene itself (that is, the nature of the mutation) was discovered.

Finneran: To get us started today, we have to try to imagine what it is like for Adam to be in this position. I am going to ask Gail Geller to try to explain what is going through Adam’s mind and particularly what Adam thinks about telling his son John about this.

Geller: Adam’s biggest concern is his own mental state and then the degree to which he would inflict worry and depression on his son. He is feeling a need to protect John from information that would be of no use to him. He is also wondering if the doctor is required to tell John.

Charo: Why does the doctor even have to tell Adam? You’ve got a diagnosis of a disease that cannot be cured. You’ve got symptoms that, if left unexplained, could still be treated to the degree that you can modulate them without having to explain why it is that you have the symptoms. Since Adam can’t do anything about the disease, why give him this terrible knowledge? Why not keep it a secret and just treat the symptoms?

Cook-Deegan: In fact, there was a time in medicine when that’s exactly what most physicians would have done.

Geller: In many countries of the world, that is how providers handle the situation today. The patient’s right to know what is knowable about his or her condition is a new concept that is not widely accepted outside the United States.

Charo: But to be honest, I’m not so sure that it’s necessarily a wonderful thing to always tell the truth and, in fact, none of us do feel that we are always compelled to. Anybody here who has ever broken up with somebody in a romantic relationship knows that one of the questions you get asked is, “Why? What’s wrong with me?” Has anybody here ever satisfactorily answered that question? I doubt it. There are times when more information is not necessarily a good idea, and we know that emotionally and intuitively.

Eisenberg: There may be a difference between your metaphorical example and knowing what there is to be known about the length and progress of the disease. There may be certain family decisions that you would make differently. There may be financial decisions that you want to make differently in light of your knowledge of the length and course of your disease.

Genetic counselors are trained to be nondirective or value neutral to the extent possible in their counseling. 

— Geller

Finneran: We will come back to this, because the decision often changes with the nature of the information that is available about the specific disease.

Cook-Deegan: There is one reason, however, for telling Adam’s son John. He might want to know that he could pass this condition on to his children.

Geller: But a family physician must balance the confidential relationship with Adam against the wishes of the other family members who are also his patients.

Finneran: Let me move you forward. Adam decides not to tell his son John about his condition, and the doctor agrees to go along and not say anything about it. By 1985, Adam’s condition has deteriorated, and he commits suicide, which is not an unusual development among people with an illness such as this. Does the doctor now have a duty to tell John that his father had Huntington’s disease?

Cook-Deegan: First, we should remember that the test for Huntington’s disease was not developed until 1993. The doctor could tell John that he has a 50 percent chance of having the disease, but he could not offer him the option of finding out for sure. At any rate, the doctor decides to give John as much information as he can.

Finneran: Rebecca Eisenberg is going to explain John’s reaction.

Eisenberg: Well, John has a number of concerns. The practical financial concerns include: What is this going to mean for his employment prospects? What is this going to mean for his ability to obtain insurance? What is this going to mean for his ability to provide for a family in the future? He also has a number of emotional concerns: How is this going to affect his ability to enter into relationships? What’s this going to mean for his self-image? If he has the disease, will he respond the way his father did?

Even if testing were available, he sees little value in it. Testing would not enable him to take any precautions that could delay the onset of the disease or slow its course. It may interfere with his ability to go on leading a happy life for as long as possible. It may make it more difficult for him to follow up on his professional ambitions. It may make it difficult for him to find a partner who would be willing to raise a family with him. Even now, John would choose uncertainty over testing.

Finneran: Life goes on for John. In 1994, he becomes romantically involved with Susan. They start talking about getting married. He now has to think about these questions vis-à-vis Susan, and Susan also has to think about what her interests are. She doesn’t know anything about John’s family medical history. She doesn’t have any reason to be suspicious, but anybody in this position has to think about what they would like to know about a prospective spouse. Alta Charo is going to tell us how Susan looks at this.

Charo: Most people considering marriage have numerous questions: Are the in-laws going to be bearable? Where does the other person want to live? What are his or her career plans? Susan wasn’t thinking about genetic disease until she heard Oprah Winfrey talking to some of her guests about it. This leads her to ask John if his family has a history of inherited disease.

Finneran: John stalls for a while, but he tells Susan that his father had Huntington’s disease and that there is a chance that he has it. What can John say to Susan?

Eisenberg: Well, first of all John thinks that he is living a life worth living, even though he is at risk of developing Huntington’s disease. He does not see that as such a terrible fate. In fact, John thinks that if he and Susan get married, they should have children even though those children would have a 25 percent risk of carrying Huntington’s disease. If they have the disease, their lives would be worth living just as his life is worth living. He hopes that Susan will be willing to marry him and make as much of a life together as they can for as long as possible.
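
The 25 percent figure that John cites follows from two independent one-in-two chances, the standard arithmetic for an autosomal dominant condition such as Huntington’s:

\[
P(\text{child affected}) = P(\text{John carries the mutation}) \times P(\text{transmission} \mid \text{carrier}) = \tfrac{1}{2} \times \tfrac{1}{2} = \tfrac{1}{4}.
\]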

What the law can’t do, the medical profession can do through professional agreement. 

— Charo

Charo: Susan doesn’t like the odds. She tells John that they have been talking about having a big family, which means that there is a pretty good chance that one of the children will have Huntington’s. She wants him to be tested. If he doesn’t have the disease, they can get married and have a healthy family.

Finneran: At this point they decide to go visit a genetic counselor. What can a genetic counselor tell them?

Geller: Most of the public doesn’t really understand probabilities and risk information. Genetic counselors can help explain the information. They are very good at taking family histories, at drawing individual pedigrees, and at discussing with people what their individual risks for various diseases might be. Most physicians can’t do this. Genetic counselors are trained to be nondirective or value-neutral to the extent possible in their counseling. They feel very strongly that it is not their role to recommend a specific course of action. Physicians, on the other hand, are trained to give advice, and patients will often ask, “Doc, what do you think I should do?”

When it comes to the question of having children, genetic counselors will lay out the options, from the conventional biological approach, to using a sperm donor, to adoption.

Eisenberg: John’s first choice is the old-fashioned way, and adoption would be second, but he points out that none of these options eliminates the risk of Huntington’s.

Charo: Susan worries about the risk if they have their own children, and adoption is not that appealing. She wants the experience of giving birth to a child and is willing to consider the sperm donor approach.

Finneran: They ultimately agree to adopt, but while the adoption process is going forward Susan learns that she is pregnant. It’s back to the doctor and back to the genetic counselor.

Charo: Susan is happy to be pregnant but worried that the child could have Huntington’s. She says to the doctor, “It just seems sad to have a child that is already predestined to have a short life, so I want to take some time to think about it. You don’t have to tell John yet, do you?”

Geller: The genetic counselor will again want to lay out all the options. Well, the options range from continuing the pregnancy to having an immediate abortion. One possibility is to have a genetic test of the fetus. The problem is that current practice is to require that both parents consent to the test. (There is also a rule that minors will not be tested for Huntington’s without their parents’ consent.) John will balk at this, because if the fetus has Huntington’s disease, he will know that he has it.

Finneran: What is the basis of this requirement that both parents consent? Does the physician have any discretion? Can Susan shop for a doctor who will do the test without John’s permission?

Cook-Deegan: In the case of Huntington’s and most other genetic conditions, in theory you could find somebody because the technical means are available to do the testing in various places. It happens that most of the U.S. labs that do Huntington’s testing have agreed to abide by this dual-consent requirement. That is not the case in other countries, so it would be difficult but possible to have the test done without John’s consent. For many other tests, it would probably be possible to find someone in the United States. It is not a brick wall.

Charo: It is also important to distinguish between professional practice and law, because the government cannot pass a law that says that Susan can’t get tested without John’s permission. It can’t pass a law that says that Susan can’t have an abortion without John’s permission for the same reason, which is that once you have rights as an adult, the government is not entitled to condition those rights on somebody else’s permission or it puts you in second-class status. There was a time when husbands were able to exercise this control over their wives, but that time has passed. But what the law can’t do, the medical profession can do through professional agreement. Doctors can create a professional monopoly and dictate the rules. This is more powerful than government because you can’t even vote against it. It is a genuine issue about, ironically, more power lying in the private sector than in the government sector.

Finneran: How formal is this process by which these sorts of accepted physician practices develop? How was it done for Huntington’s? Has it been done for any other conditions? How can one influence these decisions?

Geller: In an ideal world the process develops on the basis of empirical research. The National Institutes of Health funded several studies of Huntington’s disease several years ago, and a consortium was formed to develop policies and procedures for Huntington’s testing. Currently, there is a consortium funded by NIH on cancer susceptibility testing. The belief is that we can’t decide what policy should be until we address in an experimental fashion certain questions such as: What are the implications? What are the psychological ramifications? What are the benefits? What are the harms? When is it good and when is it bad? Initially, we can do that only under research-protocol circumstances, which are artificial but at least begin to provide some answers from which policy can be developed.

Eisenberg: What do we do in the interim? In order to have meaningful empirical data, you have to keep monitoring the situation over the space of a generation or more for an ailment such as cancer.

Geller: But that is the policy right now. Cancer susceptibility testing should not be offered at all outside the context of a research protocol. That’s how Huntington’s testing began as well; you had to participate in a study in order to undergo Huntington’s testing before it entered standard practice.

Cook-Deegan: We have lots of ways in which medical technologies find their way into practice. At one extreme, you’ve got drugs, where you have a formal regulatory process. You have to prove safety and efficacy before a drug is allowed on the market. At the opposite extreme, you have a lot of surgical techniques that are never put through formal clinical trials before they are pretty widely disseminated. What we have with genetic tests is a mix of both models. With Huntington’s and now with cancer, the approach is very deliberate and careful. With cystic fibrosis, clinics were offering the test when the first formal studies were just getting under way.

Charo: It is also worth noting the effect that business interests will have on individual access to the tests. When it becomes possible to mass-produce genetic test kits, companies will want to market these kits widely and will not care whether or not the user subscribes to the carefully developed protocols of the professionals. There will be a direct conflict between the monopolistic control of the professionals and the anarchic individualism of the market.

Finneran: What is the status of the security of this information? One reason John didn’t want to be tested was that he was afraid his insurer would find out that he had the disease and want to cancel his insurance or his employer would find out and wouldn’t put him in the executive training program.

Geller: That is actually one of the advantages to being tested within the context of a research protocol. It is not foolproof, but in fact research data has greater protection than data that is just collected in a provider’s office.

Eisenberg: If you share the information gathered under a research protocol with the subject, the subject may then be obligated to disclose the information if asked by his or her insurer. If the individual lies or refuses to answer, that might be grounds for canceling the policy.

Finneran: As it stands now, can an employer or an insurer ask you if you have been tested, and are you required to give an honest answer?

Eisenberg: Yes.

Cook-Deegan: They can ask that question. You are not required in a legal sense to answer the question, but if you want the insurance you have to answer. And if they discover that you did not answer honestly, they can cancel the policy.

Charo: Remember, though, that this is where the state can intervene and prevent the insurance companies from asking the question and therefore prevent the insurance companies from using that information to screen people. That was a big issue surrounding AIDS and HIV testing, and it is something that could be tackled also in the context of genetics if one chose to, but by and large it has not. A few states have done it, but very few.

A comprehensive approach to insurance discrimination is necessary to avoid market shifting. 

— Eisenberg

Eisenberg: Employers are more restricted than insurers in their ability to ask questions of this sort, just as they can’t ask you what your marital plans are. They are limited as to how they can discriminate in hiring and firing people. The regulatory situation for insurance is quite different and more complicated, and it makes it difficult to figure out how to attack the problem of insurance discrimination legally. Much insurance is offered by insurance companies who are regulated by the states, but there are large employers who provide their own insurance to their employees, and they are not regulated by the states. They may be regulated by federal statute, but it is difficult to figure out how to implement rules that address one segment of this insurance market in a way that does not create market distortions for the other. If we regulate the insurance companies, for example, more employers will find it advantageous to self-insure their workers and to discriminate in that setting. A comprehensive approach is necessary to avoid market shifting.

Charo: The employers don’t have the same justifications as the insurers, and they don’t have quite the same degree of freedom, but they have more than what might be apparent, because in nonunion shops an employer can discriminate. An employer can decide not to hire you because you have straight hair or red hair or a Texas accent or a Brooklyn accent. The law specifies that employers can’t discriminate on the basis of gender, of pregnancy, of race or ethnicity. But unless it has been specified in law, the employer is free to choose the criteria for employment.

Cook-Deegan: The answer in this case is probably that you cannot discriminate because of the Americans with Disabilities Act. But this has never been tested in court.

Charo: That’s right, because the Americans with Disabilities Act has not yet been applied to genetic conditions in a case that has produced some kind of adjudication about its precise coverage. We know that it covers people with a physical disability that currently limits their capabilities, and we know that it covers people who are HIV-positive and are perceived as disabled. But an employer can argue that he doesn’t perceive somebody with Huntington’s as disabled but simply as somebody who down the road is going to cause the company’s health insurance premiums to rise. That’s not discriminating against the disabled; it’s allowable discrimination on the basis of who is going to be a more expensive employee, just as an employer can choose not to hire an employee who is likely to take too many breaks.

Finneran: Let’s look at this from a different angle. We’ve been focusing on reasons not to be tested. Are there also situations in which testing should be mandatory or where people would like it to be mandatory? Where is it in society’s interest or in an individual’s interest to be tested? Are there areas now where mandatory testing is taking place?

Geller: There are certain circumstances in which newborn screening is mandatory, and the main justification is that there is a treatment. For example, newborns are screened for PKU, an inherited metabolic disorder with devastating effects that can be prevented by early treatment. The consensus is that the benefits of knowing far outweigh whatever risks also come with knowing. And since newborns are completely vulnerable, physicians are usually given more decisionmaking authority in how to treat them. In some states it is mandatory to inform pregnant women that certain other genetic tests are available for the fetus.

Charo: Let me just add that in New York there is a battle royal on the very closely related issue of mandatory testing of fetuses or newborns for HIV status. When it was discovered that there was a significant reduction in the risk of transmitting HIV from mother to child if the mother was taking AZT during the pregnancy, New York state began to consider mandatory prenatal testing. But because this would force pregnant women to learn their HIV status, it pitted the interests of the mother against those of the child. The debate has not been resolved, and for the moment testing is voluntary.

Cook-Deegan: We need to maintain the distinction between tests that individuals might find useful and tests that should be mandatory. There are situations now-and there will certainly be more in the future-when knowing more about your genetics will enable you to prevent or reduce the likelihood of developing a disease through treatment or behavioral change. Huntington’s disease, for which nothing can be done, is not typical. There will be many cases in which the individual will want to be tested. That is very different, however, from mandatory testing, which should be implemented only in very compelling circumstances. In fact, an Institute of Medicine committee concluded that even PKU testing should not be mandatory.

We as a society have not decided who owns and controls genetic information. 

— Cook-Deegan

Finneran: What problems arise with screening large segments of the population with a genetic test?

Charo: When you are screening a large population, you are going to pick up many more people who are at no risk and force them to submit to testing that can produce erroneous results. If you identify and test only the high-risk individuals, it is more efficient; but some individuals who have a problem will not be tested. A choice between targeted testing and widespread screening will have to be made for each genetic disorder.
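
Charo’s point about erroneous results is, at bottom, an argument about base rates. As a purely illustrative calculation (the prevalence, sensitivity, and specificity here are hypothetical, not figures for any actual test), suppose a mutation is carried by 1 person in 1,000 and the test is 99 percent sensitive and 99 percent specific. Screening 100,000 people would then turn up roughly 99 true positives but also about 999 false positives, so the probability that a positive result is correct is only about

\[
\frac{0.99 \times 0.001}{0.99 \times 0.001 + 0.01 \times 0.999} \approx 0.09,
\]

or 9 percent. The same hypothetical test applied only to people with an affected parent, where the prior risk is 50 percent, yields positive results that are almost always correct, which is the efficiency of targeted testing that Charo describes.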

Geller: There is also a fuzzy line because sometimes targeted screening programs can look very much like testing programs. For example, if you offer sickle cell screening only to the African American population, which has an extremely high prevalence of the trait, is that screening or testing?

Eisenberg: Another significant concern is that the larger the population you are screening, the less likely it is that you are going to be offering anything in the way of effective genetic counseling to go along with the testing and the more likely it is that people will actually be harmed by acquiring information they don’t really know how to make sense of.

Finneran: One last question. Are there cases in which a physician is required to release information?

Geller: If a person is seeking employment in which her particular genetic susceptibility could create substantial harm to large segments of the population, one could argue that there is an obligation to disclose.

Charo: From the physician’s point of view, there is sometimes a perceived duty to share information with third parties, because in a small number of cases physicians have been sued for failure to disclose to third parties. For example, people have successfully sued physicians for failing to tell them that their spouses were infected with various sexually transmitted diseases.

Eisenberg: A more immediate administrative concern is that you often need to make disclosures to your insurance company, not because you are sitting down and applying for insurance and the company is judging whether to underwrite your particular risks, but because you seek reimbursement from your insurance company for your medical costs. The tests themselves are quite expensive, so you might want your insurance company to pick up the cost. But the test may indicate that certain costly interventions are appropriate in your case, and you may want to keep that information secret from the insurance companies. The reality today is that you probably can’t keep it secret.

Charo: Now let’s up the ante just a little more with the insurance issues, because insurance companies share information. If it has been made known to insurer A that you were tested for Huntington’s, when you switch jobs or switch insurers, insurer B has access to that information from insurer A through information sharing, which is not yet thoroughly regulated. As a result, if you try to hide the fact that you were previously tested, there is an excellent chance that insurer B could find out that you were and drop you immediately or drop you at the worst possible moment.

Cook-Deegan: The question of genetic information is inflaming the already heated debate about the confidentiality of medical records. We as a society have not decided who owns and controls that information.

Charo: This is complicated by the fact that in many instances the patient is not the doctor’s primary client. If you are in the Army and a military physician examines you, do you have control of that information? If an employer requires a physical exam by a company doctor, do you control that information? The reality is that most of us have little control over the flow of medical information about ourselves.

Audience: If a couple decides not to have a genetic test even though they are at high risk for having a baby with a serious genetic defect, can the child sue for wrongful birth?

Eisenberg: The courts have been quite resistant to recognizing complaints from children premised on the theory that had the parents behaved responsibly, the child would never have been born and would have been better off as a result. The courts have been more receptive to the claims of parents who have said that a genetic counselor failed to give them advice that might have led them to make a different decision about childbearing.

Audience: Are there new laws being written to address these questions?

Charo: There are many, many bills that have been introduced in Congress and in state legislatures that touch on a variety of issues-funding for selective abortion or not, regulation of what doctors can or cannot do in terms of screening, regulation about what kind of information can or cannot be shared among insurance companies or employers-but most of them have gone nowhere. Many of these questions pit the rights of one type of person, say a physician, against those of another, perhaps an employer, and we simply have not reached a consensus on what to do.

Climate Science and National Interests

Scientific developments and a change in U.S. policy have shifted the terms of the discussions that will take place in June 1997 at the conference of parties to the Framework Convention on Climate Change. Growing scientific confidence about the role of human activity in global climate change and the willingness of the United States to consider binding reductions in greenhouse gas emissions will force the conference participants to address the issue of climate change more directly and to consider immediate and far-reaching measures. But this is not a problem that can be solved by science alone. Reaching agreement on targets for greenhouse gas emission reductions for the period beyond the turn of the century will be difficult because of the deep-seated differences between rich and poor nations, between coastal countries and fossil-fuel-rich nations, and between various other factions.

When the participating nations met in Berlin in March/April 1995, they agreed on emissions limitations for the decade 1990-2000 intended to hold global greenhouse gas emissions in the year 2000 to the level that prevailed in 1990. However, all signs indicate that the world will miss that goal by a wide margin. In fact, U.S. officials believe that only the United Kingdom and Germany are on track to meet their targets.

What’s more, all parties recognize that even if they were to reach their goal for the year 2000, it would not stabilize the climate but only stabilize the rate of increase of atmospheric carbon dioxide concentrations and presumably, to a reasonable approximation, the rate of increase of the average surface temperature. At the levels of greenhouse gas emissions of 1990, some 6 billion tons of carbon would be emitted to the atmosphere each year, increasing the carbon dioxide concentration in the atmosphere annually and thus adding to the radiative forcing and presumably to the continuing rise in global temperature.
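To make the scale of that arithmetic concrete, the following rough sketch translates annual emissions of 6 billion tons of carbon into an approximate annual rise in atmospheric concentration. The conversion factor of roughly 2.1 billion tons of carbon per part per million of CO2 and the assumed airborne fraction of about one-half are standard approximations introduced here for illustration, not figures from the treaty discussions.

```python
# Rough, illustrative arithmetic only: converting the figure of about 6 billion
# tons of carbon emitted per year into an approximate annual rise in atmospheric
# CO2 concentration. The conversion factor and airborne fraction are standard
# approximations assumed here, not values taken from the negotiations.

annual_emissions_gtc = 6.0   # billion tons of carbon per year (figure cited above)
gtc_per_ppm = 2.1            # approx. billion tons of carbon per 1 ppm of atmospheric CO2
airborne_fraction = 0.5      # assumed share of emissions that remains in the atmosphere

ppm_rise_per_year = annual_emissions_gtc * airborne_fraction / gtc_per_ppm
print(f"Approximate CO2 rise: {ppm_rise_per_year:.1f} ppm per year")
# Roughly 1.4 ppm per year, which is why holding emissions at 1990 levels
# stabilizes only the rate of increase, not the concentration itself.
```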

Stabilization of the climate in its present state, as expressed by the global surface temperature, would require reductions of 60 to 80 percent in greenhouse gas emissions-an unrealistic goal for a global economy that is fundamentally dependent on coal, oil, and gas for its viability in the foreseeable future. Influenced by recent scientific reports and aware of the failure of voluntary efforts to achieve emission reductions, the United States has now indicated that it will seek a binding agreement on emission caps and timetables. Timothy Wirth, undersecretary of state for global affairs, announced the new U.S. policy in July 1996: “The United States recommends that future negotiations focus on an agreement that sets realistic, verifiable, and binding medium-term emissions targets . . . [they] must be met through maximum flexibility in the selection of implementation measures including use of reliable activities implemented jointly and trading mechanisms around the world.”

The delegates preparing for the 1997 conference of parties to the climate treaty are on a collision course with reality. They must plot a course that recognizes the growing certainty among most scientists that human actions are changing the global climate, as well as the political divisions that threaten to unravel any attempt at coordinated action. And once they confront the limits of what can be done to slow global climate change, they must face the challenge of what to do next.

New science

The coming policy debate will be particularly contentious because negotiators will have to deal with the fact that scientific understanding of the climate warming process has changed significantly since the 1990 international assessment by the Intergovernmental Panel on Climate Change (IPCC) of the World Meteorological Organization and the United Nations Environment Program. The scientific findings reflected in the 1996 report will necessitate changes in the negotiating positions of many countries. Although a scientific debate still rages about it, the most important new finding in the summary report is that comparisons between forecast and observed patterns of global surface temperature convinced the panel that “the balance of evidence suggests that there is a discernible human influence on global climate.” This is a significant change from earlier conclusions.

Until now, there has been a reluctance on the part of the scientific community to claim that the temperature rise observed over the past century is due in part to human activity. The prevailing view has been that the observed global surface temperature rise was within the limits of natural climate variability. Some scientists not involved in the IPCC process still maintain that there is not enough evidence to support such a statement, and an international group of dissenting scientists has warned against premature action on global warming. But the clear implication of the IPCC conclusion is that serious consideration must be given to actions that influence human activities so that global reductions of greenhouse gas emissions can be achieved.

The new report also devotes more attention to long-term projections of temperature changes to be expected by the year 2100. Although the 1990 report also made projections for the end of the next century, it was principally concerned with the temperature around 2030, the year in which greenhouse gas concentrations are projected to be double the present levels. The 1996 IPCC report expects smaller global average temperature increases than were being projected five years ago. The best estimate for the year 2100 is for a global surface average temperature increase of 2 degrees centigrade, with a range of 1 to 3.5 degrees centigrade. The 1990 report projected a 3-degree-centigrade increase with a range of 1.5 to 4.5 degrees centigrade. In short, new projections indicate more gradual and smaller increases in temperature.

At the high end of the range of projected warming, a 3.5-degree-centigrade increase in surface temperature over a century represents a rate of change outside recent historical experience and implies major changes in climate conditions. At the lower end of the range, 1 degree centigrade, the warming is not severe, and humanity would likely have little difficulty adjusting, although the effects on specific ecosystems remain uncertain. Because the actual change in global average surface temperature could fall anywhere within this range, negotiators are still left with very large uncertainties about the extent and severity of the actions that the global community will need to take.

A further important change in the most recent scientific results is a much closer correspondence between the observed temperature increase over the past century and the increase reproduced by the newest and more complete mathematical models of the global atmosphere and oceans. The models, which are the basis for the projections of climate change developed during the past half decade, give much more realistic simulations of atmospheric conditions. Previously, mathematical models yielded projections of temperature that were much higher than the observed temperature rise. This had been a great puzzle and, for those who placed little credence in the temperature projections, a source of legitimate criticism of the models and the policies based on them. But here finally are simulations that look much like the observed record of the global average surface temperature, lending support and credence to the projections based on the models and to the policies that flow from those projections.

The new element that changed the nature of the calculations was the incorporation into the mathematical models of the effects of aerosols. Aerosols are small solid or liquid particles that form in many ways-most importantly as sulphates formed from sulphur dioxide emitted in the burning of fossil fuels, but also from volcanoes and dust. Unlike carbon dioxide, aerosols are not evenly distributed throughout the global atmosphere but are concentrated over industrial areas and deserts. As particles, however, they act in a manner opposite to that of greenhouse gases: they tend to cool the atmosphere by reflecting sunlight back into space. When the effects of aerosols are introduced into the mathematical models, as they have been in those considered in the 1996 report, they partially counteract the warming effects of greenhouse gases and thus result in predicted rates of warming that are lower and slower than those of previous mathematical models. According to a recent report of the National Research Council, “global models suggest that sulphate aerosols produce a direct forcing in the Northern Hemisphere of the same order of magnitude as that from anthropogenic greenhouse gases but opposite in sign.”
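A minimal numerical sketch, far simpler than the coupled atmosphere-ocean models the IPCC relies on, illustrates the bookkeeping involved: a negative aerosol forcing subtracts from the greenhouse forcing before the total is converted into a temperature response. The forcing values and the climate sensitivity parameter below are illustrative assumptions, not numbers from the 1996 report.

```python
# Minimal zero-dimensional illustration of how a negative (cooling) aerosol
# forcing partially offsets greenhouse forcing in an equilibrium temperature
# estimate. This is not the IPCC models; all parameter values are assumed
# for illustration only.

climate_sensitivity = 0.8    # assumed equilibrium warming, deg C per W/m^2 of forcing
greenhouse_forcing  = 2.5    # assumed anthropogenic greenhouse forcing, W/m^2
aerosol_forcing     = -1.0   # assumed direct sulphate-aerosol forcing, W/m^2 (cooling)

warming_ghg_only     = climate_sensitivity * greenhouse_forcing
warming_with_aerosol = climate_sensitivity * (greenhouse_forcing + aerosol_forcing)

print(f"Warming from greenhouse gases alone: {warming_ghg_only:.1f} deg C")
print(f"Warming with aerosol offset:         {warming_with_aerosol:.1f} deg C")
# The offset lowers the projected warming, the qualitative effect that
# including aerosols has on the 1996 model results.
```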

The likelihood is that no matter how successful the negotiations are, humanity will still be faced with the prospect of a changing climate.

Same old politics

Even as consensus builds in the science underlying the policies that the parties to the conference will seek to adopt in 1997, political positions among the developed and developing countries have hardened. Island nations and some coastal nations, fearing that their territories may be inundated by the rise in sea level associated with global climate warming, are understandably strongly in favor of immediate action. They seek agreements on emission caps and timetables. Countries dependent on fossil fuel production and use, such as the oil-rich Persian Gulf states, and coal-dependent countries such as China and India are opposed to such agreements.

This Balkanization of political interests in the negotiating process is superimposed on the longstanding difference of views between the industrialized and developing nations on how to proceed. The fault line between their negotiating positions is almost unbridgeable; only vast economic and resource concessions to the third world by the industrialized countries could span it. On one side of the fault line are the developing nations, which hold that it is the industrialized nations that have so far caused the global increases in greenhouse gas concentrations in the atmosphere. The developing countries are now industrializing and need greater amounts of energy, largely from fossil sources; they do not intend to let their economic growth be slowed by restrictions on energy use.

But all projections of economic and population growth and the associated increases in energy usage conclude that any realistic approach to constraining greenhouse gas emissions must focus on the developing world, because that is where the largest increases are expected to occur. On the other side of the fault line are the industrialized countries, which are prepared to accept restrictions on energy use; their position stems from the credence they place in scientific assessments indicating that the projected temperature changes and carbon dioxide residence times have a good probability of being on the high side of the expected ranges.

In the view of the developing world, the industrialized North owes the industrializing South an “ecodebt,” which should be paid in two ways: The North should bear most of the burden of greenhouse gas reductions, and it should transfer environmental and energy technology to the South on favorable terms so that the energy efficiency of southern economies can be increased, thus reducing the emissions of greenhouse gases from their territories. In fact, one of the major achievements of the 1992 UN Conference on Environment and Development in Rio was the agreement to create the Global Environmental Facility to provide funding from the industrialized countries to enable the developing countries to acquire and introduce environmentally advantageous technologies. Pledges of resources have been substantial but have fallen far short of the aspirations of the developing countries.

Accommodating all these varied interests will not be easy. Strategies that promise to achieve global emission goals without impeding the economic growth of developing nations abound, but all impose severe penalties on one or another of the parties to the convention. The various strategies are based on models of the evolution and growth of the world’s economies, assumptions about technological trajectories, estimates of rates of population growth, and alternative modes of accommodating all parties. What emerges from the studies employing such models are scenarios of possible futures that depend fundamentally on the fraction of the total global energy supply that will be met by fossil fuels of various kinds, on assumptions about the role that renewable and nuclear sources will play in the energy supply system, and on assumptions about the rate of increase in the efficiency of energy supply and demand technologies.

The dilemma is now being resolved largely in the political arena. In countries with politically strong “green” movements, governments favor emission caps and timetables for achieving them. They are buttressed by the results of the IPCC assessments. In the United States, where there is a vocal dissenting scientific community and political differences on this issue between the Republican and Democratic parties, a year-long battle looms over how to position the United States for the negotiations. The Clinton administration has stated its policy, but the results of the 1996 election could lead to changes. Other nations will face similar internal debates before the 1997 meeting.

The outcome of the negotiating session will vitally affect not only the energy supply and demand industries but other industries and businesses as well, to say nothing of the effects on agriculture and water resources. The implications go further, for if the threat of an unacceptable climate cannot be addressed, we will certainly be unable to achieve an environmentally sustainable global economy.

An interesting development has taken place within the industrial community as it contemplates these upcoming negotiations. Some parts of the international insurance industry have come to believe that the weather anomalies of the past several years, which have caused an estimated $25 billion to $30 billion in global annual losses, are out of line with normal climatological expectations. In the United States, Hurricane Andrew alone accounted for $15.5 billion in insurance claims. Some U.S. scientists have suggested that the anomalous weather behind these large losses may be related to climate warming, and the press has publicized the idea. It is not surprising, therefore, that the insurance industry is supporting efforts to arrest the rise in global temperatures. The fossil energy industries, including producers such as the oil and gas companies and users such as the automobile interests, have always argued for caution before implementing policies to limit fossil fuel use, with some questioning the scientific validity of the climate warming concept.

In this confusing confluence of scientific and political interests, little is understood about the distribution of the climatic and economic effects that must be of central concern to negotiators. The global average surface temperature is but a surrogate measure for the intensity of the climate-warming phenomenon. Any particular global average surface temperature will give rise to nonuniform geographic distributions of high and low temperatures.

Mathematical models are as yet unable to portray the regional and national distributions of temperature or precipitation with any certainty. Negotiators therefore do not know, except to a crude approximation, what the effects of global climate warming will be on their territories. Sea level rise is an exception, because the effects of global climate warming are essentially uniform throughout the world’s oceans. But these effects are now projected to be smaller than previously thought. In the 1990 report, the IPCC projected approximately a 2-foot rise in sea level by 2100; in its 1996 report, the rise is estimated at about 1.5 feet for the same period. Yet the 1996 estimates have a wide range, from 0.5 feet to more than 3 feet. Again, at the lower end of the range the rise is unlikely to be troublesome for most regions, whereas at the high end the effects would be devastating.

We know little about other distributional effects except on the grossest scale. All projections are for greater warming in the polar than in the equatorial latitudes. This suggests that nations located at higher latitudes will undergo a greater warming at the surface than those in mid-latitudes and in equatorial regions. Because the intensity of the global circulation is driven by temperature differences between polar and equatorial regions, the implication is for a less intense global circulation, which is more typical of warmer seasons in mid-latitudes. Even slight changes in climate can significantly affect climatically marginal regions, but it is not clear, for example, whether arid regions will be exposed to more precipitation or increased desiccation.

The negotiations

Unlike other negotiations, in which national interests are clear, these talks will require government representatives to negotiate over consequences for their countries that are largely unknown. They will have as their goal the stabilization of the present distribution of climate, with its advantages and disadvantages for the nations of the world. Climate can be regarded as a resource, conferring advantages on some nations and disadvantages on others. The current climate is advantageous for U.S. agriculture and disastrous for Mongolian farmers. No negotiation, except perhaps those related to preventing nuclear conflict, has the potential for such broad societal impacts.

Many of the promising energy options are sufficiently far from commercialization that international collaborative actions might advance their availability.

The economic effects are similarly uncertain. These will depend on the way economic development evolves. Many different scenarios and options are portrayed by mathematical models of the global economy. Even more than the mathematical models that project the physical state of the atmosphere and the oceans, economic models of the global economy are shot through with assumptions and simplifications concerning the course of economic growth. Whereas refinement of mathematical models of the physical environment can be expected to continue to reduce uncertainties, models of the evolution of the global economy may be so distorted by political and economic events as to be projecting the unknowable.

Assumptions are made in the economic models about the trajectories of technological development in moving from fossil to nonfossil fuels and about the growth of population. When these economic and population models are joined with physical models of the atmosphere, oceans, and biosphere, it becomes possible to project the characteristics of future climates. Although these models provide important information on possible futures, they tell us little about how economic effects will be distributed among nations and individuals.

The weakness of the economic models helps explain why even when nations can agree on greenhouse-gas abatement goals, negotiators will find it extremely difficult to agree on how to achieve them. An international regime with the power to promulgate policies and regulations and enforce them would be out of the question. Timothy Wirth has explicitly ruled out the acceptance of such a system by the United States: “As a general proposition, the United States opposes mandatory harmonized policies and measures.” Few nations would agree to such an international authority, especially in the face of the uncertain consequences for their economies.

Whatever approaches individual countries adopt, from free market incentives to command-and-control regulatory systems, there would still be the need to allocate greenhouse-gas emission quotas to individual nations. Many policymakers favor the market-based approach of tradeable emission permits, which the United States uses to control sulphur dioxide emissions. When working properly, this system provides tremendous flexibility to those responsible for reducing emissions and takes advantage of market forces to achieve the lowest possible cost of attainment. However, the success of such a concept depends on the initial allocation of emission caps to various countries. Would they be allocated on the basis of population, gross national product, the geographical extent of territory, or some combination of these? Arriving at an equitable formula for allocation of greenhouse gas emission caps would be an extremely difficult task, if doable at all.
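The difficulty can be seen in a small hypothetical exercise. In the sketch below, the countries, their populations and gross national products, and the global cap are all invented; the point is only that the choice of allocation basis shifts each country's share dramatically.

```python
# Hypothetical illustration of the allocation problem: the same assumed global
# emissions cap divided among three invented countries under two formulas.
# All names and figures are made up for the sake of the example.

global_cap = 6.0  # assumed global cap, billion tons of carbon per year

countries = {
    # name: (population in millions, GNP in trillions of dollars)
    "Country A": (1200, 0.5),   # populous, low income
    "Country B": (260, 7.0),    # small, industrialized
    "Country C": (150, 1.0),    # intermediate
}

def allocate(weights):
    """Split the global cap in proportion to the given weights."""
    total = sum(weights.values())
    return {name: global_cap * w / total for name, w in weights.items()}

by_population = allocate({name: pop for name, (pop, gnp) in countries.items()})
by_gnp        = allocate({name: gnp for name, (pop, gnp) in countries.items()})

for name in countries:
    print(f"{name}: {by_population[name]:.2f} GtC/yr by population, "
          f"{by_gnp[name]:.2f} GtC/yr by GNP")
# A population-based formula favors populous developing countries; a GNP-based
# formula favors the industrialized world. The formula itself is the fight.
```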

Furthermore, because it is the cumulative amount of carbon dioxide emissions over time that governs their effects on climate, negotiators can play with emission limits that vary with time. For example, it has been suggested that emissions in the near term could be allowed to grow rapidly, with serious emission restrictions reserved for the future. To negotiators, this might seem to be a rational approach because it would provide time to verify that the climate is indeed changing before more drastic action is taken. Finally, assuming that negotiators could come to agreement on allocation of greenhouse gas emission caps and schedules to each nation, the individual governments would face the equally difficult task of allocating such caps within their territory and among economic sectors.
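A brief numerical illustration, with invented emission schedules, shows why such time-shifting is possible in principle: two very different decade-by-decade paths can deliver the same cumulative total of carbon, which is what matters for the climatic effect.

```python
# Invented emission schedules, expressed as average emissions (billion tons of
# carbon per year) in each of the ten decades from 2000 to 2100. Both paths are
# constructed to have the same cumulative total, illustrating how near-term
# growth can be traded against much steeper cuts later.

steady  = [6.0, 5.5, 5.0, 4.5, 4.0, 3.5, 3.0, 2.5, 2.0, 1.5]  # gradual reduction
delayed = [6.0, 6.5, 7.0, 7.5, 8.0, 1.5, 1.0, 0.0, 0.0, 0.0]  # growth now, drastic cuts later

cumulative_steady  = sum(rate * 10 for rate in steady)    # 10 years per decade
cumulative_delayed = sum(rate * 10 for rate in delayed)

print(f"Cumulative emissions, steady path:  {cumulative_steady:.0f} billion tons of carbon")
print(f"Cumulative emissions, delayed path: {cumulative_delayed:.0f} billion tons of carbon")
# Both print 375: equal cumulative budgets, very different schedules, and very
# different burdens on whoever must make the late, drastic reductions.
```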

Tools for change

The likelihood is that no matter how successful the international negotiations are, humanity will still be faced with the prospect of a changing climate. If the actual temperature increases are at the low end of the projected temperature range, traditional modes of adaptation are feasible. In fact, the actions taken by nations today in the face of existing climate variability would need to be extended only slightly for people to adapt. Human beings live in the most extreme polar and desert regions. Throughout history, humans have adapted to climate variability by planting crops that thrive in different climates, building dams to store water, building coastal defenses against inundations, and adapting clothing and modes of shelter to enable them to exist in almost all climates. Extraordinary changes in these strategies would probably not be needed.

Even if the temperature regime and the implied changes in precipitation fall at the higher end of the projected range, adaptation is still a key way to cope with climate change as its regional and distributional effects become apparent. International assistance could be invoked to deal with the most egregious of these conditions, as indeed it is today in the face of persistent droughts or floods.

It is surprising, then, that the negotiations include no concept of marshaling international action to develop the technologies that will be needed for adaptation. Such technologies could provide options for coping with climate changes that might result from greenhouse gas emissions. Central to reducing greenhouse gas emissions are changes in the global energy system. Options are needed for moving to nonfossil energy sources, should this be necessary. Research and development in a wide range of alternative technologies is under way in many countries, and international collaboration has already begun on some of them, such as the development of nuclear fusion through the International Thermonuclear Experimental Reactor.

However, more needs to be done. Many of the promising energy options are sufficiently far from commercialization that international collaborative actions might advance their availability. For transportation, the development of hydrogen as a safe fuel appears feasible. For other uses, more efficient energy production by photovoltaics, biomass, wind, and fuel cells seems promising. Even the unthinkable, nuclear fission breeder reactors, might be considered. The fact is that absent efficient new energy technologies to achieve greenhouse emission goals, the negotiators simply will not have the tools necessary to address the problems associated with global climate change.