Forum – Fall 2004
In his article “Completing the Transformation of U.S. Military Forces” (Issues, Summer 2004), S.J. Deitchman makes a strong case for continuing the development and subsequent production of all of the advanced weapons (ships, planes, ground antimissile systems, new logistics ships, enhanced command/control/communication, etc.) currently under way. He then goes on to describe—again, with considerable credibility—the vulnerability of each of these systems and, therefore, the need to initiate and fully fund their enhancements. Finally, he points out that all of these expenditures will be required simply to counter potential “conventional” warfare concerns. And he, again properly, observes that the so-called “asymmetric” threats—guerrilla warfare, terrorism against our troops, and post-war insurgencies such as we are seeing today in Iraq—will require a whole new set of troops and equipment. Additionally, significant resources will be required for the difficult job of post-war nation building and development of the physical, legal, economic, and social infrastructure.
Although no one can argue with the desirability of each of these investments, the challenge for the Department of Defense (DOD) and for the nation is paying for all of this—particularly in the current budget-deficit environment and with the growing needs for both homeland security and social programs. Many analysts believe that even the current DOD budget plans are considerably underestimated, and Deitchman’s estimate of only an added $2 billion to $3 billion per year, like the DOD’s forward projection of an added $10 billion per year (excluding the as-yet unspecified costs of the current operations in Iraq), seems unrealistically low. Even with an annual defense budget of over $400 billion, the United States cannot afford everything. And that raises the really tough and unavoidable question: What are the priorities that absolutely have to be funded, and what can be dropped?
Unfortunately, if history is any indication, the first things traditionally cut by the military services are long-range research, along with spare parts and maintenance—the two ends of the acquisition cycle. But cutting long-range research is “eating the seed corn.” With the rapid evolution of new technology, this can easily undermine the U.S. strategic posture of technological superiority in only a few years. And reducing spending on logistics in order to be able to procure more new weapons significantly diminishes force readiness. In fact, research and logistics should be the last areas to be cut.
Again looking at historic practices, as the cost of the traditional mainstream weapons continues to rise, the tendency has been to cut back on new items such as precision weapons, communications, unmanned systems, and advanced sensors. But it is these lower-cost “force multiplier” systems that are the big elements of the 21st century’s transformation of warfare.
That leaves the tough choices of which weapons and/or forces to cut. This is not a choice that can be put off, and it should be the subject of a future article.
Misguided drug policy
I share Mark A. R. Kleiman’s exasperation with the recent decision to drop the Arrestee Drug Abuse Monitoring (ADAM) program of the National Institute of Justice (NIJ), a decision that reveals a great deal about the underlying premises of our War on Drugs (“Flying Blind on Drug Control Policy,” Issues, Summer 2004).
In a rational, evidence-based policy system—one oriented toward measurable consequences rather than symbolism—a system like ADAM might serve several valuable purposes, including epidemiological tracking; community planning and agency coordination; economic market analysis; sociological research; and the evaluation of laws, programs, and interventions. For most of these purposes, ADAM was a flawed instrument because of its nonrandom, arrest-based sampling approach. But these problems were hardly devastating. For some of these purposes, accurate point estimation matters less than having enough reliability and validity to get a rough sense of the size of the problem and the direction of trends and correlations.
But as University of Maryland economist Peter Reuter has long argued, there was never much evidence that the federal government has based drug policy decisions on point estimates; indeed, it is not clear that our national drug control strategy is influenced by research, period. An examination of government rhetoric over the past decade suggests that ADAM (and its predecessor, the Drug Use Forecasting System) were mostly used to keep the public focused on the link between drug consumption and crime, especially predatory crime. The rhetorical focus has always been on trends in drug use, especially by youth, rather than trends in drug-related harms.
ADAM’s political support may have suffered from the fact that researchers used NIJ data to show that the drug/crime link is partly spurious (because of common causes of both criminality and drug use) and partly due to prohibition (drug-market violence). But perhaps a deeper reason is that our drug policies are driven more by moral symbolism than by technocratic risk regulation. A policy based on optimal deterrence needs careful data collection; a policy based on retribution does not.
Having studied drug policy for 15 years, I can confirm the wisdom of Mark A. R. Kleiman’s comments in his article. He writes about the penny-wise, pound-foolish decision to cancel the Arrestee Drug Abuse Monitoring (ADAM) program that collected data from arrestees about their drug use. ADAM accounted for just $8 million of the $40 billion spent annually on drug control, and it was the best source of data on the criminally involved users who cause most of the $100 billion-plus per year in social costs associated with illicit drugs.
Kleiman notes that the proximate problem was budgetary. We generously fund research on drug treatment and prevention, but spend next to nothing studying drug-related law enforcement, which consumes far more program dollars.
The more fundamental problem, in Kleiman’s eyes, was that ADAM was useful but not rigorous. More expensive programs that are of less value to policy evade the budget ax by being controlled by health agencies with larger research budgets and by maintaining higher standards of scientific rigor.
If the rigor gap stemmed from bad management, then ADAM would have deserved the ax. But the real reason is simply that criminally involved users are harder to study. Sampling them at the time of arrest is practical, but it is objectionable to purists because the arrest process extracts a strange slice of offenders, and one whose composition varies in unknown ways across cities and time.
Purists prefer sampling households. That is also valuable, but not perfect. Household respondents report using perhaps 10 percent of the drugs we know are consumed based on supply-side estimates, so the pretty, “statistically valid” confidence intervals can surround grossly inaccurate point estimates.
Kleiman’s essay raises a more general question, though, of what’s better: a reasonable but “crude” answer to an important question, or a “precise” answer to a minor or irrelevant question? The former better serves decisionmakers, but science prefers the latter. Some of that preference is pedantry, but some is well founded. Science progresses by slow accumulation of highly trustworthy building blocks. One bad block can threaten the whole wall. So one logical position is to discourage scientists from seeking to be practically relevant on strategic issues, at least in areas like drug enforcement.
Yet academia claims societal subsidies in part by claiming relevance to society’s major problems. Indeed, I have often heard policymakers say that my and my colleagues’ insights are different from and a useful complement to the dialogue that takes place among policy practitioners. Furthermore, if there is any justification for tenure and money for research as a protection of intellectual freedom, it surely applies to politically charged topics such as drug policy.
I am sure that scientific thinking can be of great service in improving U.S. drug policy. I am also sure that scientists who take on that mission will be swimming upstream if they seek respectability, let alone tenure, particularly in disciplinary departments. I suspect that drug policy is not unique in this regard.
Perhaps the infrastructure of scientific careers—the journals, funding mechanisms, promotion and tenure processes, etc.—ought to be adjusted to make greater room for relevance. Doing so might not even distract from or dilute the advance of pure science. Perhaps if academics help the nation make progress on a few challenges as important as illicit drugs, the budget cutters will become budget builders for all aspects of the scientific infrastructure.
First, let me be clear that I consider Mark Kleiman to be one of the best present-day analysts of drug control policy. In his article, Kleiman makes several points that would seem to be beyond dispute. First, to maintain a sensible strategy for dealing with illicit drugs it is important to know which elements of that strategy are working. This means knowing who is using which drugs, how much of each drug is being used, and how much the stuff costs. Second, those who are the heaviest users and consume the great bulk of the illicit drugs are not adequately included in our most widely used surveys. Third, the majority of the heaviest users are involved with the criminal justice system. Therefore, we need to know more about the drug use patterns of this population in order to judge the effectiveness of our drug control efforts.
If these points are valid, then the recent decision of the National Institute of Justice (NIJ) to cancel the $8 million Arrestee Drug Abuse Monitoring (ADAM) program—the only national program that uses objective measures (urine tests for drugs) to gauge the extent of drug use among arrestees—is an inexplicably poor way to save money in a multibillion-dollar drug control budget, particularly when almost $40 million is spent annually on the National Household Survey on Drug Abuse (NHSDA).
Curiously, Kleiman tries to explain the inexplicable by describing recent cuts in the budget for the NIJ and problems of interpreting data obtained from arrestees. The interpretation difficulty arises in part because the kinds of people arrested can vary widely from place to place and time to time, depending on the emphasis given to different problems by local law enforcement agencies.
Kleiman suggests some ways to save ADAM by reducing its costs, for example by sampling arrestees less frequently. But the logical fix is for the Office of National Drug Control Policy to reassign this kind of prevalence monitoring to the Substance Abuse and Mental Health Services Administration (SAMHSA). SAMHSA now has responsibility for the NHSDA, as well as for the Drug Abuse Warning Network (DAWN), which gathers data on episodes of drug-related medical problems from hospital emergency rooms. (DAWN was originally proposed by the Bureau of Narcotics and Dangerous Drugs, but it seemed more appropriate at the time to have a health agency perusing the charts of patients seeking medical help.) If such a reassignment does happen, SAMHSA should be given the flexibility to adjust the surveillance budget to accommodate estimates of drug use among varying populations of potential users, both those living in households and those encountering the criminal justice system.
Once the data from arrestees is flowing again, I hope that Kleiman will tell us how to convert the proportion of urine tests positive for a given drug into an estimate of the tonnage of that drug used annually.
Jay Apt, Lester B. Lave, Sarosh Talukdar, M. Granger Morgan, and Marija Ilic (“Electrical Blackouts: A Systemic Problem,” Issues, Summer 2004) are to be commended for their efforts to encourage more deliberation on systemic issues constraining the reliability of the nation’s most critical infrastructure—the electric power delivery system. The industry response to the weaknesses identified by the experts who examined the August 14, 2003, blackout has been admirable. A list of more than 60 individual issues ranging from training to communications has been addressed. Although these actions are necessary, they are not sufficient to prevent another such blackout.
This latest blackout had many similarities with previous large-scale outages, including the 1965 Northeast blackout, which was the basis for forming the North American Electric Reliability Council in 1968, and the July 1996 outages in the West. Common factors include: conductor contacts with trees, inability of system operators to visualize events on the system, failure to operate within known safe limits, ineffective operational communications and coordination, inadequate training of operators to recognize and respond to system emergencies, and inadequate reactive power resources.
Four fundamental vulnerabilities (the four “Ts”) caused the August 2003 blackout: the lack of properly functioning tools for the operators to see the condition of the power system and to assess possible options to guide its continued reliable operations; inadequate operator training; untrimmed trees; and poorly managed power trading. The trading problem is the one vulnerability that has been largely ignored.
As one consequence of restructuring, the power delivery system is being utilized in ways for which it was not designed. Under deregulation of wholesale power transactions, electricity generators—both traditional utilities and independent power producers—are encouraged to transfer electricity outside of their original service area in order to respond to market needs and opportunities. This can stress the transmission system far beyond the limits for which it was designed and built. This weakness can be corrected, but it will require renewed investment and innovation in power delivery.
The U.S. power delivery system is based largely on analog technology developed in the 1950s, and system capacity is not keeping pace with growth in electricity demand. In the period from 1988 to 1998, U.S. electricity demand grew by 30 percent, but transmission capacity increased by only 15 percent. Demand is expected to grow another 20 percent during the ten years from 2002 to 2011, but only a 3.5 percent addition of new transmission capacity is planned. Meanwhile, the number of wholesale electricity trades each day has grown by roughly 400 percent since 1998. This has significantly increased transmission congestion and stress on the power-delivery system. The resulting annual cost of power disturbances to the U.S. economy has escalated to an estimated $100 billion.
Adequate investment and modernization in the nation’s electric infrastructure are critically needed. Although the Federal Aviation Administration example outlined by the authors provides an interesting precedent, the fit is not perfect for the nation’s electricity system, which has weaknesses at the regional level that must be addressed through local distribution networks as well as the national transmission infrastructure. A more appropriate evaluation and enforcement model is the nuclear power industry’s Institute of Nuclear Power Operations, which operates under the mandate of the Nuclear Regulatory Commission.
The fundamental issue limiting electricity system reliability today is the lack of the necessary incentives for investment and innovation. Mandatory and enforceable reliability standards, which have been endorsed by the industry, can correct the problem, but these new standards are being held hostage by the deadlocked congressional debate over national energy legislation.
The first anniversary of the Big Blackout of August 2003 was a fitting time for Issues to publish an article by the Carnegie Mellon team of Jay Apt, Lester Lave, Sarosh Talukdar, M. Granger Morgan, and Marija Ilic. The authors caution us to remember that “although human error can be the proximate cause of a blackout, the real causes are found much deeper in the power system.” And they invite us to recall that “major advances in system regulation and control often evolve in complex systems only after significant accidents open a policy window. The recent blackouts in this country and abroad have created such an opportunity.”
Like the North American Electric Reliability Council in its February 2004 report and the U.S./Canadian Power System Outage Task Force in its April 2004 report, the Carnegie Mellon team focuses on a fundamental problem: the electric industry’s reliance on voluntary reliability standards. The authors call on Congress to enact a new law authorizing the federal government to establish and enforce mandatory reliability standards in the electric industry. Although not a new idea, it is still a timely and important message, one that adds another set of expert voices to the diverse and bipartisan chorus calling for Congress to break the logjam that has for many years prevented the establishment of this reliability authority.
The Carnegie Mellon team is expert in public policy, electrical engineering, applied physics, finance and economics, and information science. This team knows that “complex systems built and operated by humans will fail” and that for the new reliability framework to work robustly and efficiently, it must be designed and implemented in a way that “recognizes that individuals and companies will make errors and creates a system that will limit their effects. Such a system would also be useful in reducing the damage caused by natural disruptions (such as hurricanes) and is likely to improve reliability in the face of deliberate human attacks as well.”
These economic, public safety, and national security goals for our critical electric infrastructure are broadly shared. To accomplish them, the adoption of mandatory reliability standards is necessary but not sufficient. Some regions are more advanced than others in terms of their reliability practices, but the interconnected nature of the grid and the consumers and businesses that depend on it require that the reliability bar be raised for the industry as a whole.
In addition to mandatory reliability requirements, federal and state regulators must adopt a clearer set of regulatory incentives for needed transmission enhancements. These enhancements include new transmission facilities where they are needed, but as the Carnegie Mellon team points out, they also include distributed generation, sensors, software systems, updated control centers, systematic tree-trimming, appropriate reliability metrics, inventories of critical hardware, training and certification of grid operators, periodic systems testing, price signals to induce real-time customer response to changing supply conditions and costs, and actionable protocols to shed load efficiently and safely if needed during emergencies. Clarifying predictable cost-recovery policies, enforceable reliability standards, and delineations of responsibilities for different parties in the electric industry will help provide the needed stimuli for lagging investment and innovation.
The authors’ observations on lessons learned from the experience of the air traffic control system provide a basis for believing that these changes can be accomplished, as long as there is the right public will to do so. If the 2003 blackout wasn’t enough to get us there—which, alas, it seems not to have been—then we will continue to need the kind of well-reasoned and articulate reminders that the authors provide.
This article explains what is needed for our critical electric infrastructure to keep pace with the increasingly complex and growing demands of our nation’s economy. As the authors conclude, “A plan comprising these elements, one recognizing that failures of complex systems involve much more than operator error, better reflects reality and will help keep the lights on.” Let’s hope that Congress moves us closer to realizing this plan before the grid is tested again by significant human error, a major natural disruption, or a deliberate human attack.
Jerry F. Franklin and K. Norman Johnson provide some crucial insights about the need for new approaches to forest conservation (“Forests Face New Threat: Global Market Changes,” Issues, Summer 2004). Changes will need to occur in attitudes, as well as in tools such as regulations, incentives, markets, subsidies, and ownership. Implicit in their discussion of various possible responses is the recognition that public resources will not be sufficient to accomplish all possible goals everywhere.
The need for a more strategic approach to applying limited resources is exemplified here in the Pacific Northwest by circumstances Franklin and Johnson touch on briefly in their necessarily concise overview. As they point out, the Northwest Forest Plan (covering federal forest lands in the range of the northern spotted owl in portions of Oregon, Washington, and northern California) may have had the unplanned but desirable effect of creating a more stable regulatory setting for private and state lands. Unfortunately, the biodiversity-oriented land allocations of the plan don’t necessarily coincide with the greatest potential for biodiversity, which tends to occur on lower-elevation private and state lands. Also, many private landowners are asking not only for even greater regulatory predictability, but also for a lower regulatory burden, even though there is broad (though not universal) agreement that the current regulatory framework is inadequate to conserve biodiversity.
The answers may lie in part in giving greater consideration to a tool addressed only briefly in the paper—incentives, which may most usefully be considered as one half of a regulation-incentive framework. Regulations set the common baseline for all landowners, whereas incentives can be used to reward landowners who exceed the regulatory requirements in providing public benefits.
Such incentives can include tax relief or direct payments, and the reality is that there will not be enough of either to reward all potentially interested and deserving landowners. Conservation planning, particularly plans that can simultaneously consider multiple public values such as biodiversity, watersheds, and open space, can provide a means for strategically targeting available resources. Less burdensome application procedures, better marketing, and more efficient delivery of incentives would also help. Creation of flexible incentives programs, such as that established (but as yet unfunded) in Oregon in 2001, can help overcome artificial and often overly restrictive boundaries among programs designed to protect wildlife, watersheds, or recreation, or to provide for carbon sequestration. In many cases, measures taken to provide for one of these values will protect others as well.
Like Franklin and Johnson, we cannot claim to have all the answers or to see the future with perfect clarity, but the authors deserve our gratitude for opening this most important discussion about the evolving landscape of forest conservation.
A failure to foresee the consequences of a changing global market for wood has resulted in an emerging crisis for forest management in the United States. The crisis (together with possible solutions) is very well documented by Jerry F. Franklin and K. Norman Johnson.
To an outside observer, U.S. forest policy, especially in the 1990s, has been difficult to comprehend. When it should have been concentrating on increasing industrial competitiveness to meet the predicted increases in the wood harvest from the fast-growing and therefore low-cost plantations of the tropics and Southern Hemisphere, the United States instead increased regulations and costs for its forest industry. Also, in the misguided belief that lower wood use would result in more forests being “saved,” some environmentalists have advocated using less wood.
Some extreme environmentalists may disagree, but most, if not all, forests must be managed. Without management there will be a decline in forest health, more wildfires, the possible reduction—if not total loss—of the habitat of some forest-dwelling species, and other problems. If the funding of forest management is not to come from an economically viable but environmentally responsible forest industry, the money must come from another source. Because the revenue from other forest uses is unlikely to cover the cost of management, the only alternative is public financing. Given the increasing competition from other seekers of government funding, forest management is unlikely to be a high priority.
The Franklin and Johnson plea for “an overhaul of forest policy” is urgent, if not overdue. The growing demand for water is a particularly compelling rationale for watershed management in forest policy. By far the most likely source of funding is their recommendation for “creating or maintaining a viable domestic forest industry.” At a time when concrete, metals, or ceramics are being substituted for wood in many applications, we should be looking for ways to improve the effectiveness of wood products as well as promoting their environmental advantages. The only way to pay for environmentally responsible forest management is to maintain a healthy forest products industry that can provide the funds.
Anne E. Preston’s article, “Plugging the Leaks in the Scientific Workforce” (Issues, Summer 2004), challenged me as a university president, a social scientist, and a mother, because I have always encouraged my best and brightest students, including my daughter, not to shy away from possible futures in natural science or engineering. Preston’s analysis adds a sense of urgency to other recent reports that a life in the natural sciences is, for women, all too often a life of diminishment and loss: of marriage, children, career, or all three.
In a similar vein, Mary Ann Mason, dean of the graduate division at the University of California at Berkeley, last year reported the results of a survey that examined the academic careers of 160,000 people who earned Ph.D.s between 1978 and 1984. She found that male graduates who took university jobs were 70 percent more likely than the female graduates to become parents. And the women who gave birth within five years of earning their doctorates were 30 percent less likely to obtain tenure than the women without children.
We know that modified-duty policies, tenure clock time-out policies, and part-time flexible policies are critical lifelines for women, yet many institutions report that women are more hesitant than men to use these for fear of being seen as inadequate or “in need of help.” But the truth is that no one succeeds without help, and we should spread the word. Colleges and universities need to build campus cultures that help every person be expansive and creative.
We need cultures of collaboration and support that reward those who serve as mentors and supporters, recognizing that valuable social support may come in different forms. An experienced colleague may give the best advice about how the system works, but a peer may be best at giving social and emotional support in the face of the inevitable feelings of inadequacy we all encounter. Institutions themselves should provide the support I call “refreshment”—interdisciplinary opportunities that allow scholars to get excited again by encountering different perspectives.
Environments of excellence depend on intellectual and social diversity. They are places where we can be flexible enough to “try it another way,” to change our minds, to take risks, where we are called upon to be flexible and juggle multiple roles. To make our campuses socially diverse, we need many more women at all stages of their academic careers. The number needs to be what I would call a “critical mass,” enough women to innovate, sustain, and support each other through the hard times as well as the good.
To recruit enough women to the natural sciences and engineering, we need to start in nursery school and welcome them every step of the way. Their life paths will get easier as they become more numerous, because they will change us to the benefit of all. Women have done that before.
The excellence of colleges and universities—and their science and engineering departments—lies in the vibrant exchange of people and ideas. Those people should and must include women, lots of women.
Anne Preston starts off her otherwise excellent article on the problems of retention of women in science with a real zinger of a sentence: “In response to the dramatic decline in the number of U.S.-born men pursuing science and engineering degrees during the past 30 years, colleges and universities have accepted an unprecedented number of foreign students and have launched aggressive and effective programs aimed at recruiting and retaining underrepresented women and minorities.” This sentence implies all sorts of things that I do not believe to be true. I am writing this as a graduate chairman of a chemical engineering department, where I have been involved in the recruitment of graduate students for the past decade.
Although it might be true that there are fewer white males applying for graduate school (and I am not sure this is really the case at Michigan), the programs aimed at recruiting women and minorities were not introduced to make up for that shortfall. Instead, these programs were introduced to make the opportunities and rewards of advanced degrees available to all. They were launched in response to a historical injustice, not a shortfall of willing white guys. And in our particular case, the inclusion of these groups did not have a large effect on the number of “U.S.-born men” (read nonminority), because additional slots were created with the increase in funding targeted to underrepresented groups.
Likewise, we admit international students not because we are short of domestic ones, but simply because we admit the most qualified and talented students. U.S. universities are among the most desirable ones in the world, and as a consequence we receive applications from the top of the classes of the top schools in every country in the world. This competition does perhaps put U.S. students at a disadvantage, and in fact it is common to apply “affirmative action” to U.S.-born students; they are preferred even when their test scores are not as high or their mathematical training not as strong. U.S. universities are world class, and to remain so they must be open to all. Even when there are plenty of domestic students, as there has been in recent years (perhaps because of the weak job market), Michigan has continued to admit international students, albeit at a somewhat lower rate.
Whatever the motivation, of course, U.S. universities have now increased the number of various underrepresented groups in science and engineering. Preston provides an accurate description of the various problems women and their families face in the current academic environment. Likewise, underrepresented minority groups face some similar hurdles. As Preston says, it is truly a waste for all if these people, after so much training and work, ultimately leave science.
In “What is Climate Change?” (Issues, Summer 2004), Roger A. Pielke, Jr., addresses the problems that arise from the different framing of the climate challenge by the Framework Convention on Climate Change (FCCC) and the Intergovernmental Panel on Climate Change (IPCC). The FCCC’s goal is to avoid dangerous anthropogenic changes, whereas the IPCC deals with all climate change regardless of its cause. Pielke concludes that the different framing has serious implications for political decisions. The FCCC concept would limit the range of policy options mostly to mitigation through the reduction of anthropogenic climate change and would force scientific research into the rather unproductive direction of reducing uncertainty by concentrating on detection and attribution of anthropogenic climate change. Pielke’s analysis makes sense and leads to a few other observations.
In the view described by Pielke, adaptation is seen as a measure to deal only with the risks emerging from a changing climate. But the present climate is already very dangerous.
Extreme weather events cause extensive damage, and many countries, particularly in the developing world, are badly prepared for the emergencies connected with such events. To adapt means to reduce the vulnerability to such extreme events. Thus adaptation is beneficial today, and it will likely become even more necessary in the future.
Pielke considers “detection” efforts mainly as evidence-gathering for supporting the institution of mitigation policy. This is correct when detection deals with global variables. However, on the regional and local level, detection is also required to assess the present risk of extreme weather and to monitor any change in that risk. To protect coastal communities, for example, it is necessary to know the distribution of storm-surge water levels and to project how they might increase in the coming 50 years.
We also need a more complete understanding of climate history. To make the most of current detection efforts, it helps to know if current weather events are beyond the scope of what might be called natural weather patterns. The instrumental record, which extends back about 100 years, is not adequate, particularly for extreme events, which tend to cluster in time. Historical information is also helpful in understanding the social and cultural dimensions of climate. To develop a workable climate policy, social and cultural insights will be needed to complement the scientific understanding of the physical dimensions.
Less power to the patent?
What I find amusing about Richard Levin’s “A Patent System for the 21st Century” (Issues, Summer 2004) is that he does not even mention the basic question of whether an advanced society needs a patent system in the first place. Although tangible property is indeed a key foundation of human freedom, intellectual property (IP) has at best a mixed record in terms of its claims of being a driving force for innovation. In fact, many recent studies provide a plethora of anti-IP arguments.
Although Levin points out some perceived deficiencies of the patent system, his prime axiom is that the system only needs some cosmetic adjustments to streamline it. In spite of the fact that a multibillion-dollar patent and IP litigation industry is undoubtedly capable of producing vocal and superficially effective pro-IP rhetoric (as in the recent music downloading “wars”), it remains an open question whether a technological society at large loses or gains from the eventual phasing out of IP. Despite fierce opposition from vested corporate interests and IP lawyers (their bread and butter, after all), the social momentum to redefine key IP issues in a more relaxed form is growing.
The very fact that the pace of science and technology development shows exponential acceleration renders it highly unlikely that long-term patenting will survive intact for much longer. Perhaps a real recipe for the 21st century should be a gradual shortening of patent terms (say, to nonrenewable terms of 5 or 10 years), with a simultaneous advancement of non-patent means of supporting and rewarding invention and innovation.
Better watershed management
I agree with Brian Richter and Sandra Postel that modified river flows require “a shift to a new mindset, one that makes the preservation of ecological health an explicit goal of water development and management” (“Saving Earth’s Rivers,” Issues, Spring 2004). Although I also agree that scientists will play a key role in defining “sustainability boundaries” for river flows, we must acknowledge that uncertainties and value judgments abound in attempting to determine what constitutes both ecological health and sustainability.
No matter how successful we are in preserving or restoring a river’s natural flow conditions, there will likely be continuing growth in demand for the river’s water, both as a product for consumption and as a service for various uses. Therefore, we must increase our efforts to address the demand side, as well as the supply side, of water usage. We should strive to improve the efficiency of our water use in all sectors: residential, industrial, agricultural, energy, transportation, and municipal. Improved efficiency in the use of water by household appliances, industrial water-using systems, and irrigation systems can go a long way toward reducing demand for water and thereby leaving more in the river.
We also must research technologies that can address water supply needs, while recognizing that there are no quick fixes. Our greatest potential for supply-side success might reside in less costly technologies for desalination. Although desalination has its own environmental issues related to water intake and brine disposal, it offers the promise of an alternative water source not only in coastal areas but also in areas with brackish groundwater, especially in arid, drought-stricken, or high-growth areas. Desalination could allow us to keep more river water in stream for natural flow and ecosystem health.
We should think in terms of integrated water resources management that accounts not only for surface water but also for groundwater, whose relationship to surface water is often poorly understood but increasingly recognized as one of close interdependence. Removing water from one of these sources can readily affect the other. Clearly, more research is needed to understand and model surface water/groundwater interactions.
Likewise, wetlands should be seen as a key part of riparian ecology when adjacent to rivers and as a key part of the local ecology even when isolated from rivers. Like rivers, healthy wetlands depend on natural changes in water levels and flows, and wetlands conservation and restoration are vital to the ecological health of many places.
Our approach to integrated water resources management should be carried out on a watershed basis, encompassing all of the complex interrelationships inherent in water management. At the Environmental Protection Agency (EPA), we are working to build stronger partnerships at federal, state, tribal, and local levels to facilitate a watershed approach. Last year, we started a “targeted watershed grants program” with nearly $15 million in grants to 20 watershed organizations. These kinds of community-driven initiatives are ideal forums for addressing all aspects of integrated water resources management, including natural river flows.
The EPA is also working with the U.S. Army Corps of Engineers, under a Memorandum of Understanding signed in 2002, to facilitate cooperation between our two agencies with respect to environmental remediation and restoration of degraded urban rivers. Just one year ago, we announced pilot projects to promote cleanup and restoration of four urban rivers. For the project on the Anacostia River in Maryland and the District of Columbia, we recently helped the Anacostia Watershed Society reintroduce native wild rice to the river’s tidal mudflats—an intensive ecosystem restoration and environmental education project involving inner-city school students as “Rice Rangers.”
Through such pilot projects, our collective mindset regarding rivers can change over time, and we can ensure broad popular support for appreciating the “full spectrum of flow conditions to sustain ecosystem health.” Ultimately, minds are changed through better understanding. The EPA will do its part to inform and educate people about the value of rivers, especially the ecological services performed by healthy, naturally flowing waters. Richter and Postel’s article moves us closer to the broader understanding that will change minds and allow us to save and sustain Earth’s rivers.
In the Fall 2003 Issues, author Michael J. Saks was incorrectly identified. He is professor of law and professor of psychology at Arizona State University.
In “Completing the Transformation of U.S. Military Forces” (Summer 2004) on page 68, it should say that the Comanche (not Cheyenne) helicopter was cancelled.