Forum – Winter 1999
Drug warriors
The Office of National Drug Control Policy is doing, or trying to do, everything Mark Kleiman calls for and more. And it is frankly disappointing that, despite our continued efforts to bring national policy in line with what science and experience have established, many in the research community continue to preach to us rather than joining voices with us.
It appears that many in academia simply cannot believe that public servants, whom they call bureaucrats, can engage in collegial efforts to adjust their policies and actions. In fact, this is happening. And the real challenge to the research community is to look carefully at what we are actually doing, abandon the comfortable role of “voice in the wilderness,” and become vocal advocates for government policies that are sound, as well as for those that are needed.
Admittedly, a system of government that separates and divides powers does not easily come to unanimity on how to address the problems presented by drugs. A drug czar cannot command compliance with his chosen policies. But a director of National Drug Control Policy can set an agenda for policy discussion and seek the full engagement of parents, researchers, and congressional representatives. That is what we are doing. The five goals of the National Drug Control Strategy, and the budget and performance measures that support and assess them, focus not only on drug use but also on the $110 billion cost to Americans each year due to crime, disease, and other social consequences of drugs.
In the short space allowed here, one example will have to serve. The National Drug Control Strategy sets explicit targets for reducing the gap between need and capacity in the public drug treatment system by 20 percent by 2002 and by 50 percent by 2007. Because drug treatment capacity is sufficient to treat only about half of the over four million people in immediate need, the strategy calls for an expansion of capacity across the board. For those who need and will seek treatment, those who can be reached, those who must be coerced, and those who will resist all of society’s efforts, the following interrelated actions are part of the long-term National Drug Control Strategy and budget:
(1) Increased block grant funding to the states to maintain progress toward targets for closing the treatment gap and to expand support for low-cost treatment and for self-help transitional and follow-up programs.
(2) Targeted funding for priority populations to increase capacity where it is most needed and require the use of best practices and concrete outcome measures, to expand outreach programs for treatment-resistant populations, and to make full use of criminal justice sanctions to get priority populations into treatment.
(3) Regulatory reform to make proven modalities more readily accessible, including the provision of adequate resources to reform regulation of methadone/LAAM treatment programs and to maintain and improve program quality.
(4) Policy reform to provide insurance coverage for substance abuse services that is on a par with coverage for other medical and surgical services.
(5) Priority research, evaluation, and dissemination to develop state-by-state estimates of drug treatment need, demand, and services resources; improve dissemination of best treatment practices, including ways to increase retention in treatment, reduce relapse, and foster progress from external coercion to internal motivation; and provide comprehensive research on the impact of parity.
This example describes action on one performance target. It is one integral part of a comprehensive, long-term, cumulative process, not merely a slow fix. To review the other 93 performance targets by which the strategy is being assessed, please visit our Web site at www.whitehousedrugpolicy.gov or call our clearinghouse at 1-800-666-3332. See what we are doing; determine where you can help.
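To make the scale of the treatment-gap target concrete, the following is a minimal arithmetic sketch. It assumes round figures of roughly 4 million people in need and capacity for about half of them; these are illustrative assumptions for the sake of example, not official strategy estimates.

```python
# Illustrative arithmetic only; the rounded figures below are assumptions,
# not official National Drug Control Strategy data.
need = 4_000_000        # people in immediate need of treatment (assumed)
capacity = 2_000_000    # current public treatment capacity, about half of need (assumed)
gap = need - capacity   # the "treatment gap" the strategy targets

for year, reduction in [(2002, 0.20), (2007, 0.50)]:
    remaining_gap = gap * (1 - reduction)
    added_capacity = gap - remaining_gap
    print(f"{year}: close the gap by {reduction:.0%} -> "
          f"roughly {added_capacity:,.0f} additional treatment slots needed")
```

Under these assumed numbers, the 2002 target implies on the order of 400,000 additional slots and the 2007 target about one million.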
I completely agree with the views expressed by Steven Belenko and Jordon Peugh in “Fighting Crime by Treating Substance Abuse” (Issues, Fall 1998). They correctly conclude that because of the high correlation between substance abuse and criminal behavior, prison inmates should be receiving more treatment, counseling, educational and vocational training, medical and mental health care, and HIV education and testing. They accurately state that these measures will enhance public safety and generate significant savings for taxpayers, because they will end the cycle of addiction and recidivism and reduce the future costs of arrest, prosecution, incarceration, health care, property damage, and lost wages.
Our experience in Brooklyn, New York, the state’s largest county, supports these conclusions. In Brooklyn we handle more than 3,000 felony drug prosecutions each year. Although some of these cases involve defendants who are engaged in major drug dealing or ongoing narcotics trafficking, the overwhelming majority of cases involve second offenders who are sent to prison under mandatory sentencing laws for selling or possessing small quantities of drugs. A certain portion of these defendants are drug addicts who sell drugs to support their habit. They and the public are better served by placing them in drug treatment programs than by sending them to prison.
For the past eight years the Brooklyn District Attorney’s Office has operated a drug treatment alternative to prison program called DTAP. It gives nonviolent, second felony drug offenders who face mandatory prison sentences the option of entering drug treatment for a period of 15 months to two years instead of serving a comparable sentence in state prison. If the offender completes treatment, the criminal charges are dismissed; if the offender absconds, he or she is promptly returned to court by an enforcement team for prosecution and sentence.
The results of our program are impressive. Our one-year retention rate of 66 percent is higher than published statistics for long-term residential treatment, and our 11 percent recidivism rate for DTAP graduates is less than half the recidivism rate (26 percent) for offenders who fail the program or do not enter the program. An analysis of the savings from correction costs, health care costs, public assistance costs, and recidivism costs, when combined with the tax revenues generated by our graduates, has produced an estimated $13.3 million saving from the program’s 372 graduates to date. If such a treatment program could be effected on a larger scale, the savings to taxpayers would be in the hundreds of millions of dollars.
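A rough check of these savings figures can be sketched as follows. Only the $13.3 million total and the 372 graduates come from this letter; the scaled-up graduate count is a hypothetical assumption used purely to illustrate the "hundreds of millions" claim.

```python
# Back-of-the-envelope check of the DTAP savings claim.
total_savings = 13_300_000   # estimated savings to date (from the letter)
graduates = 372              # DTAP graduates to date (from the letter)

per_graduate = total_savings / graduates
print(f"Savings per graduate: about ${per_graduate:,.0f}")   # roughly $35,800

scaled_graduates = 10_000    # hypothetical larger-scale program (assumption)
scaled_savings = per_graduate * scaled_graduates
print(f"At {scaled_graduates:,} graduates: about ${scaled_savings / 1e6:,.0f} million")
```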
Belenko and Peugh are to be commended for saying publicly what many of us in the law enforcement community already know: Most crime is related to substance abuse, and fighting substance abuse is the only rational way to reduce crime.
Steven Belenko and Jordon Peugh make a compelling case for the benefits of a treatment-oriented strategy as a means of crime control. Unfortunately, the primary problem involved in implementing such a strategy now is a political one, not a lack of technical knowledge.
Despite increasing support for drug treatment on Capitol Hill in recent years, the administration and members of Congress want to have it both ways when it comes to drug policy. Although political rhetoric has supported the concept of treatment, only modest funding increases have been adopted. The primary emphasis of national drug policy continues to be a focus on the “back end” strategies of law enforcement and incarceration.
We see this in the areas of both appropriations and policy. Two-thirds of federal antidrug funding continues to be devoted to police and prisons and just one-third to prevention and treatment efforts, a division that has held steady through Democratic and Republican administrations. The massive $30 billion federal crime bill of 1994 was similarly weighted toward arrest and incarceration. Sentencing policy at both the federal and state levels continues to incorporate broad use of mandatory sentencing for drug offenders, resulting in the imprisonment of thousands of low-level drug users and sellers. In addition to mandating what often results in unjust sentencing practices, these laws virtually guarantee that a treatment-oriented approach will continue to take a back seat to an ever-escalating prison population that diverts public funds toward more penal institutions.
The real tragedy of national drug policies is that they have exacerbated race and class divisions in the nation. Substance abuse clearly cuts across all demographic lines, but our societal response to such problems is very dependent on an individual’s resources. Drug treatment is readily available for those with financial resources or insurance coverage. Low-income drug abuse, though, has been the primary target of the nation’s “war on drugs,” resulting in the disproportionate confinement of African Americans and Hispanics. Rather than investing in treatment resources in these communities, national policy has only exacerbated the disparities created by differences in access to private treatment services.
Developments in recent years suggest that there is no better time than the present to consider a shift in priorities. Crime has been declining for six years, the benefits of community policing are increasingly recognized, and public support for drug treatment is widespread. Yet remarkably, federal lawmakers opted to add nearly a billion additional dollars for radar surveillance, patrol boats, and other interdiction hardware as part of the “emergency supplemental” final budget package adopted by Congress in October 1998. The nation’s drug abuse problem would be far better served if politicians paid attention to the real emergency of closing the treatment gap and implementing Belenko’s and Peugh’s recommendations.
It is impossible to understand the arguments for and against a controversial public policy such as regulating drug use without paying close attention to the language in which the discourse is couched. The routine use of terms such as “addiction,” “hard drug,” and “drug treatment” creates a semantic aura lulling writer and reader alike into believing that a person’s decision to use a chemical and the government’s decision to interfere with that choice are matters of medicine and public health. They are not; they are moral, religious, legal, and political matters.
In “Drugs and Drug Policy: The Case for a Slow Fix” (Issues, Fall 1998), Mark A. R. Kleiman’s language, couched in drug policy jargon, is the language of the political demagogue. Without stating the constituency whose interests he ostensibly seeks to protect, he simply claims to be on the side of the angels; his goal, he declares, is “to minimize the aggregate societal damage associated with drug use.” That sounds irrefutably good until one asks some simple questions.
What drug is being referred to? The drug that helped make Mark McGwire America’s national hero? The drug to which Franklin Delano Roosevelt was addicted? What “societal damage” is meant? The damage done to Mark McGwire, President Roosevelt, or the “juveniles enticed into illicit activity”? Do drugs “entice”? Do juveniles have no free will and responsibility for their behavior?
When Kleiman uses the word “drug,” he makes it seem as if he were referring to a scientifically defined or medically identifiable substance. In fact, he is referring to a legally, politically, and/or socially stigmatized substance whose use he believes the government ought to discourage or prohibit. When he uses the term “societal damage” he pretends that he and we know, or ought to know, what he has in mind. Yet what is societal damage in our heterogeneous society? He has his targets. Everyone does. My favorites are programs sponsored and paid for by the government that support tobacco farmers and experts on drug policy.
The fact that drug use has medical consequences for the user and implications for public health is beside the point. We have so expanded our idea of what counts as a medical (public health) matter that every human activity, from boxing to eating, exercising, gambling, sex, and so forth, may be said to fall into that category and thus justify government regulation. This medicalization of personal conduct and of the coercive interference of the state with such conduct justifies the ceaseless expansion of the Therapeutic State, serves the existential and economic interests of the policymakers, and injures the interests of the regulated “beneficiaries.”
Kleiman assumes that the drug policy debate is between “drug warriors” and “drug legalizers.” However, warriors and legalizers alike are drug prohibitionists. The two parties differ only in their methods: The former want to punish their chosen scapegoats and call it “punishment”; the latter want to punish theirs and call it “treatment.” The true adversaries of the prohibitionists (regulators) are the abolitionists (free marketers). The former quibble among themselves about how to meddle in people’s lives and have the taxpayer pay them for it. The latter believe that the government ought to respect people’s right to put into their bodies whatever they want and reap the benefits or suffer the harms, as the case may be.
Nowhere has the impact of drug abuse been more pervasive than in the criminal justice system, as documented succinctly by Steven Belenko and Jordon Peugh. They call for a policy shift in managing nonviolent substance-abusing offenders that would focus on three fronts: (1) revision of sentencing policies for nonviolent offenders to reduce mandatory minimum sentences for drug offenses, (2) diversion of nonviolent drug offenders to community treatment programs, and (3) expansion of substance abuse treatment capacity in correctional settings. Not only does this policy shift make good economic sense, given the several studies reviewed, but it is fully supported by the voting public and by criminal justice personnel. For example, according to a 1996 report by the Center for Substance Abuse Treatment, 70 percent of adults who know someone with a substance abuse problem believe that supervised treatment would be more beneficial than imprisonment.
The authors describe a growing gap between the need for correctional treatment and the treatment services available in state and federal prisons. A new source of funding for prison treatment services is the Residential Substance Abuse Treatment (RSAT) Formula Grant Program offered by the U.S. Department of Justice. It provides $270 million during 1996-2000 to develop services in state and local correctional and detention facilities. Funds for correctional treatment are also available from the Byrne Formula Grant Program. Despite these initiatives, only 1 to 5 percent of state and federal prison budgets is spent on substance abuse treatment.
There is a strong and consistent research base that supports the effectiveness of correctional substance abuse treatment. Findings from the 1997 final report of the National Treatment Improvement Evaluation Study are also noteworthy. This national study of treatment effectiveness found that correctional treatment has the greatest impact on criminal behavior (for example, an 81 percent reduction in selling drugs) and arrest (a 66 percent reduction in drug possession arrests and a 76 percent reduction in all arrests) of all types of treatment settings and modalities examined. In a 1998 correctional treatment outcome study by the Federal Bureau of Prisons, treated inmates were 73 percent less likely to be rearrested and 44 percent less likely to use drugs and alcohol during a six-month followup period, in comparison to a sample of untreated inmates.
The positive effects of correctional treatment are augmented by the inclusion of transitional programs. In a recent study, participation in treatment services after release from prison was found to be the most important factor predicting arrest or drug use (S. S. Martin, C. A. Butzin, and J. A. Inciardi, Journal of Psychoactive Drugs, vol. 27, no. 1, 1995, pp. 109-116). One example of innovative post-release transition services is the Opportunity to Succeed (OPTS) program, developed by the National Center on Addiction and Substance Abuse. OPTS program sites provide an intensive blend of supervision, substance abuse treatment, case management, and social services that begins upon release from institutional treatment programs and continues for up to two years.
Despite the importance of transitional services, funds from federal block grants that support correctional treatment (for example, the RSAT program) are restricted to institutional approaches. As a result, treatment services are abruptly terminated for many inmates who are released from prison, leading to a greater likelihood of relapse or rearrest. Federal grant programs are needed that recognize the vital importance of transitional treatment services and leverage state correctional agencies to work with state forensic and social service agencies to engage ex-offenders in community treatment services.
Steven Belenko and Jordon Peugh provide compelling arguments that treating the drug-involved offender is a way to reduce crime and substance abuse in society. Although they fail to mention that continuing to pursue incapacitation-based correctional policies will ensure that state and local correctional budgets skyrocket with little hope of decreasing crime, the authors do highlight the crime reduction benefits that can be achieved by providing drug treatment services to incarcerated offenders. Scholars and practitioners are beginning to understand that treatment is not merely rehabilitation but encompasses sound crime control policies.
Treatment as a crime control strategy is a major shift in policy. Historically, treatment has been considered a rehabilitation strategy. The current movement to use drug treatment programs to prevent and control crime emphasizes societal benefits instead of changes in individual offenders’ behavior. The focus on societal benefits enlarges the expected goals of treatment programs to include reducing criminal behavior; changing the rate of drug offending, the pattern of drug consumption, and the nature of drug trafficking; and reducing costs. It also fosters the comparison of drug treatment to other viable crime control interventions such as domestic law enforcement, interdiction, incapacitation, and prevention. Drug treatment results are also comparable to results from the treatment of other chronic diseases such as diabetes and heart disease.
Belenko and Peugh underemphasize the importance of monitoring and supervision as critical components of treatment oriented toward reducing crime. The leverage of the criminal justice system can be used to ensure that offenders comply with treatment and court-ordered conditions of release as part of the overall crime-reduction strategy. The coupling of treatment with supervision increases the probability that drug treatment will produce the results highlighted by Belenko and Peugh.
Global science
I welcome Bruce Alberts’s timely reminder (“Toward a Global Science,” Issues, Summer 1998) that science is a global enterprise with global responsibilities and is not just about increasing wealth at the national level. He makes some powerful claims that merit close examination.
The relation between science and democracy, Alberts’s first claim, is a complex one. It is plausible that the growth of science has helped to spread democracy, as Alberts suggests: Science needs openness, and democratic societies tend to be relatively open. The story of Lysenkoism in the Soviet Union shows what can happen to science in a closed society where political orthodoxy determines what is acceptable and what is not. But the free exchange of ideas on which science depends can be threatened by many forms of political correctness, and the scientific community must be alert to this. Indeed, major progress in science often requires the challenging of well established paradigms and facing the scepticism, if not hostility, of those whose work has been grounded in these paradigms. Moreover, science itself is not democratic: Scientific truth is eventually settled by observation and experiment, not by counting votes.
Threats to openness can come also from the pressures of commercial secrecy and the increasing linkage between even fundamental research and wealth creation, as controversies over patenting the human genome illustrate. These are real and difficult issues. All publicly funded science in democracies is supported by taxpayers, whose interest naturally lies in the benefit that might accrue to them. It will need much work to persuade the public that it is in their long-term interest, or rather in the interest of their children and grandchildren, to share the world’s resources, both physical and intellectual. This applies at the national and global levels. The appeal to self-interest must be an appeal to enlightened self-interest.
Alberts rightly stresses the role of science and technology in addressing global problems such as coping with the impact of the growing world population. This is an immense, many-sided challenge. At the level of academies, I share Alberts’s appreciation of the work of the world’s academies in combining in 1993 to highlight population issues. I would highlight, too, the significance of the joint Royal Society/National Academy of Sciences 1997 statement on sustainable consumption. The Royal Society strongly supports the current work of the InterAcademy Panel on transition to sustainability and looks forward to the May 2000 Tokyo conference, which should make a practical contribution in this area.
It is surely right, as Alberts argues, that we are only beginning to recognize the potential of information technology in developing science as a global enterprise. Facilitating access to the world literature is certainly relevant to this, but the primary requirement is to build up the indigenous scientific capability of the developing countries. This is a long-term task that will require a healthy educational infrastructure in each country and direct access for its researchers to the scientists of the developed world as well as to their written output. The Royal Society maintains an extensive programme of two-way exchanges with many countries as a contribution to this capacity building.
As a recent survey has demonstrated, virtually all academies of science seek to advise governments about policy for science and about the scientific aspects of public policy. This is a difficult undertaking, but one where the authority and independence of academies give them a special role. It would be excellent if, as Alberts suggests, the InterAcademy Panel could become recognized as playing an analogous role at the international level.
Infrastructure vulnerability
I applaud George Smith’s (“An Electronic Pearl Harbor: Not Likely,” Issues, Fall 1998) conclusion that “…computer security concerns in our increasingly technological world will be of primary concern well into the foreseeable future.” However, I disagree with his assessments that downplay the seriousness of the threat. It may well be true that hoaxes about viruses propagate more successfully than the real thing and that many joy-riding hackers do not intend to do real harm. It certainly is true that the cleared insider is a serious threat. But these points do not mean that threats from external attack are not serious, or that it is inappropriate for government to be very concerned.
From January to mid-November 1998, the National Security Agency (NSA) recorded approximately 3,855 incidents of intrusion attempts (not simply probes) against the Defense Department’s unclassified computer systems and networks. Of these, over a hundred obtained root-level access, and several led to the denial of some kinds of service. These figures, of course, reflect only what is reported to NSA, and the actual number of intrusions probably is considerably higher.
The concern, in a networked environment, is that a risk accepted by one user becomes a risk shared by all. One is no more secure than one’s weakest node. We are working hard to improve our network security posture. But intrusion tools are proliferating and becoming easier to use, even as our own dependence on these networks is growing.
Smith dismisses the concerns we had over the intrusions into our networks last spring as a product of “the Pentagon’s short institutional memory.” In fact, many in responsible positions during this incident were well aware of the 1994 Rome Labs intrusions. However, a major difference between 1994 and 1998 was that the intrusions last spring occurred while we were preparing for possible hostilities in Southwest Asia. Since we could not quickly distinguish the intruders’ motives or identities, we took this incident very seriously. In December 1941 there was no mistaking who our attackers were. In the cyber world, penetrations for fun, profit, or intelligence gain may be indistinguishable at first from intrusions bent on doing serious harm.
The fact that the perpetrators turned out to be a couple of teenagers only reinforces the fact that such “joyrides in cyberspace” are not cost-free events. In addition to the laws that were broken, the resources expended by the Departments of Defense and Justice, and the impact on the private sector service providers who helped run the intruders to ground, this case compounded an already tense international situation. Future incidents may not work out so benignly.
Smith downplayed the importance of the infrastructure vulnerabilities simulated during the Eligible Receiver exercise. He should not. The risks to our critical national infrastructures were described last year by the President’s Commission on Critical Infrastructure Protection, and the importance of protecting them was reaffirmed in May by President Clinton in his Decision Directive 63. This is a complicated area where solutions require an energetic partnership between the public and private sectors. We are working to forge such alliances.
In this context, I take particular exception to Smith’s insinuation that those who express concern about information warfare do so mainly because they will benefit from the resulting government spending. For several years, a wide variety of sources in and out of government (private industry advisory councils, think tanks, and academia, as well as entities such as the Defense Science Board) consistently have said we must do more in the area of information assurance and computer security. It is hardly surprising that some of the proponents of this research should work for companies that do business with the Defense Department. To impugn the integrity of their analysis on the basis of these associations does a disservice to those whose judgment and integrity I have come to value deeply.
The United States was fortunate in the 1920s and 1930s to have had a foresighted group of military planners, the congressional leadership to support them, and an industrial base to bring into being the weapons and doctrine that survived Pearl Harbor and prevailed in World War II. My goal is to ensure that we are similarly prepared for the future, including the very real possibility of cyber attack.
As those familiar with George Smith’s work would expect, his article is timely and provocative. Although we might disagree with some specifics of Smith’s characterization of the past, the focus of the National Infrastructure Protection Center (NIPC) is on the future. Based on extensive study of the future global information environment, the leadership of this country believes that the risk of a serious disruption of our national security and economy by hostile sources will grow in the absence of concerted national action.
The U.S. intelligence community, including the director of the Central Intelligence Agency, has noted the growing information warfare (IW) capabilities of potential adversaries. Several foreign governments have operational offensive IW programs, and others are developing the technical capabilities needed for one. Some governments are targeting the U.S. civilian information infrastructure, not just our deployed military forces. Terrorists and other transnational groups also pose a potential threat. However, a potential adversary would probably consider the ability of the United States to anticipate, deter, and respond to its attacks. By reducing the vulnerability of the national information infrastructure, we raise the bar for those who might consider an attack and reduce the national consequences if one occurs.
The Presidential Commission report cited by Smith discusses in depth the vulnerability of our national infrastructures across the board, including the telecommunications, banking and finance, energy, and emergency service sectors. All of these sectors in turn depend on the rapidly growing information infrastructure. However, technologies to help protect communications, their content, and the customers they serve have begun receiving attention only recently as amateur hackers, disaffected employees, unscrupulous competitors, and others have escalated attacks on information systems. These types of relatively unstructured, unsophisticated groups and individuals have already demonstrated the vulnerability of some of our most sensitive and critical systems. Can there really be any doubt that we need to take action now to prevent the much more serious and growing threat posed by more malicious, sophisticated, and well-funded adversaries, such as terrorists and foreign governments? Given the demonstrated vulnerabilities and the clear threats we already know about, it would be irresponsible for the U.S. government to fail to act now.
Presidential Decision Directive 63, the key outgrowth of the commission’s report, provides national strategic direction for redressing the vulnerabilities in our information and other national infrastructures. The Critical Infrastructure Assurance Office is developing a national plan and organizing joint government-industry groups to begin to build sector strategies to address these vulnerabilities. At the NIPC, we are taking steps to design and implement a national indications and warning system to detect, assess, and warn of attacks on critical private sector and government systems. This involves gathering information from all available sources, analyzing it, and sharing it with all affected entities, public or private. We are also designing a plan to coordinate the activities of all agencies and private sector entities that will be involved in responding to an attack on our infrastructures. These efforts to improve our ability to prevent and respond are critical if we are to be prepared to face the most serious challenges of the Information Age.
Still, we cannot substantially reduce the vulnerability of our national information infrastructure without close collaboration between government and the private sector. As Smith states, neither government nor industry has succeeded so far in sharing enough of the information needed to help reduce vulnerabilities, warn of attacks or intrusions, and facilitate reconstitution. We have made this an important priority on our agenda. Public discussion of these issues through media such as Smith’s article provides an important opportunity to foster the understanding that we are all in this together and that we can reduce our vulnerabilities only through effective partnership. In the absence of partnership, the only clear outcome is the greater likelihood of the type of devastating information attack that we all seek to avoid.
As George Smith points out, “thousands of destructive computer viruses have been written for the PC.” As he would apparently agree, some of these have been used with malicious intent to penetrate governmental and private information systems and networks, and they have required the expenditure of time, money, and effort to eradicate them and their effects. To that extent, it is not improper to characterize these network attacks and their perpetrators as components of a threat that is perceived and responded to by information security professionals in all sectors of our information-rich and -dependent society.
Where Smith appears to quibble is about the magnitude of this threat, as reflected in the title of his article. An “information Pearl Harbor” is a phrase that has been part of the jargon of Information Warfare (IW) cognoscenti for several years now. I have observed that as this quite small group has steadily labored to bring clarity and precision into a field that was totally undefined less than five years ago, this particular characterization has fallen out of use. I believe that this is because Pearl Harbor evokes ambiguous images as far as IW is concerned. To suggest that the nation should consider itself at risk of a physical attack along the lines of the sudden, explosive, and terribly destructive effects of December 7, 1941, is to miss the point. On the other hand, I do see parallels to Pearl Harbor in the IW threat dimension that are not often cited.
Specifically, I suggest that for the vast majority of individual U.S. citizens and much of Congress, what really happened at Pearl Harbor was that a threat that had been sketchy, abstract, and distant became personal and immediate. Then, as now, there were those who saw the growing danger and strove to be heard and to influence policy and priorities. However, it took the actual attack to galvanize the nation. I suggest that Pearl Harbor’s real effects were felt in the areas of policy, law, and national commitment to respond to a recognizable threat.
So it will be, in my judgment, with the information Pearl Harbor that awaits us. Smith and I would agree that without a broadly based public commitment to cooperate, the efforts of government alone will not produce the kind of information protection regime required to detect and respond appropriately to the whole range of Information Age threats. Smith suggests that “the private sector will not disclose much information about . . . potential vulnerabilities.” That may be true today. To suggest that it will forever be so is to fail to read the lessons of history and to sell short the American people’s resolve to pull together in times of recognized need or danger.
In “Critical Infrastructure: Interlinked and Vulnerable” (Issues, Fall 1998), C. Paul Robinson, Joan B. Woodward, and Samuel G. Varnado are exactly right in their main premises: A unified global economy, complex international organizations, a growing worldwide information grid, and countless other interlocking systems now form the very stuff that supports modern civilization.
The vulnerability of this maze of systems is highlighted by the recent global financial crisis. Huge sums of capital moving around the world at the speed of light put great pressure on Asian governments, which could not respond effectively because of limitations in local business practices and public attitudes. The resulting loss of confidence in financial markets forced institutional investors to withdraw investment capital, repeating the vicious cycle and spreading the crisis from one economy to the next. The same vulnerability lies behind the computer-related year 2000 (Y2K) problem that looms ahead.
The authors are also right on target with their recommendations for assessing the surety of these systems, using simulations to explore failure modes and asking higher-level authorities to monitor and manage these efforts. They do not say much about the pressing need to redesign these systems to make them less vulnerable, however.
It is one thing to contain such failures and quite another to create robust systems that are less likely to fail in the first place. The technological revolution that is driving the emergence of this complex infrastructure is so vast, so fraught with implications for restructuring today’s social order, that it seems destined to form a world of almost unfathomable complexity, beyond anything now known. In a few short decades, we will have about 10 billion people sharing this small planet, most of them educated, living and working in complex modern societies like ours, and all interacting through various public utilities, transportation modes, business arrangements, information networks, political systems, and other facets of a common infrastructure. Now is the time to start thinking about the design and operation of this incredibly dense, tightly connected, fragile world. The most prominent design feature, in my view, is the need to give these systems the decentralized self-organizing qualities we find in nature. That is the key feature that gives natural systems their unique ability to withstand disasters and bounce back renewed.
Let me offer an example of where we may go. The Federal Aviation Administration (FAA), I am told, is experimenting with “self-directed” flights in which aircraft use sophisticated navigational and surveillance systems to make their way across a crowded airspace. This may seem like a prescription for disaster, but the FAA finds it to be far safer and more efficient. The idea here is to replace dependence on the cumbersome guidance of a central authority (air traffic controllers) with a self-directed system of guidance.
True, this type of flight control remains dependent on navigational and surveillance systems, which reminds us that some central systems are always needed. But even these central levels could be redesigned to avoid massive failures. One obvious solution is to include redundant components for critical functions, such as a network of navigational satellites that is able to withstand the failure of a few satellites.
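As a minimal sketch of why such redundancy matters, the following example computes the probability that a constellation keeps working when it can tolerate the loss of a few satellites. The satellite count and per-satellite availability are assumed numbers chosen only for illustration; they are not FAA or GPS figures.

```python
import math

def constellation_availability(n: int, k: int, p: float) -> float:
    """Probability that at least k of n independent satellites, each
    available with probability p, are operating at a given moment."""
    return sum(math.comb(n, m) * (p ** m) * ((1 - p) ** (n - m))
               for m in range(k, n + 1))

p = 0.95  # assumed availability of a single satellite
print(f"All 24 of 24 required: {constellation_availability(24, 24, p):.3f}")  # about 0.29
print(f"Any 21 of 24 suffice:  {constellation_availability(24, 21, p):.3f}")  # about 0.97
```

Under these assumptions, a system that demands every satellite fails most of the time, while one designed to tolerate the loss of a few is available about 97 percent of the time.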
This article is a great starting point for addressing a huge problem. But to safely manage the complex, fragile world now evolving, we must now focus on designing systems that can withstand failures.
C. Paul Robinson, Joan B. Woodward, and Samuel G. Varnado showed how proponents often debate “for” national infrastructure protection. George C. Smith showed how skeptics often debate “against” such protection. Because Issues printed these articles side by side, the casual reader might treat it as a dispute over whether to protect the national infrastructures at all. In reality, Robinson et al. played up a minor “cyber threat” to help justify more protection while Smith focused on the chronic overemphasis on cyber threats.
Robinson et al. described spectacular infrastructure failures, one caused by an earthquake and another sparked by a sagging power line, and postulated that a terrorist could trigger similar failures via a remote computer. They cited then-CIA Director John Deutch, who in 1997 told Congress that “information warfare” ranked second only to terrorists wielding nuclear, biological, or chemical weapons. Therefore, Robinson et al. concluded that we must go to extraordinary lengths to protect national infrastructures from both physical and cyber threats.
Smith asserted that the insane complexity of national infrastructures prevents terrorists from triggering spectacular failures via remote computer. Those who claim otherwise rely on exaggeration and fear, not evidence, to bolster their cries of alarm. Smith led us to ask obvious questions: If terrorists possess deadly cyber weapons as claimed, why don’t they use them? Why don’t newspapers cover acts of cyber terrorism comparable to the Tokyo nerve gas attack or the Oklahoma City bombing? Smith concluded that we don’t need to go to extraordinary lengths to protect national infrastructures from electronic bogeymen.
In the final analysis, Robinson et al. showed that we must do more to protect national infrastructures from acts of nature and design errors (such as earthquakes and the Y2K problem). We also must protect national infrastructures from genuine terrorist threats. More protection will require more resources-but as Smith explained, we shouldn’t try to scare the money out of people with Halloween stories about computer nerds.
George Smith’s article discounting the notion of information warfare (IW) makes two important points. First, he correctly asserts that proof of the threat of IW remains elusive. Second, he states that all of us should make a greater effort in the area of computer security. He raises a third issue, that an objective, detached assessment of the IW threat should be undertaken.
Smith should do some more reading, including one of his own recommendations: Critical Foundations: Protecting America’s Infrastructures, the 1997 report of the President’s Commission on Critical Infrastructure Protection. He would find there a thoughtful, methodical perspective that, although not persuasive to him, has the administration and the U.S. Congress, among others, concerned about the serious threat of IW. The 1996 RAND report Strategic Information Warfare: A New Face of War should also be included in his reading list. Here he will find the conclusion that “Key national military strategy assumptions are obsolescent and inadequate for confronting the threat posed by strategic IW.” Added too should be Cliff Stoll’s earlier and more relevant The Cuckoo’s Egg (Simon and Schuster, 1989). In addition, Alvin and Heidi Toffler’s War and Anti-War (Little, Brown, 1993) is a marvelous read. It describes how high technology was used in the Persian Gulf War and notes that “the promise of the twenty-first century will swiftly evaporate if we continue using the intellectual weapons of yesterday.”
Out of the genre but equally important is Irving Janis’ Victims of Groupthink (Houghton Mifflin, 1972). In this classic work, psychologist Janis uses several historical examples to illustrate various attitudes that lead to poor decisionmaking. One is the feeling of invulnerability. History records that although they were warned of an imminent attack on the forces under their command, Admiral Husband E. Kimmel and Lieutenant General Walter C. Short dismissed the information as foolish and did nothing to defend against what they regarded as Chicken Little-like concerns. On the morning of December 7, 1941, at Pearl Harbor, Kimmel and Short witnessed with their own eyes what they had arrogantly believed could not happen.
Benefits of information technology
When such careful analyses of data as those made by Robert H. McGuckin and Kevin J. Stiroh (“Computers Can Accelerate Productivity Growth”) and by Stephen S. Roach (“No Productivity Boom for Workers,” Issues, Summer 1998) fly in the face of market logic and observation, one must be wary of both the data and the conclusions. Market logic suggests that managers would not continue to invest in computers over a 30-year period if they did not believe they were better off by doing so than not. A 1994 National Research Council study in which I participated (Information Technology in the Service Society) indicated that managers invest in information technology (IT) not just to improve productivity but to enhance other aspects of performance as well, such as quality, flexibility, risk reduction, market share, and capacity to perform some functions that are impossible without IT.
If all or most competitors in a market invest for similar reasons, each will be better off than if it does not invest, but the measured aggregate “productivity” of their investments may actually fall unless the companies can raise margins or their total market’s size grows disproportionately. The economics of the service industries have forced their large IT users to pass margins through to their customers, who in turn capture the benefits (often unmeasured in productivity terms) rather than the producers. At a national level, many industries simply could not exist on their present scales without IT. This is certainly true of the airlines, entertainment, banking and financial services, aerospace, telecommunications, and software industries, not to mention many manufacturers of complex chemicals, pharmaceuticals, instruments, and hardware products. The alternative cost of not having these industries operating at IT-permitted scales is trillions of dollars per year. But the value of the outputs they could not produce without IT does not appear as an output credit in national productivity statistics on IT use. Such statistics simply do not (and perhaps cannot) capture such alternative costs.
Although they acknowledge many limits of the data (especially in services), both articles focus more on the labor and capital substitution aspects of IT within industries than on its output or value generation for customers or the economy. In a series of articles entitled “Is Information Systems Spending Productive? New Evidence and New Results,” beginning with the Proceedings of the 14th International Conference on Information Systems in 1992, E. Brynjolfsson and L. Hitt showed how high the returns are on IT investments if analysts use only conservative surrogates for the market share losses IT investors avoid. If one adds the quality improvement, totally new industries, and risk avoidance IT enables, the benefits are overwhelming. What would the medical care industry be without the diagnostics and procedures IT permits? Where would the communications and entertainment industries be without the satellite programming and dissemination capacities IT enables? And what would the many-trillion-dollars-per-day international finance industry’s capabilities or the airline industry’s safety record be without IT? The alternative cost of such losses for individual companies and for the entire country is where the true productivity benefits of IT lie. A more appropriate measure of productivity benefits would be “the sum of outputs society could not have without IT” divided by the marginal IT inputs needed to achieve these results. It will be a long time (if ever) before national accounts permit this, but a coordinated set of industry-by-industry studies could offer very useful insights about the true contributions of IT use in the interim.
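For readers who prefer notation, the measure described verbally at the end of the preceding paragraph can be written as a simple ratio; the symbols below are introduced here purely for illustration and do not appear in the original letter.

```latex
% Illustrative notation only; the symbols are not part of the original letter.
\[
  P_{\mathrm{IT}}
  \;=\;
  \frac{\displaystyle\sum_{j} O_{j}}{\Delta I_{\mathrm{IT}}},
\]
% where $O_{j}$ is the value of each output $j$ that society could not
% produce without IT, and $\Delta I_{\mathrm{IT}}$ is the marginal IT
% input needed to achieve those outputs.
```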
R&D partnerships
I believe that continued technological innovation is key to our ability to compete successfully in a global marketplace. In stating that “collaborative R&D has made and will continue to make important contributions to the technological and economic well-being of U.S. citizens,” David C. Mowery (“Collaborative R&D: How Effective Is It?” Issues, Fall 1998) has highlighted a key element in our approach to nurturing innovation. The challenge for policymakers is to encourage collaboration in a manner that produces the broadest set of social and economic benefits.
Our experience with R&D collaborations between industry and federal agencies has shown that giving the agencies and their institutions broad flexibility to negotiate the terms of a partnership results in the most fruitful interactions. When agencies are given this flexibility, the parties expedite both the negotiation and the management of the partnerships. Because all collaborations are unique, the ability to craft a particular agreement based on the specific needs of the parties involved, rather than requiring a rigidly standardized approach, is essential for success.
Federally supported institutions have devised creative ways of fostering R&D collaborations and technology transfer that benefit the economy while fulfilling agency missions. Outstanding examples include entrepreneurial leave-of-absence programs, which have provided opportunities for employees to commercialize technologies they developed within their home institutions. Products developed from these technologies are then available to support the agency activities that originally motivated the research.
Many federal laboratories also give businesses, especially small businesses, access to sophisticated capabilities and facilities that they simply could not afford on their own. Some institutions allow their people to work as professional consultants on their private time, providing yet another way for their technological expertise to be used by the private sector. Others have programs that allow small businesses free limited-time access to laboratories’ scientists and engineers. For many small businesses, this provides an opportunity to receive help on an as-needed basis without the requirement, say, of negotiating a Cooperative Research and Development Agreement.
This timely access to facilities and human resources can be key in converting innovative ideas to commercial products.
Additionally, locating businesses close to federal researchers is the best way to promote the person-to-person collaboration that distinguishes truly successful partnerships. An example of this approach is the Sandia Science and Technology Park, which is being developed jointly by the city of Albuquerque, New Mexico, and Sandia National Laboratories.
Finally, a wide range of collaborative research investments can be stimulated in a flexible market-oriented manner by improved tax incentives for R&D. Several members of Congress, including myself, have proposed legislation that would improve these incentives by making the research tax credit permanent and applicable to the many types of business arrangements under which R&D is now being performed, including small businesses, consortia, and partnerships.
The nation’s future economic strength will be fueled by the infusion of new technologies, which will provide entirely new business opportunities and increases in productivity. By seeking clever new ways to use all of our nation’s scientific and technological resources, we will ensure our continued prosperity.
I enthusiastically support David C. Mowery’s call for a study of R&D partnerships. Over the past two decades, the United States has spent hundreds of millions of dollars on such partnerships without a critical study of lessons and results. There are many success stories in Europe, Taiwan, Korea, Singapore, and Japan that can be used to improve U.S. programs. I suggest academic studies including economists, lawyers, political scientists, sociologists, and technologists. It is not a question of whether these partnerships will continue but of how to make them as efficient as possible.
Two other comments. First, the semiconductor roadmap mentioned by Mowery has been far more successful in setting R&D agendas than I believed possible when we started the effort in 1992. The effort is not inexpensive; SEMATECH spends hundreds of thousands of dollars each year to keep the document up to date. The costs are for travel for some of the hundreds of engineers and scientists involved, for editing the document, and for paying consultants. The document is now in electronic form and is available on the SEMATECH home page at www.sematech.org. An unexpected effect has been the acceleration of technology generations from about every three years to less than two and one-half!
Second, I agree with Mowery that longer-range R&D should be done in universities with funding from both industry and the government. The semiconductor industry has started an innovative program funding research in design and interconnection at two university sites with several schools involved. Although it is too early to assess the results of this program, it should be closely watched and the resulting lessons applied to future partnerships. A key question in any of the university partnerships is intellectual property. Here an excellent model exists at Mowery’s own institution in the development of SPICE, a widely used simulation program. There were no patents or secrets associated with this research, and this helped the program gain wide industry acceptance.
David C. Mowery has raised a timely question concerning the value of collaborative R&D. I support his conclusion that a comprehensive effort is needed to collect more data on the results of such ventures. He has also done a nice job of reviewing the studies and legislation of the 1970s and 1980s that led to many new programs that bring together industry, universities, and government to collaborate on R&D. It is an impressive array of actions that have paid off handsomely in the 1990s, particularly in regard to our nation’s economic competitiveness.
Competitiveness depends on ideas that are transformed into something of real value through the long and risky process of innovation. An article in the Industrial Research Institute’s (IRI’s) journal last year showed that 3,000 raw ideas are needed for one substantially new, commercially successful industrial product. Universities and federal laboratories can be a great source of ideas, which is exactly why industry has reached out to them over the past 10 to 15 years. IRI’s External Research Directors Network has been particularly active in promoting an understanding of industry’s changing needs to universities and government laboratory directors. The network has also communicated the risk and investment needed to transform great ideas into new products, processes, or services.
This trilateral partnership that has evolved among industry, universities, and federal laboratories is without equal in scope or size in any other country. It is a valuable national asset. The cooperation brought about by this partnership, supplemented by industry’s own strong investment in R&D, the availability of venture capital to exploit good ideas, and new management practices, has helped take the United States from the second tier in competitiveness in the 1980s to the world’s most competitive nation in the 1990s.
Industry’s doubling of its support for academic research from 1988 to 1998 is strong evidence of the value it receives from its interaction with universities. This support will continue to grow and is likely to more than double over the next 10 years. Too much industry influence over academic research, however, would not be in our nation’s best interest, because the missions of universities and industry are totally different. Likewise, the mission of federal R&D laboratories (national defense and societal needs) is quite different from that of industrial R&D laboratories. Nevertheless, government can spin off technology that is of use to industry, and industry can spin off technology that is of value to government, particularly in the area of national defense.
Collaborative R&D has been highly beneficial to our nation, but more studies are urgently needed to identify best practices that reduce stress in this system and maximize its effectiveness.
Environment and genetics
Wendy Yap and David Rejeski (“Environmental Policy in the Age of Genetics,” Issues, Fall 1998) are no doubt right in saying that gene chips (DNA probes mounted on a silicon matrix) have the potential to transform environmental monitoring and standard setting. One wishes, however, that their efforts at social forecasting had kept pace with their technological perspicacity. Their rather conventional recounting of possible doomsday scenarios imperfectly portrays the social and environmental implications of this remarkable marriage of genetics and information technology.
Their observations concerning litigation are a case in point. Yap and Rejeski suggest that gene chips are a “potential time bomb in our litigious culture” and will lead to disputes for which there are no adequate legal precedents. In fact, courts have already grappled with claims arising from exposure to toxic substances in the environment. One popular solution in these “increased risk” cases is to award the plaintiffs only enough damages for continued medical monitoring and early detection of illness. Gene chips could actually facilitate such surveillance, leading to better cooperation between law and technology.
Lawsuits, moreover, do not arise simply because of advances in technology. People go to court, often at great personal cost, to express their conviction that they have been treated unfairly. Thus, workers have sidestepped state workers’ compensation laws and directly sued manufacturers of dangerous products when they were denied information that might have enabled them to take timely protective action. Such actions do not, as Yap and Rejeski suggest, point to loopholes in the law. Rather, they serve as needed safety valves to guard against large-scale social injustice.
Missing from the authors’ litany of possible unhappy consequences is an awareness that gene chips may imperceptibly alter our understanding of the natural state of things. For instance, the authors approvingly cite a recent decision by the Nuclear Regulatory Commission to distribute potassium iodide to neighbors of nuclear power plants. The goal is to provide a prophylactic against thyroid cancer caused by emissions of radioactive iodine. Yet this apparently sensible public health precaution runs up against one of the most difficult questions in environmental ethics. When and to what extent are we justified in tinkering with people or with nature in order to make the world safe for technology? Genetic breakthroughs have opened up alluring prospects for reconfiguring both nature and humanity in the name of progress. Our challenge is to avoid sliding into this future without seriously considering the arguments for and against it.
It has become customary in some policy circles to bemoan the demise of the congressional Office of Technology Assessment (OTA) as a retreat from rationality. Yap and Rejeski end on this note, suggesting that it may take a metaphorical earthquake to alert people to the harmful potential of gene chips. But environmental policy today needs philosophical reflection and political commitment even more desperately than it needs technical expertise. If we tremble, it should be for the low levels of public engagement in the governance of new technologies. Resurrecting OTA, however desirable for other reasons, will not address the growing problem of citizen apathy.
The power industry: no quick fix
M. Granger Morgan and Susan F. Tierney (“Research Support for the Power Industry,” Issues, Fall 1998) have neatly packaged a litany of images of the diverse ways that innovative technologies are fundamentally enabling power restructuring. They also correctly point out that many of the most important technologies, such as advances in high-performance turbines and electronics, have derived from investments and R&D outside the energy sector. The net result is that we can now choose from a staggering diversity of energy technologies that can lead to widely different energy futures.
The authors also point out that advanced technologies can provide energy services with vastly lower environmental externalities. So far, so good. But the authors observe that further technological advances (and there are potentially a lot of them) are largely stymied by lack of sustained funding for relevant basic and applied research. The potential contribution to our nation’s economy, security, health, and environment is enormous, so publicly supported R&D is well justified. Some argue that in a restructured electricity industry it makes sense to raise the funds for this effort at the state level, but it would be foolish to devise independent state R&D programs, which is what would very likely occur. We are dealing with a national issue and should organize around it accordingly.
If the national public interest is sufficiently large to merit public investment and the private interest too small to meet the investment “hurdle rates,” then we should require the Department of Energy (DOE) and the National Science Foundation to mount an appropriate R&D program, which would focus on peer-reviewed proposals and public-private consortia. If the nation chooses to place surcharges on energy processes that impose significant external costs on society as a way to fund this program, so be it.
Finally, I concur with the authors that DOE is too focused on nuclear weapons, environmental cleanup, and peripheral basic research such as particle physics. Therefore, DOE would be well-advised to give much more attention at the highest level to devising an aggressive and comprehensive R&D program aimed at sustaining progress toward an efficient, environmentally friendly energy system.
I would like to underscore M. Granger Morgan and Susan F. Tierney’s call for more basic technology research in energy from the perspective of the Electric Power Research Institute’s Electricity Technology Roadmap Initiative. This initiative has involved some 150 organizations to date, collaboratively exploring the needs and opportunities for electricity-based innovation in the 21st century.
The Roadmap shows that, in terms of our long-term global energy future: (1) we will need wise and efficient use of all energy sources, including fossil, renewable, and nuclear technologies; (2) we will need to create a portfolio of clean energy options for both power production and transportation fuels; and (3) we must start now to create real breakthroughs in nuclear and renewables so that superior alternatives are ready for large-scale global deployment by 2020.
Why? Because of the demographic realities of the next century. Global population will double by 2050 to 10 billion. It is sobering to realize that meeting global energy requirements, even for modest increases in the standard of living, will require a major power plant to come online somewhere in the world every three to four days for the next 50 years. The alternative is abject poverty for billions of people, as well as environmental degradation and massive inefficiencies in energy use and resource consumption.
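A rough, back-of-the-envelope check shows what that construction cadence implies; the assumed plant size of roughly 1 gigawatt is an illustration of my own, not a Roadmap figure:

\[
\frac{365\ \text{days/yr}}{3.5\ \text{days/plant}} \approx 100\ \text{plants/yr},
\qquad
100\ \text{plants/yr} \times 50\ \text{yr} \times 1\ \text{GW/plant} \approx 5{,}000\ \text{GW}.
\]

That is on the order of 5 terawatts of new capacity over 50 years, roughly comparable to or larger than the world’s entire installed generating base today, which is why marginal improvements alone cannot meet the challenge.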
To minimize the environmental impact of this unprecedented scale of economic development, we are going to need technology that evolves very quickly in terms of efficiency, cleanliness, physical imprint, and affordability. The worst thing we can do now is to freeze technology at today’s performance level or limit it to marginal improvements, but that is exactly what our short-term investments and short-term R&D patterns are inclined to do.
Funding trends are down for electricity R&D, and with industry restructuring, they are dropping even faster in the areas most critical to our long-term future, including strategic R&D, renewable energy, energy efficiency, advanced power generation, and the environmental sciences. The problem, as pointed out by Morgan and Tierney, is exacerbated by the potential Balkanization of R&D in state restructuring plans and by the focus in many public programs on the deployment of current technology at the expense of advancing the state of the art, when both are needed.
The purpose of more basic research, as recognized by the authors, is to accelerate the pace of technological innovation. With some $10 trillion to $15 trillion in investment needed in global electricity infrastructure over the next 50 years, we should strive to create and install super-efficient, super-clean, and affordable energy options for all parts of the world. This amount of money sounds enormous, but it is less than 0.5 percent of global gross domestic product over this period and less than the world spends annually on alcohol and cigarettes.
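As an illustrative check of the 0.5 percent figure (the level and growth rate of world output assumed below are mine, not EPRI’s): with gross world product of roughly $30 trillion per year growing at about 3 percent annually, cumulative output over 50 years is

\[
\sum_{t=0}^{49} \$30\ \text{trillion} \times (1.03)^t
\;=\; \$30\ \text{trillion} \times \frac{(1.03)^{50}-1}{0.03}
\;\approx\; \$3{,}400\ \text{trillion},
\]

so an investment of $10 trillion to $15 trillion works out to roughly 0.3 to 0.45 percent of that total.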
We now have a window of extraordinary but rapidly diminishing opportunity to pursue sustainable development. To stay ahead of the global challenge, we must keep the pace of technological progress at 2 percent per year or better across the board, in both productivity and emissions reduction, throughout the next century. We need a recommitment to the full innovation cycle, from basic research through to commercial application, and we need to find an acceptable mechanism for collaboratively funding the nation’s infrastructure for innovation. Innovation is not only the best source of U.S. competitive advantage in the coming century, it is also the essential engine of global sustainability.
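To make the compounding explicit (the century-long horizon is the argument above; the arithmetic is simply the standard growth formula): sustaining a 2 percent annual rate of improvement implies

\[
(1.02)^{100} \approx 7.2,
\]

that is, roughly a sevenfold gain in efficiency or emissions performance per unit of output by 2100 if that pace is maintained.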
M. Granger Morgan and Susan F. Tierney present an excellent overview of current changes in the electric power industry and conclude that more basic research is needed in the energy and power sectors.
Since the mid-1970s, after the flurry of initiatives on developing new energy technologies during the Carter presidency, few if any new energy supply systems have been implemented. Our current fossil fuel-based energy supply system for power generation, transmission, and utilization, as well as for transportation, required a turnover time of about 40 years before it reached a significant level of market penetration. Even longer time periods appear to be needed for systems that are not fossil fuel-based or are based on the first generation of nuclear energy technologies. The development of fusion energy has been in progress for about 50 years and is far from commercial. Solar technologies are mired at low application levels (photovoltaics may never become commercially viable on a large scale and have been dubbed a net energy loser by the ecologist H. T. Odum). Nuclear breeder reactor development has been suspended or curtailed after many years of development in all of the participating countries. Very long commercialization schedules have also characterized the large-scale utilization of new energy transformation technologies. Examples are provided by fuel cells (more than 150 years have elapsed since W. R. Grove’s pioneering discovery of the “gaseous voltaic battery,” and more than 40 years have passed since fuel cells were first used as stationary power sources and on spacecraft); use of hydrogen as an energy carrier; superconducting power transmission lines; and ocean thermal energy conversion.
Morgan and Tierney’s thesis is that augmented expenditures on basic research will produce accelerated availability of large-scale energy and power systems. As a long-term academic, I firmly believe that they are right in stating that more money for basic research will help. However, as a long-term participant on the commercialization side of new technologies, I am not certain of early or even greatly accelerated market success. The cost of transferring results derived from basic research to a commercial device far exceeds the investment made in basic research. Success in the market requires the replacement of established systems while meeting rigorous environmental, safety, and reliability standards. The financial risks incurred by the energy and power sector in this transition are recognized to be so great that government-industry consortia may be needed for successful implementation. Even these have thus far failed to commercialize shale oil recovery, nuclear breeder reactors, and many renewable energy technologies. I cannot be optimistic that slowly developing energy and power systems will jump forward when far more money is allocated to basic research without also greatly augmenting efforts and expenditures on the commercialization end.
Natural flood control
In “Natural Flood Control” (Issues, Fall 1998), Richard A. Haeuber and William K. Michener point out that the time is ripe for a shift from a national policy of flood control to flood management. The term “management” acknowledges that we do not control rainfall and that our options for dealing with the runoff that results amount to choices about where to store the water. We can store it in constructed reservoirs and release it gradually, thereby reducing flood crests downstream where we have erected levees and floodwalls in an attempt to keep water off areas that naturally flood. Another approach is to provide upland storage in many small wetlands; slow the delivery of water with riparian buffer strips (and also by dechannelization and remeandering of selected stream segments); and allow portions of the downstream floodplains to flood, thereby maintaining flood-adapted native species and ecosystems.
The authors discuss several impediments to this policy shift, including private ownership. In parts of the United States, such as the Corn Belt, the upland drainage basins that deliver runoff to the rivers are largely privately owned. Incentive programs to slow runoff pay private landowners for restoration of wetlands and riparian buffer strips. Landowners enroll in such programs (if the price is right), but the geography of participation may not match the geography of lands that contribute the most to downstream flooding. Also, a significant number of landowners do not participate in such programs. If the unenrolled lands yield disproportionately large amounts of water (and excessive amounts of sediment, nutrients, and pesticides that move with the water), then the best efforts of surrounding landowners will be for naught. The converse is also true: flood detention measures in certain critical areas might detain much more water than equivalent acreages elsewhere would. Stormwater flows from rapidly urbanizing or suburbanizing areas that were formerly rural are another issue, but stormwater ordinances and requirements for green space in developments (parks can also be used for flood detention) are being effectively applied in many areas.
We need to determine the degree to which actual water management practices conform to flood management needs and revise policies accordingly. Such evaluations should include measurements in representative basins as water management programs are instituted, predictive modeling of the downstream effects of alternative distributions of wetlands and riparian buffers, and socioeconomic analysis of decisionmaking by private landowners. Private property rights will need to be balanced by private responsibility for actions that cause detrimental downstream effects. Such technical analysis can point out the consequences of alternative policies and landowner decisions, but as the authors point out, policy revision will occur through the political process.