Remaking the Academy in the Age of Information

Higher education around the world must undergo a dramatic makeover if it expects to educate a workforce undergoing profound transformation. In 1950, only one in five U.S. workers was categorized as skilled by the Bureau of Labor Statistics. By 1991, the percentage had risen to 45 percent, and it will reach 65 percent in 2000. This dramatic upheaval in the labor force and in its educational and training needs reflects a great shift in the corporate world from an overwhelming reliance on physical capital, fueled by financial capital, to an unprecedented focus on human capital as the primary productive asset. This development, combined with the aging of the baby boomers, has been altering the course of higher education at a pace and with a significance undreamed of even five years ago. Likewise, this rate of change in the workforce and its educational needs has been the context for the success of the new for-profit, postsecondary institutions–making it possible for the University of Phoenix (UOP), with nearly 70,000 full-time students and more than 26,000 continuing education students, to become the largest accredited private university in the United States.

In a world where technology expenditures dominate capital spending and the skills that accompany new technology have half-lives measured in months, not years; where knowledge is accumulating at an exponential rate; where information technology has come to affect nearly every aspect of one’s life; where the acquisition, management, and deployment of information are the key competitive advantages; where electronic commerce already accounts for more than 2.3 million jobs and nearly $500 billion in revenue; education can no longer be seen as a discrete phenomenon, an option exercised only at a particular stage in life or a process following a linear course. Education is progressively becoming for the social body what health care has been to the physical and psychic one: It is the sine qua non of survival, maintenance, and vigorous growth.

Not surprisingly, a new education model, which UOP has anticipated since its founding in 1976, has been quickly molding itself to fit the needs of our progressively more knowledge-based economy. Briefly, the education required today and into the future assumes that learners will need to be reskilled numerous times in their working lives if they wish to remain employed. Access to lifelong learning will therefore become progressively more critical for employees as well as their employers, who will find themselves pressured to provide or subsidize that access if they wish to retain their workforce and remain competitive. This new model is also based on the need to provide learning experiences everywhere and at any time and to use the most sophisticated information and telecommunications technologies. It is also characterized by a desire to provide educational products tailored to the learner; and in order to be competitive in the marketplace, it emphasizes branding and convenience.

It is not difficult to imagine why what were once innovations championed by UOP have become common practice in the corporate and political worlds. A quick survey of the contrasts between the “old” and “new” economies helps to elucidate their necessity. A knowledge-based economy must depend on networks and teamwork with distributed responsibilities; its reliance on technology makes it inherently risky and extremely competitive; and the opportunities created by new and continually evolving jobs place the emphasis on ownership through entrepreneurship and options, rather than on wages and job preservation. With technology and the Internet have also come globalization and e-commerce, making a virtue of speed, change, customization, and choice, and a vice of the maintenance of the status quo, standardization, and top-down hierarchical organization. This is a dynamic setting where win-win solutions are emphasized and public-private partnerships are widely prized. In such a vibrant milieu as this, many of the risk-averse, traditional rules of higher education are beginning to appear not merely quaint but irrelevant or, to the less charitable, downright absurd.

What society needs

The contemporary disconnect between what traditional higher education provides, especially in research institutions and four-year colleges, and what society wants can be gleaned in part through a 1998 poll of the 50 state governors. The aptly titled survey “Transforming Postsecondary Education for the 21st Century” reveals that the governors’ top four priorities were (1) to encourage lifelong learning (97 percent), (2) to allow students to obtain education at any time and in any place via technology (83 percent), (3) to require postsecondary institutions to collaborate with business and industry in curriculum and program development (77 percent), and (4) to integrate applied or on-the-job experience into academic programs (66 percent). In contrast–and most tellingly–the bottom four items were: (1) maintain faculty authority for curriculum content, quality, and degree requirements (44 percent); (2) maintain the present balance of faculty research, teaching load, and community service (32 percent); (3) ensure a campus-based experience for the majority of students (21 percent); and (4) in last place–enjoying the support of only one of the governors responding–maintain traditional faculty roles and tenure (3 percent).

But politicians and business leaders are not the only ones having second thoughts about the structure and rules undergirding higher education today. In a recent poll conducted by one of the six official accrediting bodies, the North Central Association (NCA) of Colleges and Schools, respondents (primarily university presidents, administrators, and faculty) identified the following trends as likely to have the greatest impact on NCA activities: increasing demands for accountability (80 percent), expanding use of distance education (78 percent), increasing attention to teaching and learning (72 percent), and expanding use of the Internet (71 percent).

Perhaps more than any other institution, UOP has contributed to the recognition that education today must be ubiquitous, continuous, consumer-driven, quality-assured, and outcomes-oriented. In effect, UOP has truly shattered the myth for many that youth is the predominant age for schooling, that learning is a top-down localized activity, and that credentialing should depend on time spent on task rather than measurable competence. From its inception, UOP has addressed itself to working adults; and given what it has done in this niche, it has become the country’s first truly national university. In doing so, it has helped to prove that the age of learning is always, the place of learning is everywhere, and the goal of learning for most people is best reached when treated as tactical (with clear, immediate aims), as opposed to strategic (with broad aims and distant goals).

By restricting itself to working adults (all students must be at least 23 years old and employed), UOP contributes to U.S. society in a straightforward fashion: In educating a sector previously neglected or underserved, it helps to increase the productivity of individuals, companies, and regions. A 1998 survey of UOP’s alumni–with a 41 percent response rate–eloquently expresses my point: 63 percent of the respondents stated that UOP was their only choice, and 48 percent said they could not have completed their degree if it were not for UOP. The assessments of quality were also gratifying: 93 percent of alumni reported that UOP’s preparation for graduate school was “good to excellent”; 80 percent agreed that compared with coworkers who went to other colleges and universities, the knowledge and skills they gained from their major prepared them better for today’s job market; and 76 percent agreed that compared with coworkers who went to other colleges and universities, their overall education at UOP gave them better preparation for their careers.

That said, how UOP or any other institution of higher education is likely to contribute to human well-being in the coming century is not obvious. UOP must continually balance the inevitable need to invest in its transformation with the need to fulfill its present promises to its students, their employers, its regulators and shareholders, and to its own past. But maintaining this balance is a difficult task, because the road leading to the new millennium has been made bumpy by the uncertainty that has accompanied the rapid technological and economic changes.

Shifting sands

To begin with, the New Economy can be characterized by unprecedented employment churn, which is making a potential student out of every worker. Labor Department officials claim that an estimated 50 million workers, or about 40 percent of the workforce, change employers or jobs within any one year. Most of this churn comes from increases in productivity made possible, in part, by companies reducing their labor force in unprofitable or underperforming sectors and expanding their head count in more profitable areas. In addition, a significant part of the churn results from shifts in the ways companies are managed and organized. Today’s companies, facing more varied competition than in the past, must be more flexible than ever before. To accomplish this, they need management and a workforce that have been reeducated and retrained to be cross-functional, cross-skilled, self-managed, able to communicate and work in teams, and able to change at a moment’s notice. In this far more demanding workplace, managers and others who do not meet the criteria are usually the first to be dropped, but the more fortunate are retrained or reeducated.

In an environment with this level of churn and organizational and managerial transformation, where the median age is in the mid-30s and where adults represent nearly 50 percent of college students, a growing number of learners are demanding a professional, businesslike relationship with their campus that is characterized by convenience, cost- and time-effective services and education, predictable and consistent quality, seriousness of purpose, and high-quality customer service geared to their needs, not those of faculty members, administrators, or staff. Put another way, students who want to be players in the New Economy are unlikely to tolerate a just-in-case education that is not practical, up-to-date, or career-focused.

This is not to imply, as some zealots of the new believe, that traditional institutions, especially research-driven ones, are going to disappear. What I mean instead is that the model of higher education, as represented by, say, Harvard, is an ideal that not even today’s Harvard seeks to implement. For instance, Harvard Provost Harvey Fineberg, reflecting on the future of his institution, recently spoke about the UOP model by invoking Intel founder Andy Grove’s anxious observation that the U.S. domestic steel industry is moribund today because it chose not to produce rebar (the steel used to reinforce concrete) and thereby permitted the Japanese to gain market share in the country. Nervous about the future of his venerable institution and other traditional centers of higher education, Fineberg asked in an interview published in the Boston Globe, “Is the University of Phoenix our rebar?” And fearful of being left behind by the future that UOP is helping to create, he concluded with the observation, “I know that Harvard has to change. No institution remains at the forefront of its field if it does the same things in 20 years that it does today.”

Indeed, no institution of higher education in today’s economy can afford to resist change. Ironically, some of the most jarring characteristics of today’s innovative institutions–their for-profit status, their lack of permanent buildings and faculties, and their need to be customer service-oriented–were actually common among the ancestral universities of the West. What these old institutions had in common with their traditional descendants, however, is that both were and continue to be geographically centered; committed to the pedagogical importance of memorization (rather than information management); and, perhaps even more important, synchronous in their demand that all students meet at regular intervals at specific times and places to hear masters preach to passive subjects.

But the needs of the New Economy challenge higher education to provide something different. Web-based education, delivered through an inherently locationless medium, is likely to push to the margins of history a substantial number of those institutions and regulatory bodies that seek to remain geographically centered. Meanwhile, the Internet, together with the database management systems that make the information it carries useful, can provide time-constrained consumers with just-in-time information and learning that, because it can be accessed asynchronously, places the pedagogical focus on arriving at syntheses and developing critical thinking while making localized learning and mere memorization secondary. And with asynchronicity and high electronic interactivity, socialization can be refocused on the educational process, a phenomenon that is reinforced by a commitment to results-oriented learning based on actual performance of specified and testable outcomes rather than relying, as traditional institutions do, primarily on predetermined inputs and subjective criteria to maintain and assess quality.

All this represents a huge challenge for higher education and technology. A brief comparison of traditional and online university settings may help here. To begin with, there is the issue of content and its delivery. The predominance of the lecturing faculty member, the bored or passive student, and the one-size-fits-all textbook is subject to much condemnation, yet the alternatives are also problematic. Discussion-oriented education, which characterizes e-education, is not easily undertaken successfully. It requires the right structure to make everyone contribute actively to his or her own education, it calls for unlimited access to unlimited resources, and it is best unconstrained by locations in “brick and mortar” classrooms and libraries. Likewise, it calls for guidance, maturity, and discipline that are often well beyond the reach of indifferent faculty members and unmotivated students, and it is helpless in the face of a disorganized or illogical curriculum. In short, the online education world needed by the New Economy is a daunting one, with no place for jaded teachers or faulty pedagogy.

With these challenges in mind, who can step forward within the world of traditional higher education to force a changing of the rules so as to transform the institutions of the past into those that can serve the needs of the knowledge-based economy of today and tomorrow?

Principles and practices

Making front- and back-office functions convenient and accessible 24 hours a day, 7 days a week, is today primarily a matter of will, patience, and money. But creating access to nearly “24/7” academic programs able to meet the needs of the New Economy is a totally different matter. It calls for rethinking the rules that guide higher education today. To drive home the point that this is not a simple matter and to answer the question I just posed, I must remark on the catechism that articulates our faith at UOP. We believe that the needs of working adult students can be distilled into six basic propositions, which are easy to state but difficult to practice, particularly for traditional institutions:

  • First, these students want to complete their education while working full-time. In effect, they want all necessary classes to be available in the sequence they need and at times that do not conflict with their work hours. But for this to become a reality, the rule that permits faculty to decide what they will teach and when must be modified, and that is not an easy matter, especially when it comes to tenured faculty.
  • Second, they want a curriculum and faculty that are relevant to the workplace. They want the course content to contribute to their success at work and in their career, and they want a faculty member who knows more than they do about the subject and who knows it as the subject is currently understood and as it is being practiced in fact, not merely in theory. To make this desideratum a reality, the rule that would have to be revamped is the one that decrees faculty will decide on their own what the content of their courses will be. In addition, faculty would have to stay abreast of the most recent knowledge and most up-to-date practices in their field. Here the dominant version of the meaning of academic freedom would have to be reconsidered, for otherwise there would be no force that could compel a tenured professor either to be up to date or to teach a particular content in a particular way.
  • Third, they want a time-efficient education. They want to learn what they need to learn, not what the professor may desire to teach that day; they want it in the structure that will maximize their learning; and they want to complete their degree in a timely fashion.
  • Fourth, they want their education to be cost-effective. They do not want to subsidize what they do not consume (dorms, student unions, stadiums), and they do not want to pay much overhead for the education they seek.
  • Fifth, and this should be no surprise, they expect a high level of customer service. They want their needs to be anticipated, immediately addressed, and courteously handled. They do not want to wait, stand in line, deal with indifferent bureaucrats, or be treated like petitioning intruders as opposed to valued customers.
  • Last, they want convenience: campuses that are nearby and safe, with well-lit parking lots and with all administrative and student services provided where the teaching takes place.

The UOP model has been addressing these needs for more than a quarter of a century by focusing on an education that has been designed specifically for working adults. This means an education with concentrated programs that are offered all year round during the evening and where students take their courses sequentially, one at a time. All classes are seminar-based, with an average of 14 students in each class (9 in the online courses), and these are facilitated by academically qualified practitioner faculty members, all of whom hold doctorates or master’s degrees, all of whom have been trained by UOP to teach after undergoing an extensive selection review process, and all of whom must work full-time in the field in which they are specifically certified to teach. In turn, all of the curriculum is outcomes-oriented and centrally developed by subject matter experts, within and outside the faculty, supported by the continuous input and oversight provided by UOP’s over 6,500 practitioner faculty members who, although spread across the entire country and overseas, are each individually integrated into the university’s faculty governance structure. This curriculum integrates theory and practice, while emphasizing workplace competencies along with teamwork and communication skills–skills that are well developed in the study groups that are an integral part of each course. Last, every aspect of the academic and administrative process is continually measured and assessed, and the results are integrated into the quality-improvement mechanisms responsible for the institution’s quality assurance.

Still, my tone of confidence, and indeed pride, should not lead us away from the question that follows from the critical observation Harvard’s provost made of his own institution: In the face of the challenges the new millennium portends, how durable is the UOP model, or the many others it has inspired, likely to be? For instance, although content is quickly becoming king, its sheer volume is placing a premium on Web portals, online enablers, marketing channels, and information-organizing schemes. In turn, these initiatives–demanded by the knowledge-based economy–have the capacity to transform higher education institutions into totally unrecognizable entities. Online enablers, the outsourcers who create virtual campuses within brick and mortar colleges, can provide potentially unlimited access to seemingly unlimited content sources. And the channels they establish for marketing education can easily be used to market other products to that very important consumer group.

Online information portals can provide remote proprietary and nonproprietary educational content, and more important, they can integrate themselves into the traditional institutions. Traditional institutions that begin with outsourcing educational functions to the portals could eventually find it cost-effective to outsource other academic, administrative, financial, and student services to the technologically savvy portals.

The importance of the role portals and online enablers will play in the transformation of the traditional academy cannot be overestimated. Quite apart from the Amazon.com-like possibilities they open for some higher education institutions, another way to appreciate their effect is to think of them in terms of the parallel represented by the shift of retail banking out of the branch to the ATM and then onto the desktop. Just as bank customers can use the ATMs of many banks, students may find it possible to replace or supplement their alma mater’s courses with courses or learning experiences derived from any other accredited institution, corporate university, or relevant database. Fear of this possibility has spurred traditional institutions to undermine innovations such as the ill-fated California Virtual University, to slow the efforts of Western Governors University, and to create problems for the United Kingdom’s Open University in its ambitious plans for the United States. The power of the entrenched faculty will make it difficult for traditional institutions to take advantage of new technology and adapt to the evolving needs of students.

Winners and losers

What institutions, then, are likely to be the winners in the future? Because staying ahead is critical to UOP, let me return to it once more as a source for speculation. In the light of the dramatic shifts taking place, it may be that UOP can better serve the adult learners of the future by transforming a significant part of itself so as to function as a platform or hub that emphasizes its role as a search engine (an identifier and provider of content), as a portal (a gateway to databases and links to learning experiences), as a rubric-meister (a skilled organizer of complex data), and as an assessor (a recognized evaluator of content, process, and effectiveness whose assessments can help take the guesswork out of shopping for education and training). This is a legitimate proposal for any university that has prided itself on its capacity to innovate and to transform itself. It is at least as legitimate as the question the railroads should have posed to themselves: “Are you in the business of trains, tracks, and warehouses or of transportation?” And it is worth remembering the fate they suffered for their unanimous adherence to the former position. In effect, if, as any university that wants to survive into the next millennium must believe, UOP is primarily in the business of education rather than of brick and mortar classrooms and self-created curriculum, its transformations in the future should be and no doubt will be dictated primarily by what learners need, not by what it has traditionally done.

But before the openness of future possibilities seduces us into forging untimely configurations, a simple warning is in order. A proposal such as the one I have laconically described is not easily implemented even in an innovative university such as mine. After all, UOP is fully aware that to serve its markets well in the future it must provide a variety of delivery modes and educational products, but it is not easy to identify what information technology and telecommunication products are worth investing in. For instance, although UOP pioneered interactive distance learning as early as 1989; although it has the world’s largest completely online, full-time, degree-seeking student enrollment (more than 10,000 students and growing at over 50 percent per year); and although it rightly prides itself on the effectiveness of its online degree programs, we recognize that all our experience and our new Web-enabled platform, which we developed at a substantial cost, cannot in themselves guarantee that we have a solid grasp on the future of interactive distance learning.

First of all, the evolution of distance education has not yet reached its Jurassic Age. Consolidation can be expected, but the behemoths lie unformed and, I suspect, unimagined. An acquisition that does not entail a soon-to-be-extinct technology is hard to spot when technology is changing at warp speed. And opportunities to integrate the next hot model are easy to pass up. Only deep pockets and steel nerves are likely to survive the seismic technological displacements to come.

That said, to serve its markets and thrive, UOP, like any other higher education provider that seeks to survive the next few decades, will need to keep its focus as distance education begins to blur with the edutainment and database products born of the large media companies and the entertainment and publishing giants. That focus, always oxymoronically tempered by flexibility, is most likely to be on the use of any medium–PC, television, Internet appliance, etc.–that permits the level of interaction that leads to effective education and that can command accreditation (if such is still around), a premium price, and customers whose sense of satisfaction transforms them into effective advocates.

Still, although it is a widespread mantra among futurists of higher education that colleges and universities will undergo a profound transformation primarily as a consequence of the quickly evolving information and communication technologies, this does not necessarily imply the demise of site-specific educational venues. To survive deep into the next century, UOP, like any other innovative institution, will need to reaggregate some major parts of itself to form a centralized content-producing and broadly based distribution network, but it is unlikely to be able to do this without some forms of campus-based delivery. Having already advanced further than any other institution in unbundling faculty roles (that is, in separating teaching from content development and assessment), UOP, without abandoning its physical presence at multiple sites distributed globally, is likely to shape itself more along the lines of a media company and educational production unit than to continue solely as a brick and mortar university with a massive online campus. With media specialists as guides and content experts on retainer, UOP will probably emerge as a mega-educational system with widely distributed campuses, multiple sites in cyberspace, and possibly with a capacity for self-regulated expansion.

As education moves more toward the certification of competence with a focus on demonstrated skills and knowledge–on “what you know” rather than “what you have taken” in school–more associations and organizations that can prove themselves worthy to the U.S. Department of Education will be able to gain accreditation. Increased competition from corporate universities, training companies, course-content aggregators, and publisher-media conglomerates will put a premium on the ability of institutions not only to provide quality education but to do so in a way that meets consumers’ expectations. In short, as education becomes more a continuous process of certification of new skills, institutional success for any higher education enterprise will depend more on successful marketing, solid quality assurance and control systems, and effective use of the new media and not solely on the production and communication of knowledge. This is a shift that I believe UOP is well positioned to undertake, but I am less confident that many non-elite institutions, especially private traditional academic ones, will manage to survive.

That glum conclusion leads me to a final observation: Societies everywhere expect from higher education the provision of an education that can permit them to flourish in the changing global economic landscape. Institutions that can continually change to keep up with the needs of the transforming economy they serve will survive. Those that cannot or will not change will become irrelevant, will condemn misled masses to second-class economic status or poverty, and will ultimately die, probably at the hands of those they chose to delude by serving up an education for a nonexistent world.

Building on Medicare’s Strengths

The aging of the U.S. population will generate many challenges in the years ahead, but none more dramatic than the costs of providing health care services for older Americans. Largely because of advances in medicine and technology, spending on both the old and the young has grown at a rate faster than spending on other goods and services. Combining a population that will increasingly be over the age of 65 with health care costs that will probably continue to rise over time is certain to mean an increasing share of national resources devoted to this group. In order to meet this challenge, the nation must plan how to share that burden and adapt Medicare to meet new demands.

Projections from the 1999 Medicare Trustees Report indicate that Medicare’s share of the gross domestic product (GDP) will reach 4.43 percent in 2025, up from 2.53 percent in 1998. Although this is a substantial increase, it is actually smaller than what was being projected a few years ago. This slowdown in growth does not eliminate the need to act, but it does allow some time for study and deliberation before we do act.

Projected increases in Medicare’s spending arise from the high costs of health care and from growing numbers of people eligible for the program. But most of the debate over Medicare reform centers on restructuring the program. This restructuring would rely on contracting with private insurance plans, which would compete for enrollees. The federal government would subsidize a share of the costs of an average plan, leaving beneficiaries to pay the remainder. More expensive plans would require beneficiaries to pay higher premiums. The goal of such an approach is to make both plans and beneficiaries sensitive to the costs of care, leading to greater efficiency. But this is likely to address only part of the reason for higher costs of care over time. Claims for savings from options that shift Medicare more toward a system of private insurance usually rest on two basic arguments: first, that the private sector is more efficient than Medicare; and second, that competition among plans will generate more price sensitivity on the part of beneficiaries and plans alike. Although seemingly credible, these claims do not hold up under close examination.

Medicare efficiency

Over the period from 1970 to 1997, Medicare’s cost containment performance was better than that of private insurance. Starting in the 1970s, Medicare and private insurance plans initially grew very much in tandem, showing few discernible differences (see Chart 1). By the 1980s, per capita spending had more than doubled in both sectors. But Medicare became more cost-conscious than private health insurance in the 1980s, and its cost containment efforts, particularly hospital payment reforms, began to pay off. From about 1984 through 1988, Medicare’s per capita costs grew much more slowly than those in the private sector.

This gap in overall growth in Medicare’s favor stayed relatively constant until the early 1990s, when private insurers began to take the rising costs of health insurance seriously. At that time, growth in the cost of private insurance moderated in a fashion similar to Medicare’s slower growth in the 1980s. Thus, it can be argued that the private sector was playing catch-up to Medicare in achieving cost containment. Private insurance narrowed the difference with Medicare in the 1990s, but as of 1997 the private sector still had a considerable way to go before its cost growth matched Medicare’s record of lower overall growth.

It should not be surprising that per capita growth rates for Medicare and private sector spending are similar over time, because technological change and improvement are major factors driving high rates of expenditure growth throughout health care. To date, most of the cost savings generated by all payers has come from slowing growth in the prices paid for services; payers have made only preliminary inroads in reducing the use of services or addressing the issue of technology. Reining in the use of services will be a major challenge for private insurance as well as Medicare in the future, and it is not clear whether the public or private sector is better equipped to do this. Further, Medicare’s experience with private plans has been distinctly mixed.

Reform options such as the premium support approach seek savings by allowing the premiums paid by beneficiaries to vary so that those choosing higher-cost plans pay substantially higher premiums. The theory is that beneficiaries will become more price conscious and choose lower-cost plans. This in turn will reward private insurers that are able to hold down costs. And there is some evidence from the federal employee system and the CalPERS system in California that this has disciplined the insurance market to some degree. Studies that have focused on retirees, however, show much less sensitivity to price differences. Older people may be less willing to change doctors and learn new insurance rules in order to save a few dollars each month. Thus, what is not known is how well this will work for Medicare beneficiaries.

For example, for a premium support model to work, at least some beneficiaries must be willing to shift plans each year (and to change providers and learn new rules) in order to reward the more efficient plans. Without that shifting, savings will not occur. (If new enrollees go into such plans each year, some savings will be achieved, but these are the least costly beneficiaries, and their concentration may lead to further problems, as discussed below.) In addition, there is the question of how private insurers will respond. Will they seek to improve service or instead focus on marketing and other techniques to attract a desirable, healthy patient base? It simply isn’t known whether competition will really do what it is supposed to do.

In addition, new approaches to the delivery of health care under Medicare may generate a whole new set of problems, including problems in areas where Medicare is now working well. For example, shifting across plans is not necessarily good for patients; it is not only disruptive, it can raise the costs of care. Some studies have shown that having one physician over a long period of time reduces the costs of care. And if only the healthier beneficiaries choose to switch plans, the sickest and most vulnerable beneficiaries may end up being concentrated in plans that become increasingly expensive over time. The experience of retirees left in the federal employee high-option Blue Cross plan, and the findings of a study of retirees in California, suggest that even when plans become very expensive, beneficiaries may be fearful of switching and end up substantially disadvantaged. Further, private plans by design are interested in satisfying their own customers and generating profits for stockholders. They cannot be expected to meet larger social goals such as making sure that the sickest beneficiaries get high-quality care; and to the extent that such goals remain important, reforms in Medicare will have to incorporate additional protections to balance these concerns, as described below.

Core principles

The reason to save Medicare is to retain for future generations the qualities of the program that are valued by Americans and that have served them well over the past 33 years. This means that any reform proposal ought to be judged on principles that go well beyond the savings that they might generate for the federal government.

I stress three crucial principles that are integrally related to Medicare’s role as a social insurance program:

  • The universal nature of the program and its consequent redistributive function.
  • The pooling of risks that Medicare has achieved to share the burdens across sick and healthy enrollees.
  • The role of government in protecting the rights of beneficiaries–often referred to as its entitlement nature.

Although there are clearly other goals and contributions of Medicare, these three are part of its essential core. Traditional Medicare, designed as a social insurance program, has done well in meeting these goals. What about options relying more on the private sector?

Universality and redistribution. An essential characteristic of social insurance that Americans have long accepted is the sense that once the eligibility criterion of contributing to the program has been met, benefits will be available to all beneficiaries. One of Medicare’s great strengths has been providing much-improved access to health care. Before Medicare’s passage, many elderly people could not afford insurance, and others were denied coverage as poor risks. That changed in 1966 and had a profound impact on the lives of millions of seniors. The desegregation of many hospitals occurred on Medicare’s watch. And although there is substantial variation in the ability of beneficiaries to supplement Medicare’s basic benefits, basic care is available to all who carry a Medicare card. Hospitals, physicians, and other providers largely accept the card without question.

Once on Medicare, enrollees no longer have to fear that illness or high medical expenses could lead to the loss of coverage–a problem that still happens too often in the private sector. This assurance is an extremely important benefit to many older Americans and persons with disabilities. Developing a major health problem is not grounds for losing the card; in fact, in the case of the disabled, it is grounds for coverage. This is vastly different from the philosophy of the private sector toward health coverage. Even though many private insurers are willing and able to care for Medicare patients, the easiest way to stay in business as an insurer is to seek out the healthy and avoid the sick.

Will reforms that lead to a greater reliance on the market still retain the emphasis on equal access to care and plans? For example, differential premiums could undermine some of the redistributive nature of the program that assures even low-income beneficiaries access to high-quality care and responsive providers.

The pooling of risks. One of Medicare’s important features is the achievement of a pooling of risks among the healthy and sick covered by the program. Even among the oldest of the beneficiaries, there is a broad continuum across individuals’ needs for care. Although some of this distribution is totally unpredictable (because even people who have historically had few health problems can be stricken with catastrophic health expenses), a large portion of seniors and disabled people have chronic problems that are known to be costly to treat. If these individuals can be identified and segregated, the costs of their care can expand beyond the ability of even well-off individuals to pay over time.

A major impetus for Medicare was the need to protect the most vulnerable. That’s why the program focused exclusively on the old in 1965 and then added the disabled in 1972. About one in every three Medicare beneficiaries has severe mental or physical health problems. In contrast, the healthy and relatively well-off (with incomes over $32,000 per year for singles and $40,000 per year for couples) make up less than 10 percent of the Medicare population. Consequently, anything that puts the sickest at greater risk relative to the healthy is out of sync with this basic tenet of Medicare. A key test of any reform should be whom it best serves.

If the advantages of one large risk pool (such as the traditional Medicare program) are eliminated, other means will have to be found to make sure that insurers cannot find ways to serve only the healthy population. Although this very difficult challenge has been studied extensively, as yet no satisfactory risk adjuster has been developed. What have been developed to a finer degree, however, are marketing tools and mechanisms to select risks. High-quality plans that attract people with extensive health care needs are likely to be more expensive than plans that focus on serving the relatively healthy. If risk adjusters are never powerful enough to eliminate these distinctions and level the playing field, then those with health problems, who also disproportionately have lower incomes, would have to pay the highest prices under many reform schemes.

The role of government. Related to the two principles above is the role that government has played in protecting beneficiaries. In traditional Medicare, this has meant having rules that apply consistently to individuals and ensure that everyone in the program has access to care. The program has sometimes fallen short, in that benefits vary around the country, partly because of differing interpretations of coverage decisions but also because of differences in the practice of medicine. For example, rates of hospitalization, frequency of operations such as hysterectomies, and access to new tests and procedures vary widely by region, race, and other characteristics. But in general Medicare has to meet substantial standards of accountability that protect its beneficiaries.

If the day-to-day provision of care is left to the oversight of private insurers, what will be the impact on beneficiaries? It is not clear whether the government will be able to provide sufficient oversight to protect beneficiaries and assure them of access to high-quality care. If an independent board–which is part of many restructuring proposals–is established to negotiate with plans and oversee their performance, to whom will it be accountable? Further, what provisions will be in place to step in when plans fail to meet requirements or leave an area abruptly? What recourse will patients have when they are denied care?

One of the advantages touted for private plans is their ability to be flexible and even arbitrary in making decisions. This allows private insurers to respond more quickly than a large government program and to intervene where they believe too much care is being delivered. But what look like cost-effectiveness activities from an insurer’s perspective may be seen by a beneficiary as the loss of potentially essential care. Which is more alarming: too much care, or a denial of care that cannot be corrected later? Some of the “inefficiencies” in the health care system may be viewed as a reasonable response to uncertainty when the costs of doing too little can be very high indeed.

Preserving what works

Much of the debate over how to reform the Medicare program has focused on broad restructuring proposals. However, it is useful to think about reform in terms of a continuum of options that vary in their reliance on private insurance. Few advocate a fully private approach with little oversight; similarly, few advocate moving back to 1965 Medicare with its unfettered fee-for-service and absence of any private plan options. In between, however, are many possible options and variations. And although the differences may seem technical or obscure, many of these “details” matter a great deal in terms of how the program will change over time and how well beneficiaries will be protected. Perhaps the most crucial issue is how the traditional Medicare program is treated. Under the current Medicare+Choice arrangement, beneficiaries are automatically enrolled in traditional Medicare unless they choose to go into a private plan. Alternatively, traditional Medicare could become just one of many plans that beneficiaries choose among, with beneficiaries probably paying a substantially higher premium if they choose it.

What are the tradeoffs from increasingly relying on private plans to serve Medicare beneficiaries? The modest gains in lower costs that are likely to come from some increased competition and from the flexibility that the private sector enjoys could be more than offset by the loss of social insurance protection. The effort necessary to create in a private plan environment all the protections needed to compensate for moving away from traditional Medicare seems too great and too uncertain. And on a practical note, many of the provisions in the Balanced Budget Act of 1997 that would be essential in any further moves to emphasize private insurance–generating new ways of paying private plans, improving risk adjustment, and developing information for beneficiaries, for example–still need a lot of work.

In addition, it is not clear that there is a full appreciation by policymakers or the public at large of all the consequences of a competitive market. Choice among competing plans and the discipline that such competition can bring to prices and innovation are often stressed as potential advantages of relying on private plans for serving the Medicare population. But if there is to be choice and competition, some plans will not do well in a particular market, and as a result they will leave. In a market system, withdrawals should be expected; indeed, they are a natural part of the process by which uncompetitive plans that cannot attract enough enrollees leave particular markets. If HMOs have a hard time working with doctors, hospitals, and other providers in an area, they may decide that this is not a good market. And if they cannot attract enough enrollees to justify their overhead and administrative expenses, they will also leave an area. The whole idea of competition is that some plans will do well and in the process drive others out of those areas. In fact, if no plans ever left, that would be a sign that competition was not working well.

But plan withdrawals will result in disruptions and complaints by beneficiaries, much like those now occurring in response to the recently announced withdrawals from Medicare+Choice. For various reasons, private plans can choose each year not to accept Medicare patients. In each of the past two years, about 100 plans around the country have decided to end their Medicare businesses in some or all of the counties they serve. In those cases, beneficiaries must find another private plan or return to traditional Medicare. They may have to choose new doctors and learn new rules. This situation has led to politically charged discussions about payment levels in the program, even though that is only one of many factors that may cause plans to withdraw. Thus, not only will beneficiaries be unhappy, but there may be strong political pressure to keep federal payments higher than a well-functioning market would require.

What I would prefer to see is an emphasis on improvements in the private plan options and the traditional Medicare program, basically retaining the current structure in which traditional Medicare is the primary option. Rather than focusing on restructuring Medicare to emphasize private insurance, I would place the emphasis on innovations necessary for improvements in health care delivery regardless of setting.

That is, better norms and standards of care are needed if we are to provide quality-of-care protections to all Americans. Investment in outcomes research, disease management, and other techniques that could lead to improvements in treatment of patients will require a substantial public commitment. This cannot be done as well in a proprietary for-profit environment where new ways of coordinating care may not be shared. Private plans can play an important role and may develop some innovations on their own, but in much the same way as we view basic research on medicine as requiring a public component, innovations in health care delivery also need such support. Further, innovations in treatment and coordination of care should focus on those with substantial health problems–exactly the population that many private plans seek to avoid. Some private plans might be willing to specialize in individuals with specific needs, but this is not going to happen if the environment is one that emphasizes price competition and has barely adequate risk adjusters. Innovative plans would be likely to suffer in that environment.

Finally, the default plan–for those who do not or cannot choose or who find a hostile environment in the world of competition–must, at least for the time being, be traditional Medicare. Thus, there needs to be a strong commitment to maintaining a traditional Medicare program while seeking to define the appropriate role for alternative options. But for the time being, there cannot and should not be a level playing field between traditional Medicare and private plans. Indeed, if Medicare truly used its market power as do other dominant firms in an industry, it could set its prices in markets in order to drive out competitors, or it could sign exclusive contracts with providers, squeezing out private plans. When private plans suggest that Medicare should compete on a level playing field, it is unlikely that they mean this to be taken literally.

Other reform issues

Although most of the attention given to reform focuses on structural questions, there are other key issues that must also be addressed, including the adequacy of benefits, provisions that pass costs on to beneficiaries, and the need for more general financing. Even after accounting for changes that may improve the efficiency of the Medicare program through either structural or incremental reforms, the costs of health care for this population group will still probably grow as a share of GDP. That will mean that the important issue of who will pay for this health care–beneficiaries, taxpayers, or a combination of the two–must ultimately be addressed to resolve Medicare’s future.

Improved benefits. It is hard to imagine a reformed Medicare program that does not address two key areas of coverage: prescription drugs and a limit on the out-of-pocket costs that any individual beneficiary must pay in a year. Critics of Medicare rightly point out that the inadequacy of its benefit package has led to the development of a variety of supplemental insurance arrangements, which in turn create an inefficient system in which most beneficiaries rely on two sources of insurance to meet their needs. Further, without a comprehensive benefit package that includes those elements of care that are likely to naturally attract sicker patients, viable competition without risk selection will be difficult to attain.

It is sometimes argued that improvements in coverage can occur only in combination with structural reform. And some advocates of a private approach to insurance go further, suggesting that the structural reform itself will naturally produce such benefit improvements. This implicitly holds the debate on improved benefits hostage to accepting other unrelated changes. And to suggest that a change in structure, without any further financial contributions to support expanded benefits, will yield large expansions in benefits is wishful thinking. A system designed to foster price competition is unlikely to stimulate expansion of benefits.

Expanding benefits is a separable issue from how the structure of the program evolves over time. However, it is not separable from the issue of the cost of new benefits. This is quite simply a financing issue, and it would require new revenues, probably from a combination of beneficiary and taxpayer dollars. A voluntary approach to providing such benefits through private insurance, such as we have at present, is seriously flawed. For example, prescription drug benefits generate risk selection problems; already, the costs that many private supplemental plans charge for prescription drug coverage equal or exceed the total possible benefits, because such coverage attracts a sicker-than-average set of enrollees. A concerted effort to expand benefits is necessary if Medicare is to be an efficient and effective program.

Disability beneficiaries. A number of special problems face the under-65 disabled population on Medicare. The two-year waiting period before a Social Security disability recipient becomes eligible for coverage creates severe hardships for some beneficiaries, who must pay enormous costs out of pocket or delay treatments that could improve their conditions if they do not have access to other insurance. In addition, a disproportionate share of the disability population has mental health needs, and Medicare’s benefits in this area are seriously lacking. Special attention to the needs of this population should not get lost in the broader debate.

Beneficiaries’ contributions. Some piece of a long-term solution probably will (and should) include further increases in contributions from beneficiaries beyond what is already scheduled to take effect. The question is how to do so fairly. Options for passing more costs of the program on to beneficiaries, either directly through new premiums or cost sharing or indirectly through options that place them at risk for health care costs over time, need to be carefully balanced against beneficiaries’ ability to absorb these changes. Just as Medicare’s costs will rise to unprecedented levels in the future, so will the burdens on beneficiaries and their families. Even under current law, Medicare beneficiaries will be paying a larger share of the overall costs of the program and spending more of their incomes in meeting these health care expenses (see Chart 2).

In addition, options to increase beneficiary contributions to the cost of Medicare further increase the need to provide protections for low-income beneficiaries. The current programs that protect low-income beneficiaries are inadequate, particularly if new premium or cost-sharing requirements are added to Medicare. Participation in these programs is low, probably in part because they are housed in the Medicaid program and are thus tainted by association with a “welfare” program. Further, states, which pay part of the costs, tend to be unenthusiastic about the programs and probably also discourage participation.

Financing. Last but not least, Medicare’s financing must be part of any discussion about the future. We simply cannot expect as a society to provide care to the most needy of our citizens for services that are likely to rise in costs and to absorb a rapid increase in the number of individuals becoming eligible for Medicare without facing the financing issue head on. Medicare now serves one in every eight Americans; by 2030 it will serve nearly one in every four. And these people will need to get care somewhere. If not through Medicare, then where?

Confronting the Paradox in Plutonium Policies

The world’s huge stocks of separated, weapons-usable military and civil plutonium are at present the subject of profoundly contradictory, paradoxical policies. These policies fail to squarely confront the serious risks to the nuclear nonproliferation regime posed by civil plutonium as a fissile material that can be used by rogue states and terrorist groups to make nuclear weapons.

About 100 metric tons of weapons plutonium has been declared surplus to military needs by the United States and Russia and will be converted to a proliferation-resistant form (ultimately to be followed by geologic disposal) if present policy commitments are realized. But no comparable national or international policy applies to the civil plutonium stocks, although these are already more than 50 percent greater than the stocks of military plutonium arising from the dismantling of bombs and warheads.

Most of the separated civil plutonium has been created at commercial fuel reprocessing plants in Britain and France, to which various other countries, especially Germany and Japan, have been sending some of the spent fuel from their commercial uranium-fueled reactors, expecting to eventually use the returned plutonium as reactor fuel. But plans for recycling plutonium as reactor fuel have been slow in maturing, so large inventories of civil plutonium have accumulated.

Risks of separated plutonium

The greatest concentration of civil plutonium stocks is at the nuclear fuel reprocessing centers in France at La Hague, on the English Channel, and in Britain at Sellafield, on the Irish Sea. The total for all French and British civil stocks, at the reprocessing centers and elsewhere, is over 132 tons, part of it held for utilities in Germany, Japan, and other countries. Russia’s stock of about 30 tons is the third largest, nearly all of it stored at the reprocessing plant at Chelyabinsk, in the Urals. Smaller but significant stocks of plutonium are present in Germany, Japan, Belgium, and Switzerland, converted in considerable part to a mixed plutonium-uranium oxide fuel called MOx, now waiting to be used in designated light water reactors. Recycling plutonium in breeder reactors involves quite different technology from that used in light water reactors, and commercial development of breeders has been beset by repeated economic and technical reverses that have left their future very much in doubt. In the United States, early ventures in fuel reprocessing, plutonium recycling, and commercial breeder development all came to an end by the late 1970s and early 1980s. But several tons of separated civil plutonium remain in this country from the early reprocessing effort.

According to estimates by the International Atomic Energy Agency (IAEA), total global civil stocks of separated plutonium may exceed 250 tons by the year 2010. The stakes in keeping separated civil and military plutonium secure and well guarded against intrusions by outsiders and malevolent designs by insiders will continue to be enormous. Granted, there are national and IAEA safeguards for closely accounting for and protecting all separated plutonium and fresh MOx. We believe that these safeguards reduce the risk of plutonium diversions, thefts, and forcible seizures to a low probability. But in our view the risk is still too great in light of the horrendous consequences of failure.

Separated plutonium could become a target for theft or diversion by a subnational terrorist group, possibly one assisted by a rogue state such as Iraq or North Korea. Less than 10 kilograms of plutonium might suffice for a crude bomb small enough to be put in a delivery van and powerful enough to explode with a force thousands of times greater than that of the bomb that destroyed the federal building in Oklahoma City.

The risk posed by separated plutonium has partly to do with the possibility of diversions for weapons use by a nation whose utilities own the plutonium. Indeed, the potential for such a diversion by a state that has a store of plutonium could increase over time. Even a respected nonnuclear-weapons state, such as Japan, might at some future time feel compelled by new and threatening circumstances to break with the nonproliferation regime and exploit its civil plutonium to make nuclear weapons.

But the greater and more immediate problem is the risk of theft or diversion by terrorists, and that risk lies chiefly in the circulation of plutonium within the nuclear fuel cycle. The many different fuel cycle operations, such as shipping, blending with uranium, fabrication into fresh MOx, storage, and further shipping, all provide opportunities for diversion. A plutonium disposition program will therefore be less than half a loaf unless accompanied by a commitment to end all further separation of plutonium.

An industrial disposition campaign

In less than two decades from the start of the campaign, the nuclear industries of France and Britain could convert virtually the entire global inventory of separated civil plutonium and half the surplus military plutonium to a proliferation-resistant form. But this would mean ending all civil fuel reprocessing and, with the completion of the plutonium conversion campaign, ending all plutonium recycling as well.

For the French and British nuclear industries to embrace so profound a change would mean a marked shift in policy by government as well as industry, for in France the nuclear industry is wholly government-owned and in the United Kingdom British Nuclear Fuels is a national company. The change in government policy in those two countries could be achieved only with the cooperation of key governments abroad–especially the governments of the United States, Russia, Germany, and Japan–and of foreign utilities that own significant amounts of plutonium.

In addition, there would have to be a growing demand for safe plutonium disposition around the world: by political leaders and their parties, by environmental and safe-energy groups, and by the peace groups, policy research groups, and international bodies that together make up the nuclear nonproliferation community. Essential to all the foregoing will be a keen awareness of the proliferation risks associated with separated plutonium and of the possibilities for safely disposing of that plutonium.

We believe that a plutonium disposition campaign relying mostly on nuclear facilities already existing in Britain and France could be carried out far more quickly than would be possible for campaigns elsewhere requiring the construction or modification of whole suites of industrial plants and reactors. Indeed, disposition of Russia’s surplus military plutonium alone is expected to depend on construction of a new MOx fuel plant that the major industrial countries will almost certainly be called upon to pay for.

The United States, where much development work has been done on plutonium disposition, will most likely continue with its own program for disposition of surplus U.S. military plutonium. But the French and British should be encouraged to assume a major, indeed dominant, role in the disposition of Russia’s surplus military plutonium as well as in the disposition of the world’s stocks of separated civil plutonium.

The French-British campaign could be expected to squarely meet what the United States and Russia have decided on (with approval by the major industrial countries, or “G-8”) as the standard appropriate for safe disposition of surplus U.S. and Russian military plutonium. The standard agreed to was first adopted by the U.S. Department of Energy (DOE) on the recommendation of the National Academy of Sciences’ (NAS’s) Committee on International Security and Arms Control (CISAC). Known as the “spent fuel standard,” it represents a rough measure of the proliferation resistance afforded by the obstacles that spent fuel presents to plutonium recovery, namely its intense radioactivity, its great mass and weight, and its requirement for remote handling. The obstacles referred to are very real, especially for any party lacking the resources of a nation-state.

The job ahead

In meeting the spent fuel standard, the United States plans to have its surplus military plutonium disposition program proceed along two tracks. On one track, plutonium will be converted to MOx to be used in certain designated reactors and thereby rendered spent. On the other track, plutonium will be immobilized by incorporating it in massive, highly radioactive glass logs. In DOE parlance, the two tracks are the “MOx option” and the “immobilization option.” Ultimately, after repositories become available, the spent MOx and the radioactive glass logs would be placed in deep geologic disposal.

We must create a global network of internationally sanctioned centers for storage and disposal of spent fuel.

The MOx and immobilization options are clearly within the capabilities of the nuclear industry in France and Britain. The MOx option offers the more immediate promise. MOx fuel manufacturing capacity in France and Britain (together with some in Belgium) will soon rise to approximately 350 tons of MOx production a year, which is more than enough for an intensive and expeditious plutonium disposition campaign. The MOx fuel plants are either operating already or are built and awaiting licensing to receive civil plutonium.

Further, Electricité de France has designated 28 of its reactors to operate with MOx as 30 percent of their fuel cores, and of these, 17 already are licensed to accept MOx. With all 28 reactors in use, half of the world inventory of 300 tons of separated civil plutonium and surplus non-U.S. military plutonium expected by the year 2010 could be converted to spent MOx in about 17 years (we exclude here the 50 or so tons of U.S. military plutonium that the United States will dispose of itself). The fresh civil MOx going into the reactors (we assume a plutonium content of 6.6 percent) would be easily handled because it emits relatively little external radiation; but the spent MOx coming out of the reactors would be intensely radioactive and present a significant barrier to plutonium diversion. The spent MOx would not be reprocessed but rather marked for eventual geologic disposal.
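A rough arithmetic check of this timetable, using only the figures quoted in this and the preceding paragraph, can be sketched as follows (the calculation is ours and is purely illustrative):

```python
# Throughput check for the MOx disposition campaign described above.
# Inputs are the figures quoted in the text; the arithmetic is illustrative only.
plutonium_to_convert_tons = 300 / 2   # half of the 300-ton inventory expected by 2010
campaign_years = 17
pu_fraction_in_fresh_mox = 0.066      # assumed plutonium content of fresh MOx

pu_per_year = plutonium_to_convert_tons / campaign_years   # about 8.8 tons per year
mox_per_year = pu_per_year / pu_fraction_in_fresh_mox      # about 134 tons per year

print(f"{pu_per_year:.1f} tons of plutonium converted per year")
print(f"{mox_per_year:.0f} tons of fresh MOx fabricated per year")
# Roughly 134 tons of MOx a year sits comfortably within the approximately
# 350 tons of annual fabrication capacity cited earlier for France, Britain,
# and Belgium.
```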

The immobilization option, as thus far developed in the United States, is not as well defined as the MOx option. Now favored by DOE is a “can-in-canister” concept that is still under technical review. The plutonium would be embedded in ceramic pucks that would be placed in cans arrayed in a latticework inside large disposal canisters. Molten borosilicate glass containing highly radioactive fission products would be poured into these canisters.

At DOE’s request, an NAS panel is currently reviewing the can-in-canister design to judge whether it does in fact meet the spent fuel standard. Experts from the national laboratories have, for instance, offered conflicting views about whether terrorists using shaped explosive charges might quickly separate the cans of plutonium pucks from the radioactive glass. The NAS review panel awaits further studies, including actual physical tests, to either approve the present design or arrive at a better one.

Yet despite the uncertainties, immobilization remains an important option, to be carried out in parallel with the MOx option or, as some advocate, to be chosen in place of the MOx option. Immobilization does not entail the security problems that come from having to transport plutonium from place to place. In the MOx option, by contrast, there is a risk in transporting plutonium from reprocessing centers to MOx factories and in transporting fresh MOx to reactors. We see this risk as acceptable only because the MOx program would be completed in less than two decades and then be shut down.

If DOE can arrive at an acceptable immobilization design, the French and British could no doubt come up with an acceptable design of their own, either a variant of the can-in-canister concept or, perhaps better, a design for a homogeneous mixture of plutonium, glass, and fission products. Cogema, the French nuclear fuel cycle company, has at La Hague two industrial-scale high-level waste vitrification lines now operating and another on standby. British Nuclear Fuels Limited (BNFL) has a similar line of French design at Sellafield. Earlier we noted that with the MOx option, half of the 300-ton plutonium inventory expected by the year 2010 could be disposed of by the French and British in about 17 years; disposing of the other half by immobilization could also take about 17 years. There are not yet sufficient data to compare the costs of the MOx and immobilization options.

The nuclear industry’s future

For the nuclear industry in France and Britain, a commitment to such a plutonium disposition campaign and to ending fuel reprocessing and plutonium recycling would be truly revolutionary. It would mark a sea change in industry thinking about plutonium and proliferation risks–not just in these two countries, but far more widely.

With development of an economic breeder program proving stubbornly elusive, plutonium simply cannot compete as a nuclear fuel on even terms with abundant, relatively inexpensive, low-enriched uranium. In hindsight it seems clear that use of plutonium fuel abroad has depended more on government policy and subsidy than on economics. And politically, plutonium has been only a burden, at times a heavy one. In Germany in the early to mid-1980s, protesters came out by the thousands to confront police in riot gear at sites proposed for fuel reprocessing centers (which as things turned out were never built). For the nuclear industry worldwide, and even in France and Britain, it is vastly more important to find solutions to the problems of long-term storage and ultimate disposal of spent fuel than to sustain a politically harassed, artificially propped-up fuel reprocessing and plutonium recycling program.

Worldwide there are about 130,000 metric tons of spent fuel, about 90,000 tons of it stored at 236 widely scattered nuclear stations in 36 different countries, the rest stored principally at spent fuel reprocessing centers and at special national spent fuel storage facilities such as those in Germany and Sweden. Of the approximately 200,000 tons of spent fuel generated since use of civil nuclear energy began, only 70,000 tons have been reprocessed. This gap promises to continue, because although about 10,000 tons of spent fuel are now being generated annually, the world’s total civil reprocessing capacity is only about 3,200 tons a year.
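The gap implied by these figures can be made explicit with a line or two of arithmetic (the numbers are from the text; the calculation is ours):

```python
# Spent fuel backlog implied by the figures quoted above.
generated_total_tons = 200_000       # spent fuel generated since civil nuclear power began
reprocessed_total_tons = 70_000      # spent fuel reprocessed to date
generated_per_year_tons = 10_000     # current annual discharges of spent fuel
reprocessing_capacity_tons = 3_200   # total civil reprocessing capacity per year

backlog_tons = generated_total_tons - reprocessed_total_tons               # 130,000 tons
annual_growth_tons = generated_per_year_tons - reprocessing_capacity_tons  # 6,800 tons per year

print(f"unreprocessed spent fuel today: {backlog_tons:,} tons")
print(f"minimum annual growth of the backlog: {annual_growth_tons:,} tons")
# Even with every civil reprocessing line running at full capacity, the world's
# stock of unreprocessed spent fuel grows by at least 6,800 tons each year.
```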

Spent fuel has a curious dual personality with respect to proliferation risks. On the one hand, as made explicit by the formally ordained spent fuel standard, spent fuel is inherently resistant to proliferation because of its intense radioactivity and other characteristics. But uranium spent fuel contains about 10 kilograms of plutonium per ton, and the approximately 1,100 tons of recoverable plutonium in the present global inventory of spent fuel is about four times the amount that was in the arsenals of the United States and the Soviet Union at the peak of the nuclear arms race.

As CISAC has recognized, meeting the spent fuel standard will not be the final answer to the plutonium problem, because recovery of plutonium from spent fuel for use in nuclear explosives is possible for a rogue state such as Iraq or North Korea and even for a terrorist group acting with state sponsorship. Accordingly, the nuclear nonproliferation regime cannot be complete and truly robust until storage of nearly all spent fuel is consolidated at a relatively few global centers, the principal exception being fuel recently discharged from reactors and undergoing initial cooling in pools at the nuclear power stations. But what is particularly to the point here is that the nuclear industry will itself be incomplete until a global system for spent fuel storage and disposal exists, or at least is confidently begun. Without such a system, the nuclear industry will be in a poor position to long continue at even its present level of development, much less aspire to a larger share in electricity generation over the next century.

A lack of government urgency

Not the slightest beginning has been made in establishing the needed global network of centers for long-term storage and ultimate disposal of spent fuel. No country is close to opening a deep geologic repository even for its own spent fuel or high-level waste, quite aside from opening one that would accept such materials from other countries. A common and politically convenient attitude on the part of many governments has been to delay the siting and building of repositories until decades into the future. Under the IAEA, an international convention for radioactive waste management has been adopted; but although this may result in greater uniformity among nations with respect to standards of radiation protection for future people, the convention does not mention, even as a distant goal, establishing a global network of storage and disposal centers available to all nations.

The United States has the most advanced repository program, yet it is a prime case in point with respect to a lack of urgency and priority. Yucca Mountain, about 100 miles northwest of Las Vegas, Nevada, has long been under investigation as a repository site. But Congress lets this program poke along underfunded. This past fiscal year, more than $700 million went into the Nuclear Waste Fund from the user fee on nuclear electricity, yet rather than see all this money go to support the nuclear waste program, Congress chose to have about half of it go to federal budget reduction. The Yucca Mountain project received $282.4 million.

The time may be propitious for stopping or rapidly phasing out fuel reprocessing.

The Yucca Mountain repository is scheduled to be licensed, built, and receiving its first spent fuel by the year 2010, but as matters stand this will not happen. Even the promulgation of radiation standards for the project has languished from year to year. A delay in opening the repository would not itself be troubling if the government would adopt a policy of consolidating surface storage of spent fuel near the Yucca Mountain site. In fact, we have repeatedly urged adoption of such a policy, one benefit being that it would allow all the time needed for exploration of Yucca Mountain and for development of a repository design that meets highly demanding standards of containment.

But little progress has been made on this front either, and spent fuel continues to accumulate at the more than 70 U.S. nuclear power stations, threatening some of them with closure. The state of Minnesota, for instance, limits the amount of spent fuel that can be stored onsite at Northern States Power’s Prairie Island station. Also, the wrong example is being set from the standpoint of the nuclear nonproliferation regime. In our view, consolidated storage at a limited number of internationally sanctioned sites, with greater central control over spent fuel shipments and inventories, should be the universal rule.

One might think that opponents of nuclear energy, especially among the activists who make it their business to probe nuclear programs for weaknesses, would be deploring the lack of consolidated spent fuel storage. But neither the activists here in the United States nor those in Europe are doing so. Indeed, as part of their strategy for stopping work at Yucca Mountain, the U.S. activists insist that all spent fuel remain at the nuclear stations, for the next half century if need be. For them, the unresolved problem of long-term storage and ultimate disposal of nuclear waste should be left hanging around the neck of the nuclear enterprise in order to hasten its demise. Activists acknowledge that sooner or later safe disposal of such waste will be necessary, but from their perspective the radiation hazards are for the ages, and what is urgent is to shut down nuclear power. The nonproliferation regime and the need to strengthen it don’t enter into these calculations. But the plutonium in spent fuel poses risks not just for the ages but right now. Rogue states and terrorists are here with us today.

What is needed is to have the safe disposition of plutonium become a central and widely understood rationale for the storage and disposal of spent fuel and high-level waste. In disposition of separated civil and military plutonium the final step would be geologic disposal of the spent MOx and canisters of radioactive glass. This would occur along with disposal of spent uranium fuel containing the vastly larger amount of plutonium in that fuel. The 47 kilograms of civil plutonium contained in every ton of spent MOx is nearly five times the 10 kilograms contained in a ton of spent uranium fuel, but even the latter is enough for one or two nuclear weapons. Accordingly, geologic disposal of spent fuel would be needed for a robust nonproliferation regime even if no plutonium had ever been separated.

Creating a global network of internationally sanctioned centers for storage and disposal of spent fuel and high-level waste has a powerful rationale on these grounds alone, and it is a rationale that needs to be clearly recognized.

An opportunity for industry

The nuclear industry in the United States, France, Britain, and around the world should be working determinedly to make policymakers, editorial writers, and society at large understand what is at stake. This is the most effective thing the industry can do to promote a political sea change with respect to acceptance of plans for spent fuel storage and disposal that are vital to nuclear power’s survival. But proclaiming a concern for strengthening the nonproliferation regime will ring hollow if, as in France, the further separation and recycling of plutonium are to continue and indeed expand. The MOx cycle now planned by the French would have a working level of plutonium of about 23 tons circulating through the system, either in its separated form or as fresh MOx.

The nuclear industry, especially in Europe, Russia, and Japan, must rethink its old assumptions and demonstrate in dramatic fashion its concern to ensure a technology that is far less susceptible to abuse by weapons proliferators. We see an attractive deal waiting to be struck: The nuclear industry gives up civil fuel reprocessing and plutonium separation and volunteers to assume a central role in the safe disposition of all separated plutonium, civil and military alike. In return, the governments of all nations that are able to help (not least the United States) would commit themselves to creating the global network of centers needed for storage and disposal of spent fuel and high-level waste. Underlying such a deal must be a wide societal and political understanding that to let things continue indefinitely as they are will present an unacceptable risk of eventual catastrophes.

Leaders of the nuclear enterprise, after sorting out their thinking among themselves, might propose an international conference of high officials from government, the nuclear industry, and the nonproliferation regime. This conference, addressing the realities of plutonium disposition and spent fuel storage and disposal, would try to agree on goals, the preparation of an action plan, and an appropriate division of responsibilities. Such a conference, if successful, could create a new day for nuclear energy.

One might, for instance, see a new urgency and priority on the part of the U.S. Congress and White House with respect to providing both consolidated national storage of spent fuel and a geologic repository capable of protecting future people from dangerous radiation and from recovery of plutonium for use as nuclear explosives. The United States might agree even to accept at least limited amounts of foreign spent fuel when this would achieve a significant nonproliferation objective. A similar response to the new international mandate could be expected from other countries.

Time to end reprocessing

In the 1970s, two U.S. presidents, Gerald Ford (a Republican) and Jimmy Carter (a Democrat), moved to withdraw government support for commercial reprocessing and plutonium recycling because of the proliferation risks. President Carter urged other countries to follow the U.S. lead and go to a “once-through” uranium fuel cycle, with direct geologic disposal of spent fuel. But the French and British reprocessors, unmoved by the U.S. initiative, continued on their own way, and many foreign utilities (especially in Germany and Japan) were eager to enter into contracts, for the national laws or policies under which they operated either favored reprocessing or insisted upon it.

But circumstances today are quite different. Some individuals of stature within the reprocessing nations themselves are showing a new attitude. In February 1998, the Royal Society, the United Kingdom’s academy of science, in its report Management of Separated Plutonium, found “the present lack of strategic direction for dealing with civil plutonium [to be] disturbing.” The working group that prepared the report included several prominent figures from Britain’s nuclear establishment, including the then chairman of the British Nuclear Industry Forum and a former deputy chairman of the United Kingdom Atomic Energy Authority. Although cautious and tentative in thrust, the report suggested, among other possibilities, cutting back on reprocessing.

Economically, too, the time may be propitious for stopping or rapidly phasing out reprocessing. Under the original 10-year baseload contracts for the reprocessing to be done at the new plants at Sellafield and La Hague, all the work was paid for up front, leaving these plants fully amortized from the start. With fulfillment of the baseload contracts now only a few years off, BNFL and Cogema are a long way from having their order books filled with a second round of contracts. In an article on December 10, 1998, Le Monde reported that if German utilities, under the dictates of government policy, were to withdraw from their post-baseload contracts, Cogema would either have to shut down UP-3 (the plant built to reprocess foreign fuel) or operate it at reduced capacity and unprofitable tariffs.

On the other hand, if the French and British nuclear industries were to undertake an intensive campaign for safe disposition of plutonium, they would surely receive fees and subsidies ensuring an attractive return on their investment in MOx fuel plants and high-level-waste vitrification lines.

The United States should proceed with all deliberate speed to establish a center for storage and disposal in Nevada.

Another reason why reprocessing nations should reexamine their belief in plutonium recycling is that past claims for waste management benefits from such recycling are, on close examination, overstated or wrong. For instance, the National Research Council’s 1995 report Separations Technology and Transmutation Systems points out that in a geologic repository the long-term hazards from contaminated groundwater will come mainly from fission products, such as technetium-99, and not from plutonium. Discharged MOx fuel will contain no less technetium than spent uranium fuel and will contain more iodine-129. Recycling fission products, along with plutonium and other transuranics, could theoretically benefit waste management, but only after centuries of operation and at the expense of more complicated and costly reprocessing.

As a possible longer-term option for plutonium disposition, France has described a MOx system that would also include a suite of 12 fast reactors deployed as plutonium burners. In this scenario, which assumes that the formidable costs of fast reactors and their reprocessing facilities are overcome, all spent fuel would be reprocessed and its plutonium recycled. But the substantial inventory of plutonium would be daunting. About 10 tons of plutonium would be needed to start up each fast reactor, or 120 tons altogether. Most of that would remain as inventory in the system. Further, the two-year working inventory of separated plutonium and fresh plutonium fuel needed by the 12 reactors would be about 50 tons. The potential here for thefts, diversions, and forcible seizures of plutonium is undeniable.

Creating the global network of centers

Under the best of circumstances and with the strongest leadership, creating a global network of storage and disposal centers for spent fuel and high-level waste will still be an extraordinary challenge. But the job can be done provided certain critical conditions are met.

Of overriding importance is that one of the major nuclear countries establish a geologic repository at home, inside its own boundaries. Unless this is done, the very concept of international centers falls under the suspicion that what’s afoot is an attempt by the nuclear countries to dupe countries with no nuclear industry into taking their waste. And if the proposed recipient nation should be a poor country desperate for hard currency, the whole thing looks like a cynical and egregious bribe. What this all points up is that the United States should proceed with all deliberate speed to establish a center for storage and disposal in Nevada. No other country is in a position to take the lead in this.

Once this condition is satisfied, then to offer strong economic incentives to potential host countries should become not only acceptable but expected, because the service proposed is one that should demand high compensation. The Russian Duma, for instance, might look more favorably on current proposals for storage of limited amounts of foreign spent fuel in Russia, especially knowing that part of the revenue therefrom can go toward establishing Russia’s own permanent geologic repository.

Let’s take Australia as another example. An advanced democratic society in the Western tradition, Australia is a major producer of uranium but has no nuclear power industry of its own. Beyond its well-populated eastern littoral is a vast desert interior, from which the nation derives only limited economic benefit. Pangea Resources, a Seattle-based spinoff of Golder Associates of Toronto, has been circulating a plan for a repository that would be built somewhere in the West Australian desert.

In this venture, Pangea has received substantial financial backing from British Nuclear Fuels and the Swiss nuclear waste agency. Until now, Australia’s attitude has been thumbs down, but that attitude might change if the United States should create in Nevada a repository that could be a prototype for repositories on desert terrain around the world, and if at the same time the Australians knew they would be doing their part toward strengthening the nuclear nonproliferation regime.

It’s not often that a single commercial enterprise is presented with the chance to bring about, on a global scale, an enormous improvement in its own fortunes and at the same time strengthen a regime vital to the protection of society. But just such a chance is now at hand for the civil nuclear industry. If it fails to take it, the consequences may be the industry’s gradual decline, perhaps even its ruin, and the continuation of a grave danger to us all.

Airline Deregulation: Time to Complete the Job

Deregulation of the airline industry, now more than two decades old, has been a resounding success for consumers. Since 1978, when legislation was passed ending the government’s role in setting prices and capacity in the industry, average fares are down more than 50 percent when adjusted for inflation, daily departures have more than doubled, and the number of people flying has more than tripled.

Yet even as the economy booms and people fly in record numbers, travelers are increasingly heard complaining about widely varying fares, complex booking restrictions, and crammed planes and airports. Among longtime business travelers, these complaints are often followed by fond but fuzzy recollections of the days before deregulation, when airline workers were supposedly more attentive, seating spacious, and flights usually on time and direct. Even leisure travelers, who have been paying record low fares, can be heard grousing about harried service, crowded flights, and missed connections.

High fares in some markets and a growing gap in the prices charged for restricted and unrestricted tickets have not only raised the ire of some travelers but also prompted concern about the overall state of airline industry competition. Although reregulating the airlines remains anathema to most industry analysts and policymakers, there is no shortage of proposals to fine-tune the competitive process in ways that would influence the fare, schedule, and service offerings of airlines.

Unfortunately, the history of aviation policy suggests that attempts by government to orchestrate airline pricing and capacity decisions, however well intended and narrowly applied, run a real risk of an unhealthy drift backward down a regulatory path that has stifled airline efficiency, innovation, and competition. Today, numerous detrimental policies and practices remain in place, even though they have long since outlived their original and often more narrow purposes. These enduring policies and practices–particularly those designed to control airport and airway congestion–deserve priority attention by policymakers seeking to preserve and expand consumer gains from deregulation.

Troubling legacies

The airline industry was originally regulated out of concern that carriers, left to their own devices, would compete so intensely that they would set fares too low to generate the profits needed to reinvest in new equipment and other capital. It was feared that this self-destructive behavior would, in turn, lead to the degradation of safety and service, ultimately leading to either an erosion of service in some markets or dominance by one or two surviving carriers.

Regulators on the now-defunct Civil Aeronautics Board (CAB) took seriously their mission to avert such duplicative and destructive competition. No new trunk airlines were certified after CAB was formed in 1938, and vigorous competition among the regulated carriers was expressly prohibited. Airlines were assigned specific routes and service areas and given formulas governing the fares they could charge and the profits they could earn. They were even subject to rules prescribing the kinds of aircraft they could fly and their seating configurations.

Established when the propeller-driven DC-3 was king and when air travel was almost exclusively the domain of the affluent and business travelers, CAB was slow to react to the effects of new technology and the changing demands for air travel. The widespread introduction of jet airliners during the 1960s greatly increased travel speed, aircraft seating capacity, and overall operating efficiencies. By flying the faster and more reliable jets, the airlines were able to schedule more flights and use their equipment and labor more intensively. As travel comfort and convenience increased, passenger demand escalated.

Constrained by regulation, the airlines could respond only awkwardly to changing market demands. Meanwhile, the nation’s aviation infrastructure, consisting of the federal air traffic management system and hundreds of local airports, was barely able to keep pace with the changes. Airports in many large cities desperately needed new gates and terminals to handle the larger jets and increased passenger volumes. The air traffic control system, designed and managed by the Federal Aviation Administration (FAA) for a much smaller and less demanding propeller-based industry, suddenly had to handle many more flights by faster jets operating on much tighter schedules.

Outdated government rules and practices are continuing to hinder airline competition and operations.

A fundamental shortcoming, which remains to this day, is that neither the local airports nor the air traffic control system was properly priced: that is, paid for by users in a way that reflects the cost of this use and the value of expanding airport and airway capacity. The air traffic control system has long been financed by revenues generated from federal ticket taxes and levies on jet fuel. Unfortunately, there is little correlation between the size and incidence of these taxes and the costs and benefits of air traffic control services. Likewise, airport landing fees rarely do more than cover the wear and tear on runways. Among other omissions, they do not reflect the costs that users impose on others by taking up valuable runway space during peak periods. Both airport and airway capacity are allocated to users on a first-come, first-served basis, a simple queuing approach that provides little incentive for low-value users, such as small private aircraft, to shift some of their activity to less congested airports and off-peak travel times. Not only has this approach been accompanied by air traffic congestion and delays, but it has prompted a series of often arbitrary administrative and physical controls on airline and airport operations that have had anticompetitive side effects.
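To make the incentive problem concrete, here is a stylized sketch in which every number is hypothetical; it is meant only to contrast a flat, weight-based landing fee with one that adds a peak-period congestion charge, and it does not describe any actual airport’s rates:

```python
# Hypothetical illustration of landing-fee incentives. All rates and aircraft
# weights are invented for the example.
def flat_fee(weight_tons, rate_per_ton=5.0):
    """A fee that only recovers runway wear and tear, scaled by aircraft weight."""
    return weight_tons * rate_per_ton

def congestion_fee(weight_tons, peak, rate_per_ton=5.0, peak_surcharge=400.0):
    """The same weight-based fee plus a surcharge for using the runway at the peak."""
    return weight_tons * rate_per_ton + (peak_surcharge if peak else 0.0)

small_private_aircraft = 2   # tons
commercial_jet = 70          # tons

print("flat fee, any time:      ", flat_fee(small_private_aircraft), flat_fee(commercial_jet))
print("congestion fee, peak:    ", congestion_fee(small_private_aircraft, peak=True),
      congestion_fee(commercial_jet, peak=True))
print("congestion fee, off-peak:", congestion_fee(small_private_aircraft, peak=False),
      congestion_fee(commercial_jet, peak=False))
# Under the flat fee the small aircraft pays $10 whether or not it lands at the
# busiest hour, so it has no reason to shift. Under the congestion fee it faces
# $410 at the peak but only $10 off-peak, while a jet carrying hundreds of
# passengers can spread the same surcharge across all of its tickets.
```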

In the regulated airline industry of the 1960s and early 1970s, many shortcomings in the public provision of aviation infrastructure could be addressed by the relevant parties acting cooperatively. For instance, when seeking to curb mounting air traffic congestion, the FAA imposed hourly quotas on commercial operations at several of the nation’s busiest airports, including Washington’s National, New York’s LaGuardia, and Chicago’s O’Hare airports. As a practical matter, this quota system (as opposed to the queuing used elsewhere) could be smoothly implemented only because a small number of airlines were permitted by CAB to operate from these airports and could thus decide among themselves who would use the scarce take-off and landing slots.

Other airport access controls were agreed on by the airlines, the federal government, and the local authorities. Most notably, nonstop flights exceeding prescribed distances were precluded from flying into or out of National and LaGuardia airports. Similarly, aircraft headed to or from points outside of Texas (and later bordering states) were excluded from Dallas’s Love Field. The purpose of these so-called “perimeter” limits was to promote the use of the newer and more spacious Dulles, JFK, and Dallas-Fort Worth airports for long-haul travel. In a highly regulated environment–in which airline prices and service areas could be adjusted by regulators to compensate for the effects of these restrictions–the airlines had little incentive to object vigorously to these proscriptions, many of which were later codified in federal law and rulemakings.

Airlines and airports in the regulated era also cooperated in the funding of airport expansion. Concerned that airport authorities would exploit their local monopoly positions by sharply raising fees on airport users and spending the revenue on lavish facilities, the federal government placed stringent restrictions on the use of federal aid to airports. Most funds could be used only for runway and other airside improvements and were accompanied by regulations limiting the recipient’s ability to raise landing fees. Hence, when it became necessary to modernize and expand gates and other passenger facilities, particularly after the introduction of jets, many large airport authorities turned to their major airline tenants for financing help. In return, the airlines signed long-term leases with airports that often gave them control over a large share of gates and the authority to approve future expansions. The possible anticompetitive effects of these leases generated little, if any, serious attention.

Learning new tricks

Largely unforeseen 20 years ago was the extent to which major carriers, once deregulated, would shift to hub-and-spoke operations. By consolidating passenger traffic and flights from scores of “spoke” cities into hub airports, the major carriers were quickly able to gain a foothold in hundreds of additional city-pair markets. This network capability was especially valuable for attracting business travelers interested in frequent departures to a wide array of destinations. The airlines soon discovered that time-sensitive business travelers would pay more for such convenience.

The introduction of frequent flier programs made hub-and-spoke networks even more effective in attracting business fliers. By regularly using the same airline, travelers were rewarded with free upgrades to first class, preferential boarding, access to privileged airport lounges, and free trips.

Crowded airports, flight delays, and discontent over fares and services should not be viewed as shortcomings of deregulation.

Hub-and-spoke systems coupled with the frequent flier programs put the startup airlines at a competitive disadvantage. Without access to the slot-controlled airports, the new airlines faced a handicap in competing for the highly lucrative business market. A wholly voluntary process for distributing slots became impossible in a highly competitive environment. Unfortunately for the new airlines, the FAA grandfathered most of these slot assignments to the large incumbents, allowing them to sell or lease the slots as they saw fit.

New entrants were further hindered in their efforts to build desirable route systems by the persistence of perimeter rules at several key airports. Though strongly supported by residents living near these airports as a way to curtail airport traffic and noise, these limits on long-distance flights are a highly arbitrary means of regulating airport access. The switchover to hub-and-spoke systems by the incumbents made it much easier for them to operate within the perimeter limits, because a high proportion of their passengers travel on short- and medium-haul flights connecting from hubs located within the perimeter. For new entrants without well-situated hubs–or the ability to effect changes in the perimeter rules, such as the extension of the limit for Washington National to Dallas-Fort Worth, a main hub for both American and Delta Airlines–these limits created another competitive disadvantage.

Many of the incumbents operated hubs from the very same airports where they also held exclusive-use gate leases and long-term facility and service contracts. The new entrants pointed to these arrangements as significant obstacles to gaining access to gates and other airport services essential for effective competition. By the end of the 1980s, these entry barriers, coupled with the business failure of many new entrants and mounting evidence of high fares in hub markets, prompted growing concern about the sufficiency of airline competition.

Predatory pricing?

During the Gulf War and the national economic recession of the early 1990s, the airlines experienced a sharp drop-off in demand and subsequent operating losses. As the industry began to recover, the excess equipment and labor shed by major carriers created conditions that were favorable for a new wave of startup airlines and further expansion of some existing niche carriers. The former Texas intrastate operator Southwest Airlines began flying in most regions of the United States. By the mid-1990s, one in five travelers was flying on Southwest and other smaller, startup airlines.

For the most part, these new entrants sought profitability through the intense use of labor and equipment and high load factors achieved by offering low fares in city-pair markets with high traffic densities or the potential to achieve such densities through lower fares. By challenging incumbent airlines at their hubs, the new carriers hoped to tap into pent-up demand from leisure travelers and even to attract a fair amount of business traffic. Almost uniquely, Southwest chose to focus its growth at secondary airports in or near major metropolitan areas, thus avoiding congested hubs and minimizing head-to-head competition with major carriers. To many observers, this new wave of entry represented a healthy and overdue development that would counter the tendency of major airlines to exploit market concentration in major hub cities such as Atlanta, Denver, and Chicago.

It was therefore a matter of concern when the new entrants complained that they encountered sharp price cutting by major incumbent carriers, particularly when entering concentrated hub markets. The Department of Transportation (DOT) questioned whether incumbents were setting fares well below cost in an effort to divert customers away from the new challengers, seeking their demise in order to raise fares back to much higher, pre-entry levels. There were also reports of incumbents using their long-term leases and other airport contractual arrangements to exclude challengers; for instance, by refusing to sublease idle gates.

We need a more rational pricing system for providing and allocating airport and airway capacity.

Concerned about possible predatory practices in the airline industry, and recognizing the uncertainty and expense involved in trying to prove such conduct through the courts under traditional antitrust law, DOT offered its own criteria for detecting predatory pricing. It proposed an administrative enforcement process to police unfair competition in the airline industry. Sharp price cutting and large increases in seating capacity in a city-pair market by a major airline in response to the entry of a lower-priced competitor would trigger an investigation and possible enforcement proceedings.

DOT’s proposal, made in April 1998, prompted strong reactions. It was lauded by some, including many startup airlines, as a necessary supplement to traditional antitrust enforcement, giving new entrants the opportunity to compete on the merits of their product. Others, including most major airlines, criticized it as a perilous first step toward reregulation of passenger fares and service and as incompatible with traditional antitrust enforcement. Meanwhile, in May 1999, the Department of Justice (DOJ) filed a civil antitrust action against American Airlines, claiming that it engaged in predatory tactics.

Spurring more competition

Whether pursued by DOJ or DOT, the development and application of an empirical test for predatory pricing that would not inhibit legitimate pricing responses poses significant challenges. As a practical matter, it would require information, gathered retrospectively, about an airline’s cost structure and the array of options it had available to it for using resources and capacity more profitably. More important, it would do little to remove underlying impediments to entry and competition. After all, for predatory pricing to be a profitable strategy, it must be accompanied by other competitive barriers that allow the airline to gain and sustain market power. Competition is critical to making deregulation work. Accordingly, aviation policies aimed at benefiting consumers should first and foremost center on those areas where government practices are hindering competition.

A good place to start would be to correct the many longstanding inefficiencies and inequities in the provision of aviation infrastructure. Aircraft operators should be charged the cost of using and supplying airport and airway capacity. Neither the use nor the supply of airport runways and air traffic control services is determined on the basis of their highest-value uses. A commercial jet with hundreds of passengers, paying thousands of dollars in ticket and jet fuel taxes, is given no more priority in departing and landing than a small private aircraft. Access determined by first-come, first-served queuing is a guarantee that demand and supply will be chronically mismatched and congestion and delays will ensue, with air travelers suffering as a result. For low-cost airlines that must make intensive use of their aircraft and labor, recurrent congestion and delays are especially troublesome impediments to market entry, and ones that are only likely to get worse as demand for air travel escalates.

Airports still subject to outmoded slot and perimeter controls would make ideal candidates for experimentation with congestion-based landing fees and other market-based methods for financing the supply of airport and airway capacity. Not only would such cost-based pricing offer a way to control airport externalities such as noise and delay, it would do so with far fewer anticompetitive side effects. In addition, it is past time to reassess the competitive effects and incentives of federal aid rules that limit the ability of airports to raise revenues through higher landing fees.

The laggard performance of the public sector in providing and allocating the use of critical aviation infrastructure is a serious deficiency that will become more troublesome as air travel continues to grow. However, crowded airports, flight delays, and discontent over passenger fares and services should not be viewed as shortcomings of deregulation itself, but as clarion calls to complete the deregulation process, instilling more market incentives wherever sensible and feasible.

Making the Internet Fit for Commerce

The laws of commerce, which were established in a marketplace where sellers and buyers met face to face, cannot be expected to meet the needs of electronic commerce, the rapidly expanding use of computer and communications technology in the commercial exchange of products, services, and information. E-commerce sales, which exceeded $30 billion in 1998, are expected to double annually, reaching $250 billion in 2001 and $1.3 trillion in 2003. In addition, by 2003 the Internet will compete with radio to be the third largest medium for advertising, surpassing magazines and cable television. Online banking and brokerage are becoming the norm. In early 1998, 22 percent of securities trades were made online, and this figure is rising rapidly. Now is the time to review and update the laws of commerce for the digital marketplace.
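As a quick check (the projections are those just quoted; the compounding calculation is ours), the implied growth rates do correspond to sales roughly doubling each year:

```python
# Annual growth rates implied by the e-commerce projections quoted above.
sales_1998 = 30      # billions of dollars
sales_2001 = 250
sales_2003 = 1300    # i.e., $1.3 trillion

growth_1998_to_2001 = (sales_2001 / sales_1998) ** (1 / 3) - 1   # compounded over 3 years
growth_1998_to_2003 = (sales_2003 / sales_1998) ** (1 / 5) - 1   # compounded over 5 years

print(f"implied annual growth, 1998-2001: {growth_1998_to_2001:.0%}")   # about 103%
print(f"implied annual growth, 1998-2003: {growth_1998_to_2003:.0%}")   # about 112%
```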

In any commercial transaction, there are multiple interests to protect. Buyers and sellers desire protection from transactions that go wrong due to fraud, a defective product, a buyer that refuses to pay, or other reasons. Buyers and sellers may also want privacy, limiting how others obtain or use information about them or the transaction. Governments need effective and efficient tax collection. This includes sales or value-added taxes imposed on a transaction as well as profit or income taxes imposed on a vendor. Finally, society as a whole has an interest in restricting sales that are considered harmful, such as the sale of guns to criminals.

The legal, financial, and regulatory environment that has developed to protect buyers, sellers, and society as a whole is inconsistent with emerging technology. When purchases are made over a telecommunications network rather than in person, there is inherent uncertainty about the identity of each party to the transaction and about the purchased item. Furthermore, it is difficult for either party to demonstrate that transaction records are accurate and complete. This results in uncertainty and potential conflict in four critical areas: taxation, privacy protection, restricted sales such as weapons to criminals and pornography to minors, and fraud protection.

Telephone and mail order businesses face similar problems, but e-commerce is different. With mail order, buyer and seller know each other’s address, so tax jurisdictions are clear, and perpetrators of fraud and sellers of illegal goods can be traced. This is not true with e-commerce. Mail order revenues are a negligible fraction of the economy, so the fact that sales taxes are rarely collected for mail order is tolerable. E-commerce revenues will be significant. Current law is particularly inapplicable to e-commerce of information products such as videos, software, music, and text, which can be delivered directly over the Internet. These sales produce no physical evidence, such as shipping receipts or inventory records. As a result, auditors cannot enforce tax law, and postal workers cannot check identification when making a delivery. And if either party claims fraud, it may be impossible to retrieve the transmitted item, prove that the item was ever transmitted, or locate the other party.

Two schools of thought have emerged about how to deal with e-commerce conflicts. One is that the infant industry needs protection from regulation. Lack of government interference has helped e-commerce grow, and heavy-handed regulation could cripple its burgeoning infrastructure and deny citizens its benefits. This philosophy underlies the position that all e-commerce should be tax-exempt, that all Internet content should be unregulated, and that consumers are sufficiently served by whatever privacy and fraud protections develop naturally from technological innovation and market forces. Proponents call this industry self-regulation.

Others argue that policies governing traditional commerce evolved for good reasons and that those reasons apply to e-commerce. They warn of the dangers of having different rules for different forms of commerce. If digitized music purchased online is tax-free and compact disks purchased in stores are taxed, then e-commerce is favored, and consumers who cannot afford Internet access from home suffer. Moreover, if a particular sale is illegal in stores but is legal online, then e-commerce undermines society’s ability to restrict some purchases.

The problem is that rules developed for traditional commerce may not be applicable or enforceable for e-commerce. To meet old objectives, proponents push additional laws, sometimes with significant side effects. For example, the state of Washington considered legislation to impose criminal penalties on adults who make it possible for minors to access pornography on the Internet. Because there is no perfect pornography filter, this could effectively ban Internet use in schools and prohibit a mother from giving her 17-year-old son unsupervised Internet access from home. Australia prohibited Australian Web sites from displaying material inappropriate for minors, thereby denying material to adults as well. Similarly, laws have been proposed to ensure that sales taxes are always collected, except when transactions are provably tax-exempt. Some proposals include unachievable standards of proof, forcing vendors to tax all sales. Worse, laws could make tax collection so expensive that e-commerce could not survive.

Policymakers are often forced to choose between conflicting societal goals–for example, between collecting taxes and promoting valuable new services–because policies and institutions are not equipped to meet both objectives. This need not be the case here. The United States can devise a system that protects against misuse of e-commerce without stifling its growth.

Pornography, cryptography, and other restrictions. The most prominent e-commerce controversy is the easy availability of pornography on the Internet. The draconian solutions are to censor material intended for adults or deny minors Internet access. In the 1996 Communications Decency Act, Congress penalized those who provide indecent material to minors. The U.S. Supreme Court found the law unconstitutional because it would interfere with communications permitted between adults. The fundamental problem is the inability of vendors to ascertain a customer’s age.

Congress passed a less restrictive version in 1998 that affects only commercial Web sites. It allows pornography vendors to assume that customers are adults if they have credit cards. This protects the financial interests of pornographers, but it allows minors with access to credit cards to obtain pornography without impediment and prevents adults with poor credit from doing so. This also undermines the privacy of adults who do not want pornography purchases on their credit card records.

Other restrictions have been proposed in Congress to protect children, including bans on Internet gambling and liquor sales. Such restrictions might protect children, but they would deprive adults of these services and reduce revenues for the respective industries. If these services do remain legal, some customers may insist on anonymity to participate, further complicating the need to check customers’ ages. In addition, sales may be restricted in some jurisdictions and not others, which is problematic on the global Internet. For example, a New York court found that an online casino in the Caribbean violated New York laws, because New Yorkers can lie about their location and gamble. By this reasoning, online casinos worldwide would have to shut down if they cannot determine whether their customers are in New York.

Current law is particularly inapplicable to products such as music and software, which can be delivered directly over the Internet.

The desire to maintain security in online transactions has led to a debate over the use of encryption. Law-abiding individuals use encryption to promote security, but criminals can use it to evade law enforcement. The United States does not regulate domestic sale of encryption software but tightly restricts its export. This is difficult to enforce, because popular products such as Web browsers often incorporate encryption capability. Besides, a vendor who sells and distributes software over the Internet must determine a buyer’s nationality from an Internet address, which is an unreliable indicator. The upshot is that legal sales could be hampered, whereas savvy foreign buyers can readily circumvent the rules.

Security issues also arise in other contexts. For example, legislation has been proposed to ban gun sales via the Internet, because online gun vendors cannot check customer identification to prevent sales to criminals. This blanket prohibition would deny law-abiding citizens this convenience.

The alternative to broad restrictions is a system in which vendors can obtain customer credentials and reasonably rely on them; such credentials might indicate whether a customer has a criminal record, is a minor, or is a U.S. citizen. Policymakers should penalize those who ignore credentials in cases where they could be available, and only in those cases. A final point about sales restrictions: U.S. laws affect only U.S. vendors. If other nations do not impose and enforce similar laws, U.S. restrictions may achieve little or nothing.

Fraud and other failed transactions. Two problems must be addressed in order to provide protection against fraud. First, a transaction must create an incorruptible record. In traditional commerce, this can be accomplished with a paper receipt that is hard to forge. In e-commerce, one might reveal all information about the transaction to a third party. This is not always effective, because the resulting record may not be trustworthy or available when needed. Moreover, this reduces the privacy of buyers and sellers.

Second, it must be possible to check the credentials of other parties. Credentials could include a buyer’s identity or just a credit rating. The chief technical officer of Internet software vendor CyberSource Corporation told Congress that in its early years, 30 percent of the company’s sales were fraudulent; many buyers were thieves using stolen credit card numbers. CyberSource could not collect because the buyer could not be identified or located, and the item could not be retrieved. Buyers also need to check sellers’ credentials for protection. For example, does that online pharmacy really have licensed pharmacists on staff?

Fraud would be more difficult if a unique identifier were embedded in each computer. Intel provided this feature in its latest processor, and Microsoft did the same in software. But the public immediately and loudly expressed its opposition, because such identifiers could undermine privacy. For example, Web sites could use identifiers to track the viewing habits of individuals in tremendous detail, or an identifier could reveal the authorship of documents created or distributed anonymously.

Another way to identify parties is through electronic signatures. Some commercial “certificate authorities” already provide such services. When a customer establishes an account, the certificate authority validates the customer’s identity. The company then assigns the customer an electronic “secret key.” Encryption techniques allow a customer to demonstrate that he knows this secret key by applying an electronic signature.

Unfortunately, there is no guarantee that certificate authorities operate honestly. Anyone can offer this service, and there is no government oversight. Consequently, it is not clear that their assurances should be legally credible. Moreover, today’s commercial services often undermine privacy by presenting all information about a given customer, rather than just the minimal credentials needed for a particular transaction. They may do so because providing all the information makes it harder for a dishonest certificate authority to remain undetected, which is important given the lack of oversight.

Tax collection. A total of 46 states tax e-commerce, but taxes are collected on only 1 percent of e-commerce sales. This tax is simply unenforceable. As a result, e-commerce vendors have an unfair advantage, and state revenues are decreased. Many states depend heavily on sales tax revenues, so they want enforcement even if it damages e-commerce. Taxation of e-commerce has all the practical difficulties posed by restricted sales and fraud protection, and more. Sometimes, vendors must know about their customers to determine whether a given tax applies. For example, taxes may not be collected from customers in some locations or from licensed wholesalers. Such customers must supply trustworthy credentials, but this raises corresponding privacy concerns.

Neither sales tax on a transaction nor revenue tax on a vendor can be enforced without auditable records that are trustworthy. Traditional commerce generates paper trails of cash register logs, signed bills of sale, and shipping records that are difficult to alter or forge. E-commerce often produces only electronic records that are easily changed, especially when the transaction takes place entirely over a network. Without exchanging physical currency or touching pen to paper, people can buy stocks and airline tickets; transfer funds to creditors; “sign” contracts; and download magazines, music, videos, and software. The enormous increase in speed and decrease in costs in these transactions will make commerce without exchange of physical objects increasingly common.

Such transactions create two problems for tax auditors. First, transactions leave no physical evidence behind. Second, unlike a physical product, information can be sold many times. Thus, revenue figures cannot be corroborated by examining inventory. Auditors must depend entirely on transaction records. If transaction records can be changed without risk of detection, any policy that requires such records for enforcement is doomed.

Many policies neither support taxation nor protect privacy. Vendors in the state of Washington, for example, are expected to ask customers for their names and addresses, and collect taxes when customers give a Washington address or no address. Thus, anonymous out-of-state sales are taxed when they should not be. More important, name and address need not be verified or even verifiable, so customers within the state can establish false out-of-state accounts and easily evade taxes.

The 1998 Internet Tax Freedom Act prohibited new taxes on e-commerce for three years, although it does not affect existing taxes applicable to e-commerce, many of which predate computers. The act established a commission to advise Congress by April 2000 on policies to enact before this three-year moratorium ends. The first year was spent arguing about who should be on the commission, and the commission never met. It is unclear whether this group will develop any policies or, if it does, whether its recommendations will be followed.

Privacy. There are already calls for legislation to further regulate the way today’s credit card companies, banks, stores, and others use and share personal information. Online vendors can capture extensive information about their customers; for example, they know what products customers look at, not just what they buy. Privacy protection creates a particularly thorny dilemma because it works against fraud protection, restricted sales, and taxation. These other objectives could be easier to achieve if transaction details were public.

On the other hand, some capabilities required for these other objectives, such as the ability to retrieve trustworthy credentials, are also essential when applying traditional privacy policies to e-commerce. For example, people are legally entitled to view their personal credit records and correct any errors. Applying this policy to e-commerce would fail unless a vendor can verify the identity of the person requesting access to this information.

Similar problems arise when different privacy policies apply to different users. For example, the 1998 Children’s Online Privacy Protection Act prohibited vendors from collecting personal information from children without parental permission. Consequently, vendors must be able to distinguish minors from adults and to identify a minor’s parent, which should require trustworthy credentials. (Today, a minor can lie about age without detection.)

Missing links

The most controversial issues of e-commerce have common underlying causes. Because buyers and sellers lack trustworthy information about each other during the transaction and auditors lack trustworthy records after the transaction, it has been necessary to compromise important policy objectives such as privacy and fair taxation. Rather than fight over which sacrifice to make, we should create an environment in which these objectives are compatible. We must supply the missing elements.

Records must be generated for each transaction. Any attempt to forge, destroy, or retroactively alter records must face a significant risk of detection. Records stored electronically can be changed without detection. If a vendor and customer agree to such a change, or if the customer’s records will be unavailable, then vendors can alter records with impunity. A third party is necessary if transaction records are to be trustworthy. This might be a credit card company. But how do you know that the third party’s records are correct and complete? Today there is no way to know, making problems inevitable.

Moreover, transaction records must go to third parties without undermining privacy. Today, many e-commerce customers and merchants entirely surrender their privacy to a credit card company and often to each other. It is no surprise that Internet users routinely cite privacy concerns as their primary reason for not engaging in more e-commerce. Parties to a transaction should not be forced to reveal anything beyond the credentials necessary for that particular transaction, which need not include identity. Even that information should be unavailable to everyone outside the transaction, except for authorized auditors. It should even be impossible to determine whether a particular person has engaged in any transactions at all.

I want to propose a system that solves many of these problems. Conceptually, it works as follows: All parties create a record containing the specifics of a transaction. All parties sign it. A party that is subject to audits then has its copy notarized. To enable a true audit, outside entities must be involved in recording the transaction. This system therefore includes verifiers, notaries, and auditors. Verifiers check the identity of all parties and vouch for credentials. Notaries oversee every transaction record, establishing a time and date and ensuring that any subsequent modifications are detectable. Auditors review and confirm the accuracy of records.

Separating verifier and notary functions is crucial. A verifier knows the true identities of its customers but not which transactions they engage in. A notary knows that one of a verifier’s customers, unidentified, is engaged in transactions, and perhaps some information about those transactions. If one organization (such as a credit card company) served as both verifier and notary, it could link a named person to specific transactions, thereby undermining that person’s privacy.

Technically, the system is based on public-key encryption. Each entity E gets a public key, which is available to everyone, plus a secret key, which only E knows. A message encoded using E’s public key can only be decoded with E’s secret key, so only E can decode it. E can “sign” a record by encoding it with E’s secret key. If a signed record can be decoded with E’s public key, then E must have signed the record. Public-key encryption operations are executed transparently by software.
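
The sketch below is only an illustration of this sign-and-verify step, not part of the proposal itself. It assumes Python with the third-party cryptography package, and it uses a modern signature scheme (Ed25519) in which signing is a dedicated operation rather than literal encoding with the secret key; the key names and record contents are hypothetical.

    # Minimal sketch of electronic signatures, assuming the third-party
    # "cryptography" package. Keys and record contents are hypothetical.
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
    from cryptography.exceptions import InvalidSignature

    secret_key = Ed25519PrivateKey.generate()   # known only to E
    public_key = secret_key.public_key()        # published for everyone

    record = b"item=software title; price=49.95; date=2000-03-01"
    signature = secret_key.sign(record)         # E signs the record

    # Anyone holding E's public key can confirm that E signed this exact record.
    try:
        public_key.verify(signature, record)
        print("valid: E signed this record")
    except InvalidSignature:
        print("invalid: wrong signer or altered record")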

Any person (or company) who wants an audit trail must first register with one or more verifiers. To register, this person tells the verifier her public key but not her secret key. She has the option of providing additional information, which she may designate as either public or private. Public information can be used as credentials during transactions. Private information may be accessed later by authorized auditors. The verifier is responsible for checking the veracity of all customer information, public and private.

For example, one individual might provide her name and social security number as private information and her U.S. citizenship as public information. Her nationality, public key, and account number are publicly displayed on the verifier’s Web site. Auditors can check her identity if necessary. Vendors know only her verifier account number and citizenship, allowing her to anonymously purchase U.S. encryption software that is subject to export restrictions.
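
To make the split between public and private information concrete, here is a hypothetical sketch, in the same Python style as the sketch above, of what a verifier might store for such an account; the field names and values are illustrative and are not drawn from the proposal.

    # Hypothetical verifier account record; field names and values are illustrative.
    verifier_account = {
        "account_number": "V-1024",            # public, listed on the verifier's Web site
        "public_key": "<base64-encoded key>",  # public, used to check her signatures
        "public_credentials": {
            "citizenship": "US",               # usable as a credential in transactions
        },
        "private_information": {               # released only to authorized auditors
            "name": "Jane Doe",
            "social_security_number": "<on file, never disclosed to vendors>",
        },
    }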

This individual might also register with a second verifier. This time, she declares as public information that she is a software retailer, so she can avoid certain sales taxes. She keeps her nationality confidential. Because she has two verifier accounts, no one can determine that she is both a software retailer and a U.S. citizen. This would enable her, for example, to purchase stock with both accounts without revealing that there is only one buyer.

For each verifier account, a relationship is established with one or more notaries. The auditor must be informed of all verifier accounts and relationships with notaries. Then, e-commerce transactions can begin. In a transaction, all parties create a description of the relevant details using a standardized format. For a software purchase, the description might include the software title, warranty, price, date, time, and the locations of buyer and seller. A transaction record would consist of this description, plus the electronic signature and verifier account of each party. The record would be equivalent to a signed bill of sale and would prove that all parties agreed.
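
As one possible rendering of such a standardized record, the following sketch assumes that a sorted-key JSON encoding serves as the canonical description that every party signs; the fields, account numbers, and keys are hypothetical and continue the earlier sketches.

    # Sketch of a transaction record: a shared description plus each party's
    # signature over the same canonical bytes. Assumes the "cryptography" package.
    import json
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    description = {
        "item": "software title",
        "warranty": "90 days",
        "price": 49.95,
        "date": "2000-03-01",
        "time": "14:05Z",
        "buyer_location": "declared: WA",
        "seller_location": "declared: CA",
    }
    canonical = json.dumps(description, sort_keys=True).encode()

    buyer_key = Ed25519PrivateKey.generate()    # in practice, registered with a verifier
    seller_key = Ed25519PrivateKey.generate()

    transaction_record = {
        "description": description,
        "signatures": [
            {"verifier_account": "V-1024", "signature": buyer_key.sign(canonical).hex()},
            {"verifier_account": "V-2048", "signature": seller_key.sign(canonical).hex()},
        ],
    }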

Each party in a transaction that requires an audit trail would submit its copy of the transaction record to an associated notary. This makes it possible to later audit one party without viewing the records of the others. It is possible to “hash” the record so that the notary cannot read all parts of the record, which protects the privacy of all parties. A party submitting a record must also provide verifiable proof of identity, probably using a verifier account number and an electronic signature. This allows the notary to later assemble all records submitted by a given vendor, so auditors can catch a vendor that fails to report some transactions. The notary adds a time stamp and processes the record. Once a record is processed, subsequent changes are detectable by an auditor, even if all parties to the transaction and the notary cooperate in the falsification. The notary also creates a receipt. Anyone with a notarized record and the associated receipt can verify who had the record notarized and when, and can determine that no information has subsequently been altered.
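
The following is a simplified sketch of the notarization step under two assumptions of mine: the notary sees only a hash of the record (standing in for the hashing described above, which limits what the notary can read), and it signs that hash together with a time stamp to produce the receipt. This detects alteration by the parties; the stronger claim, that falsification is detectable even if the notary itself colludes, would require additional machinery (for example, published hash chains) not shown here.

    # Simplified notary: time-stamp and sign a hash of the submitted record.
    # Assumes the "cryptography" package; all names are hypothetical.
    import datetime, hashlib, json
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
    from cryptography.exceptions import InvalidSignature

    notary_key = Ed25519PrivateKey.generate()
    notary_public_key = notary_key.public_key()

    def notarize(record_bytes, submitter_account):
        stamped = json.dumps({
            "record_hash": hashlib.sha256(record_bytes).hexdigest(),
            "submitter": submitter_account,
            "timestamp": datetime.datetime.utcnow().isoformat() + "Z",
        }, sort_keys=True).encode()
        # The receipt binds the hash, the submitter, and the time under the notary's signature.
        return {"stamped": stamped, "notary_signature": notary_key.sign(stamped)}

    def check(record_bytes, receipt):
        try:
            notary_public_key.verify(receipt["notary_signature"], receipt["stamped"])
        except InvalidSignature:
            return False                        # the receipt itself was tampered with
        claimed = json.loads(receipt["stamped"])["record_hash"]
        return claimed == hashlib.sha256(record_bytes).hexdigest()

    record = b'{"item": "software title", "price": 49.95}'
    receipt = notarize(record, "V-1024")
    print(check(record, receipt))                                       # True
    print(check(b'{"item": "software title", "price": 0.95}', receipt)) # False: alteration detected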

Entities that are not subject to audits, including most consumers, would be largely unaffected by this system. Most could register over the Internet with the click of a mouse button. A customer who makes restricted purchases such as guns or encryption software might be required to register once in person. Software executes other functions transparently.

Several companies currently provide some necessary verifier and notary functions, but not all. For example, there are notaries that establish the date of a transaction, but none can produce a list of all transactions notarized for a given vendor, which is essential. There is little incentive for entrepreneurs to offer such services, given that a notary’s output is rarely called for, or recognized, under today’s laws.

If government leads, industry will build

Trustworthy commercial verifiers and notaries are needed. A government agency or government contractor could provide the services, but private companies would be more efficient at adapting to rapid changes in technology and business conditions. Commercial competition would also protect privacy, because it allows customers to spread records of their transactions among multiple independent entities. Like notaries public, banks, and bail bondsmen, e-commerce verifiers and notaries would be private commercial entities that play a crucial role in the nation’s financial infrastructure and its law enforcement.

How would anyone know that services provided by a private company are trustworthy? The federal government should support voluntary accreditation of verifiers and notaries. Only accredited firms would be used when generating records to comply with federal laws or to interact with federal agencies. Others are likely to have confidence in a firm accredited by the government, which could further bolster e-commerce, but a state or private company would be free to use unaccredited firms.

To obtain accreditation, a verifier or notary would demonstrate that its technology has certain critical features. A notary, for example, would show that any attempt by the notary or its customers to alter or delete a notarized record would be detectable. The system must also be secure and dependable, so that the chances of lost data are remote. The specific underlying technology used to achieve this is irrelevant. Accredited firms must also be financially secure and well insured against error or bankruptcy. The insurer guarantees that even if a notary or verifier business fails, the information it holds will be maintained for a certain number of years. The insurer therefore has incentive to provide effective oversight.

A new corps of government auditors is needed to make this system work. Auditors must randomly check records from notaries and verifiers to ensure that nothing has been altered. Auditors would also keep track of the verifier and notary accounts held by each electronic vendor and by any other parties subject to audit.

Many current laws and regulations require written records, written signatures, or written time stamps. Federal and state legislation should allow electronic versions to be accepted as equivalent when the technology is adequate. Contracts should not be unenforceable simply because they are entirely electronic. It should be possible to legally establish the date and time of a document with an electronic notary. A notice sent electronically should be legally equivalent to a notice sent in writing, when technology is adequate. For example, a bank could send a foreclosure notice electronically, provided that accredited verifiers and notaries produce credible evidence that the notice was received in time.

Similarly, electronic records of commercial transactions should carry legal weight when technology is adequate. For example, commercial vendors must show records to stockholders and tax auditors. The Securities and Exchange Commission, the Internal Revenue Service, and state tax authorities should establish standards for trustworthy electronic records using accredited verifiers and notaries. Government could improve its own efficiency by using these systems. Congress took the first small step in 1998 by directing the federal government to develop a strategy for accepting electronic signatures. Government should use commercial services whenever practical, rather than developing its own.

The new approaches used to identify parties in e-commerce raise novel policy issues regarding identity. Who is responsible if a verifier incorrectly asserts that an online doctor is licensed? Certificate authorities already want to limit their liability, but such limits discourage the use of appropriate technology and sound management. It should also be illegal for customers to provide inaccurate information to a verifier. If that inaccurate information is used to commit another crime, such as obtaining a gun for a criminal, that is an additional offense. Other identity issues are related to electronic signatures. It should be illegal to steal someone’s secret key, which would enable forgery of electronic signatures. It should even be illegal to deliberately reveal one’s secret key to a friend. Forgery and fraud are illegal, but these acts that enable forgery and fraud in e-commerce may currently be legal because they are new.

Accredited and unaccredited verifiers and notaries should be required to notify customers about privacy policies, so that consumers can make informed decisions. Vendors could be allowed to sell restricted products such as pornography, encryption software, guns, and liquor if and only if they check credentials and keep verifiable records where appropriate. For dangerous physical goods such as guns, double checking is justified. Online credentials would include name, mailing address, and criminal status; the name would be verified again when the guns were delivered.

One of the most difficult outstanding issues is to determine when a vendor must collect sales tax and to which government entity it should be paid. At present, taxes are collected only when buyer and seller are in the same state. Policies should change so that collection does not depend on the location of the seller, because too many e-commerce businesses can easily be moved to avoid taxes. This forces vendors to collect taxes for customers in all jurisdictions. There are 30,000 tax authorities in the United States with potentially different policies; monitoring them all is a costly burden. Tax rates and policies should be harmonized throughout larger regions with one organization collecting taxes. For example, there might be a single tax policy throughout each state. Because cities often have higher taxes than rural areas, cities may oppose harmonization.

A vendor still cannot tell a customer’s location (and vice versa). A verifier could provide trustworthy static information, such as billing address or tax home, but not actual location at the time of purchase. Taxing based on static information is more practical, although a few people may manipulate this information to evade taxes. At minimum, each party should state its location in a notarized transaction record, so that retroactive changes are detectable.

Now is the time to devise policies that are both technically and economically appropriate for e-commerce, before today’s practices are completely entrenched. This can only be accomplished by addressing fundamental deficiencies in the e-commerce system rather than by debating individual controversies in isolation. This includes the creation of commercial intermediaries. Verifiers can provide trustworthy credentials, and notaries can ensure that transaction records are complete and unaltered. Dividing responsibilities for these functions among competing notaries and verifiers will capture enough information for tax auditors and law enforcement agents to pursue illegal activities without sacrificing the privacy of the law-abiding.

To make this happen, government should develop accreditation procedures for verifiers and notaries. It should update laws and regulations to allow electronic records to replace written records when and only when the technology is adequate. Government should also use these new services. It should develop new policies on taxation and restricted sales that are consistent with e-commerce. And for those who try to exploit the new technology illegally, criminal codes should provide appropriate punishments.

Changing paths, changing demographics for academics

The decade of the 1990s has seen considerable change in the career patterns for new doctorates in science and engineering. It was once common for new doctorates to move directly from their graduate studies into tenure track appointments in academic institutions. Now it is more likely that they will find employment in other sectors or have nonfaculty research positions. This change has created a great deal of uncertainty in career plans and may be the reason for recent decreases in the number of doctorates awarded in many science and engineering fields.

Another change is that the scientific and engineering workforce is growing more diverse in gender, race, and ethnicity. Throughout the 1970s and 1980s, men dominated the science and engineering workplace, but substantial increases in the number of female doctorates in the 1990s have changed the proportions. Underrepresented minorities have also increased their participation, but not to the same extent as female scientists and engineers.

The narrowing tenure track

The most dramatic growth across all the employment categories has been in nonfaculty research positions. The accompanying graph documents the growth in such positions between 1987 and 1997. The 1987 data reflect the percentage of academic employees who earned doctorates between 1977 and 1987 and who held nontenured positions in 1987; the 1997 data reflect the corresponding percentages for those who earned doctorates between 1987 and 1997. In many fields, the percentage of such appointments nearly doubled between 1987 and 1997.

Source: National Science Foundation, 1987 and 1997 Survey of Doctorate Recipients.

A rapidly growing role for women

Between 1987 and 1997, the number of women in the academic workforce increased substantially in the fields in which they had the highest representation–biological sciences, medical sciences, and the social and behavioral sciences. The rate of increase for women was even faster in the fields in which they are least represented–agricultural sciences, engineering, mathematics, and physical sciences. Still, women are underrepresented in almost all scientific and technical fields.

Source: National Science Foundation, 1987 and 1997 Survey of Doctorate Recipients.

Slow growth in minority participation

Minority participation also expanded during the period but at a slower rate than for women. African Americans, Hispanics, and Native Americans make up about 15 percent of the working population but only about 5 percent of the scientists and engineers working in universities. The data also show substantial increases in the proportion of underrepresented minorities, but they are still not represented at a rate commensurate with their share of the population.

Source: National Science Foundation, 1987 and 1997 Survey of Doctorate Recipients.

Universities Change, Core Values Should Not

Half a dozen years ago, as I was looking at the research university landscape, the shape of the future looked so clear that only a fool could have failed to see what was coming, because it was already present. It was obvious that a shaky national economy, strong foreign competition, large and escalating federal budget deficits, declining federal appropriations for research and state appropriations for education, diminished tax incentives for private giving, public and political resistance to rising tuition, and growing misgivings about the state of scientific ethics did not bode well for the future.

Furthermore, there was no obvious reason to expect significant change in the near future. It was plain to see that expansion was no longer the easy answer to every institutional or systemic problem in university life. At best, U.S. universities could look forward to a period of low or no growth; at worst, contraction lay ahead. The new question that needed to be answered, one with which university people had very little experience, was whether the great U.S. universities had the moral courage and the governance structures that would enable them to discipline the appetites of their internal constituencies and capture a conception of a common institutional interest that would overcome the fragmentation of the previous 40 years. Some would, but past experience suggested that they would probably be in the minority.

So that is what I wrote, and I did not make it up. For the purposes of a book I was writing at the time, I had met with more than 20 past and present university presidents for extensive discussions of their years in office and their view of the future. The picture I have just described formed at least a part of every individual’s version of the challenge ahead. Some were more optimistic than others; some were downright gloomy. But to one degree or another, all saw the need for and the difficulty of generating the kind of discipline required to set priorities in a process that was likely to produce more losers than winners. As predictions go, this one seemed safe enough.

Well, a funny thing happened on the way to the world of limits: The need to set limits apparently disappeared. Ironically, it was an act of fiscal self-restraint on the part of a normally unrestrained presidency and Congress that helped remove institutional self-restraint from the agenda of most universities. The serious commitment to a balanced federal budget in 1994, made real by increased taxes and a reasonably enforceable set of spending restraints, triggered the longest economic expansion in the nation’s history. That result was seen in three dramatic effects on university finances: Increased federal revenues eased the pressure on research funding; increased state revenues eased the appropriation pressure on state universities; and the incredible rise in the stock market generated capital assets in private hands that benefited public and private university fundraising campaigns. The first billion-dollar campaign ever undertaken by a university was successfully completed by Stanford in 1992. Precisely because of the difficult national and institutional economic conditions at the time, it was viewed as an audacious undertaking. However, within a few years billion-dollar campaigns were commonplace for public and private universities alike.

To be fair, the bad days of the early 1990s did force many institutions into various kinds of cost-reduction programs. In general, these focused on administrative downsizing, the outsourcing of activities formerly conducted by in-house staff, and something called “responsibility-centered management.” It could be argued, and was, that cutting administrative costs was both necessary and appropriate, because administrations had grown faster than faculties and therefore needed to take the first reductions. That proposition received no argument from faculties. The problem of how to deal with reductions in academic programs was considerably more difficult. I believed that the right way to approach the problem was to start with the question, “How can we arrange to do less of what we don’t do quite so well in order to maintain and improve the quality of what we know we can do well?” Answering that question was sure to be a very difficult exercise, but I believed it would be a necessary one if a university were to survive the hard times ahead and be ready to take advantage of the opportunities that would surely arise when the economic tide changed.

As it happened, the most common solution to the need to lower academic costs was to offer financial inducements for early retirement of senior faculty and either reduce the size of the faculty or replace the more expensive senior people with less expensive junior appointments or with part-time, non-tenure-track teachers. These efforts were variously effective in producing at least short-term budget savings.

All in all, it was a humbling lesson, and I have learned it. I am out of the prediction business. Well, almost out. To be precise, I now believe that swings in economic fortune–and present appearances to the contrary notwithstanding, there will surely be bad times as well as good ones ahead–are not the factors that will determine the health and vitality of our universities in the years to come. One way or another, there will always be enough money available to keep the enterprise afloat, although never enough to satisfy all academic needs, much less appetites. Instead, the determining factors will be how those responsible for these institutions (trustees, administrations, and faculties) respond to issues of academic values and institutional purpose, some of which are on today’s agenda, and others of which undoubtedly lie ahead. The question for the future is not survival, or even prosperity, but the character of what survives.

Three issues stand out as indicators of the kind of universities we will have in the next century: the renewal of university faculties as collective entities committed to agreed-on institutional purposes; the terms on which the growing corporate funding of university research is incorporated into university policy and practice; and the future of the system of allocating research funding that rests on an independent review of the merits of the research and the ability of the researcher. All three are up for grabs.

Faculty and their institutions

Without in any way romanticizing the past, which neither needs nor deserves it, it is fair to say that before World War II the lives of most university faculty were closely connected to their employing institutions. Teaching of undergraduates was the primary activity. Research funding was scarce, opportunities for travel were limited, and very few had any professional reason to spend time thinking about or going to Washington, D.C. This arrangement had advantages and disadvantages. I think the latter outweighed the former in the prewar academic world, but however one weighs the balance, there can be no dispute that what followed the war was radically different. The postwar story has been told many times. The stimulus of the GI Bill created a boom in undergraduate enrollment, and government funding of research in science and technology turned faculty and administrations toward Washington as the major source of good things. The launching of Sputnik persuaded Congress and the Eisenhower administration, encouraged by educators and their representatives in Washington, that there was a science and education gap between the Soviet Union and the United States. There was, but it was actually the United States that held the advantage. Nevertheless, a major expansion of research funding and support for Ph.D. education followed.

At the same time, university professors were developing a completely different view of their role. What had once been a fairly parochial profession was becoming one of the most cosmopolitan ones. Professors’ vital allegiances were no longer local. Now in competition with traditional institutional identifications were connections with program officers in federal agencies, with members of Congress who supported those agencies, and with disciplinary colleagues around the world.

The change in faculty perspectives has had the effect of greatly attenuating institutional ties. Early signs of the change could be seen in the inability of instruments of faculty governance to operate effectively when challenged by students in the 1960s, even when those challenges were as fundamental as threats to peace and order on campus. Two decades after the student antiwar demonstrators had become respectable lawyers, doctors, and college professors, Harvard University Dean Henry Rosovsky captured the longer-term consequences of the changed relationship of faculty to their employing universities. In his 1990-91 report to the Harvard Faculty of Arts and Sciences, Rosovsky noted the absence of faculty from their offices during the important reading and exam periods and the apparent belief of many Harvard faculty that if they teach their classes they have fulfilled their obligations to students and colleagues. He said of his colleagues, “. . . the Faculty of Arts and Sciences has become a society largely without rules, or to put it slightly differently, the tenured members of the faculty–frequently as individuals–make their own rules . . . [a]s a social organism, we operate without a written constitution and with very little common law. This is a poor combination, especially when there is no strong consensus concerning duties and standards of behavior.”

What Rosovsky described at Harvard can be found at every research university, and it marks a major shift in the nature of the university. The question of great consequence for the future is whether faculties, deans, presidents, and trustees will be satisfied with a university that is as much a holding company for independent entrepreneurs as it is an institution with a collective sense of what it is about and what behaviors are appropriate to that understanding. I have no idea where on the continuum between those two points universities will lie 20 or 50 years from now. I am reasonably confident, however, that the question and the answer are both important.

Universities and industry

In March 1982, the presidents of five leading research universities met at Pajaro Dunes, California. Each president was accompanied by a senior administrator involved in research policy, several faculty members whose research involved relations with industry, and one or two businessmen close to their universities. The purpose of the meeting was to examine the issues raised by the new connections between universities and the emerging biotechnology industry. So rapidly and dramatically have universities and industry come together in a variety of fields since then that reading some of the specifics in the report of the meeting is like coming across a computer manual with a chapter on how to feed Hollerith cards into a counter-sorter. But even more striking than those details is the continuity of the issues raised by these new relationships. Most of them are as fresh today as they were nearly two decades ago, a fact that testifies both to their difficulty and their importance.

These enduring issues have to do with the ability of universities to protect the qualities that make them distinctive in the society and important to it. In the words of the report, “Agreements (with corporations) should be constructed . . . in ways that do not promote a secrecy that will harm the progress of science, impair the education of students, interfere with the choice of faculty members of the scientific questions or lines of inquiry they pursue, or divert the energies of faculty members from their primary obligations to teaching and research.” In addition, the report spoke to issues of conflict of interest and what later came to be called “conflict of commitment,” to the problems of institutional investment in the commercial activities of its faculty, the pressures on graduate students, and issues arising out of patent and licensing practices.

All of those issues, in their infancy when they were addressed at Pajaro Dunes, have grown into rambunctious adolescents on today’s university campuses. They can be brought together in a single proposition: When university administrators and faculty are deciding how much to charge for the sale of their research efforts to business, they must also decide how much they are willing to pay in return. For there will surely be a price, as there is in any patronage relationship. It was government patronage, after all, that led universities to accept the imposition of secrecy and other restrictions that were wholly incompatible with commonly accepted academic values. There is nothing uniquely corrupting about money from industry. It simply brings with it a set of questions that universities must answer. By their answers, they will define, yet again, what kind of institutions they are to be. Here are three questions that will arise with greater frequency as connections between business and university-based research grow:

  • Will short-term research with clearly identified applications be allowed to drive out long-term research of unpredictable practical value, in a scientific variation of Gresham’s Law?
  • Can faculty in search of research funding and administrators who share that interest on behalf of their institution be counted on to assert the university’s commitment to the openness of research processes and the free and timely communication of research results?
  • Will faculty whose research has potential commercial value be given favored treatment over their colleagues whose research does not?

I have chosen these three among many other possible questions because we already have a body of experience with them. It is not altogether reassuring. Some institutions have been scrupulous in attempting to protect institutional values. Others have been considerably less so. The recent large increases in funding for the biomedical sciences have relieved some of the desperation over funding pressures that dominated those fields in the late 1980s and early 1990s, but there is no guarantee that the federal government’s openhandedness will continue indefinitely. If it does not, then the competition for industrial money will intensify, and the abstractions of institutional values may find it hard going when pitted against the realities of the research marketplace.

Even in good times, the going can be hard. A Stanford University official (not a member of the academic administration, I hasten to add) commented approvingly on a very large agreement reached between an entire department at the University of California at Berkeley and the Novartis Corporation: “There’s been a culture for many years at Stanford that you do research for the sake of doing research, for pure intellectual thought. This is outdated. Research has to be useful, even if many years down the line, to be worthwhile.” I have no doubt that most people at Stanford would be surprised to learn that their culture is outdated. I am equally certain, however, that the test of usefulness as a principal criterion for supporting research is more widely accepted now than in the past, as is the corollary belief that it is possible to know in advance what research is most likely to be useful. Since both of those beliefs turn the historic basis of the university on its head, and since both are raised in their starkest form by industry-supported research, it is fair to say that the extent to which those beliefs prevail will shape the future course of research universities, as well as their future value.

Preserving research quality

Among the foundation stones underlying the success of the U.S. academic research enterprise has been the following set of propositions: In supporting research, betting on the best is far more likely to produce a quality result than is settling for the next best. Although judgments are not perfect, it is possible to identify with a fair degree of confidence a well-conceived research program, to assess the ability of the proposer to carry it out, and to discriminate in those respects among competing proposers. Those judgments are most likely to be made well by people who are themselves skilled in the fields under review. Finally, although other sets of criteria or methods of review will lead to the support of some good research, the overall level of quality will be lower because considerations other than quality will be weighed more heavily in funding decisions.

It is remarkable how powerful those propositions have been and, until recently, how widely they were accepted by decisionmakers and their political masters. To see that, it is only necessary to contrast research funding practices with those in other areas of government patronage, where the decimal points in complicated formulas for distributing money in a politically balanced manner are fought over with fierce determination. Reliance on the system of peer review (for which the politically correct term is now “merit review”) has enabled universities to bring together aggregations of top talent with reasonable confidence that research funding for them will be forthcoming because it will not be undercut by allocations based on some other criteria.

Notwithstanding the manifest success of the principle that research funding should be based on research quality, the system has always been vulnerable to what might be called the “Lake Wobegon Effect”: the belief that all U.S. universities and their faculty are above average, or that given a fair chance they would become so. That understandable, and in some respects even admirable, belief has always led to pressures to distribute research support more broadly on a geographic (or, more accurately, political-constituency) basis. These pressures have tended to be accommodated at the margins of the system, leaving the core practice largely untouched.

Since it remains true that the quality of the proposal and the record and promise of the proposer are the best predictors of prospective scientific value, there is reason to be concerned that university administrators, faculty, and members of Congress are increasingly departing from practices based on that proposition. The basis for that concern lies in the extent to which universities have leaped into the appropriations pork barrel, seeking funds for research and research facilities on the basis not of an evaluation of the comparative merits of their projects but of the ability of their congressional representatives to manipulate the appropriations process on their behalf. In little more than a decade, the practice of earmarking appropriations has grown from a marginal activity conducted around the fringes of the university world to an important source of funds. A record $787 million was appropriated in that way in fiscal year 1999. In the past decade, a total of $5.8 billion was given out directly by Congress with no evaluation more rigorous than the testimony of institutional lobbyists. Most of this largesse was directed to research and research-related projects. Even in Washington, those numbers approach real money.

More important than the money, though, is what this development says about how pressures to get in or stay in the research game have changed the way in which faculty and administrators view the nature of that game. The change can be seen in the behavior of members of the Association of American Universities (AAU), which includes the 61 major research universities. In 1983, when two AAU members won earmarked appropriations, the association voted overwhelmingly to oppose the practice and urged universities and members of Congress not to engage in it. If a vote were taken today to reaffirm that policy, it is not clear that it would gain support from a majority of the members. Since 1983, an increasing number of AAU members have benefited from earmarks, and for that reason it is unlikely that the issue will be raised again in AAU councils.

Even in some of the best and most successful universities there is a sense of being engaged in a fierce and desperate competition. The pressure to compete may come from a need for institutional or personal aggrandizement, from demands that the institution produce the economic benefits that research is supposed to bring to the local area, or from some combination of those and other motives. The result, whatever the reasons, has been a growing conclusion that however nice the old ways may have been, new circumstances have produced the need for a new set of rules.

At the present moment, we are still at an early stage in a movement toward the academic equivalent of the tragedy of the commons. It is still possible for each institution that seeks to evade the peer review process to believe that its cow can graze on the commons without harm to the general good. As the practice becomes more widespread, the commons will lose its value to all. Although the current signs are not hopeful, the worst outcome is not inevitable. The behavior of faculty and their administrations in supporting or undermining a research allocation system based on informed judgments of quality will determine the outcome and will shape the nature of our universities in the decades ahead.

There are other ways of looking at the future of our universities than the three I have emphasized here. Much has been written, for example, about the effects of the Internet and of distance education on the future of the physical university. Much of this speculation seems to me to be overheated; more hype than hypothesis. No doubt universities will change in order to adapt to new technologies, as they have changed in the past, but it seems to me unlikely that a virtual Harvard will replace the real thing, however devoutly its competitors might wish it so. The future of U.S. universities, the payoff that makes them worth their enormous cost, will continue to be determined by the extent to which they are faithful to the values that have always lain at their core. At the moment, and in the years immediately ahead, those values will be most severely tested by the three matters most urgently on today’s agenda.

Support Them and They Will Come

On May 6, 1973, the National Academy of Engineering convened an historic conference in Washington, D.C., to address a national issue of crisis proportions. The Symposium on Increasing Minority Participation in Engineering attracted prominent leaders from all sectors of the R&D enterprise. Former Vice President Hubert H. Humphrey in his opening address to the group underscored the severity of the problem, “Of 1.1 million engineers in 1971, 98 percent were white males.” African Americans, Puerto Ricans, Mexican-Americans, and American Indians made up scarcely one percent. Other minorities and women made up the remaining one percent.

Symposium deliberations led to the creation of the National Action Council for Minorities in Engineering (NACME), Inc. Its mission was to lead a national initiative aimed at increasing minority participation in engineering. Corporate, government, academic, and civil rights leaders were eager to lend their enthusiastic support. In the ensuing quarter century, NACME invested more than $100 million in its mission, spawned more than 40 independent precollege programs, pioneered and funded the development of minority engineering outreach and support functions at universities across the country, and inspired major policy initiatives in both the public and private sectors. Having built the largest private scholarship fund for minority students pursuing engineering degrees, NACME has supported 10 percent of all minority engineering graduates from 1980 to the present.

By some measures, progress has been no less than astounding. The annual number of minority B.S. graduates in engineering grew by an order of magnitude, from several hundred at the beginning of the 1970s to 6,446 in 1998. By other measures, though, we have fallen far short of the mark. Underrepresented minorities today make up about a quarter of the nation’s total work force, 30 percent of the college-age population, and a third of all births, but less than 6 percent of employed engineers, only 3 percent of the doctorates awarded annually, and just 10 percent of the bachelor’s degrees earned in engineering. Even more disturbing, in the face of rapidly growing demand for engineers over the past several years, freshman enrollment of minorities has been declining precipitously. Of particular concern is the devastating 17 percent drop in freshman enrollment of African Americans from 1992 to 1997. Advanced degree programs also have declining minority enrollments. First-year graduate enrollment in engineering dropped a staggering 21.8 percent for African Americans and 19.3 percent for Latinos in a single year, between 1996 and 1997. In short, not only has progress come to an abrupt end, but the gains achieved over the past 25 years are in jeopardy.

Why we failed

One reason why the progress has been slower than hoped is that financial resources never met expectations. After the 1973 symposium, the Alfred P. Sloan Foundation commissioned the Task Force on Minority Participation in Engineering to develop a plan and budget for achieving parity (representation equal to the percentage of minorities in the population cohort) in engineering enrollment by 1987. The task force called for a minimum of $36.1 million (1987 dollars) a year, but actual funding came to about 40 percent of that. And as it happened, minorities achieved about 40 percent of parity in freshman enrollment.

Leaping forward to the present, minority freshman enrollment in the 1997-98 academic year had reached 52 percent of parity. Again, the disappointing statistics and receding milestones should not come as a surprise. In recent years, corporate support for education, especially higher education, has declined. Commitments to minority engineering programs have dwindled. Newer companies entering the Fortune 500 list have not yet embraced the issue of minority underrepresentation. Indeed, although individual entrepreneurs in the thriving computer and information technology industry have become generous contributors to charity, the new advanced-technology corporate sector has not yet taken on the mantle of philanthropy or the commitment to equity that were both deeply ingrained in the culture of the older U.S. companies it displaced.

The failure to attract freshman engineering majors is compounded by the fact that only 36 percent of these freshmen eventually receive engineering degrees, and a disproportionately small percentage of those go on to earn doctorates. This might have been anticipated. Along with the influx of significant numbers of minority students came the full range of issues that plague disenfranchised groups: enormous financial need that has never been adequately met; poor K-12 schools; a hostile engineering school environment; ethnic isolation and a consequent lack of peer alliances; social and cultural segregation; prejudices that run the gamut from overt to subtle to subconscious; and deficient relationships with faculty members, resulting in the absence of good academic mentors. These factors drove minority attrition to twice the nonminority rate.

It should be obvious that the fastest and most economical way to increase the number of minority engineers is to make it possible for a higher percentage of those freshman engineering students to earn their degrees. And that’s exactly what we have begun to do. Over the past seven years, NACME developed a major program to identify new talent and expand the pipeline, while providing a support infrastructure that ensures the success of selected students. In the Engineering Vanguard Program, we select inner-city high-school students–many with nonstandard academic backgrounds–using a nontraditional, rigorous assessment process developed at NACME. Through a series of performance-based evaluations, we examine a set of student attributes that are highly correlated with success in engineering, including creativity, problem-solving skill, motivation, and commitment.

Because the inner-city high schools targeted by the program, on average, have deficient mathematics and science curricula, few certified teachers, and poor resources, NACME requires selected students to complete an intense academic preparation program, after which they receive scholarships to engineering college. Although many of these students do not meet standard admissions criteria for the institutions they attend, they have done exceedingly well. Students with combined SAT scores 600 points below the average of their peers are graduating with honors from top-tier engineering schools. Attrition has been virtually nonexistent (about 2 percent over the past six years). Given the profile of de facto segregated high schools in predominantly minority communities (and the vast majority of minority students attend such schools), Vanguard-like academic preparation will be essential if we’re going to significantly increase enrollment and, at the same time, ensure high retention rates in engineering.

Using the model, we at NACME believe that it is possible to implement a program that, by raising the retention rate to 80 percent, could within six years result in minority parity in engineering B.S. degrees. That is, we could raise the number of minority graduates from its current annual level of 6,500 to 24,000. Based on our extensive experience with supporting minority engineering students and with the Vanguard program, we estimate that the cost of this effort will be $370 million. That’s a big number–just over one percent of the U.S. Department of Education budget and more than 10 percent of the National Science Foundation budget. However, a simple cost-benefit analysis suggests that it’s a very small price for our society to pay. The investment would add almost 50,000 new engineering students to the nation’s total engineering enrollment and produce about 17,500 new engineering graduates annually, serving a critical and growing work force need. This would reduce, though certainly not eliminate, our reliance on immigrants trained as engineers.
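
The arithmetic behind these figures can be reconstructed roughly as follows; the retention rates and graduate counts come from the article, while the intermediate steps are my own inference.

    # Rough reconstruction of the cited figures; intermediate steps are inferred.
    current_grads = 6_500        # current annual minority B.S. engineering graduates
    current_retention = 0.36     # share of minority engineering freshmen who finish
    parity_grads = 24_000        # annual graduates needed for parity
    target_retention = 0.80      # retention rate the program aims for

    current_freshmen = current_grads / current_retention         # ~18,000 per year
    needed_freshmen = parity_grads / target_retention            # 30,000 per year
                                                                 # (5% of 600,000 seniors)
    added_enrollment = (needed_freshmen - current_freshmen) * 4  # ~48,000 over a 4-year pipeline
    new_grads_per_year = parity_grads - current_grads            # 17,500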

Crudely benchmarking the $367.5 million cost, it’s equivalent to the budget of a typical, moderate-sized polytechnic university with an undergraduate enrollment of less than 10,000. Many universities have budgets that exceed a billion dollars, and none of them produce 17,000 graduates annually. The cost, too, is modest when contrasted with the cost of not solving the underrepresentation problem. For example, Joint Ventures, a Silicon Valley research group, estimates that the work force shortage imposes incremental costs of $3 billion to $4 billion annually on Silicon Valley high-technology companies because of side effects such as productivity losses, higher turnover rates, and premium salaries. At the same time, minorities, who make up almost half of California’s college-age population, constitute less than 8 percent of the professional employees in Silicon Valley companies. Adding the social costs of an undereducated, underutilized talent pool to the costs associated with the labor shortage, it’s clear that investment in producing more engineers from underrepresented populations would pay enormous dividends.

Given the role of engineering and technological innovation in today’s economy and given the demographic fact that “minorities” will soon make up a majority of the U.S. population, the urgency today is arguably even greater than it was in 1973. The barriers are higher. The challenges are more exacting. The threats are more ominous. At the same time, we have a considerably more powerful knowledge base. We know that engineering is the most effective path to upward mobility, with multigenerational implications. We know what it takes to solve the problem. We have a stronger infrastructure of support for minority students. We know that the necessary investment yields an enormous return. We know, too, that if we fail to make the investment, there will be a huge price to pay in dollars and in lost human capital. The U.S. economy will not operate at its full potential. Our technological competitiveness will be challenged. Income gaps among ethnic groups will continue to widen.

We should also remember that this is not simply about social justice for minorities. The United States needs engineers. Many other nations are increasing their supply of engineers at a faster rate. In recent years, the United States has been able to meet the demand for technically trained workers only by allowing more immigration. That strategy may no longer be tenable in a world where the demand for engineers is growing in many countries. Besides, it’s not necessary.

In the coming fall, 600,000 minority students will be entering their senior year in high school in the United States. We need to enroll only 5 percent of them in engineering in order to achieve the goal of enrollment parity. If we invest appropriately in academic programs and the necessary support infrastructure, we can achieve graduation parity as well. If we grasp just how important it is for us to accomplish this task, if we develop the collective will to do it, we can do it. Enthusiasm and rhetoric, however, cannot solve the problem as long as the effort to deliver a solution remains substantially underfunded. Borrowing from the vernacular, we’ve been there and done that.

Winter 2000 Update

Independent drug evaluation

Innovation in public policy requires patience. Five years have passed since Raymond L. Woosley, chairman of the Department of Pharmacology at Georgetown University, made the case for the need for independent evaluation of pharmaceuticals (“A Prescription for Better Prescriptions,” Issues, Spring 1994). On December 10, 1999, the Wall Street Journal reported that Woosley’s recommendations were finally becoming policy.

Woosley’s article expressed concern that the primary source of information about the effectiveness of pharmaceuticals comes from research funded by the pharmaceutical companies. He observed that the companies have no incentive to support research that might undermine sales of their products and no incentive to publish research that does not benefit their bottom line. Further, he noted that the $11 billion that the drug companies were spending annually on marketing their products was more than the $10 billion they spent on drug development. Woosley worried that for too many physicians the primary sources of information about pharmaceuticals were advertising and the sales pitches of company representatives. He reported on research indicating that physicians often do not prescribe the best drug or the correct dosage. Yet the Food and Drug Administration (FDA) has little power to influence drug selection once a product has been approved. Woosley recounted several unsuccessful efforts to discourage the use of drugs found to have undesirable side effects.

Seeing a need for independent pharmaceutical research as well as objective and balanced information about drugs on the market, Woosley recommended the creation of 15 federally funded regional centers for education and research in therapeutics (CERTS) with a combined annual budget of $75 million. The CERTS would conduct research on the relative effectiveness of therapies, study the mechanisms by which drugs produce their effects, develop new methods to test generic drugs, evaluate new clinical applications for generic drugs, determine dosage and safety guidelines for special populations such as children and the elderly, and assess the cost effectiveness of various drugs within specific populations. In addition, the CERTS would play a role in educating physicians and in monitoring drug safety.

The article generated little action at first (except for some strong criticism from the pharmaceutical industry in the Forum section of the next Issues), but Woosley’s continuing efforts eventually earned the support of Sen. Bill Frist (R-Tenn.), and in 1997 Congress passed legislation to create CERTS. The initial plan, funded at $2.5 million, is much smaller than what Woosley thinks is needed, but it’s a beginning. Under the direction of the Agency for Healthcare Research and Quality, CERTS have been established at Duke University, the University of North Carolina at Chapel Hill, Vanderbilt University, and Georgetown University. In addition, the National Institutes of Health (NIH) has begun its own effort to study drug effectiveness, including a five-year, $42.1 million effort to evaluate the effectiveness of some new antipsychotic drugs.

Although Woosley is pleased to see that his proposal has finally generated some action, he is disappointed with the level of funding. Noting that independent authorities such as the General Accounting Office, the Journal of the American Medical Association, and the Institute of Medicine have supported his view that research could significantly improve the effectiveness of pharmaceutical use, he believes that the case for much more funding is strong. Yet, Congress turned down an FDA request for $15 million to support this work. He recommends collaborative efforts that would also involve NIH and the Centers for Disease Control. He sees value in the NIH initiative on antipsychotic drugs, but explains that its goal is limited to comparison of the effectiveness of various chemicals. The CERTS goal is to go beyond comparisons to improving drug use, an achievement that would benefit patients, physicians, and the pharmaceutical industry.

Creating Havens for Marine Life

The United States is the world’s best-endowed maritime nation, its seas unparalleled in richness and biological diversity. The waters along its 150,000 kilometers of shoreline encompass virtually every type of marine habitat known and a profusion of marine species–some of great commercial value, others not. It is paradoxical, then, that the United States has done virtually nothing to conserve this great natural resource or to actively stem the decline of the oceans’ health.

As a result, the U.S. national marine heritage is gravely threatened. The damage goes on largely unnoticed because it takes place beneath the deceptively unchanging blanket of the ocean’s surface. The marine environment is rapidly undergoing change at the hands of humans, revealing the notion of vast and limitless oceans as folly. Human degradation takes many forms and results from many activities, such as overfishing, filling of wetlands, coastal deforestation, the runoff of land-based fertilizers, and the discharge of pollution and sediment from rivers, almost all of which goes on unchecked. Out of sight, out of mind.

The signs of trouble are everywhere. The formerly rich and commercially critical fish stocks of Georges Bank in the Northeast have collapsed, gutting the economy and the very nature of communities along New England’s shores. In Long Island Sound, Narragansett Bay, the Chesapeake, and throughout the inlets of North Carolina, toxic blooms of algae disrupt the food chain and affect human health. In Florida, the third largest barrier reef in the world is suffering from coral bleaching, coral diseases, and algal overgrowth. Just inland, fixing the ecological damage to the great Everglades is expected to cost billions of dollars. Conditions are even worse in the Gulf of Mexico, where riverborne runoff has created a “dead zone” of lifeless water that covers thousands of square miles and is expanding fast.

In California, rampant overfishing has depleted stocks of abalone and other organisms of the kelp forests, spelling potential doom for the beloved sea otter in the process. At the same time, the state’s golden beaches–its most valued symbol–are more and more frequently closed to swimming because bacteria levels exceed health standards. Along the Northwest coast, several runs of salmon have been placed on the endangered species list, creating huge protection costs for the states that contain their native rivers. And in Alaska, global climate change, the accumulation of toxins such as PCBs and DDT, and radical shifts in the food web in response to stock collapses and fisheries technologies have caused dramatic declines in seabird, Steller’s sea lion, and otter populations. All this is taking place in the world’s wealthiest and most highly advanced nation, which prides itself on its commitment to the environment.

Worse still, scientists now consider these ominous signs mere droplets of water that presage the bursting of a dam. Yet the nation remains stuck in the reactive mode. Unable to anticipate where the next trouble spot will be and unwilling to invest in measures such as creating protected areas, the United States is far from being the world leader in coastal conservation that it claims to be.

Marine protected areas are urgently needed to stem the tide of marine biodiversity loss. They can protect key habitats and boost fisheries production inside and outside the reserves. They also can provide model or test areas for integrating the management of coastal and marine resources across various jurisdictions and for furthering scientific understanding of how marine systems function and how to aid them.

To date, the nation has designated only 12 National Oceanic and Atmospheric Administration (NOAA) marine sanctuaries in federal waters (from 3 to 200 miles out). Together, they cover far less than 1 percent of U.S. waters–simply too small an area to promote conservation of marine ecosystems. Furthermore, less than 0.1 percent of this area is actually designated as no-take reserve or closed area. Most of the sanctuaries cater to commercial and recreational needs and have no teeth whatsoever for providing the necessary controls on damage. Even the newest sanctuaries have no way of addressing degradation due to runoff from land-based activities. Similar situations exist for the smattering of no-take areas designated by states within their jurisdiction (from shore to three miles out). California is typical: its no-take zones make up only 0.2 percent of state waters.

Coastal and marine protected areas come in many types, shapes, and sizes. Around the world, they encompass everything from small “marine reserves” established to protect a threatened species, unique habitat, or site of cultural interest to vast multiple-use areas that have a range of conservation, economic, and social objectives. “Harvest refugia” or “no-take zones” are small areas closed to fisheries extraction, designed to protect a particular stock or suite of species (usually fish or shellfish) from overexploitation. “Biosphere reserves” are multiple-use zones with core and buffer areas that exist within the United Nations Educational, Scientific, and Cultural Organization’s (UNESCO’s) network of protected areas. Then there are “marine sanctuaries,” which one might think are quiet wilderness areas left to nature. But in the United States, just the opposite is true. The 12 sanctuaries are places bustling with sightseers, fishermen, divers, boaters, and entrepreneurs hawking souvenirs. Elsewhere in the world, the term means a closed area.

So we are left with a useful mix of options but a confusing array of terminology. The term “marine protected area,” though admittedly not very sexy, is the only one that encompasses the full range of intentions and designs.

Although the United States has not established truly effective marine protected areas, the time is right for surging ahead with a new system. There is growing public awareness of national ineptitude in dealing with marine environmental issues. A solid body of data has been amassed that suggests that marine protected areas are truly effective in meeting many important conservation goals. Momentum is growing to take what has been learned from conservation on land and apply it to the seas. Furthermore, sectors of society that might not have supported protected areas in the past now seem ready to do so, as is the case with Northeast fishermen who historically resisted regulations but now demand better conservation as their industry collapses.

The United States is at a crossroads. It can choose to ignore the declining health and productivity of the oceans, or it can use marine protected areas to conserve what is healthy and bring back some of what has been lost. These areas are needed on three fronts: to manage marine resources and prevent overfishing, to conserve the variety of coastal and marine habitats, and to demonstrate how to integrate the management of activities on land, around rivers, in the sea, and across state and federal jurisdictions. In considering how to meet each of these purposes, it would be wise to heed lessons from marine protected areas established in other parts of the world.

Limiting overexploitation

Decisionmakers and the public are increasingly aware that fisheries commonly deplete resources beyond levels that can be sustained. Over two-thirds of the world’s commercially fished stocks are overfished or at their sustainable limits, according to Food and Agriculture Organization statistics. The examples in U.S. waters have become well known: cod off New England, groupers in the Gulf of Mexico, abalone along the California coast, and so on. Overfishing affects not only the stock itself but also communities of organisms, ecological processes, and even entire ecosystems that are critical to the oceans’ overall health.

The continuing drive to exploit marine resources stems from an increasing reliance on protein from the sea to feed burgeoning human populations, livestock, and aquaculture operations. Factory ships cause clear damage, but extensive small-scale fishing can also be devastating to marine populations. In light of what seems to be serial mismanagement of commercial fisheries, the United States must take several measures. The first is to acquire better information on the true ecosystemwide effects of fisheries activity. Second is to shift the way evidence of impact is gathered, so that the burden of proof and the resources spent on trying to establish that proof are not solely the responsibility of conservationists. Third is to make greater use of marine protected areas and fisheries reserves to strengthen current management and provide control sites for further scientific understanding of new management techniques.

The 12 NOAA marine sanctuaries cover far less than 1 percent of U.S. coastal waters.

The marine fisheries crisis stems not just from the amount of stock removed but also from how it is removed. Fishing methods commonly used to catch commercially valuable species also kill other species that do not carry a good price tag. This “bycatch” can constitute a higher percentage of the catch than the targeted fish–in some cases, nearly 30 times more by weight. Most of the bycatch is accidentally killed or intentionally destroyed, and many of the species are endangered. For example, surface longline fishing kills thousands of seabirds annually; midwater longlining has been implicated in the dramatic population decline of the leatherback turtle. Habitat alteration can be an even greater problem. For example, bottom trawling kills the plants and animals that live on the sea floor and interrupts key ecological processes. Clearly, controls on the quantity of catch do not slow the habitat destruction that results from how we fish. Marine protected areas would reduce overfishing while also staving off habitat destruction.

Protected areas can also boost the recovery of depleted stocks. Scientific studies of no-take reserves in East Africa, Australia, Jamaica, the Lesser Antilles, New Zealand, the Philippines, and elsewhere all suggest that small, strictly protected no-take areas increase fish production inside their boundaries. Preliminary evidence from a 1997 fishing ban imposed in 23 small coral reef reserves by the Florida Keys Marine Sanctuary indicates that several important species, including spiny lobsters and groupers, are already beginning to rebound. Protected areas can even increase production outside the reserve by providing safe havens for regional fish in various life stages, notably increasing the survivorship of juvenile fish. Fears that no-take areas merely attract fish and thus give a false impression of increased productivity have been put to rest.

The results of these studies have sparked excitement in the fisheries management community. Garry Russ of James Cook University and Angel Alcala of the Philippines’ Department of Environment and Natural Resources have shown that a small protected area near Apo Island in the Philippines increased fish yields well outside its boundaries less than a decade after its establishment. Recent scientific papers, including fisheries reviews from the Universities of East Anglia, Wales, York, and Newcastle upon Tyne in Britain, document the success of marine protected areas in helping manage fisheries, including Kenyan refuges, closed areas and coral reef reserves throughout the Caribbean, New Zealand fishery reserves, several Mediterranean reserves, invertebrate reserves in Chile, Red Sea reserves, and fisheries zones in Florida. The ideal arrangement seems to be the establishment of closed areas within larger, multiple-use protected areas such as a coastal biosphere reserve or marine sanctuary. However, as studies in Jamaica have shown, if the larger area is badly overused or degraded, the closed areas within it cannot survive.

Reducing degradation

There are myriad ways beyond fishing by which we alter marine ecosystems. Perhaps the most ubiquitous and insidious is the conversion of coastal habitat: the filling in of wetlands, urbanization of the coastline, transformation of natural harbors into ports, and siting of industrial centers on coastal land. Such development eliminates or pollutes the ocean’s ecologically most important areas: estuaries and wetlands that serve as natural nurseries, feeding areas, and buffers for maintaining balance between salt and fresh water. A recent and alarming trend has been the conversion of such critical habitats for aquaculture operations, in which overall biodiversity is undermined to maximize production of a single species.

We degrade marine ecosystems indirectly as well. Land-based sources of fertilizers, pesticides, sewage, heavy metals, hydrocarbons, and debris enter watersheds and eventually find their way to coastal waters. This causes imbalances such as eutrophication, the nutrient over-enrichment that spurs algal blooms and, as those blooms decay, depletes the water of oxygen and kills fish. Eutrophication is prevalent the world over and is considered by many coastal ecologists to be the most serious threat to marine ecosystems. The problem is now notorious in the Chesapeake Bay, North Carolina’s Pamlico Sound, Santa Monica Bay, and other areas along the U.S. coast. Vast dead zones, the ultimate choking of life, are growing steadily larger in areas such as the Gulf of Mexico.

Toxins also exact a heavy toll on wildlife and ecosystems, and the persistent nature of these chemicals means recovery is often slow and sometimes incomplete. Diversion of freshwater from estuaries raises their salinity, rendering them unsuitable as habitat for the young of many marine species.

What is the resounding message from this complex suite of threats? We have to deal with all the sources of degradation simultaneously. Trying to regulate each source individually is too complicated, politically tenuous, and ultimately ineffective. Designating marine protected areas is the only comprehensive way to do it. Protected areas help mitigate degradation simply because they define a region on the receiving end of the threats. The reality is that it is possible to build sufficient public and political support to clean up sources of degradation only if a well-defined ocean area has been marked and shown to be suffering. People need a geographic zone they can relate to–a sense of place. Experience around the world shows that once an area is marked, people become focused and find the motivation to clean it up. These areas would become the starting points for finding solutions that could be applied to larger areas.

Occasionally, marine protected areas established to protect the critical habitats of a single highly endangered species can play a similar role. Such “umbrella” species can serve as the conservation hook for a comprehensive system that protects all life in the target waters. This is happening in newly established Leatherback Conservation Zones off the southeastern United States, designated to protect portions of leatherback sea turtle habitat. The hundreds of species that live on the sea floor and in the vertical column of water in these zones receive de facto protection. Similarly, scientists of the Chesapeake Research Consortium recently recommended that 10 percent of the Chesapeake Bay’s historic oyster habitat be protected in permanent reef sanctuaries. If that action is taken, other species in these areas would be protected as well.

Dealing with multiple threats and economic sectors is the business of coastal zone management. The United States prides itself on its coastal management, and in line with the new federalism, each of the 28 coastal states and territories has significant authority and funds to deal with all these issues. Yet there is little focus on integrating coastal management between federal and state jurisdictions, as well as between water and land jurisdictions. For example, state coastal management agencies rarely have any mandate to control fisheries within their three-mile jurisdictions and have virtually no ability to influence land use in the watershed along the coastline.

Marine protected areas would serve as control sites where scientific research, experimentation, and tests of management techniques could take place. Without such rigorous trials, management techniques will never become usefully adaptable. Lacking hard science, our attempts at flexible techniques are no more than hedged bets.

Testing the world’s waters

There are many good examples of marine protected areas that have successfully prevented overexploitation, mitigated habitat degradation, and served as models for integrated management. One frequently cited example is Australia’s Great Barrier Reef Marine Park, a vast multiple-use area encompassing the world’s largest barrier reef system. It is the first large marine protected area to succeed in accommodating various user groups by designating different internal zones for different uses, such as sponge fishing, oil exploration, diving, and recreational fishing. And indeed, the act of designating the region as a marine park has elevated its perceived value, drawing more public and political attention to protecting it.

The United States can learn from mistakes made there, too. Chief among them is that the boundary for the protected area stops at the shoreline, preventing the Park Authority from influencing land use in the watersheds that drain into the park’s waters. The consequence is that the reef is now experiencing die-back as sediments and land-based pollution stress the system.

Protected areas can boost marine life populations not only within reserves but outside them as well.

Guinea-Bissau’s Bijagos Archipelago Biosphere Reserve in West Africa, by contrast, includes some control over adjacent land use. The reserve covers some 80 islands, the coastal areas in between, some offshore areas, and portions of the mainland, including major river deltas. As is true for all biosphere reserves designated by UNESCO, there are no-take areas delineated within core zones and areas of regulated activity in surrounding buffer zones. And now, as the national government creates a countrywide economic development plan, it is using the reserve map to help determine where to site factories and other potentially damaging industries, as well as which attractive areas within the reserve could promote ecotourism. Designation of the Bijagos Reserve is prompting the government of Guinea-Bissau to protect its national treasure while giving other West African governments an incentive to protect what is an important base for the marine life of the entire region.

The emerging efforts of coastal nations to protect marine resources and the livelihoods of people who depend on them have largely relied on top-down controls, in which government ministries take jurisdictional responsibility to plan and implement reserves. Such is the case in Europe, where there has been a great proliferation of marine protected areas in the last decade. France has established five fully operational marine reserves. Spain has decreed 21. Italy has established 16, of which 3 are fully functional, with another 7 proposed. Greece has one Marine National Park and plans to implement another, and Albania, Bosnia, and Croatia all have reserves.

Although each of these countries uses different criteria for site selection, all have acted to establish and enforce protected areas in relatively pristine coastal and insular regions. Each country has decided that its waters are vital to its national interests and has systematically analyzed them to identify the most sensitive areas. The United States has not even taken a systematic look at its waters, much less protected them.

In contrast to government-led efforts, some African marine protected areas and newly established community reserves in the Philippines and Indonesia are being driven by local communities and fishers’ groups. These bottom-up initiatives result from local conservation efforts that are then legitimized by government. An exciting example is the new community-based marine protected area in Blongko, North Sulawesi, Indonesia, the country’s first locally managed marine park.

The attempts of communities, local governments, and nations to use marine protected areas offer the United States valuable lessons. By not taking the time to assess the experiences of others and by pretending to have all the answers, the United States has fallen far behind. Its unwillingness to make sacrifices today that would conserve the ocean environment of tomorrow makes it hypocritical for the United States to preach that other nations should make sacrifices of their own.

Systematic approach needed

To protect whole ecosystems or to promote conservation most effectively, networks of reserves may work better than single large protected areas. A large part of the damage to marine systems stems from the degradation or loss of critical areas that are linked in various ways. For example, many species in Australia’s Great Barrier Reef spawn in a section of the reef near Brisbane, but their recruits (larvae) travel with ocean currents and settle some 200 kilometers away. If the entire region could not be designated as a marine protected area, it would be far more valuable to protect these two spots than random sections of the reef. In this way, a network of the most critical areas could protect an environment and might be more politically tenable than a single large zone.

This patchwork pattern of life is seen around the world. Mangrove forests along Gulf of Mexico shores provide nutrients and nursery areas for offshore reefs that are tens of kilometers away. Seed reefs have recently been shown to provide recruits to mature reef systems hundreds of kilometers away. Recognizing this connectivity, scientists have begun to explore how extensive systems of small, discrete marine reserves can effectively combat biodiversity loss.

Networks of marine protected areas can achieve several of the major goals of marine protection, including preserving wilderness areas, resolving conflicts among users, and restoring degraded or overexploited areas. Networks are a very new idea, and none have been formally designated, but several promising plans are under way. Parks Canada is currently designing a system of Marine National Conservation Areas to represent each of the 29 distinct ecoregions of Canada’s Atlantic, Great Lakes, Pacific, and Arctic coasts. The long-term goal is to establish protected wilderness areas covering the habitat types within each region. Australia’s federal government is developing a strategy for a National Representative System that would set aside portions of its many different habitats.

Networks would greatly aid conflict resolution among user groups or jurisdictional agencies, which is a problem in virtually all the world’s coastal and near-shore areas. Shipping and mineral extraction, for instance, conflict with recreation. Commercial and subsistence fishing conflict with skin- and scuba-diving and ecotourism. Designating a network of smaller protected areas can amount to zoning for different uses, which is much easier than trying to overlay regulations on one continuous reserve. The network can also provide each group of local communities, decisionmakers, and other stakeholders with their own defined arena in which to promote effective management, giving each group a sense of place and a focused goal.

By designating a larger number of smaller protected areas, networks also provide manageable starting points for efforts to reverse degradation or overexploitation. Because each area is smaller and does not have to address every goal at once (recreation, overfishing, and pollution runoff, for example), it can be up and running faster, speeding restoration. These starting points could then form the basis for more comprehensive management later. This is the underlying philosophy behind the effort of a group of scientists who have recently developed a systematic plan for marine protected areas in the Gulf of Maine. The group has mapped out some three dozen regions of ocean floor as the most important ones to protect from trawling and dredging. It is hoped that this baseline will serve as the foundation for future marine protected area designations in the region.

The time to commit is now

U.S. coastal areas are being spoiled, fisheries are in trouble, and the once-great wealth of natural capital is rapidly being spent. Yet the U.S. government has made no commitment to a systematic approach to protecting the marine environment. With recent media attention on marine issues and increased advocacy and lobbying for reform, one might think that the government is ready to assume leadership in marine conservation. But there is no concrete evidence that this is so. Ironically, campaigning by environmental groups may be contributing to a hesitancy to consider marine protected areas. Many conservation groups have invested a lot of time and energy in trying to convince consumers to dampen their demand for overexploited species. Campaigns to boycott certain fish, such as the Save the Swordfish campaign, are useful in putting a face (even if it is a fish face) on the issue of overexploitation, but they can also lure the public and decisionmakers into a dangerous complacency, believing that sacrificing their occasional swordfish meal will be enough.

Conservationists are not advocating fencing off the oceans and prohibiting use. The solution is to modify the way we manage marine resources and to use public awareness to build the political will to take responsibility for doing so. If we can couple consumer awareness and purchasing power with strong marine management, we could indeed alleviate many pressures on marine systems and allow their recovery.

Critical to this effort would be a real willingness among government agencies and decisionmakers to use marine reserves to protect areas needed for fish spawning, feeding, and migration, along with other ecologically critical sites, and to enter into enforceable international agreements to protect shared resources. This means not only talking about essential fish habitat, as has been done in the reauthorization of the U.S. Magnuson-Stevens Fishery Conservation and Management Act, but actually biting the bullet and setting aside strictly enforced marine protected areas that include no-take zones. If it succeeds, the United States could finally set an example for the world.

Thus far, the United States has not even cataloged its coastal or offshore resources and habitats. This should be done immediately. As this is done, the government should designate marine protected areas systematically and look for networks of individual reserves that act to conserve the whole.

Designating an area as a marine park can elevate its perceived value, creating more public support for protecting it.

In terms of implementation, a dual track should be employed. The first track is strengthening the federal commitment by making sure that federal agencies recognize their responsibility to adequately protect the oceans and their commons. This requires getting beyond the hype and fluff of the Marine Sanctuaries Program, whose mandate is really only to create recreation areas, and into the hard work of designating ecologically critical areas that are off limits to some or all kinds of activities, and then dedicating adequate resources to surveillance and enforcement of these areas.

The second track is strengthening states’ commitments to protecting the shore and coastline, where the greatest sites of damage and sources of threats lie. This includes linking management of coastal waters with management of land along the coastline. Ultimately, federal and state authorities should integrate their work to create a comprehensive strategy that begins on land and in rivers, crosses the shoreline, and extends out to the deep sea.

Meanwhile, policy should empower local communities and user groups to help conserve resources. Communities in the San Juan Islands of Washington state are already moving in this direction by establishing citizen-run, volunteer no-take zones. The nation should learn from this example and make it possible for other communities to follow in its footsteps. Protected areas that are co-managed bring oceans and marine life into view as crucial parts of the national heritage, helping to overcome the out-of-sight, out-of-mind dilemma.

Without decisionmakers taking better responsibility for marine conservation and protection of the oceans, marine biodiversity the world over will be permanently compromised. We have an obligation to be stewards. It is also in the U.S. national interest to protect the natural resources within its borders in order to become less dependent on other countries, avoid the huge recuperation costs of damaged areas, protect fishing and other ocean industries, and preserve a way of life along the shores.

Though the lack of political will to protect the sea can be discouraging, the half-empty glass is, as always, also half full. The United States is lucky that its history of tinkering with the oceans is thus far brief, and it hasn’t had the time yet to establish entrenched bureaucracies and rigid systems of rules. It now has an opportunity that must not be wasted. If there was ever a time to go forward with a well-planned and executed system of marine reserves, it is now. It may well be that the future of Earth’s oceans will rest firmly on the shoulders of the new generation of marine protected areas.

Reshaping National Forest Policy

During his two and a half years as chief of the U.S. Forest Service, Mike Dombeck has received considerable attention and praise from some unlikely sources. On June 15 this year, for instance, the American Sportfishing Association gave Dombeck its “Man of the Year” award. Two days earlier, the New York Times Magazine featured Dombeck as “the environmental man of the hour,” calling him “the most aggressive conservationist to head the Forest Service in at least half a century.”

Dombeck has also drawn plenty of criticism, especially from the timber industry and members of Congress who want more trees cut in the 192-million-acre National Forest System. Last year, angered by Dombeck’s conservation initiatives, four western Republicans who chair the Senate and House committees and subcommittees that oversee the Forest Service threatened to slash the agency’s budget. They wrote to Dombeck, “Since you seem bent on producing fewer and fewer results from the National Forests at rapidly increasing costs, many will press Congress to seriously consider the option to simply move to custodial management of our National Forests in order to stem the flow of unjustifiable investments. That will mean the Agency will have to operate with significantly reduced budgets and with far fewer employees.”

Based on his performance to date, Dombeck is clearly determined to change how the Forest Service operates. He has a vision of the future of the national forests that is fundamentally at odds with the long-standing utilitarian orientation of most of his predecessors. Dombeck wants the Forest Service to focus on protecting roadless areas, repairing damaged watersheds, improving recreation opportunities, identifying new wilderness areas, and restoring forest health through prescribed fire.

Although Dombeck’s conservation-oriented agenda seems to resonate well with the U.S. public, it remains to be seen how successful he will be in achieving his goals. To succeed, he must overcome inertial or hostile forces within the Forest Service and Congress, while continuing to build public support by taking advantage of opportunities to implement his conservation vision.

An historic shift

Dombeck’s policies and performance signify an historic transformation of the Forest Service and national forest management. Since the national forests were first established a century ago, they have been managed principally for utilitarian objectives. The first chief of the Forest Service, Gifford Pinchot, emphasized in a famous 1905 directive that “all the resources of the [national forests] are for use, and this use must be brought about in a prompt and businesslike manner.” After World War II, the Forest Service began in earnest to sell timber and build logging access roads. For the next 40 years, the national forests were systematically logged at a rate of about 1 million acres per year. The Forest Service’s annual timber output of 11 billion board feet in the late 1980s represented 12 percent of the United States’ total harvest. By the early 1990s, there were 370,000 miles of roads in the national forests.

During the postwar timber-production era of the Forest Service, concerns about the environmental impacts of logging and road building on the national forests steadily increased. During the 1970s and 1980s, Forest Service biologists such as Jerry Franklin and Jack Ward Thomas became alarmed at the loss of biological diversity and wildlife habitat resulting from logging old-growth forests. Aquatic scientists from federal and state agencies and the American Fisheries Society presented evidence of serious damage to streams and fish habitats caused by logging roads. At the same time, environmental organizations stepped up their efforts to reform national forest policy by lobbying Congress to reduce appropriations for timber sales and roads, criticizing the Forest Service in the press, and filing lawsuits and petitions to protect endangered species.

The confluence of science and environmental advocacy proved to be the downfall of the Forest Service’s timber-oriented policy. Change came first and most dramatically in the Pacific Northwest, when federal judge William Dwyer in 1989 and again in 1991 halted logging of old-growth forests in order to prevent extinction of the northern spotted owl. In 1993, President Clinton held a Forest Conference in Portland, Oregon, and directed a team of scientists, including Franklin and Thomas, to develop a “scientifically sound, ecologically credible, and legally responsible” plan to end the stalemate over the owl. A year later, the Clinton administration adopted the scientists’ Northwest Forest Plan, which established a system of old-growth reserves and greatly expanded stream buffers. Similar court challenges, scientific studies, and management plans occurred in other regions during the early 1990s.

The uproar over the spotted owl and the collapse of the Northwest timber program caused the Forest Service to modify its traditional multiple-use policy. In 1992, Chief Dale Robertson announced that the agency was adopting “ecosystem management” as its operating philosophy, emphasizing the value of all forest resources and the need to take an ecological approach to land management. The appointment of biologist Jack Ward Thomas as chief in 1994–the first time the Forest Service had ever been headed by anyone other than a forester or road engineer–presaged further changes in the Forest Service.

Meanwhile, Congress was unable to agree on legislative remedies to the Forest Service’s problems. The only significant national forest legislation enacted during this period of turmoil was the temporary “salvage rider” in 1995. That law directed the Forest Service to increase salvage logging of dead or diseased trees in the national forests and exempted salvage sales from all environmental laws during a 16-month “emergency” period. Congress also compelled the agency to complete timber sales in the Northwest that had been suspended or canceled due to endangered species conflicts.

The salvage rider threw gasoline on the flames of controversy over national forest management. Chief Thomas’s efforts to achieve positive science-based change were largely sidetracked by the thankless task of attempting to comply with the salvage rider. Thomas resigned in frustration in 1996, warning that the Forest Service’s survival was threatened by “demonization and politicization.”

Fish expert with a land ethic

Dombeck took over as chief less than a month after the salvage rider expired. With a Ph.D. in fisheries biology, Dombeck has brought a perspective and agenda to the Forest Service that are very different from those of past chiefs. He has made it clear that watershed protection and restoration, not timber production, will be the agency’s top priority.

What sets Dombeck apart as a visionary leader, though, is not his scientific expertise but his philosophical beliefs and his desire to put his beliefs into action. The land ethic of fellow Wisconsinite Aldo Leopold is at the root of Dombeck’s policies and motivations. He first read Leopold’s land conservation essays in A Sand County Almanac while attending graduate school. Dombeck now considers it to be “one of the most influential books about the relationship of people to their lands and waters,” and he often quotes from Leopold in his speeches and memoranda.

In his first appearance before Congress on February 25, 1997, Dombeck made it clear that he would be guided by the land ethic. The paramount goal of the Forest Service under his leadership would be “maintaining and restoring the health, diversity, and productivity of the land.” What really caught the attention of conservationists, though, were Dombeck’s remarks regarding management of “controversial” areas. Citing the recommendations of a forest health report commissioned by Oregon Governor John Kitzhaber, Dombeck stated, “Until we rebuild [public] trust and strengthen those relationships, it is simply common sense that we avoid riparian, old growth, and roadless areas.”

The damaging effects of roads

Roadless area management has long been a lightning rod of controversy in the national forests. Roadless areas cover approximately 50 to 60 million acres, or about 25 to 30 percent of all land in the national forests, and another 35 million acres are congressionally designated wilderness. The rest of the national forests contain some 380,000 miles of roads, mostly built to access timber to be cut for sale. During the 1990s, Congress became increasingly reluctant to fund additional road construction because of public opposition to subsidized logging of public lands. In the summer of 1997, the U.S. House of Representatives came within one vote of slashing the Forest Service’s road construction budget. Numerous Forest Service research studies shed new light on the ecological values of roadless areas and the damaging effects of roads on water quality, fish habitat, and biological diversity.

Watershed protection and restoration, not timber production, will be the agency’s top priority.

Still, many observers were shocked when in January 1998, barely a year after starting his job, Dombeck proposed a moratorium on new roads in most national forest roadless areas. The moratorium was to be an 18-month “time out” while the Forest Service developed a comprehensive plan to deal with its road system. Although the roads moratorium would not officially take effect until early 1999, the Forest Service soon halted work on several controversial sales of timber from roadless areas in Washington, Oregon, Idaho, and elsewhere. The moratorium catapulted Dombeck into the public spotlight, bringing editorial acclaim from New York to Los Angeles, along with harsh criticism in congressional oversight hearings.

The big question for Dombeck and the Clinton administration is what will happen once the roadless area moratorium expires in September 2000. There is substantial public and political support for permanent administrative protection of the roadless areas. Recent public opinion polls indicate that more than two-thirds of registered voters favor a long-term policy that protects roadless areas from road building and logging. In July 1999, 168 members of Congress signed a letter urging the administration to adopt such a policy.

One possible approach for Dombeck is to deal with the roadless areas through the agency’s overall road management strategy and local forest planning process. This may be the preferred tactic among Dombeck’s more conservative advisors, since it could leave considerable discretion and flexibility to agency managers to determine what level of protection is appropriate for particular roadless areas. However, it would leave the fate of the roadless areas very much in doubt, while ensuring continued controversy over the issue.

A better alternative is simply to establish a long-term policy that protects all national forest roadless areas from road building, logging, and other ecologically damaging activities. Under this scenario, the Forest Service would prepare a programmatic environmental impact statement for a nationwide roadless area management policy that would be adopted through federal regulation. This approach may engender more controversy in the short term, but it would provide much stronger protection for the roadless areas and resolve a major controversy in the national forests.

The roadless area issue gives Dombeck and the administration an historic opportunity to conserve 60 million acres of America’s finest public lands. Dombeck should follow up on his roadless area moratorium with a long-term protection policy for roadless areas.

Water comes first

Shortly after the roadless area moratorium announcement in early 1998, Dombeck laid out his broad goals and priorities for the national forests in A Natural Resource Agenda for the 21st Century. The agenda included four key areas: watershed health, sustainable forest management, forest roads, and recreation. Among the four, Dombeck made it clear that maintaining and restoring healthy watersheds was to be the agency’s first priority.

According to Dombeck, water is “the most valuable and least appreciated resource the National Forest System provides.” Indeed, more than 60 million people in 3,400 communities and 33 states obtain their drinking water from national forest lands. A University of California study of national forests in the Sierra Nevada mountains found that water was far more valuable than any other commodity resource. Dombeck’s view that watershed protection is the Forest Service’s most important duty is widely shared among the public. An opinion survey conducted by the University of Idaho in 1995 found that residents in the interior Pacific Northwest consider watershed protection to be the most important use of federal lands.

If the Forest Service does indeed give watersheds top billing in the coming years, that will be a major shift in the agency’s priorities. Although watershed protection was the main reason why national forests were originally established a century ago, it has played a minor role more recently. As Dombeck observed in a speech to the Outdoor Writers Association of America, “Over the past 50 years, the watershed purpose of the Forest Service has not been a co-equal partner with providing other resource uses such as timber production. In fact, watershed purposes were sometimes viewed as a ‘constraint’ to timber management.” Numerous scientific assessments have documented serious widespread impairment of watershed functions and aquatic habitats caused by the cumulative effects of logging, road building, grazing, mining, and other uses.

Forest Service watershed management should be guided by the principle of “protect the best and restore the rest.” Because roadless areas typically provide the ecological anchors for the healthiest watersheds, adopting a strong, long-term, roadless area policy is probably the single most important action the agency can take to protect high-quality watersheds. The next step will be to identify other relatively undisturbed watersheds with high ecological integrity to create the basis for a system of watershed conservation reserves.

Actively restoring the integrity of degraded watersheds throughout the national forests will likely be an expensive long-term undertaking. The essential starting point is to conduct interagency scientific assessments of multiple watersheds in order to determine causes of degradation, identify restoration needs, and prioritize potential restoration areas and activities. Effective restoration often will require the cooperation of other landowners in a watershed. Once a restoration plan is developed, the Forest Service will have to look to Congress, state governments, and others for funding.

The revision of forest plans could provide a good vehicle to achieve Dombeck’s watershed goals. Dombeck has repeatedly stated that watershed health and restoration will be the “overriding priorities” of all future forest plans. Current plans, which were adopted during the mid-1980s, generally give top billing to timber production and short shrift to watershed protection. This fall, the Forest Service expects to propose new regulations to guide the plan revisions. Dombeck should take advantage of this opportunity to ensure that the planning regulations fully reflect his policy direction and priorities regarding watersheds and that the new plans do more than just update the old timber-based plans.

Designating wilderness areas

In May 1999, Dombeck traveled to New Mexico to commemorate the 75th anniversary of the Gila Wilderness, which was established through the efforts of Aldo Leopold while he was a young assistant district forester in Albuquerque. Dombeck said that the Wilderness Act of 1964 was his “personal favorite. It has a soul, an essence of hope, a simplicity and sense of connection.” Dombeck pledged that “wilderness will now enjoy a higher profile in national office issues.”

Presently, there are 34.7 million acres of congressionally designated wilderness areas in the national forests, or 18 percent of the National Forest System. The Forest Service has recommended wilderness designation for another 6.1 million acres. Because of congressional and administrative inaction, very little national forest wilderness has been designated or recommended since the mid-1980s, but Dombeck wants to change that. “The responsibility of the Forest Service is to identify those areas that are suitable for wilderness designation. We must take this responsibility seriously. For those forests undergoing forest plan revisions, I’ll say this: our wilderness portfolio must embody a broader array of lands–from prairie to old growth.”

Dombeck should follow up his roadless area moratorium with a nationwide roadless area management policy.

To his credit, Dombeck has begun to follow through on his wilderness objectives. Internally, he has formed a wilderness advisory group of Forest Service staff from all regions to improve training, public awareness, and funding of wilderness management. He has also taken the initiative in convening an interagency wilderness policy council to develop a common vision and management approaches regarding wilderness.

A significant test of Dombeck’s sincerity regarding future wilderness will come in his decisions on pending administrative appeals of four revised forest plans in Colorado and South Dakota. The four national forests contain a total of 1,388,000 acres of roadless areas, of which conservationists support 806,000 acres for wilderness designation. However, the revised forest plans recommend wilderness for only 8,551 acres–less than one percent of the roadless areas. The chief can show his agency and the public that he is serious about expanding the wilderness system by remanding these forest plans and insisting that they include adequate consideration and recommendation of new wilderness areas.

Recreational uses

Dombeck sees a bright future for the national forests and local economies in satisfying Americans’ insatiable appetite for quality recreation experiences. National forests receive more recreational use than any other federal land system, including national parks. Recreation in the national forests has grown steadily from 560 million recreational visits in 1980 to 860 million by 1996. The Forest Service estimates that national forest recreation contributes $97.8 billion to the economy, compared to just $3.5 billion from timber.

However, Dombeck has cautioned that the Forest Service will not allow continued growth in recreational use to compromise the health of the land. In February this year, Dombeck explained the essence of the recreation strategy he wants the agency to pursue: “Most Americans value public lands for the sense of open space, wildness and naturalness they provide, clean air and water, and wildlife and fish. Other uses, whether they are ski developments, mountain biking trails, or off-road vehicles have a place in our multiple use framework. But that place is reached only after we ensure that such activities do not, and will not, impair the productive capacity of the land.”

Off-road vehicles (ORVs) are an especially serious problem that Dombeck needs to address. Conflicts between nonmotorized recreationists (hikers, horse riders, and cross-country skiers) and motorized users (motorcyclists and snowmobilers) have escalated in recent years. The development of three- and four-wheeled all-terrain vehicles, along with larger and more powerful snowmobiles, has allowed ORV users to expand their cross-country routes and to scale steeper slopes. Ecological consequences include disruption of remote habitat for elk, wolverines, wolves, and other solitude-loving species, as well as soil erosion and stream siltation. Yet the Forest Service has generally shied away from cracking down on destructive ORV use. Indeed, in 1990 the agency relaxed its rules to accommodate larger ORVs on trails.

One way for Dombeck to deal firmly with the ORV issue is to adopt a regulation that national forest lands will be closed to ORV use except on designated routes. ORVs should be permitted only where the Forest Service can demonstrate that ORV use will do no harm to the natural values, wildlife, ecosystem function, and quality of experience for other recreationists. The chief clearly has the authority to institute such a policy under executive orders on ORVs issued in the 1970s.

The need for institutional reform

Perhaps Dombeck’s biggest challenge is to reorient an agency whose traditions, organizational culture, and incentive system favor commercial exploitation of national forest resources. For most of the past 50 years, the Forest Service’s foremost priority and chief source of funding have been logging and road building. During the 1990s, the Forest Service has sold only one-third as much timber as it did in the 1980s and 1970s, while recreation use has steadily grown in numbers and value. Yet many of the agency’s 30,000 employees still view the national forests primarily as a warehouse of timber and other commodities.

The Forest Service urgently needs a strong leader who is able to inspire the staff and communicate a favorable image to the public. For the past decade, the Forest Service has been buffeted by demands for reform and reductions in budgets and personnel. The number of agency employees fell by 15 percent between 1993 and 1997, largely in response to the decline in timber sales. Yet the public’s expectations and the agency’s workload have grown in other areas such as recreation management, watershed analysis, and wildlife monitoring, creating serious problems of overwork and burnout. Consequently, even Forest Service staff who are philosophically supportive of Dombeck’s agenda worry about the potential for additional “unfunded mandates” from their leader. They are watching–some hopefully, others skeptically–to see if Dombeck can deliver the personnel and funding necessary to carry out his agenda.

Dombeck has shown that he is willing to make significant personnel changes to move out the old guard in the agency. In his first two years as chief, he replaced all six deputy chiefs and seven of the nine regional foresters. He has made a concerted effort to bring more women, ethnic minorities, and biologists into leadership roles. The Timber Management division has been renamed the Forest Ecosystems division. Now he needs to take the time to visit the national forests and meet with the rangers and specialists who are responsible for carrying out his agenda. Dombeck has been remarkably successful at communicating with the media and the public and gaining support from diverse interest groups. But he needs to do a better job of connecting with and inspiring his field staff.

Dombeck has also taken on the complex task of reforming the Forest Service’s timber-based system of incentives. During the agency’s big logging era, agency managers were rated principally on the basis of how successful they were in “getting out the cut”: the quantity of timber that was assigned annually to each region, national forest, and ranger district. On his first day as chief, Dombeck announced that every forest supervisor would have new performance measures for forest health, water quality, endangered species habitat, and other indicators of healthy ecosystems.

Far more daunting is the need to reform the agency’s financial incentives. A large chunk of the Forest Service’s annual budget is funded by a variety of trust funds and special accounts that rely exclusively on revenue from timber sales. Dombeck summed up the problem at a meeting of Forest Service officials in fall 1998: “For many years, the Forest Service operated under a basic formula. The more trees we harvested, the more revenue we could bring into the organization, and the more people we could hire . . . [W]e could afford to finance the bulk of the organization on the back of the timber program.”

Not surprisingly, the management activities that have primarily benefited from timber revenues are logging and other resource-utilization activities. An analysis of the Forest Service budget between 1980 and 1997 by Wilderness Society economist Carolyn Alkire shows that nearly half of the agency’s expenditures for resource-use activities have been funded through trust funds and special accounts. In contrast, virtually all funds for resource-protection activities, such as soil and wilderness management, have come from annual appropriations, which are subject to the vagaries of congressional priorities and whims.

Although clearly recognizing the problem of financial incentives, Dombeck has had little success in solving it thus far. He has proposed some administrative reforms, such as limiting the kinds of logging activities for which the salvage timber sale trust fund can be used. However, significant reform of the Forest Service’s internal financial incentives will depend on the willingness of Congress to appropriate more money for nontimber management activities.

Dombeck could force the administration and Congress to address the incentives issue by proposing an annual budget for the coming fiscal year that is entirely funded through appropriations. Dispensing with the traditional security of trust funds and special accounts would doubtless meet resistance from those in the agency who have benefited from off-budget financing. Still, bold action is appropriate and essential to eliminate a solidly entrenched incentive system that is blocking Dombeck’s efforts to achieve ecological sustainability in the national forests.

Dombeck’s second major challenge is to convince Congress to alter funding priorities from commodity extraction to environmental restoration. The timber industry has traditionally had considerable sway over the agency’s appropriations, and the recent decline in timber production from the national forests has happened in spite of continued generous funding of the timber program. However, Congress has become increasingly skeptical of appropriating money for new timber access roads, partly because of the realization that new roads will add to the Forest Service’s $8.5 billion backlog in road maintenance. In July 1999, the House voted for the first time to eliminate all funding for new timber access roads.

The challenge is to reorient an agency whose culture favors commercial exploitation of national forest resources.

Congress has also shown somewhat greater interest in funding restoration-oriented management. For example, funding for fire prevention activities such as prescribed burning and thinning of small trees has increased dramatically. This year’s Senate appropriations bill includes a new line item requested by the administration for forest ecosystem restoration and improvement. On the other hand, the Senate appropriations committee gave the Forest Service more money than it requested for timber sales, stating that “the Committee will continue to reject Administration requests designed to promote the downward spiral of the timber sales program.”

Probably the best hope for constructive congressional action in the short term is legislation to reform the system of national forest payments to counties. Since the early 1900s, the Forest Service has returned 25 percent of its receipts from timber sales and other management activities to county governments for roads and schools. As a consequence of the decline in logging on national forests, county payments have dropped substantially in recent years, prompting affected county officials to request congressional help. Legislation has been introduced that would restore county payments to historical levels, irrespective of timber sale receipts.

Environmentalists and the Clinton administration want to enact legislation that will permanently decouple county payments from Forest Service revenues. Decoupling would stabilize payments and eliminate the incentive for rural county and school officials to promote more logging. The timber industry and some county officials want to retain the link between logging and schools in order to maintain pressure on the Forest Service and to avoid reliance on annual congressional appropriations. However, the legislation could avoid the appropriations process and ensure stable funding by establishing a guaranteed entitlement trust fund in the Treasury, much as Congress did in 1993 to stabilize payments to counties in the Pacific Northwest affected by declining timber revenues.

Guided by a scientific perspective and a land ethic philosophy, Chief Dombeck has brought new priorities to the Forest Service. He has succeeded in communicating an ecologically sound vision for the national forests and a sense of purpose for his beleaguered agency. He has begun to build different, more broadly based constituencies and receive widespread public support for his policies. Dombeck still faces considerable obstacles to achieving his vision within the Forest Service and in Congress. But by remaining true to his values and taking advantage of key opportunities to gain public support, he may go down in history as one of America’s greatest conservationists.

Archives – Fall 1999

Photo: National Science Foundation

Drilling a Hole in the Ocean

Project Mohole represented, as one historian described it, the earth sciences’ answer to the space program. The project involved a highly ambitious attempt to retrieve a sample of material from the Earth’s mantle by drilling a hole through the crust to the Mohorovicic Discontinuity, or Moho. Such a sample, it was hoped, would provide new information on the Earth’s age, makeup, and internal processes, as well as evidence bearing on the then still controversial theory of continental drift. The plan was to drill through the seafloor to the Moho at points where the Earth’s crust is thinnest.

Only the first phase of the projected three-phase program was completed. During that phase, the converted Navy barge pictured here conducted drilling trials off Guadalupe Island in the spring of 1961 and in the process broke existing drilling depth records by a wide margin. Although Project Mohole failed in its intended purpose of obtaining a sample of the Earth’s mantle, it did demonstrate that deep ocean drilling is a viable means of obtaining geological samples.

Pork Barrel Science

In 1972, three architects—Robert Venturi, Denise Scott Brown, and Steven Izenour—published a book entitled Learning from Las Vegas. Its premise was simple if controversial: That however garish, ugly, and bizarre an outsider judged the architecture of Las Vegas, lots of people still chose to live, work, and play there. Why? What was attractive in what seemed to outsiders so repellent? It was an influential book.

Now we have James Savage’s book, which might just as easily have been called Learning from Earmarking. If earmarking public funds for specified research projects and facilities is for many “garish, ugly, and bizarre,” why is the practice so robust? Why, despite steadfast condemnations from major research institutions, university presidents, and leading politicians in both Congress and the executive branch, is the practice alive and well? In an extensive survey of earmarking, the Chronicle of Higher Education recently reported a record $797 million in earmarked funds for FY 1999, a 51 percent increase over 1998. The institutions receiving the FY 1999 earmarks include 45 of the 62 members of the Association of American Universities (AAU), the organization of major research universities. The AAU president told the Chronicle that he is “deeply concerned” by these earmarks, following the tradition of his predecessors, who condemned earmarks while their members cashed the checks. To be fair, AAU presidents are not alone in finding themselves having to take both forks of the road. As Savage tells us, few are without sin, and many very public opponents of earmarks also accepted them. To quote Kurt Vonnegut: so it goes.

But why? A simplistic answer would be Willie Sutton’s explanation for why he robbed banks: That’s where the money is. But it’s more nuanced than that. First, what is an “earmark”? Savage defines it as “a legislative provision that designates special consideration, treatment, funding, or rules for federal agencies or beneficiaries.” Proponents of earmarks often embellish this definition with rhetoric about the value of earmarks, rhetoric that Savage examines with care and depth. An asserted value is that earmarks can help states and institutions “bootstrap” themselves to a level where they can compete fairly for federal research funds that are available through the peer review system, which is still the dominant mode of federal research funding.

Since Savage dates academic earmarking back to 1977, when Tufts University sought and received the first earmark for research, we should be able to tell whether earmarks gave recipients traction in competing for federal research funds. The short answer: not significantly. For example, when Savage examines changes in the research rankings of states against the earmarks they received between 1980 and 1996, he finds that “the total earmarked dollars a state obtained had a positive, though limited, relationship to improved rank. Among the top ten states receiving earmarks, four increased their rank, two declined, and three experienced no change.” Further, “the poorest states in terms of receiving R&D funds have received relatively few earmarks.” The exception is West Virginia, whose senator, Robert Byrd, chaired the appropriations committee when the Democrats controlled the Senate. As the late Rep. George Brown, Jr., easily the most vigorous congressional opponent of earmarking, pointed out: “Earmarks are allocated not on the basis of need (as many would suggest), but in fact in direct proportion to the influence of a few senior and influential members of Congress.” But, then, as former Senator Russell Long of Louisiana remarked: “[I]f Louisiana is going to get something, I would rather depend on my colleagues on the Appropriations Committee than on one of those peers. I know a little something about universities . . . They have their own brand of politics, just as we have ours.” Senator Long went on to ask the full Senate: “When did we ever vote for peer review?”

The story of states and earmarks is much the same for universities. Savage looks at changes in research ranks for universities receiving $40 million or more in earmarks between 1980 and 1996, reasonably believing that this level of funding gave an institution substantial help in improving its ability to compete for peer-reviewed federal funds. Thirty-five institutions are included, with the top and bottom being the University of Hawaii ($159 million) and the University of South Carolina ($40 million). The results are mixed: “Of the thirty-five institutions identified, thirteen improved their rankings and ten experienced a decline.” The other schools were unranked when they received their first earmarks, and remained so. True, one has to go beyond numbers for a fuller story, and Savage does so, pointing out that the lasting impact of earmarks depended on how well an institution used the money to strengthen itself in areas where the federal dollars are, which is principally in programs funded by the National Institutes of Health (NIH) and the National Science Foundation (NSF).

He cites the contrasting experiences of the Oregon Health Sciences University (OHSU) and Oregon State University (OSU), both of which did very well in federal earmarks when Oregon Senator Mark Hatfield chaired the Senate Appropriations Committee. OHSU used its earmarks to strengthen its capacities in health and related sciences, enabling it to compete far more effectively for NIH funds, whereas OSU used its earmarks for agricultural programs, for which competitive research funds are sparse. OHSU’s research ranking went up, and OSU’s fell. More generally, Savage makes a “live by the sword, die by the sword” point about accepting earmarks when he notes that, unlike peer review, earmarking has neither an institutionalized structure nor a routine process. As a key congressional supporter goes, so usually goes the earmark. For example, the Consortium for the International Earth Science Information Network (CIESIN) was created through earmarking with the considerable help of Michigan Rep. Bob Traxler, who chaired an appropriations subcommittee. CIESIN was based in Michigan, but Traxler retired, and the consortium is now part of Columbia University in New York, at a much-reduced budget level.

Savage does give credence to motives for earmarks other than gaining equity in competing for federal funds, notably weaknesses in federal support for the construction and operation of research facilities. Federal support for facilities reached about a third of total facilities funding in 1968 and then declined in the 1970s and 1980s for various reasons, including a federal shift away from institutional research grants in favor of student aid and support of individual investigators, a shift favored by a substantial part of the academic research community and its associations. Certainly, the academic community made it plain, in the context of severe pressures on the federal research budget, that it did not support facilities at the expense of funds for research projects. The upshot was that by the 1980s, federal funding for facilities was extremely meager, so much so that the president of Columbia University could reasonably argue in 1983, when he sought earmarked money for a new chemistry building, that the earmark didn’t compete with peer-reviewed programs because “the federal government’s peer-reviewed facilities program had ceased to exist.”

Earmarking goes big time

Columbia got its money, which was taken out of Department of Energy funds intended for Yale University and the University of Washington. Columbia’s action was widely condemned (as was a similar action at the time by Catholic University), but it was only a trickle in what became a river. AAU members alone received $1.5 billion in earmarks between 1980 and 1996, 28 percent of the total. Much of that was acquired using the same tactics that Columbia and Catholic had used: hire knowledgeable Washington insiders who know how the appropriations process really works. Savage notes that the firm of Schlossberg and Cassidy “perfected the art of academic earmarking, as they located the money for earmarking in the remotest and most obscure accounts in the federal budget. All the while, they have aggressively encouraged the expansion of earmarking by promising universities, some of them eager, others reluctant, for the scarce dollars needed for their most desired research projects and facilities.” The firm, transmuted into Cassidy and Associates in 1984, has become, notes Savage, “one of the largest, most influential, and most aggressive lobbying firms in Washington.” Fees are high, but there seem to be few complaints. A university vice president comments that “it’s extraordinarily cost-effective, if you think about the amount the university has paid, and the amount the university has been paid.” For example, Columbia University paid $90,000 for a $1 million earmark, and the Rochester Institute of Technology paid $254,000 for $1.75 million. Schlossberg and Cassidy were unapologetic about their work, their position being, in Savage’s words, that “of an entrepreneurial, commission-based, fee-for-earmarking lobbying firm: when a university client approached them for help on a project that was politically feasible, and likely to be successfully funded, they usually accepted.”

During the 1980s and into the 1990s, several legislative efforts to control academic earmarking were launched but failed. As Savage comments, “the fragmented and uncoordinated opposition offered by individual members of Congress and a handful of authorizations committees has been insufficient to beat the resourceful and tenacious appropriations committees.” And although there have been occasional noises in the press about earmarks, they are, in Savage’s view, so much spitting into the wind. Indeed, Rep. Brown, speaking of the press coverage of his fight against earmarking, said “I received no electoral benefit from it.” Efforts have been made to reduce the incentive to seek earmarks by appropriating funds specifically for those states that receive little federal research money. Most notably, NSF teamed up with several other agencies to create the Experimental Program to Stimulate Competitive Research (EPSCoR) and is requesting $48 million for the program in FY 2000. Although the goal of the program is commendable, I am aware of no evidence that it has discouraged earmarking.

Savage is now an associate professor at the University of Virginia, and his well-traveled resume includes work on earmarking and other issues with several congressional support agencies: the Congressional Research Service, the former Office of Technology Assessment, and the General Accounting Office. That worldly, inside-the-Beltway experience, combined with an equally impressive resume as a scholar, results in a book that is fair, thorough, and well researched. He offers sympathetic interpretations of actions where it might have been easy to simply hammer away, especially at the greedier players. He does, however, break out into quite palpable scorn for the academic establishment, “as the leaders of many of its prominent institutions say one thing and do another.”

Will earmarking go on? Well, in August 1999, the president’s chief of staff, John Podesta, offering the White House version, observed in prepared remarks about the House Republicans’ treatment of research that “by digging deep into the pork-barrel, they earmarked nearly $1 billion in R&D projects, undermining the discipline of competition and peer review, and slashing funding for higher priority projects. Although in 1994 Republicans pledged to cut wasteful spending, it’s clear that they’re more interested in larding up the budget than pursuing cutting-edge research.” The chair of the House Science Committee, James Sensenbrenner, responded in kind: “I am encouraged by the Administration’s sudden interest in science funding. Over the past seven years, overall science budgets, which include both defense and civilian R&D, when indexed for inflation, have been flat or decreasing. Science needs a boost.”

So it goes.

Imagining the Future

Speculating about humanity’s future has become a fairly dreary business of late. Although there are many attempts to sketch landscapes for the coming millennium, the pictures generated are typically stiff and lifeless. In countless books and news stories on the year 2000, the basic assumption is that the future will simply be an endless string of technological breakthroughs to which humanity will somehow adapt. Yes, there will be powerful new forms of communication, computing, transit, medicine, and the like. As people reshape their living patterns, using the market to express incremental choices, new social and cultural forms will emerge. End of story.

Strangely missing from such gadget-oriented futurism is any sense of higher principle or purpose. In contrast, ideas for a new industrial society that inspired thinkers in the 19th and early 20th centuries were brashly idealistic. Theorists and planners often upheld human equality as a central commitment, proposing structures of community life that matched this goal and seeking an appropriate mix of city and country, manufacturing and agriculture, solidarity and freedom. In this way of thinking, philosophical arguments came first and only later the choice of instruments. On that basis, the likes of Robert Owen, Charles Fourier, Charlotte Perkins Gilman, Ebenezer Howard, and Frank Lloyd Wright offered grand schemes for a society deliberately transformed, all in quest of a grand ideal.

As one of today’s leading post-utopian futurists, Freeman J. Dyson places technological devices at the forefront of thinking about the future. “New technologies,” he says, “offer us real opportunities for making the world a happier place.” Although he recognizes that social, economic, and political influences have much to do with how new technologies are applied, he says that he emphasizes technology “because that is where I come from.” In that vein his book sets out to imagine an appealing future predicated on technologies identified in its title: The Sun, the Genome, & the Internet.

The project is somewhat clouded by the fact that he has made similar attempts in the past with limited success. Infinite in All Directions, published in 1985, upheld genetic engineering, artificial intelligence, and space travel as the truly promising sources of change in our civilization. Now, with disarming honesty, he admits that two of these guesses were badly mistaken and have been removed from his hot list. “In the short run,” he concludes, “space travel is a joke. We look at the bewildered cosmonauts struggling to survive in the Mir space station.” By the same token, artificial intelligence has been a tremendous disappointment. “Robots,” he laments, “are not noticeably smarter today than they were fourteen years ago.” From his earlier crystal ball, only genetic engineering still holds much luster. Evidently, Yogi Berra’s famous maxim holds true: “It’s tough to make predictions, especially when you’re talking about the future.”

What makes solar energy, biotechnology, and the Internet appealing to Dyson is that they translate ingenious science into material well-being: sources of wealth that, he believes, will now be more widely distributed than ever before. Because sunlight is abundant in places where energy is now most needed, improvements in photovoltaic systems will bring electric power to isolated Third World villages. Soon the development of genetically modified plants will offer bountiful supplies of both food and cheap fuel, relieving age-old conditions of scarcity. As the Internet expands to become a universal utility, the world’s resources of information and problem solving will finally be accessible to everyone on the planet. Interacting in ways that multiply their overall benefit, the three developments will lead to an era of prosperity, peace, and contentment. The book’s liveliest sections are those that identify avenues of research likely to bear fruit in decades and even centuries ahead. Of particular fascination to Dyson are instruments that were initially created for narrow programs of scientific inquiry (John Randall’s equipment for X-ray crystallography, for example) but have a wide range of beneficial applications. Occasionally he goes so far as to suggest currently feasible but as yet unrealized devices that scientists should be busy making. Both the desktop sequencer and the desktop protein microscope, he insists, are among the inventions that await someone’s creative hand.

Slapdash social analysis

Alas, the charming vitality of Dyson’s techno-scientific imagination is not matched by a thoughtful grasp of human problems. News has reached him that there are people in the world who, despite two centuries of rapid scientific and technological progress, remain desperately poor. He worries about the plight of the world’s downtrodden and urgently hopes that coming applications of science will turn things around. Unfortunately, nothing in the book shows any knowledge of the actual conditions of grinding poverty that confront a quarter of the world’s populace. Neither does he seem aware of the voluminous research by social scientists–Nobel Prize winner Amartya Sen for one–that explains how deeply entrenched patterns of inequality persist generation after generation, despite technological and economic advance. Emblematic of Dyson’s unwillingness to tackle these matters is his chapter on “Technology and Social Justice,” which offers neither a definition of social justice nor even a rudimentary account of alternative political philosophies that might shed light on the question.

The slapdash quality of the book’s social analysis sometimes leads to ludicrous conclusions. In one passage, Dyson surveys the 20th-century history of the introduction of electric appliances to the modern home. In the early decades of this century, he notes, even middle-class families would hire servants to handle much of the housework. With the coming of labor-saving appliances, however, the servants were sent packing and housewives began to do most of the cooking and cleaning. Up to this point, Dyson’s account is pretty much in line with standard histories of domestic work. But then his version of events takes a bizarre turn.

He recalls that middle-class women of his mother’s generation, supported by crews of servants, were sometimes able to leave the home to engage in many varieties of creative work. One such woman was the distinguished archeologist Hetty Goldman, appointed to Princeton’s Institute for Advanced Study in the 1930s. But for the next half-century, until 1985, he laments, the institute hired no other women at all. “It seemed that there was nobody of her preeminence in the next generation of women,” he writes. What accounts for this astonishing setback in the fortunes of women within the highest levels of the scientific community? Dyson concludes that it must have been the coming of household appliances. No longer supported by servants, women were chained to their ovens, dishwashers, and toasters and were no longer able to produce those “preeminent” contributions to human knowledge so cherished at the institute. “The history of our faculty encapsulates the history of women’s liberation,” he observes without a hint of irony, “a glorious beginning in the 1920s, a great backsliding in the 1950s, a gradual recovery in the 1980s and 1990s.” Are there other possible explanations for this yawning gap? The sexism of the old boys’ club, perhaps? Dyson does not bother to ask.

Ignoring political reality

The shortcomings in Dyson’s grasp of social reality cast a shadow on his map of a glorious tomorrow. His book shows little recognition of the political and economic institutions that shape new technologies: forces that will have a major bearing on the very improvements he recommends. In recent decades, for example, choices in solar electricity have been strongly influenced by multinational energy firms with highly diverse investment agendas. In corporate portfolios, the sun is merely one of a number of potential profit centers and by no means the one business interests place at the top of the list. Before one boldly predicts the advent of a solar age, one must understand not only the technological horizons but also the agendas of powerful decisionmakers and the economic barriers they pose. Similarly, the emerging horizons of biotechnology are, to a great extent, managed by large firms in the chemical and pharmaceutical industries. When compared to the product development and marketing schemes of the corporate giants, Dyson’s vision of an egalitarian global society nourished by genetically modified organisms has little credibility. He seems oblivious to a growing resistance to biotechnology among some of the farmers in Third World countries, a revolt against the monopolies that genetically engineered seed stocks might impose.

Occasionally, Dyson notices with apparent surprise that promising innovations are not working out as expected. “Too much of technology today,” he laments, “is making toys for the rich.” His solution to this problem is technology “guided by ethics,” an excellent suggestion, indeed. Once again, however, he does not explain what ethics of this kind would involve. Rather than argue a clear position in moral philosophy, the book regales readers with vague yearnings for a better world.

Some of Dyson’s own deepest yearnings emerge in the last chapter, “The High Road,” where he muses about manned space travel in the very long term. Eventually, he argues, there will be low-cost methods for launching space vehicles and for establishing colonies on distant planets, moons, asteroids, and comets. But the key to success will have less to do with the engineering of spaceships than with the reengineering of the genomes of living things. In another 100 years or so, we will have learned how to produce warm-blooded plants that could survive in chilly places such as the Kuiper Belt comets outside the orbit of Neptune. More important, in centuries to come humanity itself will divide into several distinct species through the wonders of reprogenic technology. Of course, this will create problems; the separate varieties of human beings are likely to hate each other’s differences and wage war on one another. “Sooner or later, the tensions between diverging ways of life must be relieved by emigration, some of us finding new places away from the Earth while others stay behind.”

At this rapturous moment Dyson begins to use the pronoun “we,” clearly identifying himself with the superior, privileged creatures yet to be manufactured. “In the end we must travel the high road into space, to find new worlds to match our new capabilities. To give us room to explore the varieties of mind and body into which our genome can evolve, one planet is not enough.”

I put the book down. I pondered my response to its bizarre final proposal. Should we say a fond “Farewell!” as Dyson’s successors rocket off to the Kuiper Belt? I think not. A more appropriate valediction would be “Good riddance.”

The False Dichotomy: Scientific Creativity and Utility

The call by Gerald Holton and Gerhard Sonnert in the preceding article for government support for Jeffersonian research that is basic in nature but clearly linked to specific goals raises several practical questions. How might the institutions of government be expected to generate programs of Jeffersonian research? How might such programs be managed, and how might success and failure be assessed? Would the redirecting of a significant fraction of public research into this third channel attract political support, perhaps leading to more effective use of public resources as well as a broader consensus on research investments?

No one doubts that modern science and engineering radically expand humankind’s technological choices and can give us the knowledge with which to choose among them. But many politicians, listening to their taxpaying constituents, also feel it is the public’s right to know what the goals of the massive federal investments in research are. Some are vocally skeptical of large sums invested in basic science when the advocates of basic science insist that its outcomes cannot be predicted, nor values assigned to the effort, without a long passage of time.

Congress expects the managers of public science, whether in government or university laboratories, not only to articulate the goals of public investment but to measure progress toward those goals. These expectations of more explicit accountability by the publicly supported research community are embodied in the 1993 Government Performance and Results Act (GPRA), which requires the sponsoring agencies to document that progress in their budget submissions to Congress.

Many scientists, on the other hand, are fearful that these expectations, however well intentioned, will lead to micromanagement of research by the sponsoring agencies, suppressing the intellectual freedom so necessary to scientific creativity. Planning of scientific research, they say, implies the advance selection of strategies, thus forgoing the chance to discover new pathways that might offer far more rapid progress in the long run. The only way to ensure a truly dynamic scientific enterprise, they say, is to leave scientists free to choose the problems they want to explore, probing nature at every point where progress in understanding may be offered. The practical benefits that flow from basic research, they would argue, far exceed what even the most visionary managers of a utilitarian research policy could have produced.

Skeptics of this Newtonian view of public science (research in response to curiosity about the workings of nature, with no other pragmatic motivation) will acknowledge that some part of the public research effort, especially that associated with the postgraduate training of the next generation of scientists and engineers, must be driven by the insatiable curiosity of the best researchers. But the politicians tell us that the federal investment in science is too big and the pressures to spend the money on competing social or fiscal needs are too great to allow blind faith in the power of intellectual commitment to substitute for an accountable process based on clearly stated goals. Some members of Congress who make this argument, such as the late George Brown, Jr., can lay claim to being the best friends of science. Without the politician’s ability to explain to the voters why all this money is being spent, the support for science may shrink and the public and intellectual benefits be lost.

Scylla and Charybdis

Must the nation choose between these two views and the policies they imply? Are we caught between a withering vine of public support for a free and creative science that is seen by many as irrelevant to public needs and a bureaucratic array of agency-managed applied research, pressing incrementally toward public goals it hasn’t the imagination to reach? There is a third way, well known to the more visionary research managers in government, that deserves to constitute a much more substantial part of the public research investment than it does today. We do not have to settle for a dichotomy of Newtonian science and Baconian research (application of existing knowledge on behalf of a sponsor with a specified problem to solve). We can and should dedicate a significant part of our national scientific effort to creating the skills, capacity, and technical knowledge with which the entire scientific enterprise of the country can address the most important issues facing humankind, while carrying out the work in the most imaginative, creative way.

In the urgent desire to protect the freedom of researchers to choose the best pathways to progress, science has often been sold to the politicians as something too mysterious and too risky for them to understand, and too unpredictable to allow the evaluation of the returns to the public interest until many years have passed. The promise of unpredictable new opportunities for society is, of course, a strong justification for Newtonian research. A portion of the federal research budget should be exempted from the normal political weighing of costs and near-term benefits. A recent study by the Committee on Science, Engineering, and Public Policy (COSEPUP) of the National Academies has suggested that Newtonian research can be evaluated in compliance with GPRA, but only if tests of intellectual merit and comparative international accomplishment are the metrics.

But much of America’s most creative science does contribute to identifiable areas of national interest in really important ways. There is every reason to recognize those connections where they are apparent and to adopt a set of national strategies for such basic scientific and technological research that can earn the support of Congress and form the centerpiece of a national research and innovation strategy. We need a new model for public science, and Jeffersonian research offers one way of articulating a central element of that new model.

The third category

An innovative society needs more research driven by societal need but performed under the conditions of imagination, flexibility, and competition that we associate with traditional basic science. Donald Stokes presented a matrix of utility and fundamentality in science and called the upper right corner “Pasteur’s Quadrant,” describing Pasteur’s research as goal-oriented but pursued in a basic research style. Some in Europe call it “strategic research,” intending “strategic” to imply the existence of a goal and a strategy for achieving it, but suggesting a lot of flexibility in the tactics for reaching the goal most effectively.

Discomfort with the binary categorization of federal research into basic and applied goes back a good many years. More recently, a 1995 study on the allocation of scientific resources carried out by COSEPUP under the leadership of Frank Press, former science adviser to President Carter, suggested that the U.S. government budget process should isolate a category of technical activity called Federal Science and Technology (FS&T). The committee felt that it was misleading to present to Congress a budget proposing R&D expenditures of some $80 billion without pointing out that only about half of this sum represented additions to the basic stock of scientific and engineering knowledge. The committee’s objective was to distinguish the component of the federal budget that called for creative research (in our parlance, the sum of the Newtonian and Jeffersonian budgets) from the development, testing, and evaluation that consume a large part of the military R&D budget but add relatively little to basic technological knowledge. Press’s effort, like our own, was aimed at gaining acceptance for the idea that it is in the national interest for much of the government’s R&D to be carried out under highly creative conditions.

Vannevar Bush has been much misunderstood; his position was much more Jeffersonian than most scientists believe.

I believe it would be much easier to understand what is required if the agencies would define basic research not by the character of the benefits the public expects to gain (large but unpredictable and long-delayed benefits in the case of Newtonian research) but rather by the highly creative environment in which the best basic research is carried out. If this idea is accepted, basic research may describe the environment in which both Newtonian and Jeffersonian science are carried out. In contrast, Baconian research is, like most industrial research, carried out in a more tightly managed and disciplined environment, since the knowledge to solve the identified problem is presumed to be substantially in hand.

If we pursue this line of reasoning, we are immediately led to the realization that the goals to which Jeffersonian research is dedicated require progress in both scientific understanding and in new technological discoveries. Thus not only basic science but a broad range of basic technology research of great value to society is required. The key idea here is to separate in our policy thinking the motives for spending public money on research from the choice of environments in which to perform the work. Thus, the idea of a Jeffersonian research strategy also serves to diminish the increasingly artificial distinction between science and technology (or engineering).
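
To make this two-axis way of thinking concrete, the sketch below (in Python, with names invented purely for illustration) classifies a research program first by the motive for funding it and then by the environment in which it is performed, and maps the combinations onto the Newtonian, Jeffersonian, and Baconian labels used in this article. It is a rough rendering of the scheme, not an established taxonomy, and the example programs are loose characterizations drawn from the discussion here.

```python
from dataclasses import dataclass
from enum import Enum

class Motive(Enum):
    CURIOSITY = "curiosity about nature, with no other pragmatic motivation"
    SOCIETAL_GOAL = "linked to an identified national need"

class Environment(Enum):
    CREATIVE = "investigator freedom and a flexible strategy"
    MANAGED = "tightly managed; the needed knowledge is presumed largely in hand"

@dataclass
class ResearchProgram:
    name: str
    motive: Motive
    environment: Environment

def classify(program: ResearchProgram) -> str:
    """Map the two axes onto the article's three labels (illustrative only)."""
    if program.environment is Environment.CREATIVE:
        if program.motive is Motive.CURIOSITY:
            return "Newtonian"
        return "Jeffersonian"  # goal-directed, but pursued in a basic-research style
    return "Baconian"          # existing knowledge applied to a sponsor's stated problem

# Loose characterizations drawn from the article's examples
programs = [
    ResearchProgram("NSF core disciplinary grants", Motive.CURIOSITY, Environment.CREATIVE),
    ResearchProgram("NIH biomedical and clinical research", Motive.SOCIETAL_GOAL, Environment.CREATIVE),
    ResearchProgram("Incremental weapons-system development", Motive.SOCIETAL_GOAL, Environment.MANAGED),
]
for p in programs:
    print(f"{p.name}: {classify(p)}")
```

The point of keeping the two axes separate, as the paragraph above argues, is that the public justification for an investment need not dictate the management style under which the work is done.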

A long-running debate

The debate between Congress and the White House over post-World War II science policy was intense in 1946 and 1947. Congressional Democrats, led by Senator Harley Kilgore of West Virginia, wanted the impressive power of wartime research in government laboratories to address the needs of civil society, as it had done in such spectacular form in the war. Vannevar Bush, head of the Office of Scientific Research and Development (OSRD) in President Roosevelt’s administration, observed that university scientists had demonstrated great creativity in the development of radar, nuclear weapons, and tactics based on the new field of operations research. He concluded that conventional industrial and government research organizations were well suited to incremental advances accomplished in close relationships with the intended users. But to get the creativity and originality that produced radical progress, the researchers needed a lot of independence. In the United States, this kind of creative research atmosphere was most often found in the best research universities. His proposal was to fund that work through a National Research Foundation.

Bush has been much misunderstood; his position was much more Jeffersonian than most scientists believe. His concept for the National Research Foundation was strongly focused on empowering researchers outside government with a lot of independence, but it also contained divisions devoted to medical and military goals that were clearly focused on important long-term societal goals. He quite clearly stated that although the military services should continue to do the majority of defense R&D, they could be expected to push back the frontiers only incrementally. He argued that the Defense Department needed a more freewheeling, inventive research program, drawing on the power of creative thinking in the universities.

By the time Congress crafted a national science funding agency, the proposal had been stripped of its more pragmatic national goals. The agency’s role would be to advance science broadly. As passed by Congress, the bill provided that the foundation’s director would be appointed by a science board, not by the president. President Truman’s veto message, crafted by Donald Price, objected to this lack of accountability to the president, who must ask Congress for the agency’s money. What emerged in 1950 was a National Science Foundation devoted to the broad support of autonomous academic science (and, by subsequent amendment, engineering).

The mission-oriented agencies had long since inherited their research agendas from the dissolution of the OSRD and established their own goal-oriented programs of research: the National Institutes of Health (NIH), the Atomic Energy Commission, and the research agencies of the three military services. Although the Office of Naval Research (ONR) inherited much of Bush’s philosophy, it was not until the late 1950s that an Advanced Research Projects Agency (later renamed the Defense Advanced Research Projects Agency) was created under the civilian control of the office of the Secretary of Defense to pursue more radical innovations, which would not be likely to emerge from military research agencies. Thus, NSF became a Newtonian agency in large measure, and the Jeffersonian concept would have to find a home in NIH and to some extent in the other mission-oriented agencies.

The concept that the mission agencies were responsible for sustaining the technical skills and knowledge infrastructure in support of national interests goes back to the Steelman Report in 1947 and was implemented by President Eisenhower in Executive Order 10521 on March 17, 1954. It might be said that a commitment to Jeffersonian science is thus the law of the land. Nevertheless, convincing the agencies to create such strategies and sell them to the White House and Congress has been a long struggle. In many cases, the agencies responded with modest investments in Newtonian research without a strong Jeffersonian program that identified additional research linked to a long-range strategy. However, the record does include some bright spots.

Jeffersonian tendencies

One can find many examples of federally funded research that is responsive to a vision of the future but supported by a highly creative and flexible research program. The most dramatic and successful examples are found in pursuit of two major national goals: defense and health. Defense research is a special case, from a public policy perspective, because the government is the customer for the products of private-sector innovation. Although the military services have pursued a primarily Baconian strategy that produced continuous advances in existing weapons systems, the Defense Advanced Research Projects Agency (DARPA) has invested strategically in selected areas of new science that were predictably of great, if not well-defined, importance. In creating the nation’s leading academic capability in computer science and in digital computer networking, largely through extended investments in a selected group of elite universities, DARPA was following in the visionary path defined by ONR in the years after World War II. However, the end of the Cold War has already led to a serious retrenchment in the Defense Department’s share of the nation’s most creative basic research.

The physical sciences are not without isolated Jeffersonian programs. Much of the search for renewable sources of energy that was initiated in the Carter administration, but is now substantially attenuated, was of this character. So too is advanced materials research that focuses on specific properties; this work draws on physics, chemistry, and engineering to create useful new properties that find their way into practical use. The program on thermonuclear fusion has pushed back the frontiers of plasma physics and has made significant progress toward its goal of fusion energy production. This year the administration appears ready to launch a national research program in nanotechnology, another potentially good example of Jeffersonian science.

The best current example of Jeffersonian research is provided by NIH, where biomedical and clinical research continues to satisfy the public’s expectations for fundamental advances in practical medicine on a broad front. If this model could be translated to the rest of science, the apparent conflict between creativity and utility would be largely resolved. It was this model that Senator Barbara Mikulski had in mind in challenging NSF to identify a substantial fraction of its research as “strategic,” specifying the broad societal goal to which the research might contribute. But health science is a special area in which, at least until recently, most of the benefit-delivery institutions (hospitals) were public or nonprofit private institutions, and no one objected to support by government of the clinical research that links biological science to medical practice. In the pursuit of economic objectives, on the other hand, the U.S. government is expected to let industry take responsibility for translating new scientific discoveries into commercial products, except where government is the customer for the ultimate product.

Still other programs, such as the NSF program on Research Applied to National Needs (RANN), were much more controversial than NIH biomedical science or DARPA computer networking. RANN was a response to public pressures of the early 1970s for more relevance of university science to social needs. RANN called for research to be performed on relatively narrowly defined, long-term national needs, such as research to mitigate the damage caused by fire. It was probably more successful than it is given credit for, but the appearance of the word “applied” in the title made many scientists, accustomed to NSF’s support of basic research, feel threatened.

Federal agencies should be investing in programs to enhance the nation’s capacity to address specific issues in the most creative way.

At about the same time, Congress passed the Mansfield Amendment (section 203 of the Defense Procurement Authority Act of 1970), which stated that no research could be funded unless it “has a direct and apparent relationship to a specific military function or operation.” The defense research agencies then required that academic proposals document the contribution to military interests that each project might make. The academic scientists could only speculate about possible military benefits; most simply had no knowledge that would permit them to make such a judgment. Clearly, even if the government program officers had made those judgments and communicated a broad strategy to the scientific community, the requirement that the researchers document the government’s strategy was inappropriate. This requirement, imposed at a time when universities were caught up in opposition to the Vietnam War and suspicious of defense research support, seemed to validate the scientists’ fears of what goal-oriented public research would entail.

Researchers are concerned not about the fact that government agencies have public interest goals for the research that they support but about the way in which agency goals are allowed to spill over into the conduct of the research. The NIH precedent demonstrates that as long as the agency’s scientific managers defend the goals (diagnosing and ameliorating disease) and defend the strategy for achieving them (basic research in biology and clinical medicine) with equal vigor, a Jeffersonian strategy for progress toward goals through creative research can be successful. When, as in the case of the Mansfield Amendment, the government takes a political shortcut by transferring responsibility for justifying the investment from the agency to the individual researchers, both science and the public interest suffer.

The corporate research managers in the best firms offer examples of the appropriate way to manage Jeffersonian research. Corporate laboratories that engage in basic research hire the most talented scientists whose training and interest lie within the scope of the firm’s future needs. Research managers make sure that the scientists are aware of those commercial needs and have access to the best information about work in the field around the world. They reward technological progress when it seems of special value. They recognize that progress in scientific understanding can not only offer new possibilities for products but can also inform technological choice and support the construction of technology roadmaps. In such laboratories one hears very little talk of basic or applied research. These labels are not felt to be useful. All long-range industrial research is seen as both need-driven and opportunistic, and in that sense Jeffersonian.

Losing balance

The leaders of the conservative 104th Congress waged a broad attack on national research programs that were justified by goals defined by the government (other than defense and health). At the same time, Rep. Robert Walker (R-Pa.), the new chair of the House Science Committee, claimed to be the defender of basic science. To symbolize this position, he removed the word “technology” from the committee’s name. But Mary Good, undersecretary of commerce for technology in the first Clinton administration, often pointed out the dangers of a strategy of relying solely on research performed for the satisfaction of intellectual curiosity. The politicians would soon realize, she said, that U.S. basic science was part of an internationally shared system from which all nations benefit. Failing to see how U.S. citizens would gain economic advantage from a national strategy that made no effort to target U.S. needs and opportunities, future Congresses might cut back funding of basic science even further. Equally dangerous, of course, is a nearsighted program of incremental research aimed at marginal improvements in the way things are done today. The nation will not be able to transform its economy into an environmentally sustainable one, develop safe and secure new energy sources, or learn how to educate its children effectively without a great many new ideas.

The United States should rely primarily on research performed under highly creative conditions, the conditions we associate with basic science. But we need not forego the benefits and the accountability that identifying our collective goals can bring. Indeed, if government agencies would generate long-term investment strategies and clearly articulate the basis for their expectations of progress, the nation would end up with the best of both worlds: research that is demonstrably productive and that helps build the future.

Next steps

To achieve this goal, government leaders must begin taking a much longer view, justifying and managing the work to maximize public benefits, taking into account both public and private investments. In every area of government activity, the responsible agencies should be investing in carefully planned programs to enhance the nation’s capacity to address specific issues in the most creative way. This strategy brings leverage to private-sector innovation, which can be expected to produce many, if not most, of the practical solutions to public problems. For this reason, a much larger fraction of the federal research agenda should be pursued under basic research conditions. At the same time, a larger fraction of the agenda should be linked directly to identified national interests. These two objectives are not only not in conflict; they support one another. Achieving these objectives will require a recognition that Jeffersonian research is as important to the future of the United States today as was the Lewis and Clark expedition two centuries ago, as well as a federal budgeting system that accommodates Jeffersonian as well as Newtonian and Baconian research.

To put the government on the right path, the Office of Science and Technology Policy should begin by selecting a few compelling, long-range issues facing the nation for which there is a widely recognized need for new technological options and new scientific understanding. This exercise would be similar to the one that Frank Press and President Carter conducted 20 years ago. Identifying a target issue would engage all of the relevant agencies, which now develop separate plans for their individual missions, in a concerted strategy of long-range creative research.

A candidate for such a project is the issue of the transition to sustainability in the United States and the world. A soon-to-be-released four-year study by the NRC’s Board on Sustainable Development, entitled Our Common Journey: A Transition Toward Sustainability, will outline what research is needed in a wide range of disciplines and how this research needs to be coordinated in order to be effective. Indeed, the report will go beyond research concerns to analyze how today’s techno-economic systems must be restructured in order to achieve environmentally sustainable growth. The preparation of a Jeffersonian research strategy for the transition to sustainability would provide the next president with an initiative that would compare favorably in scope, importance, and daring with the launching of the Lewis and Clark expedition by President Jefferson.

When the administration presented its R&D budget to Congress for FY 2000, the president called special attention to a collection of budget items that, in the administration’s view, were the creative (Newtonian and Jeffersonian) components of the budget. He called these items “The 21st Century Research Fund” and asked Congress to give them special consideration. This initiative was quite consistent with the spirit of the 1995 Press Report’s recommendation that the budget isolate the FS&T component, as that report defined it, for special attention. When the Office of Management and Budget (OMB) director announced the president’s budget, he made specific reference to his intent to implement the spirit of the FS&T proposal, weeding out budget items that do not reflect the creativity, flexibility, and originality requirements that we associate with research as distinct from development.

Based on these two precedents, the staffs of the appropriations committees in the House and Senate, together with experts from OMB, should restructure the current typology of “basic, applied, and development” in a way that accommodates, separately, the public justification for research investments and the management environment in which the work is conducted. Such a restructuring has been urged in the past by others, particularly the General Accounting Office.

To explore the practicality of these ideas and to engage the participation of a broader community of stakeholders in the national research enterprise, a national conference should be called to prepare a nonpartisan proposal for consideration by all the candidates for president. The bicentenary of Jefferson’s assumption of the presidency would seem a good year to initiate this change.

Technology Needs of Aging Boomers

It happens every seven seconds: Another baby boomer turns 50 years old. As they have done in other facets of American life, the 75 million people born between 1946 and 1964 are about to permanently change society again. The sheer number of people who will live longer and remain more active than those in previous generations will alter the face of aging forever.

One of the greatest challenges in the new century will be how families, business, and government will respond to the needs, preferences, and lifestyles of the growing number of older adults. In so many ways, technology has made longer life possible. Policymakers must now go beyond discussions of health and economic security to anticipate the aging boom and the role of technology in responding to the needs of an aging society. They must craft policies that will spur innovation, encourage business investment, and rapidly commercialize technology-based products and services that will promote well-being, facilitate independence, and support caregivers.

Society has invested billions of dollars to improve nutrition, health care, medicine, and sanitation to increase the average lifespan. In fact, longevity can be listed as one of the nation’s greatest policy achievements. The average American can plan to live almost twice as long as his or her relatives did at the turn of the century. Life expectancy in 1900 was little more than 47 years. In 2000, life expectancy will be at least 77, and some argue that the real number may be in the early- to mid-80s. In 1900, turning 50 meant facing a high likelihood of death; today, as Horace Deets, executive director of the American Association of Retired Persons (AARP), has observed, an American who turns 50 has more than half of his or her adult life remaining.

Although people are living longer, the natural aging process does affect vision; physical strength and flexibility; cognitive ability; and, for many, susceptibility to illness and injury. These changes greatly affect an individual’s capacity to interact with and manipulate the physical environment. The very things that we cherished when younger, such as a home and a car, may now threaten our independence and well-being as older adults.

Therein lies the paradox: After spending billions to achieve longevity, we have not made equitable investments in the physical infrastructure necessary to ensure healthy independent living in later years. Little consideration has been given by government, business, or individuals to how future generations of older adults will continue to live as they choose. These choices include staying in the homes that they spent a lifetime paying for and gathering memories in; going to and from the activities that collectively make up their lives; remaining connected socially; or, for an increasing number, working.

Moreover, as the oldest old experience disability and increased dependence, the nation is unprepared to respond to the needs of middle-aged adult children who must care for their children and their elderly parents while maintaining productive employment. Ensuring independence and well-being for as long as possible is more than good social policy; it is good economics as well.

All of us will pay higher health care costs if a large portion of the population is unable to access preventative care on a routine basis. Likewise, the inability of many older adults to secure adequate and reliable assistance with the activities of daily living may lead to premature institutionalization–a personal loss to family members and a public loss to society. Clearly, the power and potential of technology to address the lifestyle preferences and needs of an older population, and those who care about them, must be fully and creatively exploited.

Realities of aging

The baby boomers are not the first generation to grow old. However, their absolute numbers will move issues associated with their aging to the top of the policy agenda. Although chronological age is an imperfect measure of what is “old,” 65 is the traditional milepost of senior adulthood.

As Figure 1 shows, the proportion of adults age 65 and over has steadily increased over the past four decades, and it will continue to grow. From nearly 13 percent today, the proportion of older adults is likely to increase to almost 21 percent in the middle of the next century–a shift from nearly one in eight Americans being over 65 to one in five. Although the growth in the proportion of the nation’s population that will be older is impressive, the actual numbers are even more dramatic. According to the U.S. Census, the number of people 65 and over increased 11-fold during this century, from a little over 3 million in 1900 to more than 33 million in 1994. Over the next 40 years, the number of adults over 65 will climb to more than 80 million.


As Figure 2 indicates, the baby boomers are the first great wave of older adults who will lead a fundamental shift in the demographic structure of the nation that will affect all aspects of public policy.


Just as important as the large numbers are the qualitative changes occurring within the older population. Unlike previous generations, tomorrow’s older adults will be dramatically older and far more racially diverse. For example, over the next five decades, the number of adults 85 and older will quadruple and approach nearly 20 million. Although still a relatively small proportion of the nation’s total population, the oldest old will certainly represent the largest segment needing the most costly care and services. Although the majority will remain white, over the next five decades the older population will reflect far more people of Hispanic, African-American, and Asian origins. Such diversity will require government and business to be more flexible in how policies, services, and products are delivered to accommodate the varied needs and expectations of a segmented older population.

To assume that the needs and preferences of yesterday’s, or even today’s, older adults will be the same as those of future generations would be misleading. Data indicate that tomorrow’s older adults will be in better health, have more years of education, and have larger incomes. These characteristics predict a far more active population than has been the case in recent older adult groups.

Improved health. The National Long Term Care Survey indicates that chronic disability rates fell 47 percent between 1989 and 1994 and that functional problems have generally become less severe for older adults. In 1990, more than 72 percent of older adults surveyed rated their health as excellent, very good, or good. Baby boomers are predicted to enjoy better health due to continued improvements in nutrition, fitness, and health care.

Increased education. Tomorrow’s older adults will be better educated than previous generations. Twice as many young old (60 to 70 years old) will have a college degree–a jump from 16 percent in 1994 to about 32 percent by 2019. Even the percentage of adults age 85 and over with a college education will double from about 11 percent to between 20 and 25 percent for the same period.

Larger income. Although many older adults may continue to live in poverty, most will be far better off than their grandparents were. Compared to 1960, when more than 30 percent were below the poverty line, only 10 percent are considered poor today. Moreover, baby boomers will soon be inheriting from their parents anywhere from $10 to $14 trillion–the largest transfer of wealth in history.

The relative improvement in socioeconomic status and well-being suggests real changes in the lifestyle of older adults. Active engagement will typify healthy aging. If people have good health, a wider range of interests, and greater income with which to pursue them, then it is very likely that they will choose to lead more active lives. A recent Wall Street Journal-NBC poll revealed that between 62 and 89 percent of the next wave of retirees anticipate devoting more time to learning, study, travel, volunteering, and work. Improved overall well-being will raise older adults’ and their adult children’s expectations of what it means to age. Both will place unprecedented priority on the infrastructure that will facilitate active independent aging and the capacity to provide care for the oldest old.

Physical environment of aging

Tapping technology to meet the needs of older adults is not new. There are countless families of “assistive technologies”– even an emerging field of “gerontechnology”–and “universal design” theory to address the multiple use, access, and egress needs of those with physical disabilities. In general, however, these efforts are fragmented and address single physical aspects of living: a better bed for the bedroom, a better lift for the senior van, or more accessible appliances for the home.

We do not live in single environments. Life is made up of multiple and interrelated activities and of interdependent systems. Throughout life we work, we play, we communicate, we care, we learn, we move, and although it is crucial that we be able to function within a setting, it is the linkage among those activities that makes a quality life possible. An integrated infrastructure for independent aging should include a healthy home, a productive workplace, personal communications, and lifelong transportation. As the baby boomers matured, the government built schools, constructed sidewalks and parks, and invested in health care to create an infrastructure to support their well-being. Today, the challenge for policymakers and industry is to continue that commitment: to fully leverage advances in information, communications, nanotechnology, sensors, advanced materials, lighting, and many other technologies to optimize existing public and private investments and to create new environments that respond to an aging society’s needs.

Lifelong transportation. The ability to travel from one place to another is vital to our daily lives. Transportation is how people remain physically connected to each other, to jobs, to volunteer activities, to stores and services, to health care, and to the multitude of activities that make up living. For most, driving is a crucial part of independent aging. However, the natural aging process may diminish many of the physical and mental capacities that are needed for safe driving. Drivers over 75 have the second-highest fatality rate on the nation’s roads, behind only drivers age 16 to 24. A recent study conducted for the Department of Health and Human Services and the Department of Transportation suggests that over the next 25 years, the road fatalities of those over 75 could top 20,000, nearly tripling today’s number of deaths. Consequently, transportation must be rethought to determine how technology can be applied to the automobile to address the specific problems of older drivers and passengers in the 21st century.

Driving may not remain a lifelong option because of diminished capacity or even fear. Many may live in communities with inadequate sidewalks, short-duration traffic signals, and hard-to-read signage that can cause problems for older pedestrians. For those who pursued the American dream of a single-family detached home in the suburbs, the inability to drive may maroon them far from shops and friends. Most older adults will live in the suburbs or rural areas where public transportation is limited or nonexistent. Leveraging existing information and vehicle technologies to provide responsive public transportation systems that provide door-to-door services will be critical to the millions of older adults who choose to age in the homes they built and paid for.

Healthy home. Home is the principal space where we give and receive care, have fun, and live. The home should be a major focus of technology-related research to address how we can prevent injury, access services such as transportation, entertain and care for ourselves, shop, and conduct the other activities that constitute daily living. Most older adults choose to remain in their own homes as they age. From a public policy perspective, this is a cost-effective option provided that the home can be used as a platform to ensure overall wellness. For example, introduction of a new generation of appliances, air filtration and conditioning systems, health monitors, and related devices that could support safe independence and remote caregiving could make the home a viable alternative to long-term care for many older adults. Advances are already being made in microsensors that could be embedded in a toilet seat and used to automatically provide a daily checkup of vital signs. Research should go beyond questions of design and physical accessibility to the development of an integrated home that is attractive to us when we are younger and supportive of us as we age.

Personal communications. One of the greatest risks in aging is not necessarily poor health but isolation. Communication with friends, relatives, health care providers, and others is crucial to healthy aging. Advances in information technologies make it possible and affordable for older adults to remain connected to the world around them. Moreover, a new generation of interactive and easy-to-use applications can be developed for caregivers to ensure that their mothers, fathers, spouses, friends, or patients are safe and well.

Although personal emergency response systems have been invaluable, a new generation of “wireless caregiving” will enable caregivers at any distance to respond to the needs of older friends, family, residents, and patients. Systems that make full use of the existing communications infrastructure can be used to ensure that medicine has been taken, that physical functions are normal, and that minor symptoms are not indicators of a larger problem. They can provide early identification of problems that, if left untreated, may result in hospitalization for the individual and higher health care costs to society.

Yet health is more than a physical status; well-being includes all the other activities and joys that make up a healthy life. For the majority of older adults, connectedness means the ability to learn, to enjoy new experiences, to have fun, and to manage necessary personal services such as transportation and meal delivery. Today’s information systems enable access to these and other activities.

Productive workplace. At one time, retirement age was a fixed point, an inevitable ending to one’s productive years. The workforce is now composed of three generations of workers. Retirement age is increasingly a historical artifact rather than a reality. New careers, supplemental income, or simply staying active are incentives for many people to continue working and volunteering. Numerous corporations now actively recruit older workers. A recently completed AARP survey of baby boomers’ expectations for their retirement years reveals that 8 in 10 anticipate working at least part-time.

An older workforce introduces new challenges to the workplace. For example, changes in the design of workspace will include more than features that enable improved physical movement and safety. Workplace technology will need to address a wide range of physical realities, including manipulation challenges for those with arthritis or auditory problems for those with hearing difficulty. Employed caregivers, particularly adult children, will seek ways to extend their capacity to balance multiple demands on their time and personal well-being. Likewise, employers will seek and adopt new technologies and services that will enable their employees to remain productive and ensure the well-being of their older loved ones.

Perhaps the greatest reality of the older workplace will be the need for continuing education technology that will enable the older worker to acquire new skills. As we choose to stay on the job longer or elect to change careers after two or more decades, technology will be instrumental in ensuring that an aging workforce remains productive and competitive.

Supporting the caregiver

No matter how conducive to independent living the physical environment may be, many older adults will need some form of support, from housecleaning or shopping to bathing or health care. Most caregiving for those who cannot live alone without assistance is provided by a spouse, an adult child, or sometimes a friend. Today, one in four households provides some form of direct care to an older family member. However, societal changes will affect this pattern of caring for future generations.

Many adult children are moving further from their parents. For many, this can mean living on the other side of a metropolitan area; for others, it may mean living out of state. In both instances, providing daily or even semiregular assistance can be problematic. In addition to distance, most caregivers (typically adult daughters) have careers that they are trying to balance along with children and a home. The challenges of balancing these multiple pressures are a major source of caregiver stress and lost productivity on the job. Findings from a survey conducted by the Conference Board reveal that human resources executives in major firms now identify eldercare as a major worklife issue, replacing childcare among their employees’ chief concerns. Moreover, the composition of the family has changed. The high rate of divorce and remarriage has created a complex matrix of relationships and family constellations that make it difficult to decide who is responsible for what.

Technology, whether it be remote interactive communication with a loved one or a way to contract private services to care for a parent living at home, will be a critical component of caregiving in the next century. Such technology will help caregivers meet their multiple responsibilities. Indeed, virtual caregiving networks may become crucial to delivering publicly and privately provided services such as preventative health care, meals, and transportation.

The politics of unmet expectations

Improved well-being is likely to contribute to a very different vision of aging. In addition to wanting products, services, and activities that were not important to their predecessors, older boomers will also want new public policies to support their desire to remain independent. National efforts to ensure income and health security are already on the political agenda; concerns about the quality of life and demands of caregiving will be there soon.

Baby boomers have become accustomed to being the center of public policy. As children, they caused the physical expansion of communities; as young adults, they drove social and market change; and as older adults, they will expect their needs and policy preferences to be met. According to AARP’s recent survey of boomer attitudes about retirement, more than two-thirds are optimistic about their futures and expect to be better off than their parents. What if, after a lifetime of anticipating a productive retirement and fully investing in the American dream, millions find themselves unable to travel safely, no longer able to remain in their homes, or unable to care for loved ones? Aging, once considered a personal problem, will surely become public and political.

State and local governments will be closest to these issues, but most will be ill-equipped to respond to the scope of the problems or politics that will arise. Numerous grassroots political organizations could form to represent the special needs of various older adult groups, such as safe housing for the poor, emergency services for those living alone, or transportation for those in rural areas. Intergenerational politics might emerge as the mainstay of local policy decisions affecting issues such as school budgets and road building. Because these issues are closest to the daily lives of voters, the political conflict could be far greater than today’s Social Security debate, which focuses on the intangible future.

Technology is one tool that offers a wide range of responses that can enhance individual lives, facilitate caregiving, and improve the delivery of services. Boomers experience technological innovation every day in their cars, their office computers, and their home appliances. They will expect technological genius to respond to their needs in old age.

Stimulating innovation

Throughout the federal government, individual offices are beginning to consider policy strategies to address the needs of older adults, due in part to the fact that this is the United Nations’ International Year of the Older Person. The White House is crafting an interagency ElderTech initiative, representatives on Capitol Hill aging committees are investigating the potential of technology, and various cabinet departments are pursuing individual activities in their respective areas. A strategic approach is necessary to leverage each of these activities, to provide a national vision, and to begin building the political coalition necessary to support sustained investment in technology for an aging society. This strategy includes two sets of activities: creating new or restructured institutions that will administer aging and technology policy, and implementing policies that will set the agenda, stimulate the market, and ensure technological equity.

Policy networks of administrative agencies, stakeholder groups, legislators, and experts typically control an issue area. Consensus within the group is generally based on existing definitions of problems, combined with a predetermined set of available and acceptable solutions. This structured bias keeps many issues from being considered and is a major barrier to policy innovation. The current array of interest groups and federal institutions that dominate aging policy was formed over 30 years ago to alleviate poverty among older adults. Special congressional committees and federal agencies, along with their state networks of service providers, were created to address the needs of the older poor. The fact that the majority of older adults are now above the poverty line is a tribute to many of these programs.

The aging of the baby boom generation creates a new frontier with new problems and opportunities. Aging on the scale and with the diversity that will occur over the coming decades will challenge the nation’s current policies and the underpinnings of the existing institutions. Aging policy today, on Capitol Hill and within the bureaucracy, is typically defined around discussions of the “old and poor” or the “old and disabled.” Although this will continue to be appropriate for many older adults, these definitions alone do not allow for the policy innovation necessary to respond to a new generation of older adults. Congressional committee structure and federal agencies should be realigned to allow a broader debate that would include examination of how technology might be used in coming years. Existing government agencies should develop greater capacity to conduct and manage R&D that addresses aging and the physical environment.

Federal policy should seek to encourage the rapid development and commercialization of technology to address the needs of older adults and caregivers. To achieve this objective, federal strategy should include three goals: agenda setting, market stimulation, and technological equity.

Agenda setting. Discussions of Social Security and Medicare have begun to alert the public to coming demographic changes. However, the extent to which the graying of America will affect all aspects of public policy and business is less well understood. The White House is in an ideal position to use its bully pulpit to educate the public about the nation’s demographic trends. Interagency initiatives are an appropriate beginning; however, this should result in a real and unified budget proposal with a single lead agency. This should include resources to advance research on human factors engineering and aging, policy research to develop new models of service delivery, related data development to better understand older adults’ preferences, and demonstration projects to evaluate the efficacy and market potential of various technologies. Likewise, Congress should consider investing in direct R&D to place the issue of aging on the agenda of the engineering community. Government should prioritize its investment to include research that first improves the delivery of existing public services and, second, provides the resources necessary to develop new applications that leapfrog the current array of technologies available to older adults and caregivers. An increase in funding for aging research that relates to disease and physiological problems does not replace the need to stimulate research in re-engineering the physical environment of aging. Moreover, such public investment will jump-start industry research in a market where the return on investment may otherwise be too far in the future.

Market stimulation. Congress should consider a broad range of tax incentives to encourage industry to invest in an emerging market. This would include tax benefits for companies that invest in systems integration to adapt existing technologies and for those that conduct R&D to develop new products and services. Such product innovations would benefit older adults in this country and enhance the U.S. competitive position abroad. In Japan, for example, the proportion of older adults in the population is even higher than it is in the United States. Similarly, families should be given tax incentives to purchase technologies or services. This would create a defined market and assist those who might find the first generation of technologies too expensive. Finally, some long-term care insurers have begun to give premium breaks for households investing in home technologies. The federal government should work with the states to encourage insurers across the country to grant similar technology discounts.

Technological equity. Good policy must ensure equity. The federal government should develop a combination of incentives and subsidies to ensure that low-income older adults and their families have access to new technologies. The faster new technologies are commercialized, the more affordable they will become. Moreover, the government should become a major consumer of technology to improve the delivery of its services. For example, innovative states such as Massachusetts and California are already working to integrate new remote health monitoring systems into public housing for older adults to enhance preventative care and to improve the well-being of lower-income elderly people. To ensure consistency of service, the federal government should work with industry to facilitate technology standards, such as communication protocols for “smart” home appliances.

The aging of the baby boomers will affect every aspect of society. Healthy old age is the one characteristic that each of us hopes to achieve. The nation must begin today to ensure that one of its greatest achievements–longevity–does not become one of its greatest problems. Leveraging the technological power that in part helped us achieve our longer life span will be an important part of how we will live tomorrow.

Remembering George E. Brown, Jr.

Issues is honored that the article on the Small Business Innovation Research program that George Brown coauthored with James Turner for the Summer 1999 Issues was the last article that Rep. Brown worked on before his death on July 15. As he did with so many topics, Rep. Brown approached the subject with deep knowledge, astute judgment, fearless independence, and an unshakable commitment to do what was right. It was not enough that the program provided support to small companies; he wanted to be certain that the money was spent on the best research and that it enhanced the quality of research performed by small firms. If he were still alive to work on the subject in Congress, he would also have engaged in the push and pull of congressional politics and accepted the practicality of compromise. But what was most admirable and memorable about Brown was that he always began with a vision of what was best and right. This caused no end of anguish for his staff, supporters, and allies. There was room for compromise in his life, but only after he had made clear his ideal solution.

Brown wrote several articles and numerous Forum comments for Issues. He wrote about the need to improve the quality of Third World science and about his notions of federal budgeting. He even wrote a book review. In a city of one-page briefing memos and staff-written speeches, it is difficult to imagine a member of Congress carefully reading an entire book and then sitting down himself to write about it, but it was perfectly in character for Brown.

Brown was often introduced at conferences as science’s best friend in Congress, but if one listened to the comments in the hallway after his talks, it sometimes seemed that he was viewed as a traitor to the research community. In truth, Brown was the most knowledgeable member of Congress on science and technology issues, but he was not an S&T lap dog. Although he believed firmly in the value of S&T to society, he did not put the well-being of S&T before the good of the nation. He understood that there are higher values than researcher autonomy. Scientists are wary whenever anyone suggests that science has any social purpose other than the advancement of scientific knowledge. Although Brown believed in the value of curiosity-driven research, he saw no inconsistency in also calling on scientists to use their research to help solve concrete world problems. Brown sincerely believed in the social responsibility of science, but he also understood that Congress and the public would be more willing to fund research if they could see more clearly the connection between research and practical benefits.

Even in death, his ideas inform our discussions. In preparing the articles for this issue I was not surprised to find several direct references to Brown’s work and ideas. Lewis Branscomb rightly invokes Brown’s commitment to the idea that scientific research should be linked to society’s goals. And in Norman Metzger’s discussion of earmarking, it would be impossible not to mention Brown, the most outspoken critic of the practice. Robert Rycroft and Don Kash cite Brown as the member of Congress most aware of the importance of worker training. It would have been just as appropriate to find references to his support for more stringent protection of the oceans and forests, the Landsat remote sensing program, and the nurturing of S&T expertise in the developing countries.

Other members of Congress will speak up for S&T interests, but no one will fill George Brown’s role. During 18 terms in Congress and two terms as chair of the House Science Committee, Brown grew into S&T’s advocate, conscience, philosopher, critic, and comic. The rumpled suit, the gnawed cigar, and the mischievous twinkle in the eye fit him alone.

Commercial Satellite Imagery Comes of Age

Since satellites started photographing Earth from space nearly four decades ago, their images have inspired excitement, introspection, and, often, fear. Like all information, satellite imagery is in itself neutral. But satellite imagery is a particularly powerful sort of information, revealing both comprehensive vistas and surprising details. Its benefits can be immense, but so can its costs.

The number of people able to use that imagery is exploding. By the turn of the century, new commercial satellites will have imaging capabilities approaching those of military spy satellites. But the commercial satellites possess one key difference: Their operators will sell the images to anyone.

A joint venture between two U.S. companies, Aerial Images Inc. and Central Trading Systems Inc., and a Russian firm, Sovinformsputnik, is already selling panchromatic (black-and-white) imagery with ground resolution as small as one and a half meters across. (Ground or spatial resolution refers to the size of the objects on the ground that a satellite sensor can distinguish.) Another U.S. company, Space Imaging, has a much more sophisticated satellite that was launched in late September 1999. It can take one-meter panchromatic and three- to five-meter multispectral (color) images of Earth. Over the next five years, nearly 20 U.S. and foreign organizations are expected to launch civilian and commercial high-resolution observation satellites in an attempt to capture a share of the growing market for remote-sensing imagery.

The uses of satellite images

These new commercial satellites will make it possible for the buyers of satellite imagery to, among other things, distinguish between trucks and tanks, expose movements of large groups such as troops or refugees, and identify the probable location of natural resources. Whether this will be good or bad depends on who chooses to use the imagery and how.

Governments, international organizations, and humanitarian groups may find it easier to respond quickly to sudden refugee movements, to document and publicize large-scale atrocities, to monitor environmental degradation, or to manage international disputes before they escalate to full-scale wars. The United Nations, for example, is studying whether satellite imagery could help to significantly curtail drug trafficking and narcotics production over the next 10 years. The International Atomic Energy Agency is evaluating commercial imagery for monitoring compliance with international arms control agreements.

But there is no way to guarantee benevolent use of satellite images. Governments, corporations, and even small groups of individuals could use commercial imagery to collect intelligence, conduct industrial espionage, plan terrorist attacks, or mount military operations. And even when intentions are good, it can be remarkably difficult to derive accurate, useful information from the heaps of transmitted data. The media have already made major mistakes, misinterpreting images and misidentifying objects, including the number of reactors on fire during the Chernobyl nuclear accident in 1986 and the location of the Indian nuclear test sites just last year.

The trend toward transparency

Bloopers notwithstanding, the advent of these satellites is important in itself and also as a case study for a trend sweeping the world: the movement toward transparency. It is more and more difficult to hide information, not only because of improvements in technology but also because of changing concepts about who is entitled to have access to what information. Across issues and around the world, the idea that governments, corporations, and other concentrations of political and economic power are obliged to provide information about themselves is gaining ground.

In politics, several countries are enacting or strengthening freedom-of-information laws that give citizens the right to examine government records. In environmental issues, the current hot topic is regulation by revelation, in which polluters are required not to stop polluting but to reveal publicly just how much they are polluting. Such requirements have had dramatic effects, shaming many companies into drastically reducing noxious emissions. In arms control, mutual inspections of sensitive military facilities have become so commonplace that it is easy to forget how revolutionary the idea was a decade or two ago. As democratic norms spread, as civil society grows stronger and more effective in its demands for information, as globalization gives people an ever-greater stake in knowing what is going on in other parts of the world, and as technology makes such knowledge easier to attain, increased transparency is the wave of the future.

The legitimacy of remote-sensing satellites themselves is part of this trend toward transparency. Images from high-resolution satellites are becoming available now not only because technology has advanced to the point of making them a potential source of substantial profits, but because government policies permit and even encourage them to operate. Yet governments are concerned about just how far this new source of transparency should be allowed to go. The result is inconsistent policies produced by the conflicting desires of states to both promote and control the free flow of satellite imagery. Although fears about the impact of the new satellites are most often expressed in terms of potential military vulnerabilities, in fact their impact is likely to be far more sweeping. They shift power from the former holders of secrets to the newly informed. That has implications for national sovereignty, for the ability of corporations to keep proprietary information secret, and for the balance of power between government and those outside it.

The new satellite systems challenge sovereignty directly. If satellite operators are permitted to photograph any site anywhere and sell the images to anyone, governments lose significant control over information about their turf. Both spy and civilian satellites have been doing this for years, but operators of the spy satellites have been remarkably reticent about the information they have collected, making it relatively easy for countries to ignore them. Pakistan and India may not have liked being observed by the United States and Russia, but as long as satellite operators were not showing information about Pakistan to India and vice versa, no one got too upset. Although the civilian satellites that operated before the 1990s did provide imagery to the public, they had low resolution, generally not showing objects smaller than 10 meters across. This provides only limited military information, nothing like what will be available from the new one-meter systems.

Under international law, countries have no grounds for objecting to being imaged from space. The existing standards, the result largely of longstanding U.S. efforts to render legitimate both military reconnaissance and civilian imaging from space, are codified in two United Nations (UN) documents. The 1967 Outer Space Treaty declared that outer space cannot be claimed as national territory, thus legitimizing satellite travel over any point on Earth. And despite years of lobbying by the former Soviet bloc and developing countries, which wanted a right of prior consent to review and possibly withhold data about their territories, the UN General Assembly in 1986 adopted legal principles regarding civilian remote sensing that made no mention of prior consent. Instead, the principles merely required that “as soon as the primary data and the processed data concerning the territory under its jurisdiction are produced, the sensed state shall have access to them on a nondiscriminatory basis and on reasonable cost terms.” In other words, if a country knows it is being imaged, it is entitled to buy copies at the going rate. Even then, countries would not know who is asking for specific images and for what purposes. If an order is placed for imagery of a country’s military bases, is that a nongovernmental organization (NGO) trying to monitor that country’s compliance with some international accord or an adversary preparing for a preemptive strike?

There is a major economic concern as well. Corporations with access to satellite imagery may know more about a country’s natural resources than does the country’s own government, putting officials at a disadvantage when negotiating agreements such as drilling rights or mining contracts. And as we have all seen recently, highly visible refugee flows and humanitarian atrocities can attract intense attention from the international community. The growing ability of NGOs and the media to track refugee flows or environmental catastrophes may encourage more interventions, even in the face of resistance from the governments concerned. Will the lackadaisical protection of sovereignty in the 1986 legal principles continue to be acceptable to governments whose territory is being inspected?

Corporations may also feel a new sense of vulnerability if they are observed by competitors trying to keep tabs on the construction of new production facilities or to estimate the size of production runs by analyzing emissions. This is not corporate espionage as usually defined, because satellite imaging is thoroughly legal. But it could make it difficult for corporations to keep their plans and practices secret.

Competitors will not be the only ones keeping an eye on a particular corporation. Environmentalists, for example, may find the new satellites useful for monitoring what a company is doing to the environment. This use will develop more slowly than will military applications, because one-meter spatial resolution is not significantly better than that of existing systems for environmental monitoring. Political scientist Karen Litfin has pointed out that environmental organizations already make extensive use of existing publicly available satellite images to monitor enforcement of the U.S. Endangered Species Act, to document destruction of coral reefs, and to generate plans for ecosystem management. Environmental applications will become far more significant when hyperspectral systems are available, because they will be able to make fine distinctions among colors and thus provide detailed information about chemical composition. That day is not far off; the Orbview 4 satellite, due to be launched in 2000, will carry a hyperspectral sensor.

Environmental groups are not the only organizations likely to take advantage of this new source of information. Some groups that work on security and arms control, such as the Verification Technology and Information Centre (www.fhit.org/vertic) in London and the Federation of American Scientists (www.fas.org) in Washington, have already used, and publicized, satellite imagery. As publicly available imagery improves from five-meter to one-meter resolution, humanitarian groups may find it increasingly useful in dealing with complex emergencies and tracking refugee flows. They will be able to gather and analyze information independent of governments–an important new source of power for civil society.

In short, the new remote-sensing satellites will change who can and will know what, and thus they raise many questions. Who is regulating the remote-sensing industry, who should, and how? Does the new transparency portend an age of peace and stability, or does it create new vulnerabilities that will make the world more rather than less unstable and violent? When should satellite imagery be treated as a public good to be provided (or controlled) by governments, and when should it be treated as a private good to be created by profit seekers and sold to the highest bidder? Who gets to decide? Is it possible to reconcile the public value of the free flow of information for pressing purposes such as humanitarian relief, environmental protection, and crisis management with the needs of the satellite industry to make a profit by selling that information? Is it even possible to control and regulate the flow of images from the new satellites? Or must governments, and people, simply learn to live with relentless eyes in the sky?

Present U.S. policies fail to address some of these questions and give the wrong answers to others. By and large, U.S. policies on commercial and civilian satellites lack the long-term perspective that can help remote sensing fulfill its promise. And there are distressing signs that other countries may be following the United States down the wrong path.

The trials of Landsat

U.S. policy on remote sensing has gyrated wildly among divergent goals. First, there has long been a dispute over the purpose of the U.S. remote-sensing program. Should it be to ensure that the world benefits from unique forms of information, or should it be to create a robust private industry in which U.S. firms would be dominant? Second, the question of which agency within the U.S. government should take operational responsibility for the civilian remote-sensing program has never been resolved. Several different agencies use the data, but none has needed it enough to fight for the continued survival of the program. These two factors have slowed development of a private observation satellite industry and at times have nearly crippled the U.S. civilian program.

The story begins with the launch of Landsat 1 by the National Aeronautics and Space Administration (NASA) in 1972. However, Landsat 1’s resolution (80 meters multispectral) was too coarse for most commercial purposes; scientists, educators, and government agencies were its principal patrons. In an effort to expand the user base and set the stage for commercialization, the Carter administration transferred the program from NASA to the National Oceanic and Atmospheric Administration (NOAA). Ronald Reagan, a strong believer in privatization, decided to pick up the pace despite several studies showing that the market for Landsat data was not nearly strong enough to sustain an independent commercial remote-sensing industry. To jump-start private initiatives, NOAA selected Earth Observation Satellite Company (EOSAT), a joint venture of RCA Corporation and Hughes Aircraft Company, to operate the Landsat satellites and market the resulting data.

The experiment failed disastrously because the market for Landsat imagery was just as poor as the studies had foretold and because the government failed to honor its financial commitments. Prices were raised dramatically, leading to a sharp drop in demand. For several years Landsat hung by a thread.

During this low point, France launched Landsat’s first competitors, which had higher resolutions and shorter revisit times; their images were outselling Landsat’s by 1989. The fate of Landsat’s privatization was sealed when the United States discovered its national security utility during the Gulf War. The U.S. Department of Defense spent an estimated $5 million to $6 million on Landsat imagery during operations Desert Shield and Desert Storm. In 1992, Congress transferred control back to the government.

But Landsat’s troubles were not yet over. In 1993, Landsat 6, the only notable product of the government’s contract with EOSAT, failed to reach orbit, and the $256.5 million spacecraft plunged into the Pacific. Fortunately, Landsat 7 was launched successfully in April 1999, and it is hoped that it will return the United States to the forefront of civilian remote sensing.

Commercial remote sensing emerges

Congress established the legal framework for licensing and regulating a private satellite industry in 1984, but no industry emerged until 1993, when WorldView Inc. became the first U.S. company licensed to operate a commercial land observation satellite. Since then, 12 more U.S. companies have been licensed, and U.S. investors have put an estimated $1.2 billion into commercial remote sensing.

This explosion of capitalist interest reflects political and technological changes. First, the collapse of the Soviet Union removed barriers that stifled private initiatives. Throughout the Cold War, U.S. commercial interests were constantly subordinated to containment of the Soviet threat. Investors were deterred from developing technologies that might be subjected to government scrutiny and regulation.

Second, a newfound faith that the market for remote-sensing data will grow exponentially has spurred expansion of the U.S. private satellite industry. Despite enormous discrepancies among various estimates of the future volume of the remote-sensing market, which range from $2 billion to $20 billion by 2000, most investors believe that if they build the systems, users will come. Potential consumers of remote-sensing data include farmers, city planners, map makers, environmentalists, emergency response teams, news organizations, surveyors, geologists, mining and oil companies, timber harvesters, and domestic as well as foreign military planners and intelligence organizations. Many of these groups already use imagery from French, Russian, and Indian satellites in addition to Landsat, but none of these match the capabilities of the new U.S. commercial systems.

Third, advances in panchromatic, multispectral, and even hyperspectral data acquisition, storage, and processing, along with the ability to quickly and efficiently transfer the data, have further supported industry growth. In the 1980s, information technology could not yet provide a robust infrastructure for data. Now, powerful personal computers capable of handling large data files, geographic information system software designed to manipulate spatial data, and new data distribution mechanisms such as CD-ROMs and the Internet have all facilitated the marketing and sale of satellite imagery.

Fourth, after Landsat commercialization failed, the U.S. government took steps to promote an independent commercial satellite industry. Concerned that foreign competitors such as France, Russia, and India might dominate the market, President Clinton in 1994 loosened restrictions on the sale of high-resolution imagery to foreigners. The government has also tried to promote the industry through direct subsidies to companies and guaranteed data purchases. Earth Watch, Space Imaging, and OrbImage, for example, have been awarded up to $4 million to upgrade ground systems that will facilitate transfer of data from their satellites to the National Imagery and Mapping Agency (NIMA). In addition, the Air Force has agreed to give OrbImage up to $30 million to develop and deploy the WarFighter sensor, which is capable of acquiring eight-meter hyperspectral images of Earth. Although access to most of WarFighter’s imagery will be restricted to government agencies, OrbImage will be permitted to sell 24-meter hyperspectral images to nongovernment sources. The Office of Naval Research has agreed to give Space Technology Development Corporation approximately $60 million to develop and deploy the NEMO satellite, with 30-meter hyperspectral and 5-meter panchromatic sensors. The U.S. intelligence community has also agreed to purchase high-resolution satellite imagery. Since fiscal 1998, for example, NIMA has reportedly spent about $5 million annually on commercial imagery, and Secretary of Defense William Cohen says he expects this figure to increase almost 800 percent over the next five years.

Shutter control

To legitimize satellite remote sensing, the United States pushed hard, and successfully, for international legal principles allowing unimpeded passage of satellites over national territory and for unimpeded distribution of the imagery flowing from civilian satellites. To regain U.S. commercial dominance in the technology, the United States is permitting U.S.-based companies to launch commercial satellites with capabilities substantially better than those available elsewhere. But the United States, like other governments, hesitates to allow the full flowering of transparency. Now that the public provision of high-resolution satellite imagery is becoming a global phenomenon, policy contradictions are becoming glaringly apparent. What are the options?

One possibility is to take unilateral measures, such as the present policy of export control with a twist. Unlike other types of forbidden exports, where the point is to keep some technology within U.S. boundaries, imagery from U.S.-controlled satellites does not originate within the country. Satellites collect the data in outer space, then transmit them to ground stations, many of which are located in other countries. To maintain some degree of export control in this unusual situation, the United States has come up with a policy called “shutter control.” The licenses NOAA has issued for commercial remote-sensing satellites contain this provision: “During periods when national security or international obligations and/or foreign policies may be compromised, as defined by the secretary of defense or the secretary of state, respectively, the secretary of commerce may, after consultation with the appropriate agency(ies), require the licensee to limit data collection and/or distribution by the system to the extent necessitated by the given situation.”

But shutter control raises some major problems. For one thing, satellite imagery is a classic example of how difficult it is to regulate goods with civilian as well as military applications. Economic interests want to maintain a major U.S. presence in what could be a large and highly profitable industry that the United States pioneered. National security interests want to prevent potential adversaries from using the imagery against the United States or its allies, and foreign policy interests prefer no publicity in certain situations.

Yet denying imagery to potential enemies undercuts the market for U.S. companies, and may only relinquish the field to other countries. Potential customers who know that their access to imagery may be cut off at any time by the vagaries of U.S. foreign policy may prefer to build commercial relationships with other, more reliable providers. These difficulties are further complicated by the fact that the U.S. military relies increasingly on these systems and therefore has a stake in their commercial success. Not only does imagery provide information for U.S. military operations, but unlike imagery from U.S. spy satellites, that information can also be shared with allies–a considerable advantage in operations such as those in Bosnia or Kosovo.

An extreme form of shutter control is to prohibit imaging of a particular area. Although it runs counter to longstanding U.S. efforts to legitimize remote sensing, the government has already instituted one such ban. U.S. companies are forbidden to collect or sell imagery of Israel “unless such imagery is no more detailed or precise than satellite imagery . . . that is routinely available from [other] commercial sources.” Furthermore, the president can extend the blackout to any other region. Israel already operates its own spy satellite (Ofeq-3) and plans to enter the commercial remote-sensing market with its one-meter-resolution EROS-A satellite in December 1999. Thus, allegations persist that Israel is at least as interested in protecting its commercial prospects by hamstringing U.S. competitors as it is in protecting its own security.

Shutter control also faces a legal challenge. It may be unconstitutional. The media have already used satellite imagery extensively, and some news producers are eagerly anticipating the new high-resolution systems. The Radio-Television News Directors Association argues vehemently that the existing standard violates the First Amendment by allowing the government to impose prior restraint on the flow of information, with no need to prove clear and present danger or imminent national harm. If shutter control is exercised in any but the most compelling circumstances, a court challenge is inevitable.

Even if it survives such a challenge, shutter control will do little to protect U.S. interests. Although the U.S. satellites will be more advanced than any of the systems currently in orbit other than spy satellites, they hardly have the field to themselves. Russia, France, Canada, and India are already providing high-resolution optical and radar imagery to customers throughout the world, and Israel, China, Brazil, South Korea, and Pakistan are all preparing to enter the commercial market. Potential customers will have many alternative sources of imagery.

Persuasion and voluntary cooperation

An alternative is to persuade other operators of high-resolution satellites to voluntarily restrict their collection and dissemination of sensitive imagery. However, the U.S. decision to limit commercial imagery of Israel was based on 50 years of close cooperation between the two countries. Would the United States be able to elicit similar concessions from other states that operate high-resolution remote-sensing satellites but do not value U.S. interests to the extent that the United States values Israel’s interests? There is little reason to believe that the Russians, Chinese, or Indians would respect U.S. wishes about what imagery should be disseminated or to whom.

The prospect for controlling imagery through international agreements becomes even more precarious as remote-sensing technology proliferates, coming within the grasp of other countries. Canada, for example, plans to launch RADARSAT 2, with three-meter resolution. Initially, NASA was to launch the satellite but expressed reservations once it became clear just how good RADARSAT’s resolution would be. Whether the two countries can agree on how the imagery’s distribution should be restricted remains to be seen, but Canada’s recent announcement of its own shutter-control policy may help to alleviate some U.S. concerns.

If, as certainly seems possible, it proves unworkable to control the flow of information from satellites, two options remain: taking direct action to prevent satellites from seeing what they would otherwise see or learning to live with the new transparency. Direct action requires states to either hide what is on the ground or disable satellites in the sky. Satellites generally travel in fixed orbits, making it easy to predict when one will be overhead. Hiding assets from satellite observation is an old Cold War tactic. The Soviets used to deploy large numbers of fake tanks and even ships to trick the eyes in the sky. Objects can be covered with conductive material such as chicken wire to create a reflective glare that obscures whatever is underneath. One security concern for the United States is whether countries that currently do not try to conceal their activities from U.S. spy satellites will do so once they realize that commercial operators can sell imagery of them to regional adversaries. Officials fear that commercial imagery may deprive the United States of information it currently acquires from its spy satellites.

Although concealment is often possible, it will become harder as satellites proliferate. High-resolution radar capable of detecting objects as small as one meter across–day or night, in any weather, even through clouds or smoke–will reduce opportunities for carrying out sensitive activities unobserved. Moreover, many new systems can look from side to side as well as straight down, so knowing when you are being observed is not so easy.

If hiding does not work, what about countermeasures against the satellite itself? There are many ways to put satellites out of commission other than shooting them down, especially in the case of unprotected civilian systems that are of necessity in low orbits. Electronic and electro-optical countermeasures can jam or deceive satellites. Satellites can also be spoofed: interfered with electronically so that they shut down or change orbit. The operator may never know whether the malfunction is merely a technical glitch or the result of a hostile action. (And the spoofer may never know whether the target satellite was actually affected.) Such countermeasures could prove useful during crises or war to prevent access to pictures of a specific temporary activity without the legal bother of shutter control or the political hassle of negotiated restraints. But during peacetime, they would become obvious if carried out routinely to prevent imaging of a particular site.

The more dramatic approach would be to either shoot a satellite down or destroy its data-receiving stations on the ground. Short of imminent or actual war, however, it is difficult to imagine that the United States would bring international opprobrium on itself by destroying civilian satellites or committing acts of aggression against a sovereign state. If the United States could live with Soviet spy satellites during some of the most perilous moments of the Cold War, it is unthinkable that it would violate international law in order to avoid being observed by far less threatening adversaries. Moreover, the U.S. economy and national security apparatus are far more dependent on space systems than is the case in any other country. It would be self-defeating for the United States to violate the long-held international norm of noninterference with satellite operations.

Get used to it

The instinctive reaction of governments confronted by new information technologies is to try to control them, especially when the technologies are related to power and politics. In the case of high-resolution remote-sensing satellites, however, the only practical choice is to embrace emerging transparency, take advantage of its positive effects, and learn to manage its negative consequences. No one is fully prepared for commercial high-resolution satellite imagery. The U.S. government is trying to maintain a kind of export control over a technology that has long since proliferated beyond U.S. borders. The international community agreed more than a decade ago to permit the unimpeded flow of information from satellite imagery, but that agreement may come under considerable strain as new and far more capable satellites begin to distribute their imagery publicly and widely. Humanitarian, environmental, and arms control organizations can put the imagery to good use. Governments, however, are likely to be uncomfortable with the resulting shift in power to those outside government, especially if they include terrorists. And many, many people will make mistakes, especially in the early days. Satellite imagery is hard to interpret. Junior analysts are wrong far more often than they are right.

Despite these potential problems, on balance the new transparency is likely to do more good than harm. It will allow countries to alleviate fear and suspicion by providing credible evidence that they are not mobilizing for attack. It will help governments and others cope with growing global problems by creating comprehensive sources of information that no single government has an incentive to provide. Like any information, satellite imagery is subject to misuse and misinterpretation. But the eyes in the sky have rendered sustained secrecy impractical. And in situations short of major crisis or war, secrecy rarely works to the public benefit.

Forum – Fall 1999


Science and foreign policy

Frank Loy, Under Secretary for Global Affairs at the U.S. State Department, and Roland Schmitt, president emeritus of Rensselaer Polytechnic Institute, recognize that the State Department has lagged behind the private sector and the scientific community in integrating science into its operations and decisionmaking. This shortfall has persisted despite a commitment from some within State to take full advantage of America’s leading position in science. As we move into the 21st century, it is clear that science and technology will continue to shape all aspects of our relations with other countries. As a result, the State Department must implement many of the improvements outlined by Loy and Schmitt.

Many of the international challenges we face are already highly technical and scientifically complex, and they are likely to become even more so as technological and scientific advances continue. In order to work with issues such as electronic commerce, global environmental pollution, and infectious diseases, diplomats will need to understand the underpinning scientific theories and technological workings. To best maintain and promote U.S. interests, our diplomatic corps needs to broaden its base of scientific and technological knowledge across all levels.

As Loy and Schmitt point out, this requirement is already recognized within the State Department and has been highlighted by Secretary Madeleine Albright’s request that the National Research Council (NRC) review the issue. The NRC’s preliminary findings underscore this existing commitment. And as Loy reiterated to an audience at the Woodrow Wilson Center (WWC), environmental diplomacy in the 21st century requires that negotiators “undergird international agreements with credible scientific data, understanding and analysis.”

Sadly, my experience on Capitol Hill teaches me that a significant infusion of resources for science in international affairs will not be forthcoming. Given current resources, State can begin to address the shortfall in internal expertise by seeking the advice of outside experts to build diplomatic expertise and inform negotiations and decisionmaking. In working with experts in academia, the private sector, nongovernmental organizations, and scientific and research institutions, Foreign Service Officers can tap into some of the most advanced and up-to-date information available on a wide range of issues. Institutions such as the American Association for the Advancement of Science, NRC, and WWC can and do support this process by facilitating discussions in a nonpartisan forum designed to encourage the free exchange of information. Schmitt’s arguments demonstrate a private-sector concern and willingness to act as a partner as well. It is time to better represent America’s interests and go beyond the speeches and the reviews to operationalize day-to-day integration of science and technology into U.S. diplomatic policy and practice.

LEE H. HAMILTON

Director

Woodrow Wilson International Center

for Scholars

Washington, D.C.


Frank Loy and Roland Schmitt are optimistic about improving science at State. So was I in 1990-91 when I wrote the Carnegie Commission’s report on Science and Technology in U.S. International Affairs. But it’s hard to sustain optimism. Remember that in 1984, Secretary of State George P. Shultz cabled all diplomatic posts a powerful message: “Foreign policy decisions in today’s high-technology world are driven by science and technology . . . and in foreign policy we (the State Department) simply must be ahead of the S&T power curve.” His message fizzled.

The record, in fact, shows steady decline. The number of Science Counselor positions dropped from about 22 in the 1980s to 10 in 1999. The number of State Department officials with degrees in science or engineering who serve in science officer positions has, according to informed estimates, shrunk during the past 15 years from more than 25 to close to zero. Many constructive proposals, such as creating a Science Advisor to the Secretary, have been whittled down or shelved.

So how soon will Loy’s excellent ideas be pursued? If actions depend on the allocation of resources, the prospects are bleak. When funding choices must be made, won’t ensuring the physical security of our embassies, for example, receive a higher priority than recruiting scientists? Congress is tough on State’s budget because only about 29 percent of the U.S. public is interested in other countries, and foreign policy issues represent only 7.3 percent of the nation’s problems as seen by the U.S. public (figures are from a 1999 survey by the Chicago Council on Foreign Relations). If the State Department doesn’t make its case, and if Congress is disinclined to help, what can be done?

In his astute conclusion, Schmitt said he was “discouraged about the past but hopeful for the future.” Two immediate steps could be taken that would realize his hopes and mine within the next year.

First, incorporate into the Foreign Service exam a significant percentage of questions related to science and mathematics. High-school and college students confront such questions on the SATs and in exams for graduate schools. Why not challenge all those who seek careers in the foreign service in a similar way? Over time, this step would have strong leverage by increasing the science and math capacity among our already talented diplomatic corps, especially if career-long S&T retraining were also sharply increased.

Second, outsource most of the State Department’s current S&T functions. The strong technical agencies, from the National Science Foundation and National Institutes of Health to the Department of Energy and NASA, can do most of the jobs better than State. The Office of Science and Technology Policy could collaborate with other White House units to orchestrate the outsourcing, and State would ensure that for every critical country and issue, the overarching political and economic components of foreign policy would be managed adroitly. If the president and cabinet accepted the challenge, this bureaucratically complex redistribution of responsibilities could be accomplished. Congressional committees would welcome such a sweeping effort to create a more coherent pattern of action and accountability.

RODNEY NICHOLS

President

New York Academy of Sciences

New York, New York


In separate articles in the Summer 1999 edition of Issues, Roland W. Schmitt (“Science Savvy in Foreign Affairs”) and Frank E. Loy (“Science at the State Department”) write perceptively on the issue of science in foreign affairs, and particularly on the role the State Department should play in this arena. Though Loy prefers to see a glass half full, and Schmitt sees a glass well on its way down from half empty, the themes that emerge have much in common.

The current situation at the State Department is a consequence of the continuing deinstitutionalization of the science and technology (S&T) perspective on foreign policy. The reasons for this devolution are several. One no doubt was the departure from the Senate of the late Claiborne Pell, who authored the legislation that created the State Department’s Bureau of Oceans, Environment, and Science (OES). Without Pell’s paternal oversight from his position of leadership on the Senate Foreign Relations Committee, OES quickly fell on hard times. Less and less attention was paid to it either inside or outside the department. Some key activities, such as telecommunications, that could have benefited from the synergies of a common perspective on foreign policy were never integrated with OES; others, such as nuclear nonproliferation, began to be dispersed.

Is there a solution to what seems to be an almost futile struggle for State to come to terms with a world of the Internet, global climate change, gene engineering, and AIDS? Loy apparently understands the need to institutionalize S&T literacy throughout his department. But it’s tricky to get it right. Many of us applauded the establishment in the late 1980s of the “Science Cone” as a career path within the Foreign Service. Nevertheless, I am not surprised by Loy’s analysis of the problems of isolation that that path apparently held for aspiring young Foreign Service Officers (FSOs). It was a nice try (although questions linger as to how hard the department tried).

Other solutions have been suggested. Although a science advisor to a sympathetic Secretary of State might be of some value, in most cases such a position standing outside departmental line management is likely to accomplish little. Of the last dozen or so secretaries, perhaps only George Shultz might have known how to make good use of such an appointee. Similarly, without a clear place in line management, advisory committees too are likely to have little lasting influence, however they may be constituted.

But Loy is on to something when he urges State to “diffuse more broadly throughout the Department a level of scientific knowledge and awareness.” He goes on to recommend concrete steps that, if pursued vigorously, might ensure that every graduate of the Foreign Service Institute is as firmly grounded in essential knowledge of S&T as in economics or political science. No surprises here; simply a reaffirmation of what used to be considered essential to becoming an educated person. In his closing paragraph, Schmitt seems to reach the same conclusion and even offers the novel thought that the FSO entrance exam should have some S&T questions. Perhaps when a few aspiring FSOs wash out of the program because of an inability to cope with S&T issues on exit exams we’ll know that the Department is serious.

Another key step toward reinstitutionalizing an S&T perspective on foreign affairs is needed: Consistent with the vision of the creators of OES, the department once again should consolidate in one bureau responsibility for those areas–whether health, environment, telecommunications, or just plain S&T cooperation–in which an understanding of science is a prerequisite to an informed point of view on foreign policy. If State fails to do so, then its statutory authority for interagency coordination of S&T-related issues in foreign policy might as well shift to other departments and agencies, a process that is de facto already underway.

The State Department needs people schooled in the scientific disciplines for the special approach they can provide in solving the problems of foreign policy in an age of technology. The department takes some justifiable pride in the strength of its institutions and in their ability to withstand the tempests of political change. And it boasts a talented cadre of FSOs who are indeed a cut above. But only if it becomes seamlessly integrated with State Department institutions is science likely to exert an appropriate influence on the formulation and practice of foreign policy.

FRED BERNTHAL

President

Universities Research Association

Washington, D.C.

The author is former assistant secretary of state for oceans, environment, and science (1988-1990).


Roland W. Schmitt argues that scientists and engineers, in contrast to the scientifically ignorant political science generalists who dominate the State Department, should play a critical role in the making of foreign policy and international agreements. Frank E. Loy demurs, contending that the mission of the State Department is to develop and conduct foreign policy, not to advance science. The issues discussed by these authors raise an important question underpinning policymaking in general: Should science be on tap or on top? Both Schmitt and Loy are partially right and wrong.

Schmitt is right about the poor state of U.S. science policy. Even arms control and the environment, which Loy (with no disagreement from Schmitt) lauds as areas where scientists play key roles, demonstrate significant shortcomings. For example, there is an important omission in the great Strategic Arms Reduction Treaty (START), which reduced the number of nuclear delivery vehicles and warheads, including submarine-launched ballistic missiles (SLBMs). The reduction of SLBMs entailed the early and unanticipated retirement or decommissioning of 31 Russian nuclear submarines, each powered by two reactors, in a short space of time. Any informed scientist knows that the decommissioning of nuclear reactors is a major undertaking that must deal with highly radioactive fuel elements and reactor compartments, along with the treatment and disposal of high- and low-level wastes. A scientist working in this area would also know of the hopeless inadequacy of nuclear waste treatment facilities in Russia.

Unfortunately, the START negotiators obviously did not include the right scientists, because the whole question of what to do with a large number of retired Russian nuclear reactors and the exacerbated problem of Russian nuclear waste were not addressed by START. This has led to the dumping of nuclear wastes and even of whole reactors into the internationally sensitive Arctic Ocean. Belated and expensive “fire brigade” action is now being undertaken under the Cooperative Threat Reduction program paid for by the U.S. taxpayer. The presence of scientifically proficient negotiators would probably have led to an awareness of these problems and to agreement on remedial measures to deal effectively with the problem of Russian nuclear waste in a comprehensive, not fragmented, manner.

But Loy is right to the extent that the foregoing example does not make a case for appointing scientists lacking policy expertise to top State Department positions. Scientists and engineers who are ignorant of public policy are as unsuitable at the top as scientifically illiterate policymakers. For example, negotiating a treaty or formulating policy pertaining to global warming or biodiversity calls for much more than scientific knowledge about the somewhat contradictory scientific findings on these subjects. Treaty negotiators or policymakers need to understand many other critical concepts, such as sustainable development, international equity, common but differentiated responsibility, state responsibility, economic incentives, market mechanisms, free trade, liability, patent rights, the north-south divide, domestic considerations, and the difference between hard and soft law. It would be intellectually and diplomatically naive to dismiss the sophisticated nuances and sometimes intractable problems raised by such issues as “just politics.” Scientists and engineers who are unable to meld the two cultures of science and policy should remain on tap but not on top.

Science policy, with a few exceptions, is an egregiously neglected area of intellectual capital in the United States. It is time for universities to rise to this challenge by training a new genre of science policy students who are instructed in public policy and exposed to the humanities, philosophy, and political science. When this happens, we will see a new breed of policy-savvy scientists, and the claim that such scientists should be on top and not merely on tap will be irrefutable.

LAKSHMAN GURUSWAMY

Director

National Energy-Environment Law and Policy Institute

University of Tulsa

Tulsa, Oklahoma


Education and mobility

Increasing the effectiveness of our nation’s science and mathematics education programs is now more important than ever. The concerns that come into my office–Internet growth problems, cloning, nuclear proliferation, NASA space flights, and global climate change, to name a few–indicate the importance of science, mathematics, engineering, and technology. If our population remains unfamiliar and uncomfortable with such concepts, we will not be able to lead in the technically driven next century.

In 1998, Speaker Newt Gingrich asked me to develop a new long-range science and technology policy that was concise, comprehensive, and coherent. The resulting document, Unlocking Our Future: Toward A New National Science Policy, includes major sections on K-12 math and science education and how it relates to the scientific enterprise and our national interest. As a former research physicist and professor, I am committed as a congressman to doing the best job I can to improve K-12 science and mathematics education within the federal government’s limited role in this area.

The areas for action mentioned by Eamon M. Kelly, Bob H. Suzuki, and Mary K. Gaillard in “Education Reform for a Mobile Population” (Issues, Summer 1999) are in line with my thinking. I offer the following as additional ideas for consideration.

By bringing together the major players in the science education debate, including scientists, professional associations, teacher groups, textbook publishers, and curriculum authors, we could establish a national consensus on an optimal scope and sequence for math and science education in America. Given the number of students who change schools and the degree to which science and math disciplines follow a logical and structured sequence, such a consensus could provide much-needed consistency to our K-12 science and mathematics efforts.

The federal government could provide resources for individual schools to hire a master teacher to help teachers implement hands-on, inquiry-based course activities grounded in content. Science, math, engineering, and technology teachers need more professional development, particularly with the recent influx of technology into the classroom and the continually growing body of evidence describing the effectiveness of hands-on instruction. Given that teachers now must manage an increasing inventory of lab materials and equipment, computer networks, and classes actively engaged in research and discovery, resources need to be targeted directly at those in the classroom, and a master teacher would be a tremendous resource for that purpose.

Scientific literacy will be a requirement for almost every job in the future, as technology infuses the workforce and information resources become as valuable as physical ones. Scientific issues and processes will undergird our major debates. A population that is knowledgeable and comfortable with such issues will result in a better functioning democracy. I am convinced that a strengthened and improved K-12 mathematics and science education system is crucial for America’s success in the next millennium.

REP. VERNON J. EHLERS

Republican of Michigan


The publication of “Education Reform for a Mobile Population” in this journal and the associated National Science Board (NSB) report are important milestones in our national effort to improve mathematics and science education. The point of departure for the NSB was the Third International Mathematics and Science Study (TIMSS), in which U.S. high-school students performed dismally.

In my book Aptitude Revisited, I presented detailed statistical data from prior international assessments, going back to the 1950s. Those data do not support the notion that our schools have declined during the past 40 years; U.S. students have performed poorly on these international assessments for decades.

Perhaps the most important finding to emerge from such international comparisons is this: When U.S. students do poorly, parents and teachers attribute their failure to low aptitude. When Japanese students do poorly, parents and teachers conclude that the student has not worked hard enough. Aptitude has become the new excuse and justification for a failure to educate in science and mathematics.

Negative expectations about their academic aptitude often erode students’ self-confidence and lower both their performance and aspiration levels. Because many people erroneously attribute low aptitude for mathematics and science to women, minority students, and impoverished students, this domino effect increases the educational gap between the haves and the have-nots in our country. But as Eamon M. Kelly, Bob H. Suzuki, and Mary K. Gaillard observe, for U.S. student achievement to rise, no one can be left behind.

Consider the gender gap in the math-science pipeline. I studied a national sample of college students who were asked to rate their own ability in mathematics twice: once as first-year students and again three years later. The top category was “I am in the highest ten percent when compared with other students my age.” I studied only students who clearly were in the top 10 percent, based on their score on the quantitative portion of the Scholastic Aptitude Test (SAT). Only 25 percent of the women who actually were in the top 10 percent believed that they were on both occasions.

Exciting research about how this domino effect can be reversed was carried out by Uri Treisman, at the University of California, Berkeley. While a teaching assistant in calculus courses, he observed that the African American students performed very poorly. Rejecting a remedial approach, he developed an experimental workshop based on expectations of excellence in which he required the students to do extra, more difficult homework problems while working in cooperative study groups. The results were astounding: The African American students went on to excel in calculus. In fact, these workshop students consistently outperformed both Anglos and Asians who entered college with comparable SAT scores. The Treisman model now has been implemented successfully in a number of different educational settings.

Mathematics and science teachers play a crucial role in the education of our children. I would rather see a child taught the wrong curriculum or a weak curriculum by an inspired, interesting, powerful teacher than have the same child taught the most advanced, best-designed curriculum by a dull, listless teacher who doesn’t fully understand the material himself or herself. Other countries give teachers much more respect than we do here in the United States. In some countries, it is considered a rare honor for a child’s teacher to be a dinner guest in the parents’ home. Furthermore, we don’t pay teachers enough either to reward them appropriately or to recruit talented young people into this vital profession.

One scholar has suggested that learning to drive provides the best metaphor for science and mathematics education and, in fact, education more generally. As they approach the age of 16, teenagers can hardly contain their enthusiasm about driving. We assume, of course, that they will master this skill. Some may fail the written test or the driving test once, even two or three times, but they will all be driving, and soon. Parents and teachers don’t debate whether a young person has the aptitude to drive. Similarly, we must expect and assume that all U.S. students can master mathematics and science.

DAVID E. DREW

Joseph B. Platt Professor of Education and Management

Claremont Graduate University

Claremont, California


A strained partnership

In “The Government-University Partnership in Science” (Issues, Summer 1999), President Clinton makes a thoughtful plea to continue that very effective partnership. However, he is silent on key issues that are putting strain on it. One is the continual effort of Congress and the Office of Management and Budget to shift more of the expenses of research onto the universities. Limits on indirect cost recovery, mandated cost sharing for grants, universities’ virtual inability to gain funds for building and renovation, and increased audit and legal costs all contribute. This means that the universities have to pay an increasing share of the costs of U.S. research advances. It is ironic that even when there is increased money available for research, it costs the universities more to take advantage of the funds.

The close linkage of teaching and research in America’s research universities is one reason why the universities have responded by paying these increased costs of research. However, we are moving to a situation where only the richest of our universities can afford to invest in research infrastructure. All of the sciences are becoming more expensive to pursue as we move to the limits of parameters (such as extremely low temperatures and single-atom investigations) and we gather larger data sets (such as sequenced genomes and astronomical surveys). Unless the federal government is willing to fund more of the true costs of research, there will be fewer institutions able to participate in the scientific advances of the 21st century. This will widen the gap between the education available at rich and less rich institutions. It will also lessen the available capacity for carrying out frontier research in our country.

An important aspect of this trend is affecting our teaching and research hospitals. These hospitals have depended on federal support for their teaching capabilities–support that is uncertain in today’s climate. The viability of these hospitals is key to maintaining the flow of well-trained medical personnel. Also, some of these hospitals undertake major research programs that provide the link between basic discovery and its application to the needs of sick people. The financial health of these hospitals is crucial to the effectiveness of our medical schools. It is important that the administration and Congress look closely at the strains being put on these institutions.

The government-university partnership is a central element of U.S. economic strength, but the financial cards are held by the government. It needs to be cognizant of the implications of its policies and not assume that the research enterprise will endure in the face of an ever more restrictive funding environment.

DAVID BALTIMORE

President

California Institute of Technology

Pasadena, California


Government accountability

In “Are New Accountability Rules Bad for Science?” (Issues, Summer 1999), Susan E. Cozzens is correct in saying that “the method of choice in research evaluation around the world was the expert review panel,” but a critical question is who actually endorses that choice and whose interests it primarily serves.

Research funding policies are almost invariably geared toward the interests of highly funded members of the grantsmanship establishment (the “old boys’ network”), whose prime interest lies in increasing their own stature and institutional weight. As a result, research creativity and originality are suppressed, or at best marginalized. What really counts is not your discoveries (if any), but what your grant total is.

The solution? Provide small “sliding” grants that are subject to only minimal conditions, such as the researcher’s record of prior achievements. To review yet-to-be-done work (“proposals”) makes about as much sense as scientific analysis of Baron Munchausen’s stories. Past results and overall competence are much easier to assess objectively. Cutthroat competition for grants that allegedly should boost excellence in reality leads to proliferation of mediocrity and conformism.

Not enough money for small, no-frills research grants? Nonsense. Much of in-vogue research is actually grossly overfunded. In many cases (perhaps the majority), lower funding levels would lead to better research, not the other way around.

Multiple funding sources should also be phased out. Too much money from several sources often results in a defocusing of research objectives and a vicious grant-on-grant rat race. University professors should primarily do the work themselves. Instead, many of them act mainly as managers of ludicrously large staffs of cheap research labor. How much did Newton, Gauss, Faraday, or Darwin rely on postdocs in their work? And where are the people of their caliber nowadays? Ask the peer-review experts.

ALEXANDER A. BEREZIN

Professor of Engineering Physics

McMaster University

Hamilton, Ontario, Canada


I thank my friend and colleague Susan E. Cozzens for her favorable mention of the Army Research Laboratory (ARL) in her article. ARL was a Government Performance and Results Act (GPRA) pilot project and the only research laboratory to volunteer for that “honor.” As such, we assumed a certain visibility and leadership role in the R&D community for developing planning and measuring practices that could be adapted for use by research organizations feeling the pressure of GPRA bearing down on them. And we did indeed develop a business planning process and a construct for performance evaluation that appear to be holding up fairly well after six years and have been recognized by a number of organizations, both in and out of government, as a potential solution to some of the problems that Susan discusses.

I would like to offer one additional point. ARL, depending on how one analyzes the Defense Department’s organizational chart, is 5 to 10 levels down from where the actual GPRA reporting responsibility resides. So why did we volunteer to be a pilot project in the first place, and why do we continue to follow the requirements of GPRA even though we no longer formally report on them to the Office of Management and Budget? The answer is, quite simply, that these methods have been adopted by ARL as good business practice. People sometimes fail to realize that government research organizations, and public agencies in general, are in many ways similar to private-sector businesses. There are products or services to be delivered; there are human, fiscal, and capital resources to be managed; and there are customers to be satisfied and stakeholders to be served. Sometimes who these customers and stakeholders are is not immediately obvious, but they are surely there. Otherwise, what is your purpose in being? (And why does someone continue to sign your paycheck?) And there also is a type of “bottom line” that we have to meet. It may be different from one organization to another, and it usually cannot be described as “profit,” but it is there nonetheless. This being so, it seems only logical to me for an organization to do business planning, to have strategic and annual performance plans, and to evaluate performance and then report it to stakeholders and the public. In other words, to manage according to the requirements of GPRA.

Thus ARL, although no longer specifically required to do so, continues to plan and measure; and I believe we are a better and more competitive organization for it.

EDWARD A. BROWN

Chief, Special Projects Office and GPRA Pilot Project Manager

U.S. Army Research Laboratory

Adelphi, Maryland


Small business research

In “Reworking the Federal Role in Small Business Research” (Issues, Summer 1999), George E. Brown, Jr., and James Turner do the academic and policy community an important service by clearly reviewing the institutional history of the Small Business Innovation Research (SBIR) program and by calling for changes in it.

Until I had the privilege of participating in several National Research Council studies related to the Department of Defense’s (DOD’s) SBIR program, what I knew about SBIR was what I read. This may also characterize others’ so-called “experience” with the program. Having now had a first-hand research exposure to SBIR, my views of the program have matured from passive to extremely favorable.

Brown and Turner call for a reexamination, stating that “the rationale for reviewing SBIR is particularly compelling because the business environment has changed so much since 1982.” Seventeen years is a long time, but one might consider an alternative rationale for an evaluative inquiry.

The reason for reviewing public programs is to ensure fiscal and performance accountability. Assessing SBIR in terms of overall management efficiency, its ability to document the usefulness of direct outputs from its sponsored research, and its ability to describe–anecdotally or quantitatively–social spillover outcomes should be an ongoing process. Such is simply good management practice.

Regarding performance accountability, there are metrics beyond those related to the success rate of funded projects, as called for by Brown and Turner. Because R&D is characterized by a number of elements of risk, the path of least resistance for SBIR, should it follow the Brown and Turner recommendation, would be to increase measurable success by funding less risky projects. An analysis of SBIR that John Scott of Dartmouth College and I conducted reveals that SBIR’s support of small, innovative, defense-related companies has a spillover benefit to society of a magnitude approximately equal to what society receives from other publicly funded, privately performed programs or from research performed in the industrial sector that spills over into the economy. The bottom line is that SBIR funds socially beneficial high-risk research in small companies, and without SBIR that research would not occur.

ALBERT N. LINK

Professor of Economics

University of North Carolina at Greensboro

Greensboro, North Carolina


George E. Brown, Jr. and James Turner pose the central question for SBIR: What is it for? For 15 years, it has been mostly a small business adjunct to what the federal agencies would do anyway with their R&D programs. It has shown no demonstrable economic gain that would not have happened if the federal R&D agencies had been left alone. Brown and Turner want SBIR either to become a provable economic gainer or to disappear if it cannot show a remarkable improvement over letting the federal R&D agencies just fund R&D for government purposes.

Although SBIR has a nominal rationale and goal of economic gain, Congress organized the program to provide no real economic incentive. The agencies gain nothing from the economic success of the companies they fund, with one exception: BMDO (Star Wars) realizes that it can gain new products on the cheap by fostering new technologies that are likely to attract capital investment as they mature. To do so, BMDO demands an economic discipline that almost every other agency disdains.

Brown and Turner recognize this deficiency by suggesting new schemes that would inject the right incentives. A central fund manager could be set up with power and accountability equivalent to those of a manager of a mutual fund portfolio or a venture capital fund. The fund’s purpose, and the scale of reward for the manager, would depend on the fund’s economic gain.

But the federal government running any program for economic gain raises a larger question of the federal role. Such a fund would come into competition with private investors for developments with reasonable market potential. The federal government should not be so competing with private investors.

If SBIR is to have an economic purpose, it must be evaluated by economic measures. The National Research Council is wrestling with the evaluation question, but its testimony and reports to date do not offer much hope of a hard-hitting conversion to economic metrics for SBIR. If neither the agencies nor the metrics focus on economics, SBIR cannot ever become a successful economic program.

CARL W. NELSON

Washington, D.C.


Perils of university-industry collaboration

Richard Florida’s analysis of university-industry collaborations provides a sobering view of the gains and losses associated with the new partnerships (“The Role of the University: Leveraging Talent, Not Technology,” Issues, Summer 1999). Florida is justifiably concerned about the effect academic entrepreneurship has on compromising the university’s fundamental missions, namely the production and dissemination of basic knowledge and the education of creative researchers. There is another loss that is neglected in Florida’s discussion: the decline of the public interest side of science. As scientists become more acclimated to private-sector values, including consulting, patenting of research, serving on industry scientific advisory boards, and setting up for-profit companies in synergism with their university, the public ethos of science slowly disappears, to the detriment of the communitarian interests of society.

To explain this phenomenon, I refer to my previous characterization of the university as an institution with at least four personalities. The classical form (“knowledge is virtue”) represents the view that the university is a place where knowledge is pursued for its own sake and that the problems of inquiry are internally driven. Universal cooperation and the free and open exchange of information are preeminent values. According to the defense model (“knowledge is security”), university scientists, their laboratories, and their institutions are an essential resource for our national defense. In fulfilling this mission, universities have accommodated to secrecy in defense contracts that include military weaponry, research on insurgency, and the foreign policy uses of propaganda.

The Baconian ideal (“knowledge is productivity”) considers the university to be the wellspring of knowledge and personnel that contribute to economic and industrial development. Responsibility for the scientist begins with industry-supported discovery and ends with a business plan for the development and marketing of products. The pursuit of knowledge is not fully realized unless it results in greater productivity for an industrial sector.

Finally, the public interest model (“knowledge is human welfare”) sees the university’s role as the solution of human health and welfare problems. Professors, engaged in federally funded medical, social, economic, and technological research, are viewed as a public resource. The norms of public interest science are consonant with some of the core values of the classical model, particularly openness and the sharing of knowledge.

Since the rise of land-grant institutions in the mid-1800s, the cultivation of university consultancies by the chemical industry early in the 20th century, and the dramatic rise of defense funding for academia after World War II, the multiple personalities of the university have existed in a delicate balance. When one personality gains influence, its values achieve hegemony over the norms of the other traditions.

Florida focuses on the losses among the classical virtues of academia (unfettered science), emphasizing restrictions on research dissemination, choice of topics of inquiry, and the importance given to intellectual priority and privatization of knowledge. I would argue that there is another loss, equally troubling but more subtle. University entrepreneurship shifts the ethos of academic scientists toward a private orientation and away from the public interest role that has largely dominated the scientific culture since the middle of the century. It was, after all, public funds that paid and continue to pay for the training of many scientists in the United States. An independent reservoir of scientific experts who are not tied to special interests is critical for realizing the potential of a democratic society. Each time a scientist takes on a formal relationship with a business venture, this public reservoir shrinks. Scientists who are tethered to industrial research are far less likely to serve in the role of vox populi. Instead, society is left with advocacy scientists either representing their own commercial interests or losing credibility as independent spokespersons because of their conflicts of interest. The benefits to academia of knowledge entrepreneurship pale against this loss to society.

SHELDON KRIMSKY

Tufts University

Boston, Massachusetts


Richard Florida suggests that university-industry research relationships and the commercialization of university-based research may interfere with students’ learning and inhibit the ability of universities to produce top talent. There is some anecdotal evidence that supports this assertion.

At the Massachusetts Institute of Technology (MIT), an undergraduate was unable to complete a homework assignment that was closely related to work he was doing for a company because he had signed a nondisclosure agreement that prohibited him from discussing his work. Interestingly, the company that employed the student was owned by an MIT faculty member, and the instructor of the class owned a competing firm. In the end, the instructor of the course was accused of using his homework as a form of corporate espionage, and the student was given another assignment.

In addition to classes, students learn through their work in laboratories and through informal discussions with other faculty, staff, and students. Anecdotal evidence suggests that joint university-industry research and commercialization may limit learning from these less formal interactions as well. For example, it is well known that in fields with high commercial potential such as human genetics, faculty sometimes instruct students working in their university labs to refrain from speaking about their work with others in order to protect their scientific lead and the potential commercial value of their results. This suppression of informal discussion may reduce students’ exposure to alternative research methodologies used in other labs and inhibit their relationships with fellow students and faculty.

Policymakers must be especially vigilant with respect to protecting trainees in the sciences. Universities, as the primary producers of scientists, must protect the right of students to learn in both formal and informal settings. Failure to do so could result in scientists with an incomplete knowledge base, a less than adequate repertoire of research skills, a greater tendency to engage in secrecy in the future, and ultimately in a slowing of the rate of scientific advancement.

ERIC G. CAMPBELL

DAVID BLUMENTHAL

Institute for Health Policy

Harvard Medical School

Massachusetts General Hospital

Boston, Massachusetts


Reefer medics

“From Marijuana to Medicine” in your Spring 1999 issue (by John A. Benson, Jr., Stanley J. Watson, Jr., and Janet E. Joy) will be disappointing on many counts to those who have long been pleading with the federal government to make supplies of marijuana available to scientists wishing to gather persuasive data to either establish or refute the superiority of smoked marijuana to the tetrahydrocannabinol available by prescription as Marinol.

There is no argument about the utility of Marinol to relieve (at least in some patients) the nausea and vomiting associated with cancer chemotherapy and the anorexia and weight loss suffered by AIDS patients. Those indications are approved by the Food and Drug Administration. What remains at issue is the preference of many patients for the smoked product. The pharmacokinetics of Marinol help to explain its frequently disappointing performance and the preference for smoked marijuana among the sick, oncologists, and AIDS doctors. These patients and physicians would disagree vehemently with the statement by Benson et al. that “in most cases there are more effective medicines” than smoked marijuana. So would at least some glaucoma sufferers.

And why, pray tell, must marijuana only be tested in “short-term trials”? Furthermore, do Benson et al. really know how to pick “patients that are most likely to benefit,” except by the anecdotal evidence that they find unimpressive? And how does recent cannabinoid research allow the Institute of Medicine (IOM) (or anyone else) to draw “science-based conclusions about the medical usefulness of marijuana”? And what are those conclusions?

Readers wanting a different spin on this important issue would do well to read Lester Grinspoon’s Marijuana–the Forbidden Medicine or Zimmer and Morgan’s Marijuana Myths, Marijuana Facts. The Issues piece in question reads as if the authors wanted to accommodate both those who believe in smoked marijuana and those who look on it as a work of the devil. It is, however, comforting to know that the IOM report endorses “exploration of the possible therapeutic benefits of cannabinoids.” The opposite point of view (unfortunately espoused by the retired general who is in charge of the federal “war on drugs”) is not tenable for anyone who has bothered to digest the available evidence.

LOUIS LASAGNA

Sackler School of Graduate Biomedical Sciences

Tufts University

Boston, Massachusetts


In the past few years, an increasing number of Americans have become familiar with the medical uses of cannabis. The most striking political manifestation of this growing interest is the passage of initiatives in more than a half dozen states that legalize this use under various restrictions. The states have come into conflict with federal authorities, who for many years insisted on proclaiming medical marijuana to be a hoax. Finally, under public pressure, the director of the Office of National Drug Control Policy, Barry McCaffrey, authorized a review of the question by the Institute of Medicine (IOM) of the National Academy of Sciences.

Its report, published in March of 1999, acknowledged the medical value of marijuana, but grudgingly. Marijuana is discussed as if it resembled thalidomide, with well-established serious toxicity (phocomelia) and limited clinical usefulness (for treatment of leprosy). This is entirely inappropriate for a drug with limited toxicity and unusual medical versatility. One of the report’s most important shortcomings is its failure to put into perspective the vast anecdotal evidence of these qualities.

The report states that smoking is too dangerous a form of delivery, but this conclusion is based on an exaggerated estimate of the toxicity of the smoke. The report’s Recommendation Six would allow a patient with what it calls “debilitating symptoms (such as intractable pain or vomiting)” to use smoked marijuana for only six months, and then only after all approved medicines have failed. The treatment would be monitored in much the same way as institutional review boards monitor risky medical experiments–an arrangement that is inappropriate and totally impractical. Apart from this, the IOM would have patients who find cannabis most helpful when inhaled wait years for the development of a way to deliver cannabinoids smoke-free. But there are already prototype devices that take advantage of the fact that cannabinoids vaporize at a temperature below the ignition point of dried cannabis plant material.

At least the report confirms that even government officials no longer doubt the medical value of cannabis constituents. Inevitably, cannabinoids will be allowed to compete with other medicines in the treatment of a variety of symptoms and conditions, and the only uncertainty involves the form in which they will be delivered. The IOM would clearly prefer the forms and means developed by pharmaceutical houses. Thus, patients now in need are asked to suffer until we have inhalation devices that deliver yet-to-be-developed aerosols or until isolated cannabinoids and cannabinoid analogs become commercially available. This “pharmaceuticalization” is proposed as a way to provide cannabis as a medicine while its use for any other purposes remains prohibited.

As John A. Benson, Jr., Stanley J. Watson, Jr., and Janet E. Joy put it, “Prevention of drug abuse and promotion of medically useful cannabinoid drugs are not incompatible.” But it is doubtful that isolated cannabinoids, analogs, and new respiratory delivery systems will be much more useful or safer than smoked marijuana. What is certain is that because of the high development costs, they will be more expensive; perhaps so much more expensive that pharmaceutical houses will not find their development worth the gamble. Marijuana as a medicine is here to stay, but its full medical potential is unlikely to be realized in the ways suggested by the IOM report.

LESTER GRINSPOON

Harvard Medical School

Boston, Massachusetts


The Institute of Medicine (IOM) report Marijuana and Medicine: Assessing the Science Base has provided the scientific and medical community and the lay press with a basis on which to present an educated opinion to the public. The report was prepared by a committee of unbiased scientists under the leadership of Stanley J. Watson, Jr., John A. Benson, Jr., and Janet E. Joy, and was reviewed by other researchers and physicians. The report summarizes a thorough assessment of the scientific data addressing the potential therapeutic value of cannabinoid compounds, issues of chemically defined cannabinoid drugs versus smoking of the plant product, psychological effects regarded as untoward side effects, health risks of acute and chronic use (particularly of smoked marijuana), and regulatory issues surrounding drug development.

It is important that the IOM report be used in the decisionmaking process associated with efforts at the state level to legislate marijuana for medicinal use. The voting public needs to be fully aware of the conclusions and recommendations in the IOM report. The public also needs to be apprised of the process of ethical drug development: determination of a therapeutic target (a disease to be controlled or cured); the isolation of a lead compound from natural products such as plants or toxins; the development of a series of compounds in order to identify the best compound to develop as a drug (more potent, more selective for the disease, and better handled by the body); and the assessment of the new drug’s effectiveness and safety.

The 1938 amendment to the federal Food and Drug Act demanded truthful labeling and safety testing, a requirement that a new drug application be evaluated before marketing of a drug, and the establishment of the Food and Drug Administration to enforce the act. It was not until 1962, with the Harris-Kefauver amendments, that proof of drug effectiveness to treat the disease was required. Those same amendments required that the risk-to-benefit ratio be defined as a documented measure of relative drug safety for the treatment of a specified disease. We deliberately abandon the protection that these drug development procedures and regulatory measures afford us when we legislate the use of a plant product for medicinal purposes.

I urge those of us with science and engineering backgrounds and those of us who are medical educators and health care providers to use the considered opinion of the scientists who prepared the IOM report in our discussions of marijuana as medicine with our families and communities. I urge those who bear the responsibility for disseminating information through the popular press and other public forums to provide the public with factual statements and an unbiased review. I urge those of us who are health care consumers and voting citizens to become a self-educated and aware population. We all must avoid the temptation to fall under the influence of anecdotal testimonies and unfounded beliefs where our health is concerned.

ALLYN C. HOWLETT

School of Medicine

St. Louis University

St. Louis, Missouri


Engineering’s image

Amen! That is the first response that comes to mind after reading Wm. A. Wulf’s essay on “The Image of Engineering” (Issues, Winter 1998-99). If the profession were universally perceived to be as creative and inherently satisfying as it really is, we would be well on our way to breaking the cycle. We would more readily attract talented women and minority students and the best and the brightest young people in general, the entrepreneurs and idealists among them.

It is encouraging that the president of the National Academy of Engineering (NAE), along with many other leaders of professional societies and engineering schools, is earnestly addressing these issues. Ten years ago, speaking at an NAE symposium, Simon Ramo called for the evolution of a “greater engineering,” and I do believe that strides, however halting, are being made toward that goal.

There is one aspect of the image problem, however, that I haven’t heard much about lately and that I hope has not been forgotten. I refer to the question of liberal education for engineers.

The Accreditation Board for Engineering and Technology (ABET) traditionally required a minimum 12.5 percent component of liberal arts courses in the engineering curriculum. Although many schools, such as Stanford, required 20 percent and more, most engineering educators (and the vast majority of engineering students) viewed the ABET requirement as a nuisance: an obstacle to be overcome on the way to acquiring the four-year engineering degree.

Now, as part of the progressive movement toward molding a new, more worldly engineer, ABET has agreed to drop its prescriptive requirements and to allow individual institutions more scope and variety. This is commendable. But not if the new freedom is used to move away from study of the traditional liberal arts. Creative engineering taught to freshmen is exciting to behold. But it is not a substitute for Shakespeare and the study of world history.

Some of the faults of engineers that lead to the “nerd” image Wulf decries can be traced to the narrowness of their education. Care must be taken that we do not substitute one type of narrowness for another. The watchwords of the moment are creativity, communication skills, group dynamics, professionalism, leadership, and all such good things. But study of these concepts is only a part of what is needed. While we work to improve the image of our profession, we should also be working to create the greater engineer of the future: the renaissance engineer of our dreams. We will only achieve this with young people who have the opportunity and are given the incentive to delve as deeply as possible into their cultural heritage, a heritage without which engineering becomes mere tinkering.

SAMUEL C. FLORMAN

Scarsdale, New York


Nuclear futures

In “Plutonium, Nuclear Power, and Nuclear Weapons” (Issues, Spring 1999), Richard L. Wagner, Jr., Edward D. Arthur, and Paul T. Cunningham argue that if nuclear power is to have a future, a new strategy is needed for managing the back end of the nuclear fuel cycle. They also argue that achieving this will require international collaboration and the involvement of governments.

Based on my own work on the future of nuclear energy, I thoroughly endorse these views. Nuclear energy is deeply mistrusted by much of the public because of the fear of weapons proliferation and the lack of acceptable means of disposing of the more dangerous nuclear waste, which has to be kept safe and isolated for a hundred thousand years or so. As indicated by Wagner et al., much of this fear is connected with the presence of significant amounts of plutonium in the waste. Indeed, the antinuclear lobbies call the projected deep repositories for such waste “the plutonium mines of the future.”

If nuclear power is to take an important part in reducing CO2 emissions in the next century, it should be able to meet perhaps twice the 7 percent of world energy demand it meets today. Bearing in mind the increasing demand for energy, that may imply a nuclear capacity some 10 to 15 times current capacity. With spent fuel classified as high-level waste and destined for deep repositories, this would require one new Yucca Mountain-sized repository somewhere in the world every two years or so. Can that really be envisaged?
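To make the arithmetic behind that estimate explicit: if nuclear power’s share of demand doubles from 7 to 14 percent while total world energy demand itself grows roughly five- to seven-and-a-half-fold over the century (a growth factor inferred here from the letter’s 10-to-15-fold figure, not stated by Beck), the required capacity is

\[
\frac{C_{\text{future}}}{C_{\text{today}}}
  \;=\; \underbrace{\frac{14\%}{7\%}}_{\text{doubled share of demand}}
  \times \underbrace{\frac{D_{\text{future}}}{D_{\text{today}}}}_{\text{assumed demand growth of 5 to 7.5}}
  \;\approx\; 10 \ \text{to} \ 15,
\]

where \(C\) denotes nuclear generating capacity and \(D\) total world energy demand.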

If the alternative fuel cycle with reprocessing and fast breeders came into use in the second half of the 21st century, there would be many reprocessing facilities spread over the globe and a vast number of shipments between reactors and reprocessing and fresh fuel manufacturing plants. Many of the materials shipped would contain plutonium without being safeguarded by strong radioactivity–just the situation that in the 1970s caused the rejection of this fuel cycle in the United States.

One is thus driven to the conclusion that today’s technology for dealing with the back end of the fuel cycle is, if only for political reasons, unsuited for a major expansion of nuclear power. Unless more acceptable means are found, nuclear energy is likely to fade out or at best become an energy source of little significance.

Successful development of the Integrated Actinide Conversion System concept or of an alternative having a similar effect could to a large extent overcome the back end problems. In the cycle described by Wagner et al., shipments containing plutonium would be safeguarded by high radioactivity; although deep repositories would still be required, there would need to be far fewer of them, and the waste would have to be isolated for a far shorter time and would contain virtually no plutonium and thus be of no use to proliferators. The availability of such technology may drastically change the future of nuclear power and make it one of the important means of reducing CO2 emissions.

The development of the technology will undoubtedly take a few decades before it can become commercially available. The world has the time, but the availability of funds for doing the work may be a more difficult issue than the science. Laboratories in Western Europe, Russia, and Japan are working on similar schemes; coordination of such work could be fruitful and reduce the burden of cost to individual countries. However, organizing successful collaboration will require leadership. Under the present circumstances this can only come from the United States–will it?

PETER BECK

Associate Fellow

The Royal Institute of International Affairs

London


Stockpile stewardship

“The Stockpile Stewardship Charade” by Greg Mello, Andrew Lichterman, and William Weida (Issues, Spring 1999) correctly asserts that “It is time to separate the programs required for genuine stewardship from those directed toward other ends.” They characterize genuine stewardship as “curatorship of the existing stockpile coupled with limited remanufacturing to solve any problems that might be discovered.” The “other ends” referred to appear in the criteria for evaluating stockpile components set forth in the influential 1994 JASON report to the Department of Energy (DOE) titled Science-Based Stockpile Stewardship.

The JASON criteria are (italics added by me): “A component’s contribution to (1) maintaining U.S. confidence in the safety and reliability of our nuclear stockpile without nuclear testing through improved understanding of weapons physics and diagnostics. (2) Maintaining and renewing the technical skill base and overall level of scientific competence in the U.S. defense program and the weapons labs, and to the nation’s broader scientific and engineering strength. (3) Important scientific and technical understanding, including in particular as related to national goals.”

Criteria 1 and 2, without the italic text, are sufficient to evaluate the components of the stewardship program. The italics identify additional criteria that are not strictly necessary to the evaluation of stockpile stewardship but provide a basis for support of the National Ignition Facility (NIF), the Sandia Z-Pinch Facility, and the Accelerated Strategic Computing Initiative (ASCI), in particular. Mello et al. consider these to be “programmatic and budgetary excesses” directed toward ends other than genuine stewardship.

The DOE stewardship program has consisted of two distinct parts from the beginning: A manufacturing component and a science-based component. The JASONs characterize the manufacturing component as a “narrowly defined, sharply focused engineering and manufacturing curatorship program” and the science-based component as engaging in “(unclassified) research in areas that are akin to those that are associated with specific issues in (classified) weapons technology.”

Mello et al. call for just such a manufacturing component but only support those elements of the science-based component that are plainly necessary to maintain a safe and reliable stockpile. NIF, Z-Pinch, and ASCI would not be part of their stewardship program and would need to stand or fall on their own scientific merits.

The JASONs concede that the exceptional size and scope of the science-based program “may be perceived by other nations as part of an attempt by the U.S. to continue the development of ever more sophisticated nuclear weapons,” and therefore that “it is important that the science-based program be managed with restraint and openness including international collaboration where appropriate,” in order not to adversely affect arms control negotiations.

The openness requirement of the DOE/JASON version of stockpile stewardship runs counter to the currently perceived need for substantially increased security of nuclear weapons information. Arms control, security, and weapons-competence considerations favor a restrained, efficient stewardship program that is more closely focused on the primary task of maintaining the U.S. nuclear deterrent. I believe that the diversionary, research-oriented stewardship program adopted by DOE is badly off course. The criticism of the DOE program by Mello et al. deserves serious consideration.

RAY E. KIDDER

Lawrence Livermore National Laboratory (retired)


Traffic congestion

Although the correspondents who commented on Peter Samuel’s “Traffic Congestion: A Solvable Problem” (Issues, Spring 1999) properly commended him for dealing with the problem where it is–on the highways–they all missed what I consider to be some serious technical flaws in his proposed solutions. One of the key steps in his proposal is to separate truck from passenger traffic and then to gain more lanes for passenger vehicles by narrowing the highway lanes within existing rights of way.

Theoretically, it is a good idea to separate truck and passenger vehicle traffic. That would fulfill a fervent wish of anyone who drives the expressways and interstates. It can be done, to a degree, even now by confining trucks to the two right lanes on all highways having more than two lanes in each direction (that has been the practice on the New Jersey Turnpike for decades). However, because it is likely that the investment in creating separate truck and passenger roadways will be made only in exceptional circumstances (in a few major metropolitan areas such as Los Angeles and New York, for example) and then only over a long period of time as existing roads are replaced, the two kinds of traffic will be mixed in most places for the foreseeable future. Traffic lanes cannot be narrowed where large trucks and passenger vehicles mix.

Current trends in the composition of the passenger vehicle fleet will also work against narrowing traffic lanes, even where heavy truck traffic and passenger vehicles can be separated. With half of the passenger fleet becoming vans, sport utility vehicles, and pickup trucks, and with larger versions of the latter two coming into favor, narrower lanes would reduce the lateral separation between vehicles just as a large fraction of the passenger vehicle fleet is becoming wider, speed limits are being raised, and drivers are tending to exceed speed limits by larger margins. The choice will therefore be to increase the risk of collision and injury or to leave highway lane widths and shoulder widths pretty much as they are.

Increasing the numbers of lanes is intended to allow more passenger vehicles on the roads with improved traffic flow. However, this will not deal with the flow of traffic that exits and enters the highways at interchanges, where much of the congestion at busy times is caused. Putting more vehicles on the road between interchanges will make the congestion at the interchanges, and hence everywhere, worse.

Ultimately, the increases in numbers of vehicles (about 20 percent per decade, according to the U.S. Statistical Abstract) will interact with the long time it takes to plan, argue about, authorize, and build new highways (10 to 15 years, for major urban roadways) to keep road congestion on the increase. Yet the flexibility of movement and origin-destination pairs will keep people and goods traveling the highways. The search for solutions to highway congestion will have to go on. This isn’t the place to discuss other alternatives, but it doesn’t look as though capturing more highway lanes by narrowing them within available rights of way will be one of the ways to do it.

SEYMOUR J. DEITCHMAN

Chevy Chase, Maryland


Conservation: Who should pay?

In the Spring 1999 Issues, R. David Simpson makes a forceful argument that rich nations [members of the Organization for Economic Cooperation and Development (OECD)] should help pay for efforts to protect biodiversity in developing countries (“The Price of Biodiversity”). It is true that many citizens in rich countries are beneficiaries of biological conservation, and given that developing countries have many other priorities and limited budgets, it is both practical and equitable to expect that rich countries should help pay the global conservation bill.

In his enthusiasm to make this point, however, Simpson goes too far. He appears to argue that rich nations should pay the entire conservation bill because they are the only beneficiaries of conservation. He claims that none of the local services produced by biological conservation are worth paying for. Hidden drugs, nontimber forest products, and ecotourism are all to be dismissed as small and irrelevant. He doesn’t even bother to mention nonmarket services such as watershed protection and soil conservation, much less protecting biological diversity for local people.

Simpson blames a handful of studies for showing that hidden drugs, nontimber forest products, and ecotourism are important local conservation benefits in developing countries. The list of such studies is actually longer, including nontimber forest product studies in Ecuador, Belize, Brazil, and Nepal, and a study of ecotourism values in Costa Rica.

What bothers Simpson is that the values in these studies are high–high enough to justify conservation. He would dismiss these values if they were just a little lower. This is a fundamental mistake. Even if conservation market values are only a fraction of development values, they (and all the local nonmarket services provided by conservation) still imply that local people have a stake in conservation. Although one should not ignore the fact that OECD countries benefit from global conservation, it is important to recognize that there are tangible local benefits as well.

Local people should pay for conservation, not just distant nations of the OECD. This is a critical point, because it implies that conservation funds from the OECD can protect larger areas than the OECD alone can afford. Further, it implies that conservation can be of joint interest to both developing nations and the OECD. Developing countries should take an active interest in biological conservation, making sure that programs serve their needs as well as addressing OECD concerns. A conservation program designed by and for the entire world has a far greater chance of succeeding than a program designed for the OECD alone.

ROBERT MENDELSOHN

Edwin Weyerhaeuser Davis Professor

School of Forestry and Environmental Studies

Yale University

New Haven, Connecticut


Correction

In the Forum section of the Summer 1999 issue, letters from Wendell Cox and John Berg were merged by mistake and attributed to Berg. We print both letters below as they should have appeared.

In “Traffic Congestion: A Solvable Problem” (Issues, Spring 1999), Peter Samuel’s prescriptions for dealing with traffic congestion are both thought-provoking and insightful. There clearly is a need for more creative use of existing highway capacity, just as there continue to be justified demands for capacity improvements. Samuel’s ideas about how capacity might be added within existing rights-of-way are deserving of close attention by those who seek new and innovative ways of meeting urban mobility needs.

Samuel’s conclusion that “simply building our way out of congestion would be wasteful and far too expensive” highlights a fundamental question facing transportation policymakers at all levels of government–how to determine when it is time to improve capacity in the face of inefficient use of existing capacity. The solution recommended by Samuel–to harness the power of the market to correct for congestion externalities–is long overdue in highway transportation.

The costs of urban traffic delay are substantial, burdening individuals, families, businesses, and the nation. In its annual survey of congestion trends, the Texas Transportation Institute estimated that, in 1996, the cost of congestion (traffic delay and wasted fuel) amounted to $74 billion in 70 major urban areas. Average congestion costs per driver were estimated at $333 per year in small urban areas and at $936 per year in the very large urban areas. And these costs may be just the tip of the iceberg when one considers the economic dislocations to which the mispricing of our roads gives rise. In the words of the late William Vickrey, 1996 Nobel laureate in economics, pricing in urban transportation is “irrational, out-of-date, and wasteful.” It is time to do something about it.

Greater use of economic pricing principles in highway transportation can help bring more rationality to transportation investment decisions and can lead to significant reductions in the billions of dollars of economic waste associated with traffic congestion. The pricing projects mentioned in Samuel’s article, some of them supported by the Federal Highway Administration’s Value Pricing Pilot Program, are showing that travelers want the improvements in service that road pricing can bring and are willing to pay for them. There is a long way to go before the economic waste associated with congestion is eliminated, but these projects are showing that traffic congestion is, indeed, a solvable problem.

JOHN BERG

Office of Policy

Federal Highway Administration

Washington, D.C.


Peter Samuel comes to the same conclusion regarding the United States as that reached by Christian Gerondeau with respect to Europe: Highway-based strategies are the only way to reduce traffic congestion and improve mobility. The reason is simple: In both the United States and the European Union, trip origins and destinations have become so dispersed that no vehicle with a larger capacity than the private car can efficiently serve the overwhelming majority of trips.

The hope that public transit can materially reduce traffic congestion is nothing short of wishful thinking, despite its high degree of political correctness. Portland, Oregon, where regional authorities have adopted a pro-transit and anti-highway development strategy, tells us why.

Approximately 10 percent of employment in the Portland area is downtown, which is the destination of virtually all express bus service. The two light rail lines also feed downtown, but at speeds that are half those of the automobile. As a result, single freeway lanes approaching downtown carry three times the person volume of the light rail line during peak traffic times (so much for the myth about light rail carrying six lanes of traffic!).

Travel to other parts of the urbanized area (outside downtown) requires at least twice as much time by transit as by automobile. This is because virtually all non-downtown oriented service operates on slow local schedules and most trips require a time-consuming transfer from one bus route to another.

And it should be understood that the situation is better in Portland than in most major U.S. urbanized areas. Portland has a comparatively high level of transit service and its transit authority has worked hard, albeit unsuccessfully, to increase transit’s market share (which dropped 33 percent in the 1980s, the decade in which light rail opened).

The problem is not that people are in love with their automobiles or that gas prices are too low. It is much more fundamental than that. It is that transit does not offer service for the overwhelming majority of trips in the modern urban area. Worse, transit is physically incapable of serving most trips. The answer is not to reorient transit away from downtown to the suburbs, where the few transit commuters would be required to transfer to shuttle buses to complete their trips. Downtown is the only market that transit can effectively serve, because it is only downtown that there is a sufficient number of jobs (relatively small though it is) arranged in high enough density that people can walk a quarter mile or less from the transit stop to their work.

However, wishful thinking has overtaken transportation planning in the United States. As Samuel puts it, “Acknowledging the futility of depending on transit . . . to dissolve road congestion will be the first step toward more realistic urban transportation policies.” The longer we wait, the worse it will get.

WENDELL COX

Belleville, Illinois

From the Hill – Fall 1999

Congress split on FY 2000 funding for R&D

As the September 30 deadline loomed for approval of all FY 2000 appropriations bills, Congress was deeply split on R&D funding, with the House approving significant cuts and the Senate favoring spending increases. Because of severe disagreements between the president and Congress, several appropriations bills were expected to be vetoed. A repeat of last year’s budget process was expected, with all unsigned bills being merged into a massive omnibus bill.

Congress has insisted on adhering this year to strict budget caps on discretionary spending that were imposed when the federal budget was running an annual deficit. This has forced cuts in many programs. The House would increase defense spending substantially while cutting R&D funding as well as funding for several key White House programs. The Senate, which has made R&D spending a priority, would increase R&D spending while providing less money for defense.

Although neither chamber had approved all appropriations bills by mid-September, the House thus far had cut nondefense R&D by 5.1 percent, or $1.1 billion from FY 1999 levels. Especially hard hit would be R&D spending in the National Aeronautics and Space Administration (NASA) and the Departments of Commerce and Energy. R&D spending would decline by 2.4 percent in the National Science Foundation (NSF).

Basic research in agencies whose budgets the House has approved would be up by 2.2 percent. Among those agencies receiving increases would be the Department of Defense (DOD) (up 3.1 percent), the U.S. Department of Agriculture (USDA) (up 2.3 percent), and NASA (up 7.1 percent). NSF’s basic research budget would decline by 0.3 percent to $2.3 billion. The Department of Energy’s (DOE’s) basic research would stay nearly level at $2.2 billion.

Current status

What follows are highlights of the appropriations for key R&D agencies. The summary focuses primarily on the House, which has approved more bills concerning key R&D agencies than has the Senate.

The NASA budget would decline steeply in the House plan to $12.7 billion, a cut of $1 billion or 7.4 percent. NASA’s Science, Aeronautics, and Technology account, which funds most of NASA’s R&D, would decline 12 percent to $5 billion because of deep cuts in the Earth Science and Space Science programs. The House would cancel several missions and dramatically reduce planning and development funds for future missions in the Discovery and Explorer space science programs. The House would also reduce supporting research and technology funds and mission support funds, which could affect all NASA programs.

The House would cut the NSF budget by 2 percent to $3.6 billion. Most of the research directorates would receive level funding; NSF had requested increases of between 2 and 5 percent. Cuts in facilities funding would result in a 2.7 percent decline in total NSF R&D. The House would also dramatically scale back first-year funding for the administration’s proposed Information Technology for the Twenty-First Century (IT2) initiative. NSF requested $146 million for its role in IT2, but the House would provide only $35 million. The new Biocomplexity initiative would receive $35 million, less than the president’s $50 million request.

The House would provide $844 million for Commerce Department R&D, a reduction of $231 million or 21.5 percent. The House would eliminate the Advanced Technology Program (ATP) and cut most R&D programs in the National Oceanic and Atmospheric Administration. In contrast, the Senate would provide generous increases for most Commerce R&D programs, including ATP, for a 15.8 percent increase in total Commerce R&D to $1.2 billion.

In the wake of congressional anger over allegations of security breaches and mismanagement at DOE weapons labs, the House would impose restrictions by withholding $1 billion until DOE is restructured and would also cut funding for R&D programs. DOE’s R&D would total $6.8 billion, 2.9 percent less than in FY 1999. The Stockpile Stewardship program, which funds most of the R&D performed at the weapons labs, would receive $2 billion, a reduction of 6 percent after several years of large increases. The DOE Science account, which funds research on physics, fusion, and energy sciences, would receive $2.6 billion, a cut of 2.8 percent. The House would deny the requested $70 million for DOE’s contribution to the IT2 initiative and would also trim the request for the Spallation Neutron Source from $214 million to $68 million. R&D on solar and renewable energy technologies would decrease by 7.7 percent. The Senate would provide increases for most DOE programs, without restrictions, for a total R&D appropriation of $7.3 billion, an increase of 4.9 percent.

The House would boost DOD funding of basic and applied research above both the president’s request and the FY 1999 funding level. DOD’s basic research would total $1.1 billion, up 3.1 percent, and applied research would total $3.4 billion, up more than 7 percent. The House would provide $60 million for DOD’s role in the IT2 initiative, down from a requested $100 million. The House would also create a separate $250 million appropriation for medical R&D, including $175 million for breast cancer research and $75 million for prostate cancer research. The Senate would provide similar increases for DOD basic, applied, and medical research accounts.

The USDA would receive $1.6 billion for R&D, a cut of 2.1 percent. The House would block a new, nonappropriated, competitive research grants program from spending a planned $120 million in FY 2000. The Senate would allow the release of $50 million for this program. An existing competitive grants program, the National Research Initiative, would be cut 11.6 percent to $105 million. Congressionally designated Special Research Grants, however, would receive $63 million, $8 million more than this year and $58 million more than USDA requested. The Senate would be more generous with an appropriation of $1.7 billion for total USDA R&D, up 3.8 percent.

The Environmental Protection Agency would receive $643 million for its R&D from the House, a decline of 3.5 percent, but this would be the same amount that the agency requested.

Much of the Department of Transportation (DOT) budget is exempt from the budget caps because of two new categories of spending created last year for transportation programs. Spending on these categories is automatically augmented by increased gas tax revenues. As a result, the House would allow DOT’s R&D to increase 8.9 percent to $656 million in FY 2000, with substantial increases for highway, aviation, and transit R&D. The Senate would provide similar amounts.

GOP bills on database protection clash

The debate over protecting proprietary information in electronic databases took a new twist when Rep. Tom Bliley (R-Va.), chairman of the House Commerce Committee, proposed a bill that clashes with a bill sponsored by his GOP colleague, Rep. Howard Coble (R-N.C.). Coble’s bill passed the House twice during 1998 but was dropped because of severe criticism from the science community. Coble introduced a revised version of the bill, H.R. 354, earlier this year. (“Controversial database protection bill reintroduced,” Issues, Summer 1999.)

Bliley’s bill, H.R. 1858, would allow the free use of information from online databases but not the use of the database itself, which would be protected by virtue of its unique design and compilation of data. The bill adheres to legal precedents in copyright law by not protecting the duplication of “any individual idea, fact, procedure, system, method of operation, concept, principle, or discovery.” It allows the duplication and dissemination of a database if used for news reporting, law enforcement and intelligence, or research. It extends protection only to databases created after its enactment; H.R. 354 would protect databases in existence for less than 15 years.

H.R. 1858 proponents argue that it is more narrowly focused than the Coble bill. “Any type of information that is currently provided on the Internet could be jeopardized by an overly broad statute or one that does not adequately define critical terms,” argued Matthew Rightmire, director of business development for Yahoo! Inc., during a June 15 hearing on H.R. 1858. At the same hearing, Phyllis Schlafly, president of Eagle Forum, said H.R. 1858 has four major advantages over the Coble bill: It provides the right to extract essential data, does not create new federal penalties for violations, does not protect those who misuse data, and treats database protection as a commercial issue rather than an intellectual property issue.

How database protection is framed–as a commercial issue or as an intellectual property issue–underlies the difference between the two bills. The Coble bill views database piracy not only as a threat to the market share of the original providers but also as a theft of original work. Thus, H.R. 354 includes criminal as well as civil penalties for violations. The Bliley bill, on the other hand, sees databases only as a compilation of facts, so that copying a database infringes only on the commercial success of the providers. It provides only civil penalties for violations. H.R. 1858 gives the Federal Trade Commission the authority to determine which violations would be governed under fair competition statutes.

Opponents of the Bliley bill believe that it is too specific and does not adequately protect database owners. H.R. 1858, they point out, protects databases only as whole entities. They argue that the theft and copying of parts of a database would be enough to inflict substantial commercial damage on the owner. They also maintain that pirates could add just a small amount of data to a duplicated database to exempt themselves from prosecution. And, they say, because database owners will have to provide free use of data for scientific, research, and educational purposes, they may not be able to earn enough revenue to maintain these databases.

Congress set to create separate agency to run DOE weapons labs

In an attempt to bolster security at the Department of Energy’s (DOE’s) nuclear weapons labs, Congress, as of mid-September, was on the verge of passing a bill that would create a semiautonomous agency within DOE to run the labs. The White House is not happy with the bill (S. 1059) but may find it difficult to veto because it also includes increased funding for popular military services programs, including money for combat readiness and training, pay raises, and health care.

The legislation comes in the wake of reports of Chinese espionage at the labs as well as scathing criticism from the President’s Foreign Intelligence Advisory Board, chaired by former Senator Warren Rudman. An advisory board report, Science at its Best, Security at its Worst, concluded that DOE had failed in countering security threats and that it is a “dysfunctional bureaucracy that has proven it is incapable of reforming itself.” The report recommended setting up either a semiautonomous or wholly autonomous agency within DOE.

The bill would establish a National Nuclear Security Administration (NNSA) responsible for nuclear weapons development, naval nuclear propulsion, defense nuclear nonproliferation, and fissile material disposition. NNSA would be headed by an administrator/undersecretary for nuclear security who would be subject to “the authority, direction, and control” of the secretary of energy. The administrator would have authority over agency-specific policies, the agency’s budget, and personnel, legislative, and public affairs.

The Clinton administration fears that the new agency would be too insular, with vague accountability to the secretary of energy, no clear links to nonweapons activities within DOE, and no responsibility for environmental, health, and safety issues. Echoing the administration’s concerns, Sen. Carl Levin (D-Mich.) issued a statement outlining a Congressional Research Service (CRS) memorandum that raises questions about the reorganization. The memorandum states that “the Department’s staff offices will be unable to have authority, direction, or control over any officer and employee of the [new] Administration.” It also says that the NNSA would not be directly subject to DOE’s general counsel, inspector general, and chief financial officer. Other criticism has come from 46 state attorneys general, who sent a letter to Congress in early September expressing concern that the reorganization would undercut a 1992 law that gives the states regulatory control over DOE’s hazardous waste management and cleanup activities.

OMB revises proposed rule on release of research data

Attempting to meet objections from the science community, the Office of Management and Budget (OMB) has revised a proposed rule governing the release of research data. But the science community still believes that the rule could compromise sensitive data and hinder research progress.

OMB proposed the rule after Sen. Richard Shelby (R-Ala.) inserted a request in last year’s omnibus appropriations bill that OMB amend its Circular A-110 rule to require that all data produced through funding from a federal agency be made available through procedures established under the Freedom of Information Act (FOIA). Scientific organizations are not necessarily opposed to the release of data but don’t want it to be done under what they consider FOIA’s ambiguous rules. OMB is now asking for comments on the revisions to the proposed rule, seeking to clarify issues that were problematic in the first proposal.

Originally, the proposed rule allowed the public to request access to “all data” but did not clearly define the phrase. Letter writers pointed out that “data” could include phone logs, physical equipment, financial records, private medical information, and proprietary information. OMB now defines data as “the recorded factual material commonly accepted in the scientific community as necessary to validate research findings, but not any of the following: preliminary analyses, drafts of scientific papers, plans for future research, peer reviews, or communications with colleagues. This ‘recorded’ material excludes physical objects (e.g., laboratory samples).” Proprietary trade secrets and private information such as medical files would also be excluded.

The original proposal also did not clearly define the term “published,” leading to concerns that scientists might be forced to release data before a research project was concluded. OMB has now defined “published” as “either when (A) research findings are published in a peer-reviewed scientific or technical journal, or (B) a Federal agency publicly and officially cites the research findings in support of an agency action.”

Perhaps the most significant change concerns OMB’s interpretation of what constitutes a federal regulation. Initially, OMB said that only data “used by the Federal Government in developing policy or rules” would be available through FOIA. But commentators pointed out that this could lead to a situation in which any action taken by an agency that was influenced by a research study would place that study under scrutiny. OMB has now narrowed the wording to include only data “used by the Federal Government in developing a regulation.” Further, OMB said that the regulation must meet a $100 million impact threshold, a precedent set by other laws.

Proponents of the rule, which include many politically conservative organizations, were not pleased by the revisions. They have long argued for the broadest possible release of data, so that the scientific process can be scrutinized as completely as possible. A U.S. Chamber of Commerce representative, quoted in Science magazine, called the new OMB interpretation “unacceptable.”

The Association of American Universities (AAU), in its response to the revisions, has not changed its view that the Shelby amendment is “misguided and represents bad policy.” AAU asked that the economic impact threshold be raised to $500 million and that the new proposal include only future research, not already completed studies.

Big boost in information technology spending sought

A bill introduced by Rep. F. James Sensenbrenner, Jr. (R-Wisc.), chairman of the House Science Committee, would nearly double, to $4.8 billion, federal funding for research in information technology (IT) and related activities over the next five years for six agencies under the committee’s jurisdiction. The bill, H.R. 2086, would go significantly beyond the Clinton administration’s IT2 information technology initiative.

In addition to what is now being spent on IT, the Sensenbrenner bill would increase basic research by $60 million in FY 2000 and 2001, $75 million in FY 2002 and 2003, and $80 million in FY 2004. It also authorizes $95 million for providing internships in IT companies for college students and $385 million for terascale computing hardware. The bill would make permanent the R&D tax credit, extend funding for the Next Generation Internet program until 2004, and require the National Science Foundation (NSF) to review and report on the types and availability of encryption products in other countries.

If the bill is eventually approved, the biggest winner would be NSF, which would receive more than half of the total authorizations of the bill during its five-year span. Although FY 2000 funding levels in H.R. 2086 are lower than the president’s budget request–$445 million versus $460 million–the funding increases in later years are dramatic. NSF’s funding for IT research would increase by more than $100 million to $571 million in 2004 for a total authorization of $2.5 billion. NASA would also benefit, with proposed authorizations totaling $1.03 billion over five years.

The administration’s IT2 program allocates $228 million for basic and applied IT R&D for FY 2000; $123 million for multidisciplinary applications of IT; and $15 million for social, economic, and workforce implications of IT. All told, it allocates an additional $366 million to existing IT programs.


“From the Hill” is prepared by the Center for Science, Technology, and Congress at the American Association for the Advancement of Science in Washington, D.C., and is based on articles from the center’s bulletin Science & Technology in Congress.

A Vision of Jeffersonian Science

The public attitude toward science is still largely positive in the United States; but for a vocal minority, the fear of risks and even catastrophes that might result from scientific progress has become paramount. Additionally, in what is called the “science wars,” central claims of scientific epistemology have come under attack by nonscientists in the universities. Some portions of the political sector consider basic scientific research far less worthy of government support than applied research, whereas other politicians castigate the support of applied research as “corporate welfare.”

Amid the choir of dissonant voices, Congress has shown interest in developing what is being called “a new contract between science and society” for the post-Cold War era. As the late Representative George E. Brown, Jr., stated, “A new science policy should articulate the public’s interest in supporting science–the goals and values the public should expect of the scientific enterprise.” Whatever the outcome, the way science has been supported during the past decades, the motivation for such support, and the priorities for spending are likely to undergo changes, with consequences that may well test the high standing that U.S. science has achieved over the past half century.

In this situation of widespread soul-searching, our aim is to propose an imperative for an invigorated science policy that adds to the well-established arguments for government-sponsored basic scientific research. In a novel way, that imperative tightly couples basic research with the national interest. The two main types of science research projects that have been vying for support in the past and to this day–seemingly quite opposite–are often called basic or “curiosity-driven” versus applied or “mission-oriented.” Although these common characterizations have some usefulness, they harbor two crucial flaws. The first is that in actual practice these two contenders usually interact and collaborate closely, despite what the most fervent advocates of either type may think. The history of science clearly teaches that many of the great discoveries that ultimately turned out to have beneficial effects for society were motivated by pure curiosity with no thought given to such benefits; likewise, the history of technology recounts magnificent achievements in basic science by those who embarked on their work with practical or developmental interests.

As the scientist-statesman Harvey Brooks commented, we should really be talking about a “seamless web.” The historian’s eye perceives the seemingly unrelated pursuits of basic knowledge, technology, and instrument-oriented developments in today’s practice of science to be a single, tightly-woven fabric. Harold Varmus, the director of the National Institutes of Health (NIH), eloquently acknowledged the close association of the more applied biomedical advances with progress in the more basic sciences: “Most of the revolutionary changes that have occurred in biology and medicine are rooted in new methods. Those, in turn, are usually rooted in fundamental discoveries in many different fields. Some of these are so obvious that we lose sight of them–like the role of nuclear physics in producing radioisotopes essential for most of modern medicine.” Varmus went on to cite a host of other examples that outline the seamless web between medicine and a wide range of basic science disciplines.

The second important flaw in the usual antithesis is that these two widespread and ancient modes of thinking about science, pure versus applied, have tended to displace and derogate a third way that combines aspects of the two. This third mode now deserves the attention of researchers and policymakers. But we by no means advocate that the third mode replace the other two modes. Science policy should never withdraw from either basic or applied science. We argue that the addition of the third mode to an integrated framework of science policy would contribute tremendously to mobilizing widespread support for science and to propelling societal as well as scientific progress. Before we turn to a discussion of it, we will briefly survey the other two modes of scientific research.

Newtonian and Baconian research

The concept of pursuing scientific knowledge “for its own sake,” letting oneself be guided chiefly by the sometimes overpowering inner necessity to follow one’s curiosity, has been associated with the names of many of the greatest scientists, and most often with that of Isaac Newton. His Principia (1687) may well be said to have given the 17th-century Scientific Revolution its strongest forward thrust. It can be seen as the work of a scientist motivated by the abstract goal of eventually achieving complete intellectual “mastery of the world of sensations” (Max Planck’s phrase). Newton’s program has been identified with the search for omniscience concerning the world accessible to experience and experiment, and hence with the primary aim of developing a scientific world picture within which all parts of science cohere. In other words, it is motivated by a desire for better and more comprehensive scientific knowledge. That approach to science can be called the Newtonian mode. In this mode, the hope for practical and benign applications of the knowledge gained in this way is a real but secondary consideration.

Turning now to the second of the main styles of scientific research, popularly identified as “mission-oriented,” “applied,” or “problem-solving,” we find ourselves among those who might be said to follow the call of Francis Bacon, who urged the use of science not only for “knowledge of causes and secret motion of things,” but also in the service of omnipotence: “the enlarging of the bounds of human empire, to the effecting of all things possible.”

Research in the Baconian mode has been carried out more commonly in the laboratories of industry than of academe. Unlike basic research, mission-oriented research by definition hopes for practical, and preferably rapid, benefits; and it proceeds, where it can, by using existing knowledge to produce applications.

Jeffersonian research

Recognition of the third mode may open a new window of opportunity in the current reconsiderations, not least in Congress and the federal agencies, of what kinds of science are worth supporting. It is a conscious combination of aspects of the Newtonian and Baconian modes, and it is best characterized by the following formulation: The specific research project is motivated by placing it in an area of basic scientific ignorance that seems to lie at the heart of a social problem. The main goal is to remove that basic ignorance in an uncharted area of science and thereby to attain knowledge that will have a fair probability–even if it is years distant–of being brought to bear on a persistent, debilitating national (or international) problem.

An early and impressive example of this type of research was Thomas Jefferson’s decision to launch the Lewis and Clark expedition into the western parts of the North American continent. Jefferson, who declared himself most happy when engaged in some scientific pursuit, understood that the expedition would serve basic science by bringing back maps and samples of unknown fauna and flora, as well as observations of the native inhabitants of that blank area on the map. At the same time, however, Jefferson realized that such knowledge would eventually be desperately needed for such practical purposes as establishing relations with the indigenous peoples and would further the eventual westward expansion of the burgeoning U.S. population. The expedition thus implied a dual-purpose style of research: basic scientific study of the best sort (suitable for an academic Ph.D. thesis, in modern terms) with no sure short-term payoff but targeted in an area where there was a recognized problem affecting society. We therefore call this style of research the Jeffersonian mode.

This third mode of research can provide a way to avoid the dichotomy of Newtonian versus Baconian styles of research, while supplementing both. In the process, it can make public support of all types of research more palatable to policymakers and taxpayers alike. It is, after all, not too hard to imagine basic research projects that hold the key to alleviating well-known societal dysfunctions. Even the “purest” scientist is likely to agree that much remains to be done in cognitive psychology; the biophysics and biochemistry involved in the process of conception; the neurophysiology of the senses such as hearing and sight; molecular transport across membranes; or the physics of nanodimensional structures, to name a few. The results of such basic work, one could plausibly expect, will give us in time a better grasp of complex social tasks such as, respectively, childhood education, family planning, improving the quality of life for handicapped people, the design of food plants that can use brackish water, and improved communication devices.

Other research areas suited to the Jeffersonian mode would include the physical chemistry of the stratosphere; the complex and interdisciplinary study of global changes in climate and in biological diversity; that part of the theory of solid state that makes the more efficient working of photovoltaic cells still a puzzle; bacterial nitrogen fixation and the search for symbionts that might work with plants other than legumes; the mathematics of risk calculation for complex structures; the physiological processes governing the aging cell; the sociology underlying the anxiety of some parts of the population about mathematics, technology, and science itself; or the anthropology and psychology of ancient tribal behavior that appears to persist to this day and may be at the base of genocide, racism, and war in our time.

It is of course true that Jeffersonian arguments are already being made from time to time and from case to case, as problems of practical importance are used to justify federal support of basic science. For instance, current National Science Foundation-sponsored research in atmospheric chemistry and climate modeling is linked to the issue of global warming, and Department of Energy support for plasma science is justified as providing the basis for controlled fusion. NIH has been particularly successful in supporting Jeffersonian efforts in the area of health-related basic research. Yet what seems to be missing are an overarching theoretical rationale and institutional legitimization of Jeffersonian science within the federal research structure.

The current interest in rethinking science and technology policy beyond the confining dichotomy of basic versus applied research has spawned some efforts kindred to ours. In Donald Stokes’s framework, the linkage of basic research and the national interest appeared in what he called “Pasteur’s Quadrant,” which overlaps to a degree with what we have termed the Jeffersonian mode. Our approach also heeds Lewis Branscomb’s warning that the level of importance that utility considerations have in motivating research does not automatically determine the nature and fundamentality of the research carried out. Branscomb appropriately distinguishes two somewhat independent dimensions of how and why: the character of the research process itself (ranging from basic to problem-solving) and the motivation of the research sponsor (ranging from knowledge-seeking to concrete benefits). For instance, a basic research process, which for Branscomb comprises “intensely intellectual and creative activities with uncertain outcomes and risks, performed in laboratories where the researchers have a lot of freedom to explore and learn,” may characterize research projects with no specific expectations of any practical applications, as well as projects that are clearly intended for application. Branscomb’s category of research that is both motivated by practical needs and conducted as basic research is very similar to our concept of Jeffersonian science.

The Carter/Press initiative

Jeffersonian science is not an empty dream. A general survey of related science policy initiatives can be found in the article by Branscomb that follows this one. Here we briefly turn to a concrete 20th-century example of the attempt to institute a Jeffersonian research program on a large scale. Long neglected, that effort is eminently worth remembering as the covenant between science and society is being reevaluated.

In November 1977, at President Carter’s request, Frank Press, presidential science adviser and director of the Office of Science and Technology Policy, polled the federal agencies about basic research questions whose solutions, in the view of these agencies, were expected to help the federal government significantly in fulfilling its mission. The resulting master list, which was assembled in early 1978, turned out to be a remarkable collection of about 80 research questions that the heads of the participating federal government agencies (including the Departments of Agriculture, Defense, Energy, and State and the National Aeronautics and Space Administration) at that time considered good science (good, here, in the sense of expected eventual practical pay-offs) but which, at the same time, would resonate with the intrinsic standards of good basic science within the scientific community. It should be added here that the agency heads could make meaningful scientific suggestions thanks in good part to two of Press’s predecessors, science advisers Jerome Wiesner and George Kistiakowsky. They had helped to build serious science research capacities into the various federal mission agencies, thus ensuring that highly competent advice was available from staff scientists within the agencies.

Consider, for instance, this question from the Department of Agriculture: “What are mechanisms within body cells which provide immunity to disease? Research on how cell-mediated immunity strengthens and relates to other known mechanisms is needed to more adequately protect humans and animals from disease.” That question, framed in 1978 as a basic research question, was to become a life-and-death issue for millions only a few years later with the onset of the AIDS epidemic. This selection of a research topic illustrates that Press’s Jeffersonian initiative was able in advance to target a basic research issue whose potential benefits were understood in principle at the time but whose dramatic magnitude could not have been foreseen (and might well not have been targeted in a narrow application-oriented research program).

Other remarkable basic research questions included one by the Department of Energy about the effects of atmospheric carbon dioxide concentrations on the climate and on global social, economic, and political structures, as well as one by the Department of Defense about superconductivity at higher temperatures, almost a decade before the sensational breakthrough in this area.

A Jeffersonian revival

The Carter-Press initiative quickly slid into oblivion when Carter was not elected to a second term, yet it should not be forgotten. A revitalization of the Jeffersonian mode of science would provide a promising additional model for future science policies, one that would be especially relevant in the current state of disorientation about the role of science in society.

For many scientists, a Jeffersonian agenda would be liberating. Scientists who intended to do basic research in the defined areas of national interest would be shielded from pressures to demonstrate the social usefulness of their specific projects in their grant applications. Once these areas of interest were determined, the awards of research grants could proceed according to strictly “science-internal” standards of merit.

Moreover, a Jeffersonian agenda provides an overarching rationale for the government support of basic research that is both theoretically sound and can be easily understood by the public. It defuses the increasingly heard charge that science is not sufficiently concerned with “useful” applications, for this third mode of research is precisely located in the area where the national and international welfare is a main concern. The way basic research in the interest of health is already legitimized and supported under the auspices of NIH may well serve as a successful example that other sciences could adopt.

Finally, the strengthened public support for science induced by a visible and explicit Jeffersonian agenda is likely to generalize and transfer to other sectors of federal science policy. (Again, we do not advocate the total replacement of the Newtonian and Baconian modes by the Jeffersonian mode; all must be part of an integrated federal science policy.) Even abstract-minded high-energy physicists have learned the hard way that their funding depends on a generally favorable public attitude toward science as a whole. Moreover, they too can be proud of the use of the campus cyclotron for the production of radioisotopes for cancer treatment and the use of nuclear magnetic resonance or synchrotrons for imaging. Nor should we forget the current valued participation of pure theorists on the President’s Science Advisory Committee and other important government panels; nor their sudden usefulness, with historic consequences, during World War I and World War II. From every perspective, ranging from the cultural role of science to national preparedness, even the “purest” scientists can continue to claim their share of the total support given to basic science. But that total can more easily be enlarged by the change we advocate in the public perception of what basic research can do for the needs of humankind.

Global Growth through Third World Technological Progress

During the past four decades, the study of technological innovation has moved to center stage from its previous sideshow status in the economics profession. Most economists recognize that sustained increases in material standards of living depend critically on improvements in technology–operating, to be sure, in tandem with improvements in education and human skills, vigorous new plant and equipment investment, and appropriate governmental institutions. Two key questions for the future, however, are: How can the pace of technological advance be maintained, and how can the benefits of improved technology be distributed more widely to low-income nations, in which most of the world’s inhabitants reside?

Within the United States, the ebb and flow of technological change has been propelled by impressive increases in the amount of resources allocated to formally organized research and development (R&D) activities. Between 1953 and 1994, federal government support for basic science, measured in dollars of constant purchasing power, increased at an average annual rate of 5.8 percent; industrial basic research expenditures, at a rate of 5 percent; and all company-funded industrial R&D (mostly for D, rather than R), at a rate of 4.9 percent. These growth rates far exceed the rate at which the U.S. population is growing. If similar real R&D growth rates are needed to sustain technological progress in the future, which may be necessary unless our most able scientists and engineers can somehow learn how to be more creative, from where are the requisite resources to come? And what role can resources from the rest of the world, and especially underutilized resources, play in meeting the growth challenge?
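
As a rough illustration of what those rates compound to, here is a minimal Python sketch using only the three annual growth rates quoted above for 1953-1994; the labels and the script itself are mine, not the article's.

```python
# Rough arithmetic behind the growth rates quoted above (a sketch based only on
# the three real annual rates the article reports for 1953-1994).

def cumulative_multiple(annual_rate: float, years: int) -> float:
    """Factor by which a quantity grows at a constant annual rate."""
    return (1.0 + annual_rate) ** years

YEARS = 1994 - 1953  # 41 years

for label, rate in [
    ("Federal basic science support", 0.058),
    ("Industrial basic research", 0.050),
    ("Company-funded industrial R&D", 0.049),
]:
    m = cumulative_multiple(rate, YEARS)
    print(f"{label}: about {m:.1f}x in real terms over {YEARS} years")

# At 5.8 percent a year, real spending multiplies roughly tenfold over the
# period -- far faster than U.S. population growth, which is the point above.
```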

Barriers to expansion

Expanding the R&D work force is not a challenge to be taken lightly. In 1964, Princeton University convened a small colloquium on the U.S. government’s nascent program to send astronauts to the moon. As a first-year assistant professor of economics, my assignment was to analyze the economic costs and benefits of what became the Apollo program. My talk focused on the program’s opportunity costs, that is, the sacrifices of other technological accomplishments that would follow from reallocating talent to the Apollo program. My discussant was Martin Schwarzschild, director of the Princeton Observatory. He insisted that we should consider the opportunity costs of not having a moon program. The effort would so fire young people’s imaginations, he argued, that many who would otherwise not do so would choose careers in science and engineering (S&E), augmenting the United States’ capacity to exploit new technological opportunities and having a direct impact on material living standards.

On that day and throughout the next three decades, I accepted that Schwarzschild’s analysis was superior to mine. Revisiting the question recently, however, has made me more skeptical. Figures 1 and 2 provide perspective. Figure 1 reveals that there was in fact brisk growth in the number of U.S. students receiving bachelor’s degrees in S&E during the 1960s and 1970s. The relatively slow growth of degree awards in the physical sciences in areas most closely related to the Apollo program prompts only modest qualms concerning the Schwarzschild conjecture. However, much of the 1960s and 1970s growth was propelled at least in part by the baby boom that followed World War II. When degree awards are related to the number of Americans in relevant age cohorts, as in Figure 2, a different picture emerges. The number of degrees per thousand 22-year-olds grew quite slowly during the 1960s and 1970s; indeed, most of that increase was in the life sciences, preparing students for, among other things, lucrative careers in medicine. If the Apollo program motivated scientific career choices, the linkage was more subtle than my aggregate statistics could identify.

[Figures 1 and 2 not reproduced: U.S. S&E bachelor's degree awards, in total and per thousand 22-year-olds.]

Clearly, there are substantial barriers to internal expansion of the U.S. S&E work force. The uneven quality of U.S. primary and secondary education, especially in mathematics and the sciences, is one impediment. The relative dearth of new academic positions, as professors hired to meet post-World War II baby boom demands remain in their tenured slots, discourages young would-be academicians. The substantially higher salaries received by MBAs, attorneys, and physicians than by bench scientists and engineers pose an appreciable disincentive. These barriers have been thoroughly explored by scholars. My question here takes a broader geographic perspective. Although the United States is now the world’s leading scientific and technological power, it does not labor alone in extending the frontiers of knowledge. From where else in the world can the growth of scientific and technological effort be sustained as the next millennium unfolds?

Table 1 provides broad insight. Using United Nations survey data, it tallies the number of individuals engaged in university-level S&E studies during 1992 in 65 nations (accounting for 80 percent of the world’s population) for which the data were reasonably complete. The last two columns extrapolate to the whole world on the basis of less complete data.

Table 1
World science and engineering education, 1992

GNP per Capita       Number of   S&E Students per     Million S&E   Adjusted for      Percent of World
                     Nations     100,000 Population   Students      Undercount        Population
More than $12,000    21          801.6                6.40          6.45              14.5%
$5,000 to $11,999    21          764.5                6.47          7.45              18.3%
$2,000 to $4,999     12          395.6                1.69          3.71              16.3%
Less than $2,000     11          105.0                2.44          2.74              50.9%
ALL NATIONS          65          386.7                17.00         20.35             100.0%

Source: United Nations Economic and Social Council, World Education Report: 1995 (Oxford, 1995), tables 1, 8, and 9; originally published in F. M. Scherer, New Perspectives on Economic Growth and Technological Innovation (Washington, D.C.: Brookings Institution, 1999), p. 107.

The last column yields a well-known statistic: More than half the world’s population lives in nations with a gross national product (GNP) of less than $2,000 per capita. Those least developed nations educate relatively few of their young people in S&E–roughly 105 per 100,000 population as compared to 802 per 100,000 in wealthy nations with a GNP of more than $12,000 per capita. For the least developed nations, sparse resources make it difficult to emulate the wealthy nations in providing S&E training, but meager S&E training in turn leaves them with inadequate endowments of the human capital necessary to sustain modern economic development. More than two-thirds of the world’s S&E students reside in nations with GNP per capita of $5,000 or more, where in the future they will help the rich to become even richer.
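
The "more than two-thirds" figure can be read straight off the adjusted, world-extrapolated columns of Table 1; the following is a minimal sketch of that arithmetic, using only numbers from the table.

```python
# A quick check of the shares implied by Table 1 (adjusted columns).
# All figures are taken directly from the table; nothing else is assumed.

adjusted_millions = {
    "More than $12,000": 6.45,
    "$5,000 to $11,999": 7.45,
    "$2,000 to $4,999": 3.71,
    "Less than $2,000": 2.74,
}
world_total = 20.35  # million S&E students, all nations (adjusted)

rich_share = (adjusted_millions["More than $12,000"]
              + adjusted_millions["$5,000 to $11,999"]) / world_total
print(f"Share of S&E students in nations with GNP per capita of $5,000 or more: "
      f"{rich_share:.0%}")  # roughly 68 percent, i.e. more than two-thirds

# Per-capita training gap between the richest and poorest groups:
print(f"Ratio of S&E students per 100,000 population: {801.6 / 105.0:.1f} to 1")
```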

Somewhat different insights emerge from a tabulation listing the 10 nations with the largest absolute numbers of S&E students in 1992:

Nation                 S&E students (million)
Russia                 2.40
United States          2.38
India                  1.18
China                  1.07
Ukraine                0.85
South Korea            0.74
Germany (united)       0.73
Japan                  0.64
Italy                  0.45
Philippines            0.44

First, even though they educate a relatively small fraction of their young citizens, China and India (and also the Philippines) have such large populations that they are world leaders in the total number of new scientists and engineers trained. Those resources could be critical to the future economic development of Asia.

Second, at least early in the decade, Russia and Ukraine were turning out huge numbers of technically trained individuals for jobs that have vanished with the collapse of Soviet-style industries that once served both military and civilian needs. Many other scientists and engineers in the former Soviet Union have lost their jobs as industrial enterprises and laboratories were downsized. Among those who remain employed, salary payments are so erratic and low that considerable time must be diverted to gardening, bartering, and scrounging at odd jobs to keep body and soul together. Few resources are available to support ambitious R&D efforts. The Soviet collapse is causing, and is likely for some time to continue causing, an enormous waste of S&E talent.

How the United States has helped

The United States has responded to the phenomenon of underutilized S&E talent abroad in a number of ways. Foreign-born students comprise a majority or near majority in many U.S. S&E doctoral programs. In 1995, 40 percent of the 26,515 U.S. S&E doctorate recipients were foreign citizens. Many of these individuals remain in the United States to do R&D work. Their numbers are augmented by individuals trained abroad who immigrate under H-1B visas to meet booming U.S. demand for technically adept staff. Although the number of H-1B visas was increased from 65,000 to 115,000 per year in 1998, the supply of visas for fiscal year 1999 was exhausted by June 1999. Difficult choices must be made to set skilled worker immigration quotas at levels that meet current demands while remaining sustainable over the longer run.

Exacting too high a price for U.S.-based technology could stifle other nations’ technological progress.

U.S. institutions have also reached offshore, contracting out demanding technical tasks. Bangalore, India, for example, has become a center of software writing expertise for some U.S. companies. Analogous contracts have been extended to scientists and engineers in the former Soviet Union and its satellites. Equally important, joint projects such as the International Space Station absorb Russian talent that otherwise would be underused or, even worse, find alternative employment in developing and producing weapons systems to fuel arms races among Third World nations or support possible terrorist threats. Nevertheless, such efforts leave much of the potential untapped.

Most of the young people receiving S&E training in less developed countries will be needed to help their home nations absorb modern technology and achieve higher living standards. The same will be true of the former Soviet Union if–a huge if–it accelerates its thus far dismal progress toward creating institutions conducive to technological entrepreneurship and adapting existing enterprises to satisfy pent-up demand for high-quality industrial and consumer products. Even if these changes are spurred by domestic initiative, there are still actions that technologically advanced nations such as the United States can take to enhance their effectiveness.

Technology transfer is one way for high-productivity nations to help others build their technological proficiency. In many respects, the United States has done this well–for example, by providing first-rate university education to tens of thousands of foreign visitors, by exporting capital goods embodying up-to-date technological advances (except in nuclear weapons-sensitive fields), through the overseas investments of multinational enterprises, and by entering countless technology licensing arrangements.

In technology licensing, however, our policies might well be improved. During the past decade, the U.S. government, in alliance with the governments of other technologically advanced nations, has placed a premium on strengthening the bargaining power of U.S. technology suppliers relative to their clients in less developed countries. The main embodiment of this policy was the insistence that the Uruguay Round international trade treaty include provisions requiring less developed countries to adopt patent and other intellectual property laws as strong as those existing in the most highly industrialized nations. This was done to enhance the export and technology-licensing revenues of U.S. firms–a desirable end viewed in isolation, strengthening among other things the incentives of U.S. firms to support R&D. However, in pursuing that objective, we have lost sight of the historical fact that U.S. industry benefited greatly during the 19th century from weak intellectual property laws, facilitating the inexpensive emulation and transfer of foreign technologies. To promote the development of less fortunate nations, if not for altruistic reasons then to expand markets for U.S. products and make the world a more peaceful place, the U.S. government should recognize that exacting too high a price for U.S.-based technology could stifle the technological progress of other nations. Thus, we should relax our currently strenuous efforts to ensure through World Trade Organization complaints and the unilateral application of Section 301 of U.S. trade law that less developed nations enact intellectual property laws as stringent as our own.

Boosting worldwide energy research

Energy problems pose both an impediment and an opportunity for the technological development of the world’s less advanced nations. I cannot resolve here the question of whether global warming is a serious threat to the long-run viability of Earth’s population. My own belief is that it is, but the appropriate instruments to combat it should not be knee-jerk reactions but well-considered incremental adaptations. The extent to which the growth of greenhouse gas-emitting fuel usage should be curbed in highly industrialized nations as compared to less developed nations was a key sticking point at the international climate negotiations in Kyoto and Buenos Aires. If we set aside the key questions of how much and how quickly fossil fuel use should be reduced, two points seem of paramount importance. First, there are huge disparities among the nations of the world in the use of fossil fuels. The European nations and Japan consume roughly 5,000 coal-equivalent kilograms of energy per capita per year at present; the United States and Canada more than 10,000 kilograms; China less than 1,000 kilograms; and nations such as India and Indonesia less than 500 kilograms. Second, if the less developed nations are to approach standards of living approximating those we enjoy in the United States, they must increase their energy usage; if not to profligate North American levels, then at least toward those prevailing in Europe and Japan.
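
To give a rough sense of scale, here is a minimal sketch of the catch-up multiples implied by those per-capita figures; it treats the rounded bounds quoted above as point estimates, which is an approximation of mine rather than data from the article.

```python
# Rough multiples implied by the per-capita energy figures cited above
# (coal-equivalent kilograms per person per year; the article's approximate
# bounds are treated here as point estimates).

per_capita_kg = {
    "United States/Canada": 10_000,   # "more than 10,000"
    "Europe/Japan": 5_000,            # "roughly 5,000"
    "China": 1_000,                   # "less than 1,000"
    "India/Indonesia": 500,           # "less than 500"
}

target = per_capita_kg["Europe/Japan"]
for region in ("China", "India/Indonesia"):
    factor = target / per_capita_kg[region]
    print(f"{region}: roughly {factor:.0f}x increase to reach European/Japanese levels")
```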

Substantial resources should be invested in building a network of energy technology research institutes in less developed countries.

This does not mean that they should squander energy. Underdevelopment is all about using resources, human and physical, less efficiently than they might be used if state-of-the-art technologies were in place. Therein lies a major opportunity to link solutions to the problem of underutilized scientists and engineers in Russia and the Third World to the problem of global warming. Those scientists and engineers, and especially the individuals emerging, or about to emerge, from the universities, should be given the education and training needed to implement advanced energy-saving technologies in their home countries.

What I propose is a new kind of Marshall Plan designed to ensure that these possibilities are fully realized. The United States, together with its leading European counterparts and Japan, should allocate substantial financial resources toward building and supporting a network of energy technology research, development, and diffusion institutes in the principal underdeveloped regions of the world. Those institutes would be supported not only financially but also through two-way interchanges with scientists and engineers from the most industrialized nations. At first the transfer of existing energy-saving technologies would be the focus. This would entail not only the development of appropriate local adaptations but also concerted efforts to ensure that the technologies are thoroughly diffused into local production and consumption practice. The appropriate model here is the International Rice Research Institute and its offspring, which have worked not only to develop new and superior hybrid seeds but also to demonstrate to farmers their efficacy under local climate and soil conditions. As the Third World energy technology institutes and their business enterprise counterparts achieve mastery over existing technologies, they would begin to perform R&D of a more innovative character in energy and nonenergy areas, just as Japan began decades ago, after imitating Western technologies, to pioneer new methods of shipbuilding and automobile manufacture and to devise superior new products such as point-and-shoot cameras, facsimile machines, and fiber optical cable terminal equipment.

The role of these programs should not be confined solely to bench S&E work. Developing and implementing modern technology requires solid entrepreneurial management and social institutions within which entrepreneurship flourishes. Here too the industrialized nations and especially the United States can contribute. Two decades ago, few MBA students at top schools received systematic full-term exposure to technological innovation management. Today, many do. There are excellent courses at several universities. Through faculty visits and the training of foreign students in the United States, courses on innovation management and the functioning of high-technology venture capital markets could be replicated at the technology transfer institutes developed under the program proposed here.

I advance this proposal in the hope that it will not only help break the existing stalemate between industrialized and developing nations over global warming policies, but also utilize more fully the vast human potential for good scientific and technical work being cultivated in universities of the former Soviet Union and the Third World. If it succeeds, we are all likely to be winners.

Innovation Policy for Complex Technologies

The complexity of the technologies that drive economic performance today is making obsolete the mythic image of the brilliant lone inventor. Innovation in what we define as complex technologies is the work of organizational networks; no single person, not even a Thomas Edison, is capable of understanding them fully enough to be able to explain them in detail. But our mythmaking is not all that needs to be updated. The new processes of innovation are also undermining the effectiveness of traditional U.S. technology policy, which places a heavy focus on research and development (R&D) and unfettered markets as the major sources of the learning involved in innovation, while downplaying human resource development, the generation of technology-specific skills and know-how, and market-enhancing initiatives. This approach is inconsistent with what we now know about complex technologies. Innovation policies should be reformulated to include a self-conscious learning component.

In 1970, complex technologies made up 43 percent of the 30 most valuable world goods exports. By 1995, their portion had risen to 82 percent. With the rapid growth in the economic importance of complex technologies has come a parallel growth in the importance of complex organizational networks that often include firms, universities, and government agencies. According to a recent survey, more than 20,000 corporate alliances were formed between 1988 and 1992 in the United States; and since 1985, the number of alliances has increased by 25 percent annually. Complex networks coevolve with their complex technologies.
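
To gauge how quickly a 25 percent annual growth rate compounds, here is a small illustrative calculation; it assumes only the rate quoted above and no additional alliance data.

```python
# Pace implied by the quoted 25 percent annual growth in alliances since 1985
# (a sketch based solely on that rate).
import math

annual_rate = 0.25
doubling_time = math.log(2) / math.log(1 + annual_rate)
print(f"Doubling time at 25% per year: about {doubling_time:.1f} years")
print(f"Cumulative growth over a decade: about {(1 + annual_rate) ** 10:.0f}x")
```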

As complexity increases, the rate of growth and the characteristics of organizational networks will be significantly affected by public policy. The most important effects will be on network learning. Technological progress requires that networks repeatedly learn, integrate, and apply a wide variety of knowledge and know-how. The computer industry, for instance, has required repeated syntheses of knowledge from diverse scientific fields (such as solid-state physics, mathematics, and language theory) and a bewildering array of hardware and software capabilities (including architectural design and chip manufacturing). No single organization, not even the largest and most sophisticated firm, can succeed by pursuing a go-it-alone strategy in the arena of complex technologies. Thus, complex technologies depend on self-organizing networks that behave as learning organizations for success in innovation.

Networks have proven especially capable of incorporating tacit knowledge (unwritten know-how that often can be understood only with experience) into their learning processes. Examples of tacit knowledge include rules of thumb from previous engineering design work, experience in manufacturing operations on the shop floor, or skill in using research instruments. Learning based on tacit knowledge tends to move less easily across organizational and geographical boundaries than more explicit or codified learning. Therefore, tacit learning can be a major source of competitive advantage.

Policy aims

Because of the centrality of network learning, public policy aimed at fostering innovation in complex technologies must give attention to three broad initiatives.

Developing network resources. Networks have at least three sets of resources: existing core capabilities, already internalized complementary assets, and completed organizational learning. A successful network must hold some core capabilities–that is, it must excel in certain aspects of innovation. Among the most important and difficult core capabilities to learn (or to imitate) are those that are essential to systems integration. Because there are many different ways to organize the designing, prototyping, manufacturing, and marketing of a complex technology, it is obvious that the ability to quickly conceptualize the technology as a whole and carry it through to commercialization represents a powerful, often dominant, capability. The design of a modern airplane, for instance, demands the ability to understand the problems and opportunities involved in integrating advanced mechanical techniques, digital information technology, new materials, and other specialized sets of technologies.

Engineering design teams in the aircraft industry typically contain about 100 technical specialties. The design activity requires a systems capability that may constitute a temporary knowledge monopoly built on some of the most complex kinds of organizational learning. In something as complex as the design of an aircraft, systems integration involves the ability to synthesize participation from a range of network partners. There is no way to achieve analytical understanding and control of integration of this type; the capability is, in part, experience-based, experimental, and embodied in the structure and processes of the network. Some have called this integration an organizational technology that “sits in the walls.”

U.S. national innovation policy has paid little attention to network resources. Federal policies relevant to the education and training of the workers essential to network capabilities have been limited, tentative, and contradictory, reflecting an ideological predisposition against a significant government role. However, the realization that broadly based human resource policies are critical to the future of U.S. innovative capacity seems to be gaining ground. Jack Gibbons, former director of the White House Office of Science and Technology Policy, has urged assigning a higher priority to the lifelong learning needs of the future science and technology workforce. And Rep. George Brown, who until his recent death served as the ranking minority member of the House Science Committee, made education and training a key part of his “investment budget” proposal.

But even Gibbons and Brown did not identify the special educational needs that result from the growing importance of networks. In addition to scientific and technical knowledge, successful networks require people who know how to function effectively in groups, teams, and sociotechnical systems that include individuals and organizations with diverse tacit and explicit knowledge. The importance of this kind of social knowledge is underlined by the fact that companies such as Intel spend major training resources on teaching their employees how to function in groups. Nothing would be more useful for evolving innovation networks than a national capacity for appropriate education and training: training and retraining that continuously upgrades needed skills would help the workforce adapt to changes in technology. Human resource competencies that include both technical and social knowledge are inseparable from network core capabilities. To the extent that public policy can help provide the needed range of worker skills and know-how, networks will be better able to make rapid adjustments.

Direct U.S. interventions designed to develop capabilities at the firm, network, or sector level have been rare. Governmental support for companies to develop their capabilities in flat-panel displays is an obvious exception. The effort was aimed at recapturing state-of-the-art video capabilities lost when Japan drove the United States out of the television manufacturing business in the 1970s. Advanced video capabilities are widely believed to be essential core technologies in the information society. The justification for government funding was defined largely in terms of defense requirements. But because the need to rebuild core video capabilities in U.S. firms was seen as critical, the civilian sector was included in the initiative.

The heavy policy focus on R&D is becoming increasingly inadequate.

If technologies and the capabilities they embody diffused rapidly across company boundaries and national borders in a global economy, there would be no need for this type of policy. But because not all relevant know-how is explicitly accessible and because the ability to absorb new capabilities depends in part on an available mix of old capabilities, technology diffusion is a process of learning in which today’s ability to exploit technology grows out of yesterday’s experience and practices. Thus, the demise of the U.S. consumer electronics industry brought with it a corresponding decline in the capability to produce liquid crystal displays in high volume, even though the basic technology was explicitly available. Active resource development policies such as the flat-panel display program have been attacked as industrial policy that picks winners and defended as a dual-use exception to the general rule of no government involvement. This debate illustrates the U.S. preoccupation with the concepts and language of an earlier era. Nonetheless, Richard Nelson of Columbia University is persuasive when he argues that technology policy ought to have a broader industry focus than the relatively narrow flat-panel initiative. Not only are broader programs more ideologically and politically plausible in the United States, but more effective public-private governance mechanisms appear easier to develop when industry-wide technological issues are being addressed.

Creating learning opportunities. Many of the most important changes needed in U.S. technology policy are related to learning opportunities. The history of any network includes a set of boundaries (sometimes called a path dependency) that both restricts and amplifies the learning possibilities and the potential for accessing new complementary assets (such as sources of knowledge outside the network). The network learning that has taken place in the past is a good indicator of where learning is likely to take place in the future. Most networks tend to learn locally by engaging in search and discovery activities close to their previous learning. Localized learning thus tends to build upon itself and is a major source of positive feedback, increasing returns, and lock-in.

A history of flexible and adaptive learning relationships within a network (with suppliers, customers, and others) provides member organizations with formidable sources of competitive advantage. Alternatively, allowing learning-based linkages to atrophy can lead to costly results. For instance, inadequate emphasis on manufacturing and a lack of cooperation among semiconductor companies contributed to an inability to respond rapidly to the early 1980s Japanese challenge. When the challenge became a crisis, cooperative industry initiatives moved U.S. companies toward closer interactions with government (such as the Semiconductor Trade Arrangement) and eventually to a network (the Sematech consortium) that improved the ability of industry participants to learn in mutually beneficial ways, including collaborative standards setting.

The Sematech experience is an example of efforts to enhance network learning through government-facilitated collaborative activities. Like the flat-panel display initiative, Sematech was justified largely in national security terms, and, as with so much technology policy, exaggerated emphasis was initially placed on R&D, with too little attention given to other learning opportunities.

R&D funding must continue to be a high government priority, but the primacy given to R&D is a problem for innovation in complex technologies. It is assumed that support for R&D–and sometimes only the “R”–is synonymous with technology policy. Rep. Brown called this an “excessive faith in the creation of new knowledge as an engine of economic growth and a neglect of the processes of knowledge diffusion and application.” Support for R&D certainly creates learning opportunities, but many other learning avenues exist that have little or nothing to do with R&D, and these are especially evident in networks (see sidebar).

The policy overemphasis on R&D skews learning and the generation of capabilities that are often needed for innovation success. R&D support is easy for the government to justify, but it is often not what companies and networks need. What are frequently needed are new or enhanced organizational capabilities that facilitate development of tacit know-how and skills, integrated production process improvements, and ways to synthesize and integrate the talents and expertise of individuals into work groups and teams. These organizational “black arts” are not usually a part of the R&D-dominated policy agenda, but if the challenge of innovating complex technologies is to be met, policy will need to be flexible enough to incorporate them as well as other nontraditional ideas.

Enhancing markets. U.S. innovation policy has tended to emphasize only one set of factors that affect markets: improving competitive incentives to firms by measures such as strengthening intellectual property rights and the R&D tax credit. The U.S. fascination with factors such as patents as stimulators of technological innovation ignores the need for other kinds of market-enhancing policies.

Markets left unfettered except for incentives to compete have trouble coping with the cooperative network learning dynamics that are at the core of innovation in complex technologies. Learning in complex networks is often very risky and can encompass prolonged tacit knowledge acquisition and application sequences. Complex learning involves substantial coordination, because investments must be made in different activities, often in different sectors, and increasingly in different countries. Collective learning tends to induce a self-reinforcing dynamic; it becomes more valuable the more it is used. Failure to recognize and adapt to these characteristics of complex network learning is a major source of market failure.

U.S. technology policy must pay more attention to the importance of networked learning in driving innovation.

When technological innovation is incremental, which is the normal pattern of the evolution of most complex organizations and technologies, learning-generated market failures are less common. Well-defined market segments and well-developed network relationships with a wide array of users and suppliers are usually built on extensive incremental learning and adaptation. Over time, incremental innovations enhance the stability of markets, because a consensus develops concerning technological expectations. These expectations, in conjunction with demonstrated capabilities, provide a framework within which market signals can be effectively read and evaluated.

Alternatively, when innovation is not incremental, learning-based market failures proliferate. During periods of major change, stability and predictability erode, and markets provide unclear signals. When the innovation process is highly exploratory, the networks that are being modified are less responsive to economic signals, because the market has little knowledge of the new learning that is taking place and the new capabilities being developed. In such situations, linkages to other organizations (such as relationships with other networks or government) or the status of institutions (such as regulatory regimes) matter more than market processes, because they provide some stability and limited foundations on which decisions can be based. Even when well-defined markets do emerge, achieving stability can take a long time, often more than a decade for the most radical complex innovations.

Because innovation in complex technologies tends to foster many market failures, significant benefits arise from having connections to at least three kinds of institutions: (1) state-of-the-art infrastructure, including communication, transportation, and secondary educational systems; (2) appropriate standards-setting arrangements ranging from environmental regulations to network-specific product or process standards; and (3) closer linkages between firms and other national science and technology organizations, including national laboratories and universities. Establishing these connections can be facilitated by public policy.

Complex innovation takes place through market (competitive) and nonmarket (cooperative) transactions, and the latter involve not only businesses but also other institutions and organizations. Networks seeking to enhance the success of their innovations frequently find themselves involved in what Jacqueline Senker of Edinburgh University, in a study of the industrial biotechnology sector, refers to as “institutional engineering,” the process of “negotiating with, convincing or placating regulatory authorities, the legal system, and the medical profession.” In the United States, the federal government is most capable of affecting these market and nonmarket relationships in a systematic way.

Policy guideposts

Policymaking aimed at complex technologies is fraught with uncertainty. There is no way to be assured of successful policy in advance of trying it; the formulation of successful policy is unknowable in a detailed sense. Thus, policy prescriptions developed in the absence of the specific context of innovation are as dangerous as they are tempting. With this uncertainty in mind, the following policy guideposts seem useful.

Complex networks offer, through their capacity to carry out synthetic innovation, a broad capability for innovating, although it is not possible for individuals to understand the process in detail. Policy, too, must be made without any capacity for understanding in a detailed sense what will work. It will always be an approximation–never final, but always subject to modification. Flexibility is key. Small, diverse experiments will tend to be more productive in learning terms than one big push along what appears to be the most likely path at the outset.

Many existing U.S. technology projects and programs are relatively small and have had significant learning effects, but few were designed as learning experiments. For instance, the Advanced Technology Program (ATP) currently operates at a relatively modest level and has been credited with encouraging collaboration among industry, government laboratories, and universities. According to an analysis by Henry Etzkowitz of the State University of New York at Purchase and Magnus Gulbrandsen of the Norwegian Institute for Studies in Higher Education, ATP’s stimulation of industrial collaboration may be more significant than the work it supports. “ATP conferences have become a marketplace of ideas for research projects and a recruiting ground for partners in joint ventures,” they write. Such generation of learning by interaction was largely unanticipated. In the world of complex technologies, the unanticipated has become the norm.

Although policy must be adaptable and made without detailed understanding, it does not follow that knowledge and information are of little or no value. To the contrary, the most successful policymaking will usually be that which is best informed. Being informed in the era of complex technologies requires exploiting as much expertise as possible. Designing and administering complex policies requires, at a minimum, technological, commercial, and financial knowledge and skills. If these cannot be developed inside government, outside expertise must be accessed. Only policy informed by state-of-the-art knowledge of the repeated nonlinear changes taking place in the various technology sectors can be appropriately adaptive. Only those who are intimately involved in innovation in complex technologies can provide knowledge of what is happening. As a start in the right direction, the White House should take the initiative in reforming conflict of interest laws and regulations that are barriers to public-private sector interaction.

For example, to protect against collusion, current regulations preclude ex-government employees from closely interacting with their former colleagues for prescribed periods of time following their departure. But because technology can change rapidly, the knowledge of the former employee can quickly become obsolete. In today’s world of accelerating technological innovation, the costs of knowledge obsolescence probably outweigh the costs of collusion.

Another possibility involves policies that encourage those at project and program levels in government to make frequent use of advisory groups composed of people from industry, nonprofit organizations, universities, governmental organizations, and other countries. Beginning in the 1970s, advisory panels fell into disrepute and their creation at times required prior approval from the Office of Management and Budget. They were frequently seen not only as costly but also as vehicles for inappropriate influence. What they offer in the era of complex technology is a valuable vehicle for knowledge exchange and learning. Such groups can facilitate the kind of trust that is especially valuable in dealing with tacit knowledge.

The objective of broader private-sector involvement is to enhance policy learning. Negotiations between private and government policymakers most likely will lead to consensus in some areas, but even if the immediate outcome is the recognition of conflicting interests, there is learning taking place if in the process new network practices, routines, or behaviors are identified, and new data sets are cataloged. Particularly important would be new insights from the private sector regarding the effects of previous public policies.

Traditional boundaries are of less use to those making policy. Complex networks and their technologies blur boundaries across the spectrum. For example, the proliferation of complex networks has made it difficult to define the boundaries of an organization as a policy target or objective. When a label such as “virtual corporation” is used to describe interfirm networks or “business ecosystem” is applied to networks that include not only companies but also universities, government agencies, and other actors, one can appreciate how amorphous the object of policy has become. This complexity is even greater when the focus is network learning, which typically involves a messy set of interactions among a variety of organizational carriers of both tacit and explicit core capabilities and complementary assets. In such situations, running even small learning experiments informed by private-sector expertise puts a premium on incorporating evaluation procedures to determine which organizations are being affected, and how. These policy evaluations must provide for reviews, amendments, and/or cancellation.

But policy evaluation must be more systemic than the traditional U.S. emphasis on cost efficiency for particular actors and projects. Network learning often confers benefits that are broader than immediate economic payoffs. For instance, networks may interact in ways that generate a form of social capital, a “stock” of collective learning that can only be created when a group of organizations develops the ability to work together for mutual gain. Here too, ATP is credited with producing positive, if largely unintended, social outcomes as a consequence of learning by interaction. More effort needs to be made to build such social factors into assessments of program success or failure. A promising option is to make more use of systems-oriented evaluation “benchmarking” or assessments of system-wide “best evaluation practices,” as compiled by international bodies such as the Organization for Economic Cooperation and Development.

Continuous coevolution between complex organizations and technologies is the norm. The dominant pattern will be the continuous emergence of small, incremental organizational and technological adaptations, but this pattern will be punctuated by highly discontinuous and disruptive change. The need for policy is greatest when change is discontinuous–when coevolving networks and their technologies have to adapt to major transitions or transformations. Policy that is sensitive to this process of adaptation must be informed by strategic scanning and intelligence. Government participation in the generation of industrial technology roadmaps is a particularly valuable way to gather intelligence regarding impending changes in innovation patterns. Roadmaps such as those produced by the semiconductor industry generally represent a collective vision of the technological future that serves as a template for ways to integrate core capabilities, complementary assets, and learning in the context of rapid change. Roadmaps also facilitate open debates about alternative technological strategies and public policies. Cross-sectoral and international road mapping exercises would be particularly valuable, because many of the sources of discontinuous change in complex technological innovation originate in different sectors and economies.

Small, diverse experiments tend to be more productive in learning terms than a big push in one direction.

The great challenge for policymakers is to find an accommodation between the set of industrial system ideas and concepts that are the currency of contemporary policy debate and formulation and the reality of continuous technological innovation that has moved beyond that currency and is incompatible with it. We need a new policy language based on new policy metaphors. Metaphors are in many ways the currency of complex systems. By way of metaphors, groups of people can put together what they know, both tacitly and explicitly, in new ways, and begin to communicate knowledge. This is as true of the making of public policy as it is of technological innovation. The terminology we have used in this article allows one to address large portions of the technological landscape (such as the role of core capabilities in network self-organization) that are completely ignored when traditional labels and terms are used.

Policy guidelines that stress the shared public-private governance of continuous small experiments, chosen and legitimized in a new language, backed by strategic intelligence, and subject to careful evaluation may not sound like much.

But the study of complexity in organizations and technologies communicates no message more clearly than that small events have unanticipated consequences, and that those consequences are sometimes dramatic. Our policy guideposts are not a prescription for pessimism. Indeed, a major implication of innovation in complex technologies is that even modest, well-crafted, adaptive policy can have enormously positive consequences.

Fall 1999 Update

Fusion program still off track

Since publication of our article, “Fusion Research with a Future” (Issues, Summer 1997), the Department of Energy (DOE) Office of Fusion Energy Science (OFES) program has undergone some change. Congress has mandated U.S. withdrawal from the $10-billion-plus International Thermonuclear Experimental Reactor (ITER) tokamak project; the program has been broadened to include a significant basic science element; and the program has undergone, and continues to undergo, a number of reviews. One review by a Secretary of Energy Advisory Board (SEAB) subcommittee recommends major changes in fusion program management but does not mention the critical change that we recommended: connecting the fusion program to its eventual marketplace.

Our experience, and that of so many others, is that one cannot do high-probability-of-success applied R&D without a close connection with end users and an understanding of the marketplace, including alternative technologies as they exist today and are likely to evolve. The fusion program has never had serious connections with the electric utilities, nor does it have a real understanding of the commercial electric power generation marketplace.

A closer look at the current OFES budget allocation and plans indicates that although the United States has abandoned ITER, not much else has really changed. The primary OFES program focus is still on deuterium-tritium (DT) fusion in toroidal (donut-shaped) plasma confinement systems. DT fusion produces copious quantities of neutrons, which induce large amounts of radioactivity. Although it can be argued that radioactivity from fusion is less noxious than that from fission, it is not clear that the public would make that distinction.

If at some future date a U.S. electric power generating entity were willing to build a plant using technology that produces radioactivity, it could choose the fission option, which is a well-developed, commercial technology. For radioactive fusion to supplant fission, it will have to be significantly better; many would say on the order of 20 percent better in cost. The inherent nature of DT fusion will always require a physically large facility with expensive surrounding structures, resulting in high capital costs. It’s simple geometry. An inherently large, complex DT fusion “firebox” will never come close to the cost of a relatively compact, simple fission firebox. Our experience with the design of ITER illustrated that reality.

Thus, the fusion research program has to identify and develop different approaches, ones that have a chance of being attractive in the commercial marketplace and that will probably be based on low- or zero-neutron-generating fuel cycles. Thankfully, fusion fuel cycles that do not involve neutron emissions exist, but they will likely involve different regimes of plasma physics than are currently being pursued. Unfortunately, DOE and its researchers are still a long way from making the program changes necessary to move in that direction.

From the Hill – Summer 1999

Lab access restrictions sought in wake of Chinese espionage reports

In the wake of reports detailing the alleged theft by China of U.S. nuclear and military technology, bills have been introduced that would severely restrict or prohibit visits by foreign scientists to Los Alamos, Lawrence Livermore, and Sandia National Laboratories. Although the bills are intended to bolster national security, their approval could inhibit the free exchange of scientific information, and the proposed legislation was severely criticized by Secretary of Energy Bill Richardson.

After the release of a report on alleged Chinese espionage by a congressional panel led by Rep. Christopher Cox (R-Calif.), the House Science Committee adopted an amendment to the Department of Energy (DOE) authorization bill (H.R. 1655) placing a moratorium on DOE’s Foreign Visitors Program. The amendment, introduced by Rep. George Nethercutt (R-Wash.), would restrict access to any classified DOE lab facility by citizens of countries that are included in DOE’s List of Sensitive Countries. Those countries currently include the People’s Republic of China, India, Israel, North Korea, Russia, and Taiwan. The Nethercutt amendment would allow the DOE secretary to waive the restriction if justification for doing so is submitted in writing to Congress. The moratorium would be lifted once certain safeguards, counterintelligence measures, and guidelines on export controls are implemented.

In early May 1999, the Senate Intelligence Committee also approved a moratorium on the Foreign Visitors Program, although it too allows the Secretary of Energy to waive the prohibition on a case-by-case basis. Committee Chairman Sen. Richard Shelby (R-Ala.) termed the moratorium an “emergency” measure that is needed while the Clinton administration’s new institutional counterintelligence measures are being implemented.

In another DOE-related bill, H.R. 1656, the House Science Committee approved an amendment introduced by Rep. Jerry Costello (D-Ill.) that would apply civil penalties of up to $100,000 for each security violation by a DOE employee or contractor. The House recently passed the bill.

DOE’s Foreign Visitors Program, initiated in the late 1970s, was designed to encourage foreign scientists to participate in unclassified research activities conducted at the national labs and to encourage the exchange of information. Most of the visitors are from allied nations. In cases in which the subject matter of a visit or the visitor is deemed sensitive, DOE must follow long-established guidelines for controlling the visits or research projects within the lab facilities.

Critics say that the program has long lacked sufficient security controls. In a September 1997 report, the General Accounting Office concluded that DOE’s “procedures for obtaining background checks and controlling dissemination of sensitive information are not fully effective.” It noted that two of the three laboratories conducted background checks on only 5 percent of foreign visitors from sensitive countries. The report said that in some cases visitors have access to sensitive information and that counterintelligence programs lacked effective mechanisms for assessing threats.

In response to the various congressional efforts to impose a moratorium, Secretary Richardson attacked the proposals recently in a speech at the National Academy of Sciences. He said that “instead of strengthening our nation’s security, this proposal would make it weaker.” He said that during his tenure DOE has established improved safeguards for protecting national secrets, including background checks on all foreign visitors from sensitive countries. He emphasized that “scientific genius is not a monopoly held by any one country” and that it is important to collaborate in research as well as to safeguard secrets. A moratorium would inhibit partnerships between the United States and other countries. He noted that the United States has access to labs in China, Russia, and India and participates in nuclear safety and nonproliferation exercises, and that curbing the Foreign Visitors Program could lead to denial of access to the laboratories of other countries. “If we isolate our scientists from the leaders in their fields, they will be unable to keep current with cutting-edge research in the disciplines essential to maintaining the nation’s nuclear deterrent,” he said.

Conservatives challenge science community on data access

Politically conservative organizations have made a big push in support of a proposed change to a federal regulation governing the release of scientific research data. The scientific community strongly opposes the change.

In last year’s omnibus appropriations bill, Sen. Richard Shelby (R-Ala.) inserted a provision requesting that the Office of Management and Budget (OMB) amend its Circular A-110 rule to require that all data produced through funding from a federal agency be made available through procedures established under the Freedom of Information Act (FOIA). Subsequently, OMB asked for public comment in the Federal Register but narrowed the scope of the provision to “research findings used by the federal government in developing policy or rules.” During the 60-day comment period, which ended on April 5, 1999, OMB received 9,200 responses, including a large number of letters from conservative groups.

Conservatives have been pushing for greater access to research data ever since they were rebuffed a couple of years ago in their attempts to examine data from a Harvard University study that was used in establishing stricter environmental standards under the Clean Air Act. Pro-gun groups have sought access to data from Centers for Disease Control and Prevention studies on firearms and their effects on society.

The research community fears that the Shelby provision would compromise sensitive research data and hinder research progress. Scientists are not necessarily opposed to the release of data but don’t want it to be done under what they consider to be FOIA’s ambiguous rules because of the fear that it would open a Pandora’s box. They are concerned that the privacy of research subjects could be jeopardized, and they think that operating under FOIA guidelines would impose large administrative and financial burdens.

A letter from the Association of American Universities, the National Association of State Universities and Land-Grant Colleges, and the American Council on Education questioned whether FOIA was the correct mechanism for the release of sensitive data: “Does interpretation of FOIA . . . concerning, ‘clearly unwarranted invasion of personal privacy,’ offer sufficient protection to honor assurances that have been given and will necessarily continue to be given to private persons, concerning the confidentiality and anonymity that are needed for certain types of studies?”

The American Mathematical Society (AMS) argued that the proposed changes would “lead to unintended and deleterious consequences to U.S. researchers and research accomplishments.” It cited the misinterpretation or delay of research, discouragement of research subjects, the imposition of significant administrative and financial burdens, and the hindrance of public-private cooperative research because of industry fears of losing valuable data to competitors. AMS proposed that the National Academy of Sciences be asked to study alternative mechanisms for sharing data, rather than relying on FOIA, in order to determine an appropriate policy.

Even with strong scientific opposition, the final tally of letters was 55 percent for the provision and 45 percent against or with serious concerns. The winning margin was undoubtedly related to a last-minute deluge of letters from groups that included the National Rifle Association, the Gun Owners of America, the United States Chamber of Commerce, and the Eagle Forum. These groups argued for a broad, wide-ranging provision that would allow for the greatest degree of access to all types of research data. The Chamber proclaimed that “there may never be a more important issue!” The Gun Owners of America argued that “we can expose all the phony science used to justify many restrictions on firearms ownership.”

Senators Shelby, Trent Lott (R-Miss.), and Ben Nighthorse Campbell (R-Colo.) cosigned a letter criticizing the narrow approach of OMB and supporting the Shelby amendment. “The underlying rationale for the provision rests on a fairly simple premise–that the public should be able to obtain and review research data funded by taxpayers,” they said. “Moreover, experience has shown that transparency in government is a principle that has improved decisionmaking and increased the public’s trust in government.”

Rita Colwell, director of the National Science Foundation, opposed the provision, arguing that its ambiguity could hamper the research process. “Unfortunately, I believe that it will be very difficult to craft limitations that can overcome the underlying flaw of using FOIA procedures,” Colwell said. “No matter how narrowly drawn, such a rule will likely harm the process of research in all fields by creating a complex web of expensive and bureaucratic requirements for individual grantees and their institutions.”

OMB seems to be sympathetic to both sides of the issue. An OMB official said that before any changes were made, OMB would consult with both parties on the Hill, since the original directive came from Congress. OMB would then produce a preliminary draft of a provision using FOIA, which would also be placed in the Federal Register and accompanied by another public comment period.

Bills to protect confidentiality of medical data introduced

With a congressional deadline looming for the adoption of federal standards ensuring the confidentiality of individual health information, bills have been introduced in Congress that would establish guidelines for patient-authorized release of medical records.

S. 578, introduced by Senators Jim M. Jeffords (R-Vt.) and Christopher J. Dodd (D-Conn.), would require one blanket authorization from a patient for the release of records. The bill would also cede most authority in setting confidentiality standards to the states. S. 573, introduced by Senators Patrick J. Leahy (D-Vt.) and Edward M. Kennedy (D-Mass.), would require patient authorization for each use of medical records and allow states to pass stricter privacy laws.

Many states already have patient privacy laws, but there is a growing demand for federal standards as well. The Health Insurance Portability and Accountability Act of 1996 requires Congress to adopt federal standards ensuring the confidentiality of individual health information by August 1999. The law was prompted by concern that the increasing use of electronic recordkeeping and the need for data sharing among health care providers and insurers have made it easier to misuse confidential medical information. If Congress fails to meet the deadline, the law authorizes the Department of Health and Human Services (HHS) to assume responsibility for regulation. Proposed standards submitted in 1997 by HHS Secretary Donna Shalala stated that confidential health information should be used for health purposes only and emphasized the need for researchers to obtain the approval of institutional review boards (IRBs).

Earlier this year, the Senate Committee on Health, Education, Labor, and Pensions held a hearing on the subject, using a recent General Accounting Office (GAO) report as the basis of discussion. The report, Medical Records Privacy: Access Needed for Health Research, but Oversight of Privacy Protections Is Limited, focused on the use of medical information for research and the need for personally identifiable information; the types of research currently not subject to federal oversight; the role of IRBs; and safeguards used by health care organizations.

The 1991 Federal Policy for the Protection of Human Subjects stipulates that federally funded research or research regulated by federal agencies must be reviewed by an IRB to ensure that human subjects receive adequate privacy and protection from risk through informed consent. This approach works well for most federally funded research. However, privately funded research, which has increased dramatically in recent years, is not subject to these rules.

The GAO report found that a substantial amount of research involving human subjects relies on the use of personal identification numbers, which allow investigators to track treatment of individuals over time, link multiple sources of patient information, conduct epidemiological research, and identify the number of patients fitting certain criteria. Brent James, executive director of the Intermountain Health Care (IHC) Institute for Health Care Delivery Research in Utah, testified that his patients benefited when other physicians had access to electronic records. For example, he cited a computerized ordering system accessed by multiple users that can warn physicians of potentially harmful drug interactions. He emphasized, however, the need to balance the use of personal medical information with patient confidentiality.

IHC ensures privacy by requiring administrative employees who work with patient records to sign confidentiality agreements and by monitoring those with access to electronic records. Patient identification numbers are separated from the records, and particularly sensitive information, such as reproductive history or HIV status, is segregated. Some organizations are using encryption and other forms of coding, whereas others have agreed to Multiple Project Assurance (MPA) agreements that place them in compliance with HHS regulations. MPAs are designed to ensure that institutions comply with federal rules for the protection of human subjects in research.

James argued that increased IRB involvement would hamper the quality of care given by health care providers. The GAO study indicates that current IRB review may not necessarily ensure confidentiality and that in most cases IRBs rely on existing mechanisms within the institutions conducting research. Familiar criticisms of IRBs, such as hasty reviews, limited expertise in confidentiality issues, and little training for new IRB members, compound the problem.

An alternative is the establishment of stronger regulations within the private institutions conducting the research. Elizabeth Andrews of the Pharmaceutical Research and Manufacturers Association argued at the hearing for the establishment of uniform national confidentiality rules instead of the IRB process.

Controversial database protection bill reintroduced

A bill designed to prevent the unauthorized copying of online information, which was strongly opposed by the scientific community last year, has been reintroduced with changes aimed at assuaging its critics. The critics, however, say the revisions still do not go far enough: They believe the bill continues to provide too much protection for database owners and thus would stifle information sharing and innovation.

H.R. 354, the Collections of Information Antipiracy Act, introduced by Rep. Howard Coble (R-N.C.), is the reincarnation of last year’s H.R. 2562, which passed the House twice but was subsequently dropped because of severe criticism from the science community. The bill’s intent is to ensure that database information cannot be used for financial gain by anyone other than its producer without compensation. Without adequate protection from online piracy, the bill’s supporters argue, database creators will be discouraged from making investments that would benefit a wide range of users.

Last year’s legislation encountered problems concerning the amount of time that information can be protected, ambiguities in the type of information to be protected, and the instances in which data can be accessed freely. This year’s bill has introduced a 15-year time limit on data protection and has also made clear the type of data to be protected. Further, it clarifies the line between legitimate uses and illegal misappropriation of databases, stating that “an individual act of use or extraction of information done for the purpose of illustration, explanation, example, comment, criticism, teaching, research, or analysis, in an amount appropriate and customary for that purpose, is not a violation of this chapter.”

“The provisions of H.R. 354 represent a significant improvement over the provisions of H.R. 2562,” stated Marybeth Peters of the U.S. Copyright Office of the Library of Congress during her testimony this spring before the House Judiciary Subcommittee on Courts and Intellectual Property. However, she tempered that statement, saying that “several issues still warrant further analysis, among them the question of possible perpetual protection of regularly updated databases and the appropriate mix of elements to be considered in establishing the new, fair use-type exemption.”

Although researchers still oppose the bill and are unwilling to accept it in its present form, they recognize that progress has been made since last year. “We were encouraged by the two changes that already have been made to this committee’s previous version of this legislation,” said Nobel laureate Joshua Lederberg in his testimony to the committee. “The first revision addresses one of the Constitutional defects that was pointed out by various critics . . . the second one responds to some of the concerns . . . regarding the potential negative impacts of the legislation on public interest uses.”

After a House hearing was held on H.R. 354, Rep. Coble introduced several changes to the bill, including language that more closely mirrors traditional fair use exceptions in existing copyright law. Although the administration and research community applauded the changes, they stopped short of endorsing the bill.

Genetic testing issues reviewed

Improved interagency cooperation and increased education for the public and professionals are needed to ensure the safe and effective use of genetic testing, according to witnesses at an April 21, 1999 hearing of the House Science Committee’s Subcommittee on Technology.

Currently, genetic tests sold as kits are subject to Food and Drug Administration (FDA) rules. Laboratories that test human specimens are subject only to quality-control standards set by the Department of Health and Human Services (HHS) under the Clinical Laboratory Improvement Amendments of 1988. However, in the fall of 1998, a national Task Force on Genetic Testing urged additional steps, including specific requirements for labs doing genetic testing, formal genetics training for laboratory personnel, and the introduction of some FDA oversight of testing services at commercial labs. At the hearing, Michael Watson, professor of pediatrics and genetics at the Washington University School of Medicine and cochair of the task force, argued that interagency cooperation is needed in establishing genetic testing regulations and that oversight should be provided by institutional review boards assisted by the National Institutes of Health’s Office for Protection from Research Risks.

The subcommittee’s chairwoman, Rep. Connie Morella (R-Md.), stressed the need to educate the public about the benefits of genetic testing and to prepare health professionals so that they can provide reliable tests and offer appropriate advice. William Raub, HHS’s deputy assistant secretary of science policy, cited the establishment of the Human Genome Epidemiology Network by the Centers for Disease Control to disseminate information via the World Wide Web for that purpose. But he noted that health care providers often lack basic genetics knowledge and receive inadequate genetics training in medical schools. The Task Force on Genetic Testing recommended that the National Coalition for Health Professional Education in Genetics, which is made up of different medical organizations, take the lead in promoting awareness of genetic concepts and testing consequences and in developing genetics curricula for use in medical schools.

Budget resolution deals blow to R&D funding

R&D spending would be hit hard under a congressional budget resolution for fiscal year (FY) 2000 passed this spring. However, it is unlikely that the resolution’s constraints will be adhered to when final appropriations decisions are made.

Under the resolution, which sets congressional spending priorities for the next decade, federal R&D spending would decline from $79.3 billion in FY 1999 to $76.1 billion in FY 2004, or 13.4 percent after adjusting for expected inflation, according to projections made by the American Association for the Advancement of Science.
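The arithmetic behind that projection can be sketched roughly as follows; the assumption of about 2 percent annual inflation is ours, used only for illustration, and is not a figure taken from the AAAS analysis:

\[
\frac{\$76.1\ \text{billion}}{(1.02)^{5}} \approx \$68.9\ \text{billion in FY 1999 dollars},
\qquad
\frac{79.3 - 68.9}{79.3} \approx 13\ \text{percent},
\]

which is broadly consistent with the 13.4 percent real decline cited above.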

Despite growing budget surpluses, the Republican-controlled Congress decided to adhere strictly to tight caps on discretionary spending that were established when large budget deficits existed. Future budget surpluses would be set aside entirely for bolstering Social Security and for tax cuts. Only defense, education, and veterans’ budgets would receive increases above FY 1999 levels.

After adoption of the budget resolution, the House and Senate Appropriations Committees approved discretionary spending limits, called 302(b) allocations, for the 13 FY 2000 appropriations bills. Both committees authorized $538 billion in budget authority, or $20 billion below the FY 1999 funding level and President Clinton’s FY 2000 request.

As in the past, it is almost certain that ways will be found to raise discretionary spending to at least the level of the Clinton administration’s proposal, if not higher. Projections of increasing budget surpluses would make the decision to break with the caps easier.


“From the Hill” is prepared by the Center for Science, Technology, and Congress at the American Association for the Advancement of Science in Washington, D.C., and is based on articles from the center’s bulletin Science & Technology in Congress.

Science at the State Department

The mission of the Department of State is to develop and conduct a sound foreign policy, taking fully into consideration the science and technology that bear on that policy. It is not to advance science. Therefore, scientists have not been, and probably won’t be, at the center of our policymaking apparatus. That said, I also know that the advances and the changes in the worlds of science and technology are so rapid and so important that we must ask ourselves urgently whether we really are equipped to take these changes “fully into consideration” as we go about our work.

I believe the answer is “not quite.” We need to take a number of steps (some of which I’ll outline in a moment) to help us in this regard. Some we can put in place right now. Others will take years to work their way through the system. One thing I can say: I have found in the State Department a widespread and thoughtful understanding of how important science and technology are in the pursuit of our foreign policy goals. The notion that this has somehow passed us by is just plain wrong.

I might add that this sanguine view of the role of science was not always prevalent. In a 1972 Congressional Research Service study on the “interaction between science and technology and U.S. foreign policy,” Franklin P. Huddle wrote: “In the minds of many today, the idea of science and technology as oppressive and uncontrollable forces in our society is becoming increasingly more prevalent. They see in the power of science and technology the means of destruction in warfare, the source of environmental violation, and the stimulant behind man’s growing alienation.”

Today, though, as we look into the 21st century, we see science and technology in a totally different light. We see that they are key ingredients that permit us to perpetuate the economic advances we Americans have made in the past quarter century or so and the key to the developing world’s chance to have the same good fortune. We see at the same time that they are the key factors that permit us to tackle some of the vexing, even life-threatening, global problems we face: climate change, loss of biodiversity, the destruction of our ocean environment, proliferation of nuclear materials, international trafficking in narcotics, and the determination by some closed societies to keep out all influences or information from the outside.

We began our review of the role of science in the State Department for two reasons. First, as part of a larger task the secretary asked me to undertake: ensuring that the various “global foreign policy issues”–protecting the environment, promoting international human rights, meeting the challenges of international narcotics trafficking, and responding to refugee and humanitarian crises, etc.–are fully integrated into our overall foreign policy and the conduct of U.S. diplomacy abroad. She felt that the worst thing we could do is to treat these issues, which affect in the most profound ways our national well-being and our conscience, as some sort of sideshow instead of as issues that are central challenges of our turn-of-the-millennium foreign policy. And we all, of course, are fully aware that these global issues, as well as our economic, nonproliferation and weapons of mass destruction issues, cannot be adequately addressed without a clear understanding of the science and technology involved.

Which brings me to the second impetus for our review: We have heard the criticism from the science community about the department’s most recent attention to this issue. We’re very sensitive to your concerns and we take them seriously. That is, of course, why we asked the National Research Council to study the matter and why we are eager to hear more from you. Our review is definitely spurred on by our desire to analyze the legitimate bases of this criticism and be responsive to it. Let me also note that although we have concluded that some of these criticisms are valid, others are clearly misplaced. However misplaced they may be, somehow we seem to have fed our critics. The entire situation reminds me of something Casey Stengel said during the debut season of the New York Mets. Called upon to explain the team’s performance, he said: “The fans like home runs. And we have assembled a pitching staff to please them.”

Now, let me outline my thoughts on three topics. First, a vision of the relationship between science and technology and foreign policy in the 21st century; second, one man’s evaluation of how well the department has, in recent times, utilized science in making foreign policy determinations; and third, how we might better organize and staff ourselves in order to strengthen our capacity to incorporate science into foreign policy.

An evolving role

For most of the second half of this century, until about a decade ago, our foreign policy was shaped primarily by our focus on winning the Cold War. During those years, science was an important part of our diplomatic repertoire, particularly in the 1960s and 1970s. For example, in 1958, as part of our Cold War political strategy, we set up the North Atlantic Treaty Organization Science Program to strengthen the alliance by recruiting Western scientists. Later, we began entering into umbrella science and technology agreements with key countries with a variety of aims: to facilitate scientific exchanges, to promote people-to-people or institution-to-institution contacts where those were otherwise difficult or impossible, and generally to promote our foreign policy objectives.

Well, the Cold War is receding into history and the 20th century along with it. And we in the department have retooled for the next period in our history with a full understanding of the huge significance of science in shaping the century ahead of us. But what we have not done recently is to articulate just how we should approach the question of the proper role of science and technology in the conduct of our foreign policy. Let me suggest an approach:

First, and most important, we need to take the steps necessary to ensure that policymakers in the State Department have ready access to scientific information and analysis and that this is incorporated into our policies as appropriate.

Second, when consensus emerges in the science community and in the political realm that large-scale, very expensive science projects are worth pursuing, we need to be able to move quickly and effectively to build international partnerships to help these megascience projects become reality.

Third, we should actively facilitate science and technology cooperation between researchers at home and abroad.

Fourth, we must address more aggressively a task we undertook some time ago: mobilizing and promoting international efforts to combat infectious diseases.

And fifth, we need to find a way to ensure that the department continues devoting its attention to these issues long after Secretary Albright, my fellow under secretaries, and I are gone.

Past performance

Before we chart the course we want to take, let me try a rather personal assessment of how well we’ve done in the past. And here we meet a paradox: Clearly, as I noted earlier, the State Department is not a science-and-technology-based institution. Its leadership and senior officers don’t come from that community, and relatively few are trained in the sciences. As some of you have pointed out, our established career tracks, within which officers advance, have labels like political, economic, administrative, consular, and now public diplomacy–but not science.

Some have suggested that there are no science-trained people at all working in the State Department. I found myself wondering if this were true, so I asked my staff to look into it. After some digging, we found that there were more than 900 employees with undergraduate majors and more than 600 with graduate degrees in science and engineering. That’s about 5 percent of the people in the Foreign Service and 6 percent of those in the Civil Service. If you add math and other technical fields such as computer science, the numbers are even higher. Now you might say that having 1,500 science-trained people in a workforce of more than 25,000 is nothing to write home about. But I suspect it is a considerably higher number than either you or I imagined.

More important, I would say we’ve gotten fairly adept at getting the science we need, when we need it, in order to make decisions. One area where this is true is the field of arms control and nuclear nonproliferation. There, for the past half-century, we have sought out and applied the latest scientific thinking to protect our national security. The Bureau of Political-Military Affairs, or more accurately, the three successor bureaus into which it has been broken up, are responsible for these issues and are well equipped with scientific expertise. One can find there at any given time as many as a dozen visiting scientists providing expertise in nuclear, biological, and chemical weapons systems. Those bureaus also welcome fellows of the American Association for the Advancement of Science (AAAS) on a regular basis and work closely with scientists from the Departments of Energy and Defense. The Under Secretary for Arms Control and International Security Affairs has a science advisory board that meets once a month to provide independent expertise on arms control and nonproliferation issues. This all adds up to a system that works quite well.

We have also sought and used scientific analysis in some post-Cold War problem areas. For example, our policies on global climate change have been well informed by science. We have reached out regularly and often to the scientific community for expertise on climate science. Inside the department, many of our AAAS fellows have brought expertise in this area to our daily work. We enjoy a particularly close and fruitful relationship with the Intergovernmental Panel on Climate Change (IPCC), which I think of as the world’s largest peer review effort, and we ensure that some of our best officers participate in IPCC discussions. In fact, some of our senior climate experts are IPCC members. We regularly call upon not only the IPCC but also scientists throughout the government, including the Environmental Protection Agency, the Energy Department, the National Oceanic and Atmospheric Administration, the National Aeronautics and Space Administration, and, of course, the National Academy of Sciences (NAS) and the National Science Foundation, as we shape our climate change policies.

Next, I would draw your attention to an excellent and alarming report on coral reefs released by the department just last month. This report is really a call to arms. It describes last year’s bleaching and mortality event on many coral reefs around the world and raises awareness of the possibility that climate change could have been a factor. Jamie Reaser, a conservation biologist and current AAAS fellow, and Peter Thomas, an animal behaviorist and former AAAS fellow who is now a senior conservation officer, pulled this work together, drawing on unpublished research shared by their colleagues throughout the science community. The department was able to take these findings and put them under the international spotlight.

A third example involves our recent critical negotiation in Cartagena, Colombia, concerning a proposed treaty to regulate transborder movements of genetically modified agricultural products. The stakes were high: potential risks to the environment, alleged threats to human health, the future of a huge American agricultural industry and the protection of a trading system that has served us well and contributed much to our thriving economy. Our negotiating position was informed by the best scientific evidence we could muster on the effects of introducing genetically modified organisms into the environment. Some on the other side of the table were guided less by scientific analysis and more by other considerations. Consequently, the negotiations didn’t succeed. This was an instance, it seemed to me, where only a rigorous look at the science could lead to an international agreement that makes sense.

Initial steps

In painting this picture of our performance, I don’t mean to suggest that we’re where we ought to be. As you know, Secretary Albright last year asked the National Research Council (NRC) to study the contributions that science, technology, and health expertise can make to foreign policy and to share with us some ideas on how the department can better fulfill its responsibilities in this area. The NRC put together a special committee to consider these questions. In September, the committee presented to us some thoughtful preliminary observations. I want to express my gratitude to Committee Chairman Robert Frosch and his distinguished colleagues for devoting so much time and attention to our request. And I would like to note here that I’ve asked Richard Morgenstern, who recently took office as a senior counselor in the Bureau of Oceans and International Environmental and Scientific Affairs (OES), to serve as my liaison to the NRC committee. Dick, who is himself a member of an NAS committee, is going to work with the NRC panel to make sure we’re being as helpful as we can be.

We will not try to develop a full plan to improve the science function at the State Department until we receive the final report of the NRC. But clearly there are some steps we can take before then. We have not yet made any final decisions. But let me share with you a five-point plan that is–in my mind at this moment–designed to strengthen the leadership within the department on science, technology, and health issues and to strengthen the available base of science, technology, and health expertise.

Science adviser. The secretary should have a science adviser to make certain that there is adequate consideration within the department of science, technology, and health issues. To be effective, such an adviser must have appropriate scientific credentials, be supported by a small staff, and be situated in the right place in the department. The “right place” might be in the office of an under secretary or in a bureau, such as the Bureau of Oceans and International Environmental and Scientific Affairs. If we chose the latter course, it would be prudent to provide this adviser direct access to the secretary. Either arrangement would appear to be a sensible way to ensure that the adviser has access to the secretary when necessary and appropriate but at the same time is connected as broadly as possible to the larger State Department structure and has the benefit of a bureau or an under secretary’s office to provide support.

There’s an existing position in the State Department that we could use as a model for this: the position of special representative for international religious freedom, now held by Ambassador Robert Seiple. Just as Ambassador Seiple is responsible for relations between the department and religious organizations worldwide, the science adviser would be responsible for relations between the department and the science community. And just as Ambassador Seiple, assisted by a small staff, advises the secretary and senior policymakers on matters of international religious freedom and discrimination, the science adviser would counsel them on matters of scientific importance.

Science roundtables. When a particular issue on our foreign policy agenda requires us to better understand some of the science or technology involved, we should reach out to the science and technology community and form a roundtable of distinguished members of that community to assist us. We envision that these roundtable discussions would take the form of one-time informal gatherings of recognized experts on a particular issue. The goal wouldn’t be to elicit any group advice or recommendations on specific issues. Rather, we would use the discussions as opportunities to hear various opinions on how developments in particular scientific disciplines might affect foreign policy.

I see the science adviser as being responsible for organizing such roundtables and making sure the right expert participants are included. But rather than wait for that person’s arrival in the department, I’d like to propose right now that the department, AAAS, and NAS work together to organize the first of these discussions. My suggestion is that the issue for consideration relate to genetically modified organisms, particularly including genetically modified agricultural products. It’s clear to me that trade in such products will pose major issues for U.S. policymakers in the years to come, and we must make certain that we continue to have available to us the latest and best scientific analysis.

It is not clear whether such roundtables can or should take the place of a standing advisory committee. That is something we want to discuss further. It does strike me that although “science” is one word, the department’s needs are so varied that such a committee would need to reflect a large number and broad array of specialties and disciplines to be useful. I’d be interested in your views as to whether such a committee could be a productive tool.

So far, we’ve been talking about providing leadership in the department on science, technology, and health issues. But we also need to do something more ambitious and more difficult: to diffuse more broadly throughout the department a level of scientific knowledge and awareness. The tools we have available for that include recruiting new officers, training current staff, and reaching out to scientific and technical talent in other parts of the government and in academia.

If you’re a baseball fan, you know that major league ball clubs used to build their teams from the ground up by cultivating players in their farm systems. Nowadays, they just buy them on the open market. We would do well to emulate the old approach, by emphasizing the importance of science and technology in the process of bringing new officers into the Foreign Service. And we’ve got a good start on that. Our record recently is actually better than I thought. Eight of the 46 members of a recent junior officers’ class had scientific degrees.

Training State personnel. In addition to increasing our intake of staff with science backgrounds, we need to stimulate the professional development of those in the department who have responsibility for policy but no real grounding in science. During the past several years, the Foreign Service Institute (FSI), the department’s training arm, has taken two useful steps. It has introduced and beefed up a short course in science and technology for new officers, and it has introduced environment, science, and technology as a thread that runs through the entire curriculum. Regardless of officers’ assignments, they now encounter these issues at all levels of their FSI training. But we believe this may not be enough, and we have asked FSI to explore additional ways to increase the access of department staff to other professional development opportunities related to science and technology. A couple of weeks ago we wrapped up the inaugural session of a new environment, science, and technology training program for Foreign Service national staff who work at our embassies. Twenty-five of them spent two weeks at FSI learning about climate change, hazardous chemicals, new information technologies, intellectual property rights, and nuclear nonproliferation issues.

Leveraging our resources. I have not raised here today the severe resource problem we encounter at State. I believe that we can and must find ways to deal with our science and technology needs despite this problem. But make no mistake about it: State has not fared well in its struggle to get the resources it needs to do its job. Its tasks have increased and its resources have been reduced. I’ll give you an illustration. Between 1991 and 1998, the number of U.S. embassies rose by about 12 percent and our consular workload increased by more than 20 percent. During the same period, our total worldwide employment was reduced by nearly 15 percent. That has definitely had an impact on the subject we’re discussing today. For example, we’ve had to shift some resources in OES from science to the enormously complex global climate change negotiations.

But I want to dwell on what we can do and not on what we cannot. One thing we can do is to bring more scientists from other agencies or from academia into the department on long- or short-term assignments. Let me share with you a couple of the other initiatives we have going.

  • We’re slowly but surely expanding the AAAS Diplomatic Fellows Program in OES. That program has made these young scientists highly competitive candidates for permanent positions as they open up. To date, we have received authorization to double the number of AAAS fellows working in OES from four per year to eight, and AAAS has expanded its recruiting accordingly.
  • And we’re talking with the Department of Health and Human Services about bringing in a health professional who would specialize in our infectious disease effort, and with several other agencies about similar arrangements.

I should point out here a particular step we do not want to take: We do not want to reestablish a separate environment, science, and technology cone, or career track, in the Foreign Service. We found that having this cone did not help us achieve our goal of getting all the officers in the department, including the very best ones, to focus appropriately on science. In fact, it had the opposite effect; it marginalized and segregated science. And after a while, the best officers chose not to enter that cone, because they felt it would limit their opportunities for advancement. We are concerned about a repeat performance.

Using science as a tool for diplomacy. As for our scientific capabilities abroad, the State Department has 56 designated environment, science, and technology positions at our posts overseas. We manage 33 bilateral science and technology “umbrella agreements” between the U.S. government and others. Under these umbrellas, there are hundreds of implementing agreements between U.S. technical agencies and their counterparts in those countries. Almost all of them have resulted in research projects or other research-related activities. Science and technology agreements represented an extremely valuable tool for engaging with former Warsaw Pact countries at the end of the Cold War and for drawing them into the Western sphere. Based on the success of those agreements, we’re now pursuing similar cooperative efforts with other countries in transition, including Russia and South Africa. We know, however, that these agreements differ in quality and usefulness, and we’ve undertaken an assessment to determine which of them fit into our current policy structure and which do not.

We’ve also established a network of regional environmental hubs to address various transboundary environmental problems whose solutions depend on cooperation among affected countries. For example, the hub for Central America and the Caribbean, located in San Jose, Costa Rica, focuses on regional issues such as deforestation, biodiversity loss, and coral reef and coastline management. We’re in the process of evaluating these hubs to see how we might improve their operations.

I’ve tried to give you an idea of our thinking on science at State. And I’ve tried to give you some reason for optimism while keeping my proposals and ideas within the confines of the possible. Needless to say, our ability to realize some of these ideas will depend in large part on the amount of funding we get. And as long as our budget remains relatively constant, resources for science and technology will necessarily be limited. We look forward to the NRC’s final recommendations in the fall, and we expect to announce some specific plans soon thereafter.

Education Reform for a Mobile Population

The high rate of mobility in today’s society means that local schools have become a de facto national resource for learning. According to the National Center for Education Statistics, one in three students changes schools more than once between grades 1 and 8. A mobile student population dramatizes the need for some coordination of content and resources. Student mobility constitutes a systemic problem: For U.S. student achievement to rise, no one can be left behind.

The future of the nation depends on a strong, competitive workforce and a citizenry equipped to function in a complex world. The national interest encompasses what every student in a grade should know and be able to do in mathematics and science. Further, the connection of K-12 content standards to college admissions criteria is vital for conveying the national expectation that educational excellence improves not just the health of science, but everyone’s life chances through productive employment, active citizenship, and continuous learning.

We all know that improving student achievement in 15,000 school districts with diverse populations, strengths, and problems will not be easy. To help meet that challenge, the National Science Board (NSB) produced the report Preparing Our Children: Math and Science Education in the National Interest. The goal of the report is to identify what needs to be done and how federal resources can support local action. A core need, according to the NSB report, is for rigorous content standards in mathematics and science. All students require the knowledge and skills that flow from teaching and learning based on world-class content standards. That was the value of the Third International Mathematics and Science Study (TIMSS): It helped us calibrate what our students were getting in the classroom relative to their age peers around the world.

What we have learned from TIMSS and other research and evaluation is that U.S. textbooks, teachers, and the structure of the school day do not promote in-depth learning. Thus, well-prepared and well-supported teachers alone will not improve student performance without other important changes such as more discerning selection of textbooks, instructional methods that promote thinking and problem-solving, the judicious use of technology, and a reliance on tests that measure what is taught. When whole communities take responsibility for “content,” teaching and learning improve. Accountability, supported by appropriate incentives, should be a means of monitoring and, we hope, of continuous improvement.

The power of standards and accountability is that, from district-level policy changes in course and graduation requirements to well-aligned classroom teaching and testing, all students can be held to the same high standard of performance. At the same time, teachers and schools must be held accountable so that race, ethnicity, gender, physical disability, and economic disadvantage can diminish as excuses for subpar student performance.

Areas for action

The NSB focuses on three areas for consensual national action to improve mathematics and science teaching and learning: instructional materials, teacher preparation, and college admissions.

Instructional materials. According to the TIMSS results, U.S. students are not taught what they need to learn in math and science. Most U.S. high school students take no advanced science, with only one-half enrolling in chemistry and one-quarter in physics. From the TIMSS analysis we also learned that curricula in U.S. high schools lack coherence, depth, and continuity, and cover too many topics in a superficial way. Most U.S. general science textbooks touch on many topics rather than probe any one in depth. Without some degree of consensus on content for each grade level, textbooks will continue to be all-inclusive and superficial. They will fail to challenge students to use mathematics and science as ways of knowing about the world.

The NSB urges active participation by educators and practicing mathematicians and scientists, as well as parents and employers from knowledge-based industries, in the review of instructional materials considered for local adoption. Professional associations in the science and engineering communities can take the lead in stimulating the dialogue over textbooks and other materials and in formulating checklists or content inventories that could be valuable to their members, and all stakeholders, in the evaluation process.

Teacher preparation. According to the National Commission on Teaching and America’s Future, as many as one in four teachers is teaching “out of field.” The National Association of State Directors of Teacher Education and Certification reports that only 28 states require prospective teachers to pass examinations in the subject areas they plan to teach, and only 13 states test them on their teaching skills. Widely shared goals and standards in teacher preparation, licensure, and professional development provide mechanisms to overcome these difficulties. This is especially critical for middle school teachers, if we take the TIMSS 8th grade findings seriously.

We cannot expect world-class learning of mathematics and science if U.S. teachers lack the knowledge, confidence, and enthusiasm to deliver world-class instruction. Although updating current teacher knowledge is essential, improving future teacher preparation is even more crucial. The community partners of schools–higher education, business, and industry–share the obligation to heighten student achievement. The NSB urges formation of three-pronged partnerships: institutions that graduate new teachers working in concert with national and state certification bodies and local school districts. These partnerships should form around the highest possible standards of subject content knowledge for new teachers and aim at aligning teacher education, certification requirements and processes, and hiring practices. Furthermore, teachers need other types of support, such as sustained mentoring by individual university mathematics, science, and education faculty and financial rewards for achieving board certification.

College admissions. Quality teaching and learning of mathematics and science bestows advantages on students. Content standards, clusters of courses, and graduation requirements illuminate the path to college and the workplace, lay a foundation for later learning, and draw students’ career aspirations within reach. How high schools assess student progress, however, has consequences for deciding who gains access to higher education.

Longitudinal data on 1982 high school graduates point to course-taking or “academic intensity,” as opposed to high school grade point average or SAT/ACT scores, as predictors of completion of baccalaureate degrees. Nevertheless, short-term and readily quantifiable measures such as standardized test scores tend to dominate admissions decisions. Such decisions promote the participation of some students in mathematics and science, and discourage others. The higher education community can play a critical role by helping to enhance academic intensity in elementary and secondary schools.

We must act on the recognition that education is “all one system,” which means that the strengths and deficiencies of elementary or secondary education are not just inherited by higher education. Instead, they become spurs to better preparation and opportunity for advanced learning. The formation of partnerships by an institution of higher education demands adjusting the reward system to recognize service to local schools, teachers, and students as instrumental to the mission of the institution. The NSB urges institutions of higher education to form partnerships with local districts/schools that create a more seamless K-16 system. These partnerships can help to increase the congruence between high school graduation requirements in math and science and undergraduate performance demands. They can also demonstrate the links between classroom-based skills and the demands on thinking and learning in the workplace.

Research. Questions such as which tests should be used for gauging progress in teaching and learning and how children learn in formal and informal settings require research-based answers. The National Science Board sees research as a necessary condition for improved student achievement in mathematics and science. Further, research on local district, school, and classroom practice is best supported at a national level and in a global context, such as TIMSS. Knowing what works in diverse settings should inform those seeking a change in practice and student learning outcomes. Teachers could especially use such information. Like other professionals, teachers need support networks that deliver content and help to refine and renew their knowledge and skills. The Board urges the National Science Foundation (NSF) and the Department of Education to spearhead the federal contribution to science, mathematics, engineering, and technology education research and evaluation.

Efforts such as the new Interagency Education Research Initiative are rooted in empirical reports by the President’s Committee of Advisors on Science and Technology and the National Science and Technology Council. Led jointly by NSF and the Department of Education, this initiative should support research that yields timely findings and thoughtful plans for transferring lessons and influencing those responsible for math and science teaching and learning.

Prospects

In 1983, the same year that A Nation at Risk was published, the NSB Commission on Precollege Education in Mathematics, Science and Technology advised: “Our children are the most important asset of our country; they deserve at least the heritage that was passed to us . . . a level of mathematics, science, and technology education that is the finest in the world, without sacrificing the American birthright of personal choice, equity, and opportunity.” The health of science and engineering tomorrow depends on improved mathematics and science preparation of our students today. But we cannot delegate the responsibility of teaching and learning math and science solely to teachers and schools. They cannot work miracles by themselves. A balance must therefore be struck between individual and collective incentives and accountability.

The National Science Board asserts that scientists and engineers, and especially our colleges and universities, must act on their responsibility to prepare and support teachers and students for the rigors of advanced learning and the 21st century workplace. Equipping the next generation with these tools of work and citizenship will require a greater consensus than now exists among stakeholders on the content of K-16 teaching and learning. As the NSB report shows, national strategies can help change the conditions of schooling. In 1999, implementing those strategies for excellence in education is nothing less than a national imperative.

Does university-industry collaboration adversely affect university research?

With university-industry research ties increasing, it is possible to question whether close involvement with industry is always in the best interests of university research. Because industrial research partners provide funds for academic partners, they have the power to shape academic research agendas. That power might be magnified if industrial money were the only new money available, giving industry more say over university research than is justified by the share of university funding it provides. Free and open disclosure of academic research might be restricted, or universities’ commitment to basic research might be weakened. And if academics shift toward industry’s more applied, less “academic” agenda, that shift can look like a loss in quality.

To cast some light on this question, we analyzed the 2.1 million papers published between 1981 and 1994 and indexed in the Science Citation Index for which all the authors were from the United States. Each paper was uniquely classified according to its collaboration status–for example: single-university (655,000 papers), single-company (150,000 papers), university-industry collaborations (43,000 papers), two or more universities (84,000 papers). Our goal was to determine whether university-industry research differs in nature from university or industry research. Note that medical schools are not examined here, and that nonprofit “companies” such as Scripps, Battelle, and Rand are not included.
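For readers curious about the mechanics, the following is a minimal sketch of the kind of address-based bucketing described above. It is not CHI Research’s actual code or methodology; the sector labels and the classification rules are illustrative assumptions only.

```python
# Minimal sketch of address-based collaboration classification.
# The sector labels and the rules in classify() are illustrative assumptions,
# not CHI Research's actual methodology.

def classify(addresses):
    """Assign a paper to a collaboration category from its author addresses.

    `addresses` is a list of (institution_name, sector) pairs, where sector
    is either "university" or "company" (hypothetical labels).
    """
    universities = {name for name, sector in addresses if sector == "university"}
    companies = {name for name, sector in addresses if sector == "company"}

    if universities and companies:
        return "university-industry collaboration"
    if len(universities) == 1 and not companies:
        return "single-university"
    if len(universities) >= 2 and not companies:
        return "two or more universities"
    if len(companies) == 1 and not universities:
        return "single-company"
    return "other"

# Example: a paper listing one university address and one company address.
paper = [("State Univ", "university"), ("Acme Labs", "company")]
print(classify(paper))  # -> "university-industry collaboration"
```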

Research impact

Evaluating the quality of papers is difficult, but the number of times a paper is cited in other papers is an often-used indirect measure of quality. Citations of single-university research are rising, suggesting that all is well with the quality of university research. Furthermore, university-industry papers are more highly cited on average than single-university research, indicating that university researchers can often enhance the impact of their research by collaborating with an industry researcher.

High-impact science

Another way to analyze citations is to focus on the 1,000 most cited papers each year, which typically include the most important and ground-breaking research. Of every 1,000 papers published with a single university address, 1.7 make it into this elite category. For university-industry collaborations, the number is 3.3, another indication that collaboration with industry does not compromise the quality of university research even at the highest levels. One possible explanation for the high quality of the collaborative papers is that industry researchers are under less pressure to publish than are their university counterparts and therefore publish only their more important results.
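To make the rate calculation concrete, here is a small illustration; the raw paper counts are invented for the example, and only the resulting rates of roughly 1.7 and 3.3 per 1,000 echo the figures cited above.

```python
# Illustration of the "per 1,000 papers" rate used in the text.
# The counts below are hypothetical; only the resulting rates (~1.7 and ~3.3
# per 1,000) correspond to the figures reported in the study.

def elite_rate_per_1000(elite_papers, total_papers):
    """Papers reaching the top-1,000-cited list, expressed per 1,000 papers."""
    return 1000 * elite_papers / total_papers

single_university = elite_rate_per_1000(elite_papers=111, total_papers=65_500)
univ_industry = elite_rate_per_1000(elite_papers=14, total_papers=4_300)

print(f"single-university: {single_university:.1f} per 1,000")    # ~1.7
print(f"university-industry: {univ_industry:.1f} per 1,000")      # ~3.3
```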

Diana Hicks & Kimberly Hamilton are Research Analysts at CHI Research, Inc. in Haddon Heights, New Jersey.


Growth in university-industry collaboration

Papers listing both a university and an industry address more than doubled between 1981 and 1994, whereas the total number of U.S. papers grew by 38 percent, and the number of single-university papers grew by 14 percent. In 1995, collaboration with industry accounted for just 5 percent of university output in the sciences. In contrast, university-industry collaborative papers now account for about 25 percent of industrial published research output. Unfortunately, this tells us nothing about the place of university-industry collaboration in companies’ R&D, because published output represents an unknown fraction of corporate R&D.

How basic is collaborative research?

We classified the basic/applied character of research according to the journal in which it appears. The distribution of university-industry collaborative papers is most similar to that of single-company papers, indicating that when universities work with companies, industry’s agenda dominates and the work produced is less basic than the universities would produce otherwise. However, single-company papers have become more basic over time. If association with industry were indirectly influencing the agenda of all academic research, we would see shifts in the distribution of single-university papers. There is only an insignificant decline in the share of single-university papers in the most basic category–from 53 percent in 1981 to 51 percent in 1995.

Science Savvy in Foreign Affairs

On September 18, 1997, Deputy Secretary of State Strobe Talbott gave a talk to the World Affairs Council of Northern California in which he observed that “to an unprecedented extent, the United States must take account of a phenomenon known as global interdependence . . . The extent to which the economies, cultures, and politics of whole countries and regions are connected has increased dramatically in the [past] half century . . . That is largely because breakthroughs in communications, transportation, and information technology have made borders more porous and knitted distant parts of the globe more closely together.” In other words, the fundamental driving force in creating a key feature of international relations–global interdependence–has been science and technology (S&T).

Meanwhile, what has been the fate of science in the U.S. Department of State? In 1997, the department decided to phase out a science “cone” for foreign service officers (FSOs). In the lingo of the department, a cone is an area of specialization in which an FSO can expect to spend most, if not all, of a career. Currently, there are five specified cones: administrative, consular, economic, political, and the U.S. Information Agency. Thus, science was demoted as a recognized specialization for FSOs.

Further, in May 1997 the State Department abolished its highest ranking science-related position: deputy assistant secretary for science, technology, and health. The person whose position was eliminated, Anne Keatley Solomon, described the process as “triag[ing] the last remnants of the department’s enfeebled science and technology division.” The result, as described by J. Thomas Ratchford of George Mason University, is that “the United States is in an unenviable position. Among the world’s leading nations its process for developing foreign policy is least well coordinated with advances in S&T and the policies affecting them.”

The litany of decay of science in the State Department is further documented in a recent interim report of a National Research Council (NRC) committee: “Recent trends strongly suggest that . . . important STH [science, technology, and health]-related issues are not receiving adequate attention within the department . . . OES [the Office of Environment and Science] has shifted most of its science-related resources to address international environmental concerns with very little residual capability to address” other issues. Further, “the positions of science and technology counselors have been downgraded at important U.S. embassies, including embassies in New Delhi, Paris, and London. The remaining full-time science, technology, and environment positions at embassies are increasingly filled by FSOs with very limited or no experience in technical fields. Thus, it is not surprising that several U.S. technical agencies have reported a decline in the support they now receive from the embassies.”

This general view of the decay of science in the State Department is supported by many specific examples of ineptness in matters pertaining to S&T. Internet pioneer Vinton Cerf reports that “the State Department has suffered from a serious deficiency in scientific and technical awareness for decades . . . The department officially represents the United States in the International Telecommunications Union (ITU). Its representatives fought vigorously against introduction of core Internet concepts.”

One must ardently hope that the State Department will quickly correct this dismal record. The Internet is becoming an increasingly critical element in the conduct of commerce, and the department will undoubtedly be called on to help formulate international policies and to negotiate treaties to support global electronic commerce. Without competence, without an appreciation of the Internet’s power to generate business, and without an understanding of U.S. expertise and interests, how can the department possibly look after those interests in the 21st century?

The recent history of the U.S. stance on the NATO Science Program further illustrates the all-too-frequent “know-nothing” attitude of the State Department toward scientific and technical matters. The NATO Science Program is relatively small (about $30 million per year) but is widely known in the international scientific community. It has a history of 40 years of significant achievement.

Early in 1997, I was a member of an international review committee that evaluated the NATO Science Program. We found that participants consistently gave the program high marks for quality, effectiveness, and administrative efficiency. After the fall of the Iron Curtain, the program began modest efforts to draw scientists from the Warsaw Pact nations into its activities. Our principal recommendation was that the major goal of the program should become the promotion of linkages between scientists in the Alliance nations and scientists in the nations of the former Soviet Union and Warsaw Pact. We also said that the program’s past effectiveness had depended critically on the pro bono efforts of many distinguished and dedicated scientists, who were motivated largely by the knowledge that direct governance of the program rested with the Science Committee, itself composed of eminent scientists and reporting directly to the North Atlantic Council, the governing body of NATO. We further said that the program could not retain the interest of the people it needed if it were reduced below its already modest budget.

The response of the State Department was threefold: first, to endorse our main recommendation; second, to demand a significant cut in the budget of the Science Program; and third, to make the Science Committee subservient to the Political Committee by placing control in the hands of the ambassadorial staffs in Brussels. In other words, while giving lip service to our main conclusion, the State Department threatened the program’s ability to accomplish this end by taking positions on funding and governance that were opposed to the recommendations of our study and that would ultimately destroy the program.

The NATO Science Program illustrates several subtle features of State’s poor handling of S&T matters. In the grand scheme of things, the issues involved in the NATO Science Program are, appropriately, low on the priority list of State’s concerns. Nevertheless, it is a program for which the department has responsibility, and it should therefore execute that responsibility with competence. Instead, the issue fell primarily into the hands of a member of the NATO ambassador’s staff who was preoccupied mainly with auditing the activities of the International Secretariat’s scientific staff and with reining in the authority of the Science Committee. Although there were people in Washington with oversight responsibilities for the Science Program who had science backgrounds, they were all adherents of the prevailing attitude of the State Department toward science: Except on select issues such as arms control and the environment, science carries no weight. They live in a culture that sets great store by being a generalist (which an experienced FSO once defined as “a person with a degree in political science”). Many FSOs believe that S&T issues are easily grasped by any “well-rounded” individual; far from being cowed by such issues, they regard them as trivial. It is no wonder, then, that “small” matters of science that are the department’s responsibility may or may not fall into the hands of people competent to handle them.

Seeking guidance

The general dismay in the science community over the department’s inattention to, and limited competence in, S&T matters resulted in a request from the State Department to the NRC to undertake a study of science, technology, and health (STH) in the department. The committee’s interim report, Improving the Use of Science, Technology, and Health Expertise in U.S. Foreign Policy (A Preliminary Report), published in 1998, observes that the department pays substantial attention to a number of issues that have significant STH dimensions, including arms control, the spread of infectious diseases, the environment, intellectual property rights, natural disasters, and terrorism. But there are other areas where STH capabilities can play a constructive role in achieving U.S. foreign policy goals, including the promotion and facilitation of U.S. economic and business interests. For example, STH programs often contribute to regional cooperation and understanding in areas of political instability. Of critical importance to the evolution of democratic societies are freedom of association and inquiry, objectivity, and openness–traits that characterize the scientific process.

The NRC interim report goes on to say that although specialized offices within the department have important capabilities in some STH areas (such as nuclear nonproliferation, telecommunications, and fisheries), the department has limited capabilities in a number of other areas. For example, the department cannot effectively participate in some interagency technical discussions on important export control issues, in collaborative arrangements between the Department of Defense and researchers in the former Soviet Union, in discussions of alternative energy technologies, or in collaborative opportunities in international health or bioweapons terrorism. In one specific case, only because of last-minute intervention by the scientific community did the department recognize the importance of researcher access to electronic databases that were the subject of disastrous draft legislation and international negotiations with regard to intellectual property rights.

There have been indications that senior officials in the department would like to bring STH considerations more fully into the foreign policy process. There are leaders, past and present–Thomas Pickering, George Shultz, William Nitze, Stuart Eizenstat, and most recently Frank Loy–who understand the importance of STH to the department and who give it due emphasis. Unfortunately, their leadership has been personal and has not resulted in a permanent shift in departmental attitudes, competencies, or culture. As examples of the department’s recent efforts to raise the STH profile, these leaders have noted the attention given to global issues such as climate change, the proliferation of weapons of mass destruction, and the health aspects of refugee migration. They have also pointed out that STH initiatives have helped promote regional policy objectives, such as scientific cooperation in addressing water and environmental problems, that contribute to the Middle East peace process. However, in one of many ironies, the United States opposed the inclusion of environmental issues among the scientific topics of NATO’s Mediterranean Dialogue on the grounds that they would confound the Middle East peace process.

The interim NRC report concludes, quite emphatically, that “the department needs to have internal resources to integrate STH aspects into the formulation and conduct of foreign policy and a strong capability to draw on outside resources. A major need is to ensure that there are receptors in dozens of offices throughout the department capable of identifying valid sources of relevant advice and of absorbing such advice.” In other words, State needs enough competence to recognize the STH components of the issues it confronts, enough knowledge to know how to find and recruit the advice it needs, and enough competence to use good advice when it gets it, and it needs these competencies on issues big and small. It needs to be science savvy.

The path to progress

The rigor of the committee’s analysis and the good sense of its recommendations will not be enough to ensure their implementation. A sustained effort on the part of the scientific and technical community will be needed if the recommendations are to have any chance of making an impact. Otherwise, these changes are not likely to be given sufficient priority to prevail in the face of competing interests and limited budgets.

Why this pessimism? Past experience. In 1992, the Carnegie Commission on Science, Technology, and Government issued an excellent report, Science and Technology in U.S. International Affairs. It contained a comprehensive set of recommendations, not just for State, but for the entire federal government. New York Academy of Sciences President Rodney Nichols, the principal author of the Carnegie report, recently told me that the report had to be reprinted because of high demand from the public for copies but that he knew of no State Department actions in response to the recommendations. There is interest outside of Washington, but no action inside the Beltway.

The department also says, quite rightly, that its budgets have been severely cut over the past decade, making it difficult to maintain, let alone expand, its activities in any area. I do not know whether the department has attempted to get additional funds explicitly for its STH activities. Congress has generally supported science as a priority area, and I see no reason why it would not regard science at the State Department the same way. In any event, there is no magic that will correct the problem of limited resources; the department must do what many corporations and universities have had to do. The solution is threefold: establish clear priorities (from the top down) for what you do, increase the efficiency and productivity of what you do, and farm out activities that can be done better by others.

State is establishing priorities through its process of strategic planning, so the only question is whether it will give adequate weight to STH issues. Increasing the efficiency and productivity of internal STH activities will require spreading at least a minimum level of science savvy more broadly through the department. For example, there should be a set of courses on science and science policy in the curriculum of the Foreign Service Institute. The people on ambassadorial staffs dealing with science issues such as the NATO program should have knowledge and appreciation of the scientific enterprise. And finally, in areas of ostensible State responsibility that rank low among State’s capabilities or priorities, technical oversight should be transferred to other agencies, while leaving State responsible for reflecting these areas properly in foreign policy.

In conclusion, I am discouraged about the past but hopeful for the future. State is now asking for advice and has several people in top positions who have knowledge of and experience with STH issues. However, at these top levels, STH issues get pushed aside by day-to-day crises unless those crises are intrinsically technical in nature. Thus, at least a minimal level of science savvy has to spread throughout the FSO corps. It would be a great step forward to recognize that the generalists that State so prizes can be trained in disciplines other than political science. People with degrees in science or engineering have been successful in a wide variety of careers: chief executive officers of major corporations, investment bankers, entrepreneurs, university presidents, and even a few politicians. Further, the entrance exam for FSO positions could have 10 to 15 percent of the questions on STH issues. Steps such as these, coupled with strengthening courses in science and science policy at the Foreign Service Institute, would spread a level of competence in STH broadly across the department, augmenting the deep competence that State already possesses in a few areas and can develop in others. There should be a lot of people in State who regularly read Science, or Tuesday’s science section of the New York Times, or the New Scientist, or Scientific American, just as I suspect many now read the Economist, Business Week, Forbes, Fortune, and the Wall Street Journal. To be savvy means to have shrewd understanding and common sense. State has the talent to develop such savvy. It needs a culture that promotes it.

The Government-University Partnership in Science

In an age when the entire store of knowledge doubles every five years and prosperity depends on command of that ever-growing store, the United States is the strongest it has ever been, thanks in large measure to the remarkable pace and scope of American science and technology over the past 50 years.

Our scientific progress has been fueled by a unique partnership among government, academia, and the private sector. Our Constitution actually promotes the progress of what the Founders called “science and the useful arts.” The partnership deepened with the founding of the land-grant universities in the 1860s. Near the end of World War II, President Roosevelt directed his science advisor, Vannevar Bush, to determine how the remarkable wartime research partnership between universities and the government could be sustained in peace.

“New frontiers of the mind are before us,” Roosevelt said. “If they are pioneered with the same vision, boldness, and drive with which we have waged the war, we can create a fuller and more fruitful employment, and a fuller and more fruitful life.” Perhaps no presidential prophecy has ever been more accurate.

Vannevar Bush helped to convince the American people that government must support science and that the best way to do so would be to fund the work of independent university researchers. This ensured that, in our nation, scientists would be in charge of science. And where university science had once relied largely on philanthropic organizations for support, now the national government would be a strong and steady partner.

This commitment has helped to transform our system of higher education into the world’s best. It has kindled a half-century of creativity and productivity in our university life. Well beyond the walls of academia, it has helped to shape the world in which we live and the world in which we work. Biotechnology, modern telecommunications, the Internet–all had their genesis in university labs in recombinant DNA work, in laser and fiber optic research, in the development of the first Web browser.

This research is shaping the way we see ourselves, in both a literal and an imaginative sense. Brain imaging is revealing how we think and process knowledge. We are isolating the genes that cause disease, from cystic fibrosis to breast cancer. Soon we will have mapped the entire human genome, unveiling the very blueprint of human life.

Today, because of this alliance between government and the academy, we are indeed enjoying fuller and more fruitful lives. With only a few months left in the millennium, the time has come to renew the alliance between America and its universities, to modernize our partnership to be ready to meet the challenges of the next century.

Three years ago, I directed my National Science and Technology Council (NSTC) to look into and report back to me on how to meet this challenge. The report makes three major recommendations. First, we must move past today’s patchwork of rules and regulations and develop a new vision for the university-federal government partnership. Vice President Gore has proposed a new compact between our scientific community and our government, one based on rigorous support for science and a shared responsibility to shape our breakthroughs into a force for progress. I ask the NSTC to work with universities to write a statement of principles to guide this partnership into the future.

Next, we must recognize that federal grants support not only scientists but also the university students with whom they work. The students are the foot soldiers of science. Though they are paid for their work, they are also learning and conducting research essential to their own degree programs. That is why we must ensure that government regulations do not enforce artificial distinctions between students and employees. Our young people must be able to fulfill their dual roles as learners and research workers.

And I ask all of you to work with me to get more of our young people–especially minority and women students–to work in our research fields. Over the next decade, minorities will represent half of all of our school-age children. If we want to maintain our leadership in science and technology well into the next century, we simply must increase our ability to benefit from their talents as well.

Finally, America’s scientists should spend more time on research, not filling out forms in triplicate. Therefore, I direct the NSTC to redouble its efforts to cut down the red tape, to streamline the administrative burden of our partnership. These steps will bring federal support for science into the 21st century. But they will not substitute for the most basic commitment we need to make. We must continue to expand our support for basic research.

You know, one of Clinton’s Laws of Politics–not science, mind you–is that whenever someone looks you in the eye and says, “This is not a money problem,” they are almost certainly talking about someone else’s problem. Half of all basic research–research not immediately transferable to commerce but essential to progress–is conducted in our universities. For the past six years, we have consistently increased our investment in these areas. Last year, as part of our millennial observance to honor the past and imagine the future, we launched the 21st Century Research Fund, the largest investment in civilian research and development in our history. In my most recent balanced budget, I proposed a new information technology initiative to help all disciplines take advantage of the latest advances in computing research.

Unfortunately, the budget resolution passed by Congress earlier this month shortchanges that proposal and undermines research partnerships with the National Aeronautics and Space Administration, the National Science Foundation, and the Department of Energy. This is no time to step off the path of progress in scientific research. So I ask all of you, as leaders of your communities, to build support for these essential initiatives. Let’s make sure the last budget of this century prepares our nation well for the century to come.

From its birth, our nation has been built by bold, restless, searching people. We have always sought new frontiers. The spirit of America is, in that sense, truly the spirit of scientific inquiry.

Vannevar Bush once wrote that “science has a simple faith which transcends utility . . . the faith that it is the privilege of man to learn to understand and that this is his mission . . . Knowledge for the sake of understanding, not merely to prevail, that is the essence of our being. None can define its limits or set its ultimate boundaries.”

I thank all of you for living that faith, for expanding our limits and broadening our boundaries. I thank you for carrying on this fundamental human mission through anonymity and acclaim, through times of stress and strain as well as times of triumph.