DOE and the national labs
In “Fixing the National Laboratory System” (Issues, Spring 1997), Charles B. Curtis, John P. McTague, and David W. Cheney outline a set of next steps. Most of those steps are appropriate, but the pace of change needs to be accelerated significantly.
Reducing the Department of Energy’s (DOE’s) management burdens is critical, as highlighted in the Galvin task force report. This area has repeatedly been identified as the single most critical one for DOE reform and has been the focus of innumerable suggestions and critiques. Many critics of DOE are simply tired of waiting, and this frustration is evident in current congressional proposals to abolish the department. I want to give the new secretary time to effect real change, but unless progress is shown very quickly, the calls for abolishing DOE may intensify. In the simplest terms, management burdens will be reduced and results improved when DOE defines outcomes and stops micromanaging the process. Ideally, DOE would trust its contractors, then verify performance.
The authors discuss the need to improve the integration of the laboratories with universities, industry, and other government agencies. I strongly endorse their recommendations, but again the pace of change needs to accelerate. More laboratory involvement with universities is certainly important. The laboratories (especially the weapons laboratories) also need to significantly strengthen their partnerships with industry. Despite this need, DOE still maintains barriers, albeit somewhat reduced, against the use of the laboratories as true national resources whose expertise can be readily tapped by other agencies and industry. In evaluating the laboratories' integration with the other three major research providers, DOE should reexamine its direct funding of large companies in some programs, in order to ensure that these programs are adequately benefiting from the innovation and potential for revolutionary breakthroughs that universities, small businesses, and national labs can inject.
Some DOE burdens have been reduced, especially in business practices. But movements in other areas such as safety and health have been counterproductive. Under the banner of contract reform, DOE is intent on transferring risk to contractors, without evaluating whether all contracts are good candidates for such transfer. The plan to give many of the current rules the force of law may drive organizations, especially nonprofits, away from laboratory management. The plan to shift to external (Occupational Safety and Health Administration and Nuclear Regulatory Commission) regulation and away from internal oversight will help, but this step needs to be taken quickly, not on DOE’s proposed multiyear schedule.
The GOCO (government-owned, contractor-operated) concept of national laboratory management has served the nation well. I support the authors’ enthusiasm for this concept, but DOE needs to move with some urgency to reestablish the relationships on which the concept was founded and depends. The concept is based on “no gain, no loss,” in return for the contractor’s management of critical government functions, such as stewardship of nuclear weapons. In most cases, contractors cannot accept risk that could jeopardize their fundamental missions (such as education, for university contractors). Furthermore, when contract reform seeks to emphasize incentive-based systems, DOE must be extremely careful not to undermine the GOCO concept, which relies on a trusting partnership between government and contractor. When the director of a weapons laboratory certifies the integrity of a nuclear weapon, no hint of an incentive system should affect that decision!
The article emphasizes in closing that the national laboratories must continue to provide scientific and technical leadership for national missions. That must be the overarching goal of future improvements in the laboratory system.
Charles B. Curtis, John P. McTague, and David W. Cheney provide a very sensible approach to managing the Department of Energy (DOE) national laboratories. They correctly state that the size and number of laboratories should not be decided a priori but instead must follow function. I applaud the DOE Laboratory Operations Board for its progress in making the labs more efficient and for attempting to fix a system of governance that is broken. However, as the authors correctly point out, much is left to be done. It will take a sustained effort for several years.
The progress cited by the authors is threatened by two concerns. First, DOE continues to move steadily and without strategic intent toward dismantling the special relationship between the labs and DOE. This relationship, embodied in the GOCO (government-owned, contractor-operated) concept and implemented through the Management and Operations (M&O) contract, is being undermined by certain contract reform initiatives and by promulgation of increasingly rigid M&O procurement regulations. These initiatives are moving DOE into the role of actually operating the laboratories. Historically, the foundation of DOE's most successful programs has been built on trust engendered by the GOCO relationship. In DOE's nuclear weapons program, institutions such as the University of California were asked to use the best technical and management talent available to perform an inherently governmental function: the design and lifetime guarantee of nuclear weapons. In turn, the government provided contractual flexibility and broad indemnification to these nonprofit contractors. The current DOE initiatives shift more risks to the contractors, push for inappropriate incentives, and introduce more rigid governmental controls, to the point where the contracts will be fundamentally incompatible with the public-service orientation of nonprofit contractors. What is at stake is the ability of the government to continue to attract the world-class talent needed to perform many of the missions cited by the authors.
Second, making the labs more effective and reducing the burden on them requires that fewer federal and laboratory employees do paperwork, auditing, and compliance-related activities. U.S. industry has found that such staffing reductions are imperative to increase productivity. However, our experience at Los Alamos has taught us that it is very difficult to overcome congressional pressure to preserve jobs. Also, the reductions we did make were not matched by corresponding reductions in the number of federal employees overseeing us, creating an even greater mismatch than before. It will take very strong DOE leadership to deal with the fundamental dilemma that together, between the government and the labs, we have far too many people doing jobs that neither add to scientific productivity nor produce a safer workplace.
Science funding squeeze
The excellent article by Philip M. Smith and Michael McGeary ("Don't Look Back: Science Funding for the Future," Issues, Spring 1997) inspired me to think along two somewhat divergent paths. On the one hand, I agree wholeheartedly with their prescription for science and technology (S&T) policy. Yes, we need better priority-setting mechanisms. Yes, we need to reassess key policy mechanisms, especially the peer review process. Yes, we need to make our system more flexible and agile. On a number of these issues, we are already taking steps in the right direction, such as through the adoption of new merit review criteria at the National Science Foundation and the National Institutes of Health. On others, in particular the interplay of discovery and application, we still find ourselves sidetracked by outdated rhetoric.
On the other hand, as vital as all these issues are, they also bring to mind a timely adage about the politics of budgeting: The process is not the problem; the problem is the problem. Despite its faults, our current system delivers for the nation. There is strong evidence that over the past 50 years, innovations emerging from S&T have generated up to half of our nation’s real economic growth. Top economists, such as Edwin Mansfield of the University of Pennsylvania, have found that public investments in research, especially academic research, generate very high returns and play a major role in the development of new products and processes in industry. With this track record, the words “first, do no harm” take on added meaning.
Our greatest challenge is to secure an adequate level of investment in our nation's future prosperity and quality of life. Science and engineering have thus far fared very well in the push to achieve a balanced budget. As Jack Gibbons noted at the recent American Association for the Advancement of Science's Science and Technology Policy Colloquium: "This is the fifth year in a row that President Clinton has proposed to increase research and technology funding while at the same time putting our country on the path toward fiscal sanity."
This bodes well for our collective future, but continued success is by no means a fait accompli. The projections for the budget category known as domestic discretionary spending are of particular concern. This category includes most of what we think of as “government”: parks, prisons, highways, food safety, and countless other functions, including all of federal nondefense R&D. Thirty years ago, these functions accounted for nearly a quarter of all federal spending. Now they are barely one-sixth of the total, and this one-sixth slice of the pie will shrink to roughly one-seventh over the next five years, according to most projections of the recently announced balanced budget agreement.
The implication of this trend for science and engineering is a decline in purchasing power for nondefense R&D on the order of 15 percent. This is why I often say that we are on the verge of running a high-risk experiment to see if our nation can scale back its investment in critical areas such as research and education and still remain a world leader in the 21st century.
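In round numbers, the 15 percent figure follows from the share arithmetic above, under the simplifying assumptions that total federal spending stays roughly constant in real terms and that nondefense R&D shrinks in proportion to the discretionary category as a whole:

```latex
1 - \frac{1/7}{1/6} \;=\; 1 - \frac{6}{7} \;=\; \frac{1}{7} \;\approx\; 14\%
```

That is, a slide from a one-sixth share to a one-seventh share of a flat total is itself a cut of about one-seventh, consistent with the "on the order of 15 percent" estimate.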
My hat’s off to Smith and McGeary for injecting thoughtful ideas into overdue discussions on the future of U.S. S&T policy. We must also ensure that our discussions are broad enough to address both the internal and external challenges facing our highly successful system.
The article by Philip M. Smith and Michael McGeary on future science policy is a most interesting and insightful analysis of the current challenges to the nation with regard to science funding. In particular, it offered a sophisticated presentation of the bureaucratic politics of developing the federal budget, an aspect of science policymaking that is often neglected in “rational” approaches to the subject.
However, there is one significant (and surprising) omission from their account that deserves attention: the potential role of international cooperation among scientists as advances in information technology begin to change the opportunities and costs of communication across national boundaries.
It remains true, whatever the rhetoric, that public funding of science is primarily a national enterprise undertaken to serve national goals. U.S. government efforts to encourage more international cooperation to serve either scientific or budgetary purposes have had mixed success for big or small science. Budgetary processes and fickle administration or congressional support have often made cooperation too bureaucratically difficult or have earned the United States the reputation of being an unreliable partner.
The interesting question today is whether current and future information technology developments will so reduce the cost and increase the effectiveness of international communication as to change the incentives for genuine cooperation and perhaps have a substantial impact on funding requirements. Though the evidence is only anecdotal so far, the experience of scientists working with modest equipment at universities already has been markedly altered by the ease of planning collaborative experiments, distributing raw information, and consulting partners throughout the world. The funding needs that often bedeviled cooperation at this level (initial funding for project planning before peer review, and multiyear commitments) become less important, perhaps even irrelevant in most cases.
The picture may not be altered as much for big-science cooperation, because large assured funding for equipment and operations is still required. On the other hand, the ease of planning experiments among many collaborators, of including students without incurring large transportation costs, and of sharing the operation of large facilities from a distance may have dramatic effects on the quality and payoff of cooperation and correspondingly of interest in it. Conceivably, funding requirements could be seen in quite a new light as a result.
None of this is yet clear, especially for the funding of big science. It is possible that the impact will be only modest. However, that may also be dependent on the wisdom of national science policies in the coming years. To ignore the potential opportunities of the new situation in international scientific cooperation when planning national science policy, as Smith and McGeary have done, seems unwise at best.
Making schools better
Norman R. Augustine, chairman and chief executive officer of Lockheed Martin Corporation, is one of America's outstanding business leaders. His article in the Spring 1997 Issues, "A New Business Agenda for Improving U.S. Schools," shows once again that he's one of our top education leaders as well. The comprehensive nine-point agenda for improvement he describes is right on target. It includes a strong focus on setting high academic standards and developing assessments that measure whether those standards are being met.
President Clinton has proposed voluntary national tests in fourth-grade reading and eighth-grade math that can help give our nation the kind of assessments Augustine advocates. I am happy to report that Maryland, where Lockheed Martin is headquartered, was the first state to announce that it would administer the tests to its students, beginning in 1999. And the Business Roundtable’s Education Task Force, chaired by Augustine, has also endorsed the tests. These tests will give parents, teachers, and state leaders an opportunity to compare the performance of their students with the performance of students in other states and nations. This will provide national benchmarks that can help states to define and refine their own standards of excellence.
The decision to test fourth-grade reading and eighth-grade math, which would include algebra and some geometry, was very deliberate. Reading and math are the core basics, and fourth and eighth grades are critical transition points in a child’s educational experience.
By the fourth grade, children must be good readers or they cannot go on to learn the rest of the core curriculum. Too often, children who struggle with reading early on fall behind in school, fill up our special-education classes, or lose interest and drop out. I am convinced that a strong and early focus on reading will go a long way toward reducing special education and remedial costs, reducing truancy, and keeping more young people from dropping out of school.
When it comes to math, the vast majority of experts view geometry and algebra as the gateway courses that prepare young people to take college-preparatory courses in high school. Currently, only 20 percent of our young people are taking algebra by the end of the eighth grade. Yet in many countries, such as Japan, 100 percent of all eighth graders are taking algebra. We've got to catch up, or fall behind.
These tests will also help to improve accountability. I believe that parents whose children come home with “A’s” on their report cards but low scores on these tests will begin asking some hard questions and hold their schools more accountable. This will be a very healthy development. We must not tolerate failing schools. I will be happy if these tests light fires under some people.
When it comes to improving education, raising expectations is a big part of the battle. We must ask all our children to stretch their minds and get ready for the 21st century. Our kids are smarter than we think. I’m confident they’ll meet the challenge.
Norman Augustine’s article has good features and serious flaws, and both deserve commentary. On the positive side, the article draws legitimate attention to the work of the Business Roundtable’s Education Task Force, which Augustine chairs. It is good to be reminded that some chief executive officers in the business community truly care about education and to learn about some of their actions that are designed to support our schools.
On the negative side, the article displays ignorance about evidence, errors of logic, and neoconservative educational cant. It begins by quoting A Nation at Risk, which warned in 1983 (without benefit of evidence) of “a rising tide of mediocrity” in U.S. schools. It then goes on to assert that some “progress” has recently been made and that “more students are doing better than they were a decade or two ago,” and implies that this improvement came about simply because the business community is now more involved in education.
But is this “progress” sufficient? Indeed, it is not. According to Augustine, “ample data . . . [document the continuing] failures of the U.S. K-12 education system. . . . The problem is that most U.S. schools are not good enough. . . . More and more [their graduates] are simply ill-prepared, not just for jobs and careers but also for the basics of survival in the 21st century.”
This sounds like serious business, but somehow Augustine never gets around to documenting his charges. The “ample data” he offers are confined to an unsupported quote about poor literacy from the 1996 National Education Goals Report, statistics about derived judgments (but no basic achievement data) from recent National Assessment of Educational Progress reports, and a claim from an uncited National Center for Education Statistics document about urban high-school graduation rates. That Augustine offers the opinions of others but no hard evidence to back his alarmist judgments is hardly surprising. Because neither he nor anyone else can know for sure what lies in the future, it is easy to condemn schools for “failing to prepare students for the 21st century” without actually saying anything.
This does not mean that all U.S. schools are successful, of course. In fact, Augustine indicates awareness of inequities in the system when he writes that "our best [schools] still rank with the very best on Earth" but then fails to ask why other schools do not meet this high standard. One need not be a rocket scientist to answer this question. The poverty rate among children is far higher in our country than in other advanced nations, poor children do very badly in school, and some schools must contend with large numbers of poor kids. Worse, because school funding is tied to local resources in our country (but not elsewhere), U.S. schools responsible for poor children often receive only a fraction of the funding that is given to schools in rich suburbs.
In short, the U.S. education system does indeed "fail," but its problems are not those discussed by Augustine. It follows that the "cures" he advocates (high standards, performance assessments, penalties for schools that "persistently fail to educate their students," and so on) will not have the effects he intends; indeed, they will only impose additional burdens on badly funded schools that are responsible for our most impoverished students. If Augustine is serious about improving education, he should spend more time catching up with ideas and evidence from scholars in the education community who are aware of its real problems and are pioneering exciting programs to solve them.
I read with great interest Lewis M. Branscomb’s “From Technology Politics to Technology Policy” (Issues, Spring 1997). In the few short months since Branscomb penned the article, much has changed. Since I took over as chair of the House Committee on Science in January of this year, the committee has reported out 10 authorization bills by voice vote, marking a new era of bipartisan cooperation on issues related to science and technology.
The bills reported out totaled over $25 billion per year in budget authority for civilian science programs. The totals represented a 2.7 percent increase for civilian R&D spending under the jurisdiction of the Science Committee for next year. Included among the measures was H.R. 1274: the National Institute of Standards and Technology (NIST) Authorization Act of 1997, a bill sponsored by Technology Subcommittee Chairwoman Constance Morella.
H.R. 1274 included authorizations for NIST’s Advanced Technology Program (ATP) and Manufacturing Extension Partnership (MEP) program. Although I have been a supporter of the MEP program, I have had substantive concerns about ATP. The General Accounting Office (GAO) has reported that 63 percent of the ATP applicants surveyed did not look for private sector funding before applying for an ATP grant. Further, roughly half of the ATP applicants surveyed reported that they would go forward with their projects even without ATP grant funding. These findings are a good indication that a significant number of ATP grants are simply displacing private investment in technology development. In addition to the GAO’s findings, ATP has a troubling history of carrying over large amounts of money from one year to the next. Since its inception in 1990, ATP has never spent all the money appropriated for it, so that the program has been overfunded in every year of its existence.
To address these concerns, the Science Committee reformed ATP through H.R. 1274. First, the bill makes two important structural changes to the program. It allows ATP grants to go only to projects that cannot proceed without federal assistance, and it raises the private sector match required for most ATP grants to 60 percent. These changes should further leverage scarce federal research dollars while helping to prevent ATP grants from simply displacing private capital. Second, the bill authorizes ATP at $185 million in fiscal year 1998. That is a decrease from the existing appropriations level, and it addresses the issue of unobligated funds. This contrasts dramatically with the 22 percent increase recommended by the Clinton administration.
The bottom line is that every dollar we spend on ATP is not spent on some other form of federal R&D. Although ATP may have a legitimate role in the pantheon of federal R&D programs, it is limited. As we prioritize R&D spending, we must look to leverage federal resources. Business simply will not fund long-term high-risk basic research; therefore, it is incumbent on the federal government to step in and fill the void. For this reason, House Speaker Gingrich and I have commissioned Congressman Vernon Ehlers (R-MI), vice chairman of the Science Committee, to lead congressional development of a new, sensible, long-range science and technology policy. I look to this study to review proposals such as Branscomb’s and to help establish a bipartisan R&D policy.
The House of Representatives passed H.R. 1274 by voice vote on April 24, 1997 without amendment. I believe that H.R. 1274, along with the committee’s and Rep. Ehlers’ work, is a significant step toward moving “from technology politics to technology policy.”
In “From Technology Politics to Technology Policy” (Issues, Spring 1997), Lewis M. Branscomb offers an approach to R&D funding that he believes can go a long way toward eliminating politics and establishing bipartisan support. Such an end is much to be desired. Fortunately or unfortunately, it is only through politics that policies are formulated, and politics are endemic in Washington.
Branscomb focuses on some of the semantic problems that traditionally have plagued definitions of basic and applied research, and his distinctions are useful. Although most of his suggestions make sense, I have reservations about some of the specifics, such as his concept of the way in which the Advanced Technology Program (ATP) of the National Institute of Standards and Technology (NIST) should operate in conjunction with the states and consortia of companies. This is a cumbersome arrangement at best. In my opinion, NIST has done a fine job of charting key areas to be supported and of seeking the best proposals from industries large and small, using a process that seems reasonably streamlined.
Branscomb’s guiding principles are unexceptionable and his suggestions concerning agency roles and programs make good sense. His plea for public/private partnerships that leave to government and industry that which they separately do best is sound. However, the principle leaves many gray areas, and these are the areas that have become political battlegrounds over the years. The article is vintage Branscomb in its sweep and understanding and is a welcome contribution to the debate about the government’s role and agency missions.
Telecommunications in the global market
Cynthia Beltz’s main point, that technology is moving more rapidly than international treaty negotiations in bringing competition to global telecommunications markets, is certainly true (“Global Telecommunications Rules: The Race with Technology,” Issues, Spring 1997). However, economic theory rarely provides insights into the timing of market forces. Consequently, I believe she overstates the extent to which the market power of the incumbent telecommunications providers is likely to deteriorate in the near term. The competition that we have seen to date has been, to mix culinary metaphors, cream-skimming of low-hanging fruit. Increasing competition and lowering prices in telecommunications markets generally may be more difficult, especially given the massive investments that this technology requires.
Consider the growth of callback services for international telephone calls, which serves as Beltz's main evidence. The success of those services may be a unique instance in which U.S.-based providers have been able to circumvent protectionist regulation in other countries because of the special nature of long-distance calls. Long-distance voice is a narrowband service. It requires no specialized equipment, software, or familiarity with technology. It piggybacks nicely on local telephone service. You can do it in the privacy of your own home. And using a callback service results in immediate cost savings.
In contrast, many other uses of the telecommunications infrastructure require not only broadband access but also ancillary investments, such as computers and specialized training, that may take some time to amortize. Such investments are abundant in the United States but are less widely distributed abroad; this lack may allow foreign incumbents to exert continued control.
Even in the United States, as Beltz notes, competitive markets emerged in long distance long before they did so in other services. I would argue that the real midwife of domestic competition in cellular telephony was not technology but the promise of receipts from the spectrum auction that could be used to alleviate the federal budget deficit. Competition in local service has proved especially difficult to foster, with access to the network being jealously guarded. Why should it be easier abroad?
Furthermore, in U.S. cellular markets, substantial price competition often arises only after a third firm has entered. In that context, when new providers enter monopolized foreign markets, they could easily get into cozy pricing relationships with incumbent firms rather than compete away the profit stream between them.
Lastly, the emergence of competitive markets in only part of the telecommunications network may lead to price increases elsewhere. Analysts have long forecast that many local telephone rates in the United States are likely to rise with the emergence of competition. The system of cross-subsidies that is used to encourage universal service depends on extraordinary profits in some areas of the network to reduce local rates elsewhere. Likewise, in the international arena, incumbent providers might raise prices on the portions of the network they control in order to make up for losses elsewhere. Again, control of access to the network will provide those carriers with many fruitful opportunities to do so.
Cynthia Beltz’s review of the recently concluded multilateral negotiations on basic telecommunications makes clear why and how these negotiations were the first major breakthrough in the new trade agenda launched more than a decade ago by the Uruguay Round. This landmark agreement captures the profound difference between the domain of the postwar General Agreement on Tariffs and Trade (the GATT) and the new World Trade Organization (WTO). The GATT was primarily concerned with the removal of the transparent border barriers to trade that were erected in the 1930s and rested on a concept of shallow integration that accepted differences in regulatory systems as a given. The services negotiations of the WTO are focused on impediments to trade and investment that stem primarily from domestic regulations, which are often rather less than transparent and often differ significantly from country to country. The telecom negotiations thus capture the essential characteristics of deeper integration: Trade and investment are complementary means of access, and effective access requires an inherent push toward regulatory harmonization.
But Beltz's article raises a number of other significant features of the telecom negotiations. She suggests that rapidly changing technology leaves negotiators further and further behind, so that they may be redundant at best or counterproductive at worst. Put another way, if the combined forces of globalization, technology, and investment are redesigning the global playing field, what is the role of governments and intergovernmental institutions, such as the WTO? It would have been very useful if Beltz had presented her views on a new raison d'être for multilateral rules in a regime of deeper integration. If the forces of globalization will secure, over time, harmonization of regulation (mobile capital can, through locational competition, effectively engage in regulatory arbitrage), then is it the role of the WTO to monitor the regulators? Given its paucity of expertise and (as is amply evident from Beltz's article) the extreme complexity of the legal, technological, and regulatory aspects of this sector, how can the WTO carry out such a function? How will the dispute settlement process function when such expertise is so scarce not only in the WTO but probably also in a majority of member countries? What will be the role of the private sector in this regard? Are new forms of cooperation between the WTO and self-regulatory bodies required?
These questions are not being raised to criticize Beltz’s article. Quite the contrary. The point of this letter is to request an encore!
Cynthia Beltz provides an excellent analysis of the difficulties encountered by regulators confronting an environment where the pace of technological change outstrips the ability to ensure that regulation is appropriate or enforceable. This does not necessarily have serious implications for the relevance of the WTO agreement on basic telecommunications. The WTO’s focus is not on regulatory regimes per se but on their application: on eliminating discrimination against foreign providers and ensuring that they have access to markets. Although a major weakness of the General Agreement on Trade in Services is that members may invoke derogations to the “national treatment” and “market access” principles, the intention is that these will gradually be negotiated away. Achieving this will take time and will require cross-issue linkages and tradeoffs. In this connection, Beltz is surely correct to argue against sector-specific negotiations. The basic point, however, is that the focus of the WTO is on elimination of discrimination; it does not do much to specify the substantive content of regulatory regimes.
An important issue touched on by Beltz concerns dispute settlement. Under the WTO, this is government-to-government, takes a long time, and will often not be particularly helpful to a specific firm even if a case is won. Allowing for person-state dispute settlement could greatly increase the relevance of WTO agreements to many enterprises. Noteworthy in this connection is that one of the major elements of the Organization for Economic Cooperation and Development’s planned Multilateral Agreement on Investment is to provide for investor-state arbitration.
Beltz’s point that greater private sector involvement is required is very important in this connection. A perennial problem in the process of trade policy formation and negotiation is the absence of comprehensive information on national policies and proposals, their costs and benefits, and whether they violate WTO norms. Concerted efforts by global business to provide such information would help push the liberalization process along. One of the key constraints hampering progress in the basic telecom negotiations was uncertainty, on the part of developing countries in particular, regarding the costs and benefits of the status quo and alternative proposals for liberalizing access to telecom markets. Multilateral institutions have a role to play here; as noted by Beltz, the World Bank was active in helping a number of developing countries determine if the agreement was in their interest. But as the primary users of the multilateral trading system, global businesses are the main source of information on the policies that are actually pursued and their economic impact. Developing mechanisms to induce greater cooperation among international businesses in collecting such data and making it available to governments would be very helpful as WTO negotiations move beyond telecommunications to other areas.
Cynthia Beltz provides an insightful review of the changing nature of the international telecommunications market. Although technology will continue to loosen the grip of monopolies around the globe, Beltz astutely points out that the greatest monopoly killer remains U.S. leadership at home.
When we adopt restrictive policies, the world follows, often benefiting at our expense. Passage of the Telecommunications Act of 1996 showed U.S. resolve to practice what it was preaching at the WTO and gave negotiators tremendous leverage in Geneva. In fact, one of the main hurdles faced by the United States in expanding the WTO agreement is our insistence on restricting foreign ownership here. Although we preach the merits of permitting 100 percent foreign ownership, it is readily apparent that Congress would never allow a foreign carrier to purchase a regional Baby Bell. Foreign governments are fully aware of these contradictions and use them to justify restrictions in their markets.
The Internet is another area in which the United States must lead by example in order for it to become a truly global communications medium. Beltz correctly points out that countries with competitive markets (and hence lower prices) have much higher telecommunications and information technology usage rates. Security, transactional costs, and censorship are also issues with global implications for the success of the Internet.
Unfortunately, the United States is leading the tide toward restrictions and protectionism. Although the administration’s much-anticipated White Paper on the Internet advocates a “hands-off” government policy approach, government actions too often contradict this mantra. Congress still has not passed a moratorium on state and federal taxation of electronic commerce; the Communications Decency Act restricts the information that can and will be available; and the export of encryption technology remains as restricted as a munition, denying global firms adequate security on the Web. All of these protectionist and restrictive policies give the United States little room to promulgate a noninterventionist government policy worldwide.
It’s important not to forget that it will be some time before new technologies (such as Internet telephony) make headway in the $54 billion international market. In the meantime, monopolies will reap monopoly profits and maintain the ability to distort competition in the international market. The Federal Communications Commission policy restricting one-way international simple resale is based on the very real fear that foreign monopolies will extract monopoly profits from U.S. consumers and use this revenue to “dump” other services in the U.S. market. As the United States has learned in so many other industries, it is crucial to maintain vigilance against anticompetitive behavior on the part of foreign carriers to ensure that consumers benefit from competitive markets in the future.
Limiting scientist immigration
I agree with Alan Fechter and Michael S. Teitelbaum’s suggestion that we create an expert panel to recommend periodic changes in the level of immigration of scientists and engineers (“A Fresh Approach to Immigration,” Issues, Spring 1997). I would argue, however, that they are too timid in outlining its charge. Such a panel could be given the authority to recommend changes in employment-based immigration preferences within a broad range set by Congress, and it could do so annually in an effort to adjust immigration to the changing labor market for scientists and engineers.
Fechter and Teitelbaum stress the projections of shortages of scientists and engineers made just before the 1990 immigration law as a primary reason why employment-based preferences were greatly expanded in that legislation. I agree that the projections were a factor, but I think another influence was the concern that a very heavy reliance on family unification as a criterion for immigration had led to an increasing gap in the skills and human capital of recent immigrants vis-à-vis the broader U.S. work force. Increased preferences in the 1990 law for professors, researchers, and professionals with advanced degrees would not only help avert a possible shortage of scientists and engineers but would also help increase the average skill level of immigrants. George Borjas of Harvard has rather convincingly made the case that the net economic benefits from immigration come primarily from the minority of immigrants with above-average skills.
We cannot forecast the Ph.D. labor market several years in advance because we cannot anticipate the events that cause changes in R&D funding levels. Thus, we cannot avert a surplus of new Ph.D.s from time to time. We can, however, avoid making the surplus conditions worse by taking steps to reduce the chance of having record immigration levels at the wrong time. I would prefer to see a high level of immigration of scientists and engineers that is limited at times of weak demand, when our new Ph.D.s are faced with a reduced number of job openings.
The past few years have seen us set records for scientific and engineering immigration even as we knew that R&D funding cutbacks would worsen the poor job market faced by new Ph.D.s in the United States. Those who don’t see the harm in this should consider the recent words of astronomer Alan Hale. Given extraordinary media access when the comet bearing his name was visible in the sky, he said that career opportunities for scientists were so limited that he had been unable to find work to adequately support his family and that he couldn’t in good conscience encourage students to pursue science careers. The Fechter/Teitelbaum proposal can help avert the conditions that produce such advice.
Alan Fechter and Michael S. Teitelbaum note that “we are already beginning to see declines in enrollment of foreign citizens in our graduate science and engineering programs.” Couple that with the insuperable problem of projecting the labor market that faces potential graduate students deciding whether to embark on six to seven years of doctoral study, and one could reasonably argue that in about five years the United States will indeed face a doctoral shortage: too few Ph.D.s, foreign or domestic, for a labor market desperate for highly trained scientists and engineers.
There is a simple response to this likely problem: Offer U.S. citizenship to all foreign students who earn their doctorates in science, mathematics, or engineering in U.S. schools. The advantages are several. The so-called “foreign student” problem is mooted, because these students, if successful, can become Americans. The country enriches its stock of highly trained citizens and avoids a potential shortage. Further, if the home countries of the foreign students want them to return, they must offer attractive facilities and resources, thus improving the worldwide climate for research by highly trained young men and women.
No DARPA for NIH
Cook-Deegan proposes that a portion of the National Institutes of Health’s (NIH’s) grants be based not on evaluations by a peer-review process but on a Defense Advanced Research Projects Agency (DARPA) model, in which staff experts choose how to distribute the research funds. DARPA works because there is a broad consensus about its mission (to enhance national security) and thus about how to judge its output (through strategic analysis of the technologies that its support made possible). Staffers can’t simply decide to fund what they consider to be “good science,” at least not indefinitely; their decisions are subject to some accountability to their agency’s ultimate clients. A DARPA funding mechanism could also work in supporting some biomedical research, but only if a similar consensus existed as to the mission of that research and how the success of the biomedical grant mechanism could be judged.
Clearly, this consensus is lacking. Some scientists (and many legislators) believe that our country supports biomedical research as part of its mission to ameliorate the human condition by finding ways to prevent and treat terrible diseases. Hence the way to assess the success of NIH would be to determine whether the projects it supports have led to significant progress in treatment discovery. Other scientists (and probably fewer legislators) believe that the sole mission of NIH is to acquire knowledge about how the body works, that NIH-funded scientists have no special obligation to participate in the treatment-discovery process, and that the worthiness of a scientist’s oeuvre should be assessed solely on the basis of the enthusiasm it generates among other scientists (as shown by how often they cite his or her publications).
Until there is real agreement about whether NIH has either or both of these missions and until some objective standards are established for assessing its success, a DARPA-like mechanism for choosing what to support is neither more nor less likely to be successful than the existing peer-review system. Indeed, how would one even know whether it was successful?
Environment: a reasonable view
By juxtaposing “Real Numbers” (Jesse H. Ausubel on environmental trends) and Martin Lewis’s review of Paul R. Ehrlich and Anne E. Ehrlich’s book The Betrayal of Science and Reason: How Anti-Environmental Rhetoric Threatens Our Future (Issues, Winter 1996-97), you draw attention to the stark contrast between a pragmatic, realistic approach to environmental concerns and an extremist vision of impending apocalypse.
Ausubel has painstakingly gathered statistical data, often from the mid-nineteenth century to the present, to track a broad range of human activities, such as energy use, agriculture, water, and municipal waste, and their impact on the environment. Both the raw data and the data expressed relative to gross national product (GNP) show almost continuous improvement over many decades. As the world’s primary energy source shifted from wood in 1860 to coal and then to oil, carbon emissions per unit of energy have dropped by nearly 40 percent, and energy use in the United States, expressed as oil equivalent per unit of GNP, has dropped by 75 percent since 1850. Agriculture continues to be plagued more by overproduction than by a shortage of arable land, and the population growth rate has been declining for a quarter of a century. Indeed, the UN recently reduced by 500 million people its 2-year-old forecast of population in 2050. The overall picture supports belief in a sustainable world and is directly related to the widespread and accelerating application of science and technology.
On the other hand, the Ehrlichs, who have focused in the past on impending disaster, turn their attention in their new book to those who disagree with them. The very title makes it clear that any disagreement with their belief in impending apocalypse is not just anti-environment but actually corrupts and betrays both science and reason! Placing the Ehrlichs’ view next to Ausubel’s reasoned and science-based argument destroys its credibility and brings into question your editorial wisdom in devoting nearly three pages to a review of their book. But thanks for the juxtaposition. That was a wise decision.
It was kind of Martin Lewis to defend my reputation in his review of Paul and Anne Ehrlich’s new book (“In Defense of Environmentalism,” Issues, Winter 1996-97). Readers of Issues should know, however, that just because the Ehrlichs accuse me of errors does not make the accusation true.
For example, the Ehrlichs assert that I committed a serious error in A Moment on the Earth by writing that temperatures have declined in Greenland without adding the vital qualifier that there could be regional cooling even as the global trend was toward warming. But after the Greenland sentence that they quote, my very next sentence declares, “Temperature shifts are not uniform.” The next four paragraphs discuss the fact that it may be unusually cool in one part of the world while the overall global trend is toward warming.
Recent experience has taught me that a significant portion of the environmental debate is conducted using the sophism, “My opinions are fact, your opinions are errors.” Environmental reform will not proceed beyond the stage of ideological gridlock until all parties stop hurling bogus accusations of errors and engage in reasoned discourse.