Reexamining the Patent System

Is the patent system working? It depends on whom you ask. Which industry, upstream or downstream firms, public companies or small inventors? Opinions are plentiful, but answers supported by data are few. The patent system is at the heart of the knowledge economy, but there is surprisingly little knowledge about its costs and benefits. If the system is to promote innovation as effectively as possible, we need to know much more about how patents are used and licensed and what effects they have on innovators and business practice. Since innovation is one of the key engines of economic growth, the cost of a dysfunctional, or merely suboptimal, patent system could be substantial.

Signs of dysfunction are spreading, stimulating interest in legislation to reform our patent system. But to date, progress toward enacting any reforms has been stymied by inter-industry disputes over key provisions. Although patent reform is often contentious because of the divergent economic interests at stake, the current political struggle between the pharmaceutical and biotech industries on one side and information technology and financial services on the other is unprecedented.

In a recent acclaimed book, Patent Failure, economists James Bessen and Michael Meurer review the literature on the private value of patents and costs of patent litigation. They conclude that the system now functions effectively only for the pharmaceutical and chemical industries. Whereas 20 years earlier the system provided a net benefit across the board, in effect it now imposes a tax on other industrial sectors.

The conflict raises a fundamental question: Is a one-size-fits-all system viable in an age when technologies and the processes of innovation are so diverse? Given the high stakes, we need to know the answer to this question. If the end result of a uniform system is to favor innovation in one field at the expense of others, then the patent system will effectively influence the allocation of capital to different economic activities, resulting in an unintentional form of industrial policy.

At least one of the underlying problems is easy to recognize. In pharmaceuticals, there is a close relationship between patents and products: Blockbuster drugs are characteristically very dependent on a single primary patent. Conversely, in information technology, there is a great distance between a patent and a product.

Computers and computer programs may contain thousands of patentable functions. Each may represent only a tiny fraction of the market value of the product, which may in fact derive primarily from product design and the integration of components. But a patent dispute over a single component function may result in an injunction against the entire product line. This creates an incentive for opportunistic patent owners to “hold up” companies that have made a very large investment in a product. In other words, a patent owner who obtains an injunction can potentially extract settlements that approach the financial and opportunity cost of withdrawing the product from the market and redesigning, remanufacturing, and remarketing it.

Although there are numerous press reports describing the tactics of such “patent trolls,” we actually know relatively little about how common or successful holdups are. What we do know is limited to just a few industries, such as semiconductors and cellular telephony, and even those examples are disputed.

Why the mystery?

The reasons why our knowledge about patents is so limited and why we have not even begun to collect sufficient data are numerous. Many stem from an overly simplified, idealized understanding of how patents work. Patents are commonly understood to be protections against theft by unscrupulous imitators. And although a patent is often seen as an affirmative right to exploit a technology, it is in fact a right to exclude others from using the patented invention. Thus, a patent holder can be blocked from using his or her own patented invention because of patents owned by others.

The patent system is a form of government regulation, but it regulates indirectly, and mostly out of public view. The U.S. Patent and Trademark Office (PTO) grants patents as a private right, and it is up to the patent owner whether and how to assert the right. Thus, patent issues are framed narrowly within disputes arising between two parties under particular circumstances. And the overwhelming number of these conflicts go undocumented because they stop well short of being litigated in court.

Economic considerations play no explicit role in the patent system unless and until the ultimate train wreck occurs—that is, when litigation results in a finding of infringement and damages need to be calculated. In a rights-based legal system managed by specialized lawyers, patents tend to be seen as absolute entitlements: as ends in themselves rather than as tools for promoting innovation. The PTO focuses all its attention on the original decision to deny or grant a patent. Once patents go out the door, the PTO does not attempt to monitor their value, to document how they are used, or to uncover how they are abused.

A dearth of information on assertions, licensing contracts, settlements, and other business aspects of patents means that there is little empirical foundation on which to develop sound policy. Practically all we know about these business activities is anecdotal. This is of little help when each patent is, by definition, unique, and when the business context varies so greatly.

For all the talk of “patent quality,” there is no consensus on what the term means or how it can be measured consistently, let alone how problems with quality should be fixed. Deep confusion exists over the obligation of technology creators and users to read, assimilate, and evaluate the massive database of current patents in order to avoid infringement.

Even at the level of individual patents, there is often considerable uncertainty about where the boundaries lie, whether the patent is valid and enforceable, and whether a particular product actually infringes. Patents are intended to disclose new knowledge to the public, but they are written by lawyers for lawyers. If you want to know what the patent means and whether it is valid, you need legal assistance. If you want to know whether your product infringes, you are advised to get a legal opinion. In 2007, a legal opinion on validity cost an average of $13,000, and an opinion on whether a product infringes cost another $13,000.

But $26,000 does not buy certainty. Because the interpretation of claims at trial is reversed on appeal one-third to one-half of the time, it is difficult to see how our patent system provides adequate notice of existing or pending property rights to other inventors and the public in general. Given this uncertainty and the sheer volume of patenting in some fields, inadvertent infringement becomes a nearly unavoidable hazard for innovators seeking to bring complex products and services to the marketplace.

The lack of information on the cost-effectiveness of the patent system is inexcusable for a government function that has come to play such a pervasive role in today’s knowledge-driven economy. Although patent policy is inevitably determined via the adjudication of lawsuits, judges lack an adequate framework for evaluating the efficacy of the system as a whole. Judges correctly point to Congress as the proper arbiter of policy, but Congress also is bedeviled by a lack of adequate data. Besides, Congress is burdened with more politically salient issues.

Thus, individual patent cases are decided because they must be, while meaningful policy decisions are deferred for lack of data. No institution has the responsibility to collect data on patents and how they are used, and no organized constituency demands it. There is constant pressure to keep patent application fees low to encourage more patenting, but the much higher legal and business costs of patent practice and litigation are not officially monitored or measured.

Data needs

We need a patent database that, like a land title registry, shows a chain of title and tells us who has what interests in the patent. We need unique identifiers for assignees and a database that tells us whether an assignee is independent or owned by another firm.
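As a purely illustrative sketch of the kind of records such a registry might hold (the names and fields here are hypothetical, not an existing PTO data product), the core of a chain-of-title database could be as simple as an assignee table with stable identifiers and a dated log of recorded interests:

```python
from dataclasses import dataclass
from datetime import date
from typing import List, Optional

@dataclass
class Assignee:
    assignee_id: str                           # unique, stable identifier across name variants
    name: str
    parent_assignee_id: Optional[str] = None   # None if independent; otherwise the owning firm

@dataclass
class TitleEvent:
    patent_number: str
    event_date: date
    event_type: str                            # e.g., "assignment", "security interest", "exclusive license"
    from_assignee_id: Optional[str]
    to_assignee_id: str

def chain_of_title(events: List[TitleEvent], patent_number: str) -> List[TitleEvent]:
    """Return a patent's recorded interests in date order, like a land-title chain."""
    return sorted(
        (e for e in events if e.patent_number == patent_number),
        key=lambda e: e.event_date,
    )
```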

It would be useful to have estimates of the number of innovations firms make, what proportion they choose to patent, and with how many patents. We need an understanding of how innovators cope with the problem of inadvertent infringement, especially in areas such as software, where low barriers and/or intense competition result in prolific, widespread innovation at many levels of granularity and abstraction.

We need to know about the life of patents after they are issued and before they become a matter of public record in litigation, because many are asserted or licensed in some form but never fully litigated. It would help to know the frequency and cost of searching patents and other prior art to avoid infringement, the frequency of letters putting innovators on notice of patent claims, and the outcomes of those letters. We need information on the number and terms of settlements, patent-licensing agreements, and transfers of patents. We need to know whether these agreements are really manifestations of technology transfer or capitulation to legal bullying. Although gathering such information must respect the need for confidentiality in some aspects of business practice, acquiring and analyzing as much data as possible is essential to understanding and promoting the efficiency of our nascent markets for technology.

R&D and patent information is generally available for larger publicly held firms, but newer and smaller firms are underrepresented in the available databases. And although the Census Bureau collects data on R&D for smaller firms, accessing these data can be difficult. This is unfortunate, because understanding the effects of intellectual property on decisions to form and invest in new companies is essential to understanding the growth of the economy.

Most of the existing databases focus on manufacturing firms, which are traditionally the predominant users of the patent system and where there is a consensus about the definition of R&D. But with more permissive standards for patentable subject matter, service firms have increasingly sought patents on business models and practices. The service sector, which accounts for the majority of economic output, can no longer be ignored simply because it is difficult to determine what should be considered R&D in a service firm. If our definitions of R&D require refinement, we should begin that process today.

Where should economic insight be built into the system? At one level, investors need meaningful reporting about patents as sources of value as well as potential liability. But we also need statistical reporting that helps us understand how well patents work to promote innovation in different fields. Indeed, the Department of Commerce has already launched an effort to develop metrics for innovation, and better patent data could be a key component.

Perhaps the most obvious solution is to ask the PTO to assume greater responsibility and accountability for the performance of the system. To its credit, the PTO has just announced that it is hiring a chief economist—a step that is long overdue. But will it be possible for the insights of the chief economist to counterbalance the demands of hundreds of thousands of patent applicants and their attorneys for making patents easy to get? The PTO’s commendable efforts at reforming the application process have already met a tidal wave of opposition.

To insulate economic analysis from political influence or capture by particular patent interests, an autonomous institute could be established, perhaps housed in the PTO but independent of PTO administrators. This institute could be a critical resource not only for the PTO in its advisory functions but also for Congress, the courts, and other agencies.

To ensure independence, the institute could be overseen by a council of agencies with an interest in innovation, along with an advisory board that represents the best disinterested experts as well as the “users” that make up the PTO’s present public advisory committee. This institute would craft and support a research agenda to advance our understanding of the crucial tradeoffs involved in our efforts to improve the functioning of the patent system. The institute might be funded by a very small share of patent maintenance fees, which presently bring in over $500 million a year, and most of the research would be performed through grants or contracts.

This modest step would advance knowledge about patents and their effects on knowledge, technology, and innovation—the very heart of today’s economy. It would help give credence to the rhetoric we often repeat as a matter of faith: that patents are tools for innovation and economic growth. Tools need at times to be calibrated, sharpened, and augmented. Sometimes, they may need to be traded for other tools better suited to the problem at hand and with fewer unintended side effects.

From the Hill – Fall 2008

Progress on 2009 budget stalled

Congress adjourned for its August recess having made little progress on appropriations for fiscal year (FY) 2009, which begins October 1. None of the 12 appropriations bills had been passed by both the House and Senate, although the House passed one of them. With the November elections looming and Congress in conflict with the White House over spending levels, most spending decisions are unlikely until at least November.

In action so far, congressional appropriators have endorsed large increases for the three physical sciences agencies in the president’s American Competitiveness Initiative (the National Science Foundation (NSF), the National Institute of Standards and Technology (NIST), and the Department of Energy (DOE) Office of Science), as well as for human spacecraft development, biomedical research at the National Institutes of Health (NIH), and other parts of the federal R&D portfolio.

R&D was given a boost in June when Congress passed and the president signed a supplemental spending bill. The bill contained $338 million in FY 2008 spending for science programs—$150 million for NIH and $62.5 million each for NSF, the DOE Office of Science, and the National Aeronautics and Space Administration (NASA). The money would, among other things, restore funding for the Fermi National Accelerator Lab, which was forced to lay off employees because of cuts in the FY 2008 budget.

More recently, a second FY 2008 supplemental appropriations bill was proposed by Senate Appropriations Committee chair Robert Byrd (D-WV). The bill includes $500 million for NIH, $250 million for NASA, and $150 million for the DOE Office of Science.

In addition, Sens. Tom Harkin (D-IA) and Arlen Specter (R-PA) introduced a bill that would give NIH $5.2 billion in supplemental funding in 2008 to fully restore the agency’s budget to its 2003 funding level after adjusting for biomedical research inflation. But the prospects for a vote on the bill are highly uncertain.

Consumer product safety changes approved

Despite a packed legislative calendar and industry opposition, the House and Senate passed the Consumer Product Safety Modernization Act just before the August recess. The bill, signed into law on August 14, updates product standards, establishes new toy-safety regulations, and provides more enforcement tools for the Consumer Product Safety Commission.

As the bill went to conference, the most contentious provision was a ban on phthalates in children’s toys. Phthalates are a class of chemicals often used as softening agents in plastics and are commonly used in consumer products, especially toys. Although research has attributed certain deleterious health effects to exposure to these chemicals, questions remain about the levels of exposure and dosage necessary to pose health risks. The U.S. Chamber of Commerce and many industry groups had argued that the ban was not based on science and that it would lead only to increased litigation, but many consumer groups disagreed. At a House hearing in June, lawmakers sparred over the level of evidence necessary to warrant regulation.

In the end, House and Senate conferees agreed to permanently ban toys containing more than 0.1% of any of three specified phthalates, effectively prohibiting use of the plasticizing chemicals in toys. Three other types of phthalates will be banned until a safety review is conducted.

Higher Education Act reauthorized

After six years of work and stalemate, Congress in late July finally reauthorized the Higher Education Act by passing the Higher Education Opportunity Act, a cornerstone piece of legislation whose primary function is to authorize spending for a variety of higher education financial aid programs. President Bush signed the bill on August 14.

Although the bill contains provisions supported by the education community, including financial aid for low-income students, it also includes a number of new reporting requirements that higher education institutions opposed because they would increase administrative costs.

The bill will increase Pell grants to low-income students to $8,000 a year (up from $5,800) by 2014 and will allow part-time students to use the grants for a full calendar year. It includes a program that forgives up to $10,000 in loans for students entering high-need fields. Eligible fields include nursing, child welfare, applied science, technology, engineering, and mathematics.

Some of the new reporting requirements that higher-education groups expressed concerns about involve rising tuition rates. Under the bill, universities and colleges that increase tuition or fees by a significant percentage must submit a report to the Department of Education explaining the rationale behind the increase and any measures they plan to implement to reduce costs.

Another example of a new reporting requirement addresses peer-to-peer file sharing—of music, for example. Each institution must certify to the department that it has created plans for combating illegal file sharing, describe whether it plans to implement technologies to deter such practices, and, to the extent practicable, offer students alternative legal mechanisms for sharing files.

One aspect of the reauthorization that the higher-education community was pleased to see—but that was not embraced by the Bush administration—is language that prohibits the Department of Education from setting standards for accreditation, a task normally done by independent accrediting agencies. The issue of the federal government’s role in accreditation initially arose with the release of a report by the department’s Commission on the Future of Higher Education. After the commission questioned the quality of university education, the department floated the idea of asserting control over the accreditation process as a means to impose change and increase accountability.

Currently, the accreditation process is a peer-review system managed primarily through independent, private organizations. Needless to say, neither universities nor the independent accreditation groups embraced the notion of the government determining standards for higher education.

In a statement issued in February, the White House expressed opposition to language that “would restrict the Department of Education’s authority to regulate on accreditation.”

EPA delays decision on regulating greenhouse gases

The Environmental Protection Agency (EPA) has effectively delayed until the next administration any decision on whether greenhouse gases must be regulated under the authority of the Clean Air Act.

Although the Bush administration has long opposed regulating greenhouse gases under the act, in April 2007 the Supreme Court ordered EPA to determine whether or not carbon dioxide (CO2) emissions from automobiles endanger public health and welfare and therefore must be regulated under the act.

On July 11, under pressure from Senate Environment and Public Works Committee Chair Barbara Boxer (D-CA) and House Oversight and Government Reform Committee Chairman Henry Waxman (D-CA), the EPA released an Advance Notice of Proposed Rulemaking (ANPR) on regulating greenhouse gases under the Clean Air Act. The document outlines how such regulation could work, but does not find that CO2 emissions endanger public health. The ANPR merely requests comments on the proposed regulation and how best to approach the matter.

An introductory statement to the ANPR by EPA Administrator Stephen Johnson argues that the Clean Air Act is not an appropriate legal framework under which to attempt such regulation. This position was repeated in additional letters from the director of the Office of Management and Budget’s Office of Information and Regulatory Affairs and the Secretaries of the Departments of Agriculture, Commerce, Energy, and Transportation. The letters argue that the regulations would cause unnecessary damage to the economy. The ANPR was published in the Federal Register on July 30. The public will have until November 28 to make comments.

On July 22, Boxer held a hearing with former EPA associate deputy administrator Jason Burnett as its principal witness, in which she sought more background information regarding reports that the Bush administration had planned to issue an “endangerment finding” but retracted it in favor of an ANPR. Burnett, who resigned from the agency in June, testified that the position of policymakers and scientists at the EPA and across the administration was that the public was endangered by greenhouse gas emissions and that such a determination was the only suitable response to the Supreme Court’s order.

According to a July 18 interview that Burnett gave to the House Select Committee on Energy Independence and Global Warming, the decision not to release the endangerment finding and instead prepare an ANPR was made by the White House. Burnett said that the administration’s objective in choosing the ANPR over the endangerment finding was to delay action on the Supreme Court’s mandate until the next administration.

Yucca Mountain’s future examined

In the wake of the Department of Energy’s (DOE’s) submittal in June 2008 of an 8,600-page application for the licensing of the Yucca Mountain nuclear waste repository, the House Energy and Commerce Committee’s Energy and Air Quality Subcommittee held a July 15 hearing to assess the program’s future. In 2002, Congress designated the Yucca Mountain site in south-central Nevada as the permanent repository for spent nuclear fuel and radioactive waste, but the program has been fraught with bureaucratic delays, increasing costs, and considerable opposition from Congress and citizens alike.

The witnesses at the hearing, however, were optimistic about the program’s future. Edward Sproat, director of DOE’s Office of Civilian Radioactive Waste Management, estimated that after the Nuclear Regulatory Commission formally licenses the facility, construction could begin in 2013 and the repository could open as early as 2020. However, Sproat said, these predictions are contingent on funding being made available from the Nuclear Waste Fund, which contains $21 billion from industry fees on nuclear power. DOE cannot access these funds without congressional approval, even though the 1982 Nuclear Waste Policy Act requires that they be used for construction of a geologic repository.

Witnesses commended DOE’s progress in completing the license application, but they also identified operational and technical obstacles that could further delay the project. For example, B. John Garrick, chairman of the U.S. Nuclear Waste Technical Review Board, noted design issues with canisters being developed to transport and store the waste. Currently, he said, the technology for these canisters does not exist, nor does adequate technology for drip shields, which would be necessary to protect the canisters once inside the repository.

Safety concerns were raised by Rep. Shelley Berkley (D-NV), whose district includes the Yucca Mountain site. Berkley, acting as a witness, said that Nevada residents have long opposed the program because of unresolved safety issues, including lack of radiation standards for the region, the site’s geologic instability, waste transportation risks, and inadequate storage technologies. She also criticized the program’s $90-billion lifecycle cost, saying that onsite waste storage would be a cheaper and safer alternative.

Berkley’s concerns were echoed by Democrats on the subcommittee, including Rep. Jim Matheson (D-UT), who proposed the Interim Storage Bill (H.R. 4062) as an alternative plan. The bill would mandate federal responsibility for onsite waste storage rather than the current practice in which owners of nuclear plants are responsible for storing their own waste onsite.

Subcommittee members also debated the Yucca project in the context of the energy crisis and the role of nuclear power. Rep. John Shadegg (R-AZ), frustrated with the project’s delays and the 2020 opening timeframe, said that nuclear energy would be essential for meeting U.S. energy needs and called for the project to proceed as quickly as possible, whatever the cost. Rep. Fred Upton (R-MI) spoke of a U.S. “nuclear renaissance” and argued that expanding Yucca’s legal storage capacity could be the final piece of the nuclear energy puzzle. Matheson challenged these views, saying that although nuclear power could be an energy solution, its future does not depend solely on the Nevada repository. He pointed out that DOE has yet to evaluate the cost-effectiveness of an onsite waste storage approach.


“From the Hill” is prepared by the Center for Science, Technology, and Congress at the American Association for the Advancement of Science (www.aaas.org/spp) in Washington, D.C., and is based on articles from the center’s bulletin Science & Technology in Congress.

Be Careful What You Wish For: A Cautionary Tale about Budget Doubling

Sometime in the near future, with timing dependent on the economy, the military actions in Iraq and Afghanistan, and other competing demands for government money, Congress will substantially boost R&D spending. It will do so in response to the great challenges facing the United States and the world—global warming, the threat of a global pandemic, rising energy and natural resource prices, and so on—whose solutions depend on increased scientific understanding and technological advance. It will also do so in response to the many reports, especially the National Academy of Sciences’ 2006 Rising Above the Gathering Storm, highlighting the importance of advancements in science and technology to U.S. economic well-being and national security.

Although federal R&D spending relative to gross domestic product has been declining during the past 20 years, between 1998 and 2003 the government increased spending in the biological and life sciences at rates that could presage a future spending boom. The Clinton administration began and the Bush administration completed a doubling of the budget of the National Institutes of Health (NIH).

At first glance, the doubling appeared to be an unalloyed benefit for medical research, but a closer examination reveals that scientists need to be careful what they wish for. The doubling did not appear to produce a dramatic outpouring of high-quality research. It failed to address critical flaws in federal research funding and actually exacerbated some existing problems, especially for younger researchers.

The negative consequences of the rapid run-up in research spending began to be felt immediately after the doubling ended, when the Bush administration and Congress essentially froze the NIH budget, resulting in a sizable drop in real spending. Indeed, one of the key lessons from the doubling experience is that if the aim is to raise aggregate R&D intensity, the United States should increase spending gradually and steadily rather than undertake a one-time surge and subsequent sharp deceleration in spending.

From 1998 to 2003, the NIH budget increased from about $14 billion to $27 billion, growing twice as rapidly over those five years as it had over the previous decade. Measured with the Biomedical R&D Price Index, the doubling increased real spending by 66%, or about 12% per year.
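The arithmetic behind that figure can be laid out explicitly (an illustrative reconstruction using only the approximate numbers cited above): the real increase is the nominal increase deflated by the rise in the price index, so

$$\text{real growth} = \frac{B_{2003}/B_{1998}}{P_{2003}/P_{1998}} \approx \frac{27/14}{P_{2003}/P_{1998}} = 1.66 \quad\Rightarrow\quad \frac{P_{2003}}{P_{1998}} \approx \frac{1.93}{1.66} \approx 1.16,$$

that is, biomedical research prices rose roughly 16% over the period, which is why a near doubling in nominal dollars amounts to a 66% increase in real terms.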

Although there are situations in which sharp spending increases are preferable to steady growth, gradual increases generally are more efficient. Gradual buildups produce smaller increases in costs (because it takes time for people and resources to increase to meet the new demand and costs tend to rise nonlinearly in the short run) and avoid large disruptions when the increase decelerates. Although it is difficult to determine whether a more gradual increase in NIH spending would have produced greater scientific output than the five-year doubling, the data are consistent with the notion that the spending surge did less than a gradual buildup of funds might have done. In a November 19, 2007, article in The Scientist, Frederick Sachs noted that the number of biomedical publications from U.S. labs did not accelerate rapidly after 1999, although it did continue to increase steadily (as it had done in the years before 1999). In addition, from 1995 to 2005, the share of U.S. science and engineering articles in the biological and medical sciences did not tilt toward these areas despite their increased share of the nation’s basic research budget, according to 2007 National Science Foundation (NSF) data.

But the big problem with a sharp acceleration of spending occurs when it ends. People and projects get caught in the pipeline. Our analysis here draws on lessons that economists have learned from studying increases in capital spending using the accelerator model of investment in physical capital. In the accelerator model, an increase in demand for output induces firms to seek more capital stock to meet the new demand. This increases investment spending quickly. When firms reach the desired level of capital stock, they reduce investment spending. This process helps explain the volatility of investment that underlies business cycles. The R&D equivalent of demand for output is federal R&D spending, and the equivalent of investment spending is newly hired researchers. We find that the young people who build their skills as graduate students or postdocs during the acceleration phase of spending bear much of the cost of the deceleration.
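To make the analogy concrete, here is a minimal, purely illustrative simulation of the accelerator logic; the budget path and the capacity ratio are hypothetical, not NIH data. New hiring tracks the change in desired capacity, so a surge followed by a plateau produces a hiring spike and then a collapse even though funding itself never falls.

```python
# Accelerator-model sketch: "investment" (new hires) tracks the *change* in
# desired capacity, which is proportional to the funding level.
# All numbers are hypothetical and for illustration only.

funding = [14, 17, 20, 23, 25, 27, 27, 27, 27]   # stylized budget path: rapid growth, then flat
positions_per_unit = 10                          # desired research positions per budget unit (hypothetical)

desired = [positions_per_unit * f for f in funding]
new_hires = [max(desired[t] - desired[t - 1], 0) for t in range(1, len(desired))]

print(new_hires)  # [30, 30, 30, 20, 20, 0, 0, 0]: hiring collapses when growth stops,
                  # leaving the last cohorts trained during the surge without openings.
```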

The deceleration of NIH spending in the aftermath of the budget doubling was particularly brutal. Using the Consumer Price Index deflator, real NIH spending was 6.6% lower in 2007 than in 2004. It is expected to fall 13.4% below the 2004 peak by 2009, according to a 2008 analysis by Howard Garrison and Kimberly McGuire for the Federation of American Societies for Experimental Biology. Using the Biomedical R&D Price Index deflator, real spending was down 10.9% through 2007. The drop in the real NIH budget shocked the agency and the bioscience community because it largely undid the increased funding from the doubling.

The deceleration caused a career crisis for the young researchers who obtained their independent research grants during the doubling and for the principal investigators whose probability of continuing a grant or making a successful new application fell. Research labs were pressured to cut staff. NIH, the single largest employer of biomedical researchers in the country, with more than 1,000 principal investigators and 6,000 to 7,000 researchers, cut the number of principal investigators by 9%. The situation was described in the March 7, 2008, Science as “a completely new category of nightmare” by a researcher at the National Institute of Child Health and Human Development, which was especially hard hit. “The marvelous engine of American biomedical research that was constructed during the last half of the 20th century is being taken apart, piece by piece,” said Robert Weinberg, a founding member of the Whitehead Institute, in the July 2006 Cell.

In economics, the optimal path to a larger stock of capital depends on the adjustment costs. Many models of adjustment use a quadratic cost curve to reflect the fact that when you expand more rapidly, costs rise more than proportionately. If adjustment costs take any convex form, the ideal adjustment path is a gradual movement to the new desired level. Empirical studies estimate that adjustment costs for R&D are substantial as compared to those for other forms of investment. Given the way research works, with senior scientists running labs in which postdocs and graduate students perform most of the hands-on work, much of the adjustment cost falls on young researchers. An increase in R&D increases the number of graduate students and the number of postdocs hired. During the deceleration phase, many newly trained researchers compete for jobs when the number of independent research opportunities may be smaller than when they were attracted to the field. In the United States, indeed, much of the adjustment fell on postdocs, whose numbers increased rapidly during the doubling, with the greatest increase among those born overseas.
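A simple way to see why convexity favors gradualism, as an illustrative sketch rather than a result from the studies cited above: with a quadratic per-period cost, spreading a total adjustment $K$ evenly over $T$ periods costs

$$\sum_{t=1}^{T}\left(\frac{K}{T}\right)^{2} = \frac{K^{2}}{T},$$

which falls as $T$ grows, so (setting aside discounting and the value of completing the buildup sooner) the same expansion is cheaper the more gradually it is carried out.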

Funding and researcher behavior

A second key lesson of the budget doubling is that the way agencies divide budgets between the number and size of research grants will affect researchers’ behavior and thus research output.

Funding agencies and researchers interact in the market for research grants. An agency with a given budget decides how to allocate the budget between the number of grants and the size of grants. Researchers respond to the changed dollar value and number of research awards by applying for grants or engaging in other activities.

During the doubling, NIH increased the average value and number of awards, particularly for new submissions, which include new projects by experienced researchers as well as projects by new investigators. With the success rate of awards stable at roughly 25%—the proportion the agency views as desirable on the basis of the quality of proposals—the number of awards increased in proportion to the number of submissions. From 2003 to 2006, when the budget contracted, NIH maintained the value of awards in real terms and reduced the number of new awards by 20%. But surprisingly, the number of new submissions grew, producing a large drop in the success rate. In 2007, NIH squeezed the budgets of existing projects and raised the number of new awards.

The interesting behavior is the response of researchers to changes by funding agencies in the number of awards granted and thus the chance of winning an award. An increase in the number of awards increases the number of researchers who apply, because the chance of winning increases. As might be expected, researchers responded to the NIH doubling by submitting more proposals. Given higher grant awards and increased numbers of awards (with roughly constant funding rates), the growth of submissions reflects standard supply behavior: positive responses to the incentive of more and more highly valued research awards.

But researchers also increased the number of submissions when NIH research support fell. The number of submissions per new award and per continuation award granted rose from 2003 to 2007 after changing modestly during the doubling period. By 2007, NIH awardees were submitting roughly two proposals to get an award. Fewer investigators gained awards on original proposals, which induced them to amend proposals in response to peer reviews to increase their chances of gaining a research grant. The response of researchers in submitting more than one proposal to funding agencies produced the seemingly odd short-run supply behavior: more proposals with lower expected rewards. Faced with the risk of losing support and closing or contracting their labs, principal investigators made multiple submissions, although over time, normal long-run supply behavior would be expected to lead those who do not gain awards to leave research science and to discourage young people from going on in science. Who wants to spend time writing proposal after proposal with modest probabilities of success? It may also lead to more conservative science, as researchers shy away from the big research questions in favor of manageable topics that fit with prevailing fashions and gain support from study sections.

The message is that to get the most research from their budgets, funding agencies need good knowledge about the likely behavior of researchers to different allocations of funds. NIH, burned by its experience with the doubling and ensuing cutback in funds, hopefully will respond differently to future increases in R&D budgets.

Young researchers take a hit

A third lesson of the doubling experience is that increased R&D spending will not resolve the structural problems of the U.S. scientific endeavor that limit the career prospects of young researchers and arguably discourage riskier transformative projects.

At the heart of the U.S. biomedical science enterprise are the individual (R01) grants that NIH gives to fund individual scientists and their teams of postdoctoral employees and graduate students. The system of funding individual researchers on the basis of unsolicited applications for support comes close enough to an economist’s view of a decentralized market mechanism to suggest that this ought to be an efficient way to conduct research as compared, say, to some central planner mandating research topics. The individual researchers choose the most promising line of research based on local knowledge of their special field. They submit proposals to funding agencies, where expert panels provide independent peer review, ranking proposals according to their perceived quality and the criteria set out by the funding agencies. Finally, the agency funds as many proposals with high rankings as it can within its budget.

Although there are alternative funding sources in biomedical sciences, NIH is the 800-pound gorilla. For most academic bioscientists, winning an NIH R01 grant is critical to their research careers. It gives young scientists the opportunity to run their own lab rather than work for a senior researcher or abandon research entirely. For scientists with an NIH grant, winning a continuation grant is often an implicit criterion for obtaining tenure at a research university.

It is common to refer to new R01 awardees as young researchers, but this term is a misnomer. Because R01s generally go to scientists who are assistant professors or higher in rank, and because postdoctoral stints now last longer than they once did, the average age of a new R01 recipient was 42.9 in 2005, up from 35.2 in 1970 and 37.3 in the mid-1980s. In 1980, 22% of grants went to scientists 35 and younger, but in 2005, only 3% did. In contrast, the proportion of grants going to scientists 45 and older increased from 22% to 77%, and within the 45 and older group, the largest gainers were scientists aged 55 and older.

Most of this change is due to the structure of research and research funding, which gives older investigators substantive advantages in obtaining funding and places younger researchers as postdocs in their labs. Taking account of the distribution of Ph.D. bioscientists by age, the relative odds of a younger scientist gaining an NIH grant as compared to someone 45 and older dropped more than 10-fold. The doubling of research money did not create this problem, which reflects a longer-run trend, but it did not address or solve the problem. The result is considerable malaise among graduate students and postdocs in the life sciences as well as among senior scientists concerned with the health of their field, as has been well documented in a variety of studies. More money is not enough.

Bolstering younger scientists

A final lesson that we derived from the doubling experience is that funding agencies should view research grants as investments in the human capital of the researcher as well as in the production of knowledge, and consequently should support proposals by younger researchers over equivalent proposals by older ones.

There are three reasons for believing that providing greater research support for younger scientists would improve research productivity.

First, scientists may be more creative and productive at younger ages and may be more likely to undertake breakthrough research when they have their own grant support rather than when they work as postdocs in the labs of senior investigators. We use the word “may” here because we have not explored the complicated issue of how productivity changes with age.

Second, supporting scientists earlier in their careers will increase the attractiveness of science and engineering to young people choosing their life’s work. It will do this because the normal discounting of future returns makes money and opportunities received earlier more valuable than money and opportunities received later. If scientists had a better chance to become independent investigators at a younger age, the number of students choosing science would be higher than it is today.

The third reason relates to the likely use of new knowledge uncovered by researchers. A research project produces research findings that are public information. But it also increases the human capital of the researcher, who knows better than anyone else the new outcomes and who probably has better ideas about how to apply them to future research or other activities than other persons. If an older researcher and a younger researcher are equally productive and accrue the same additional knowledge and skills from a research project, the fact that the younger person will have more years to use the new knowledge implies a higher payoff from funding the younger person than from funding the older person. Just as human capital theory says that people should invest in education when they are younger, because they have more years to reap the returns than when they are older, it would be better to award research grants to younger scientists than to otherwise comparable older scientists.
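In stylized terms (a simple formalization of the argument above, not a calculation from any study), if a grant adds the same annual payoff $r$ to an investigator’s human capital and future payoffs are discounted at rate $d$, the present value of funding someone with $T$ working years remaining is

$$PV(T) = \sum_{t=1}^{T} \frac{r}{(1+d)^{t}} = \frac{r}{d}\left[1 - (1+d)^{-T}\right],$$

which rises with $T$; on this argument alone, the younger of two otherwise identical investigators yields the larger return on the same grant.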

Making adjustments

Future increases in research spending should seek to raise sustainable activity rather than meet some arbitrary target, such as doubling funding, in a short period. There are virtues to a smooth approach to changes in R&D levels, because it takes considerable time to build up human capital, which then has a potentially long period of return. Since budgets are determined annually, the question becomes how Congress can commit to a more stable spending goal or how agencies and universities can offset large changes in funding from budget to budget. One possible way of dealing with this issue is to add extra stabilization overhead funding to R&D grants, with the stipulation that universities or other research institutions place these payments into a fund to provide bridge support for researchers when R&D spending levels off.

To deal with some of the structural problems in R&D funding, our earlier argument that younger investigators have longer careers during which to use newly created knowledge than do equally competent older investigators suggests that future increases should be tilted toward younger scientists. In addition, given multiple applications and the excessive burden on the peer review system, agencies should add program officers and find ways to deal more efficiently with proposals, as indeed both NIH and NSF have begun to do.

In sum, there are lessons from the NIH doubling experience that could make any new boost in research spending more efficacious and could direct funds to ameliorating the structural problem of fostering independence for the young scientists on whom future progress depends.

From the Hill – Summer 2008

Bush signs genetic nondiscrimination bill

On May 21, President Bush signed a bill that bans health insurers and employers from discriminating against anyone whose genetic information shows a predisposition to illnesses such as cancer or heart disease. The Genetic Information Nondiscrimination Act (GINA), first introduced in 1995, had been repeatedly derailed by politics during the Republican control of Congress. But that changed with the turnover of power to Democrats, and the bill passed by overwhelming majorities in each chamber.

The law bars health insurers from rejecting coverage or raising premiums for healthy people based on personal or familial genetic predisposition to a particular disease. It prohibits employers from using genetic information in hiring, firing, pay, or promotion decisions. It also forbids health insurers from requiring a genetic test.

Supporters of the law said that people have been reluctant to take genetic tests that could lead to detection of an illness because of fears that they could lose their jobs or insurance coverage. Longtime GINA booster Francis Collins, head of the National Human Genome Research Institute, called its passage “a great gift to all Americans.”

Possible political interference at the EPA investigated

On May 7, the Senate Environment and Public Works Committee’s Subcommittee on Public Sector Solutions to Global Warming, Oversight, and Children’s Health Protection held a hearing on the scientific integrity of the Environmental Protection Agency’s (EPA’s) policy decisions.

In her opening remarks, Committee Chairwoman Barbara Boxer (D-CA) focused on recurring accusations of political interference with scientific findings and reports by Bush administration officials. In particular, the EPA has recently been criticized for setting ground-level ozone limits higher than those recommended by the agency’s science advisory board, and for the firing of regional administrator Mary Gade because of her request for a chemical industry cleanup of a site in Michigan found to be contaminated with dioxin.

The chief witness was George Gray, the EPA’s assistant administrator for the Office of Research and Development. Gray represented the EPA in lieu of the subcommittee’s requested witness, EPA Administrator Stephen Johnson, who was unable to attend.

In his testimony, Gray stated that the EPA’s scientists are key contributors to a transparent process of sound decisionmaking that begins with scientific examination of the facts and ends with careful and defensible decisions based on those conclusions. He specifically described the decisions of the administrator as matters of science policy that must consider the conclusions of experts from many disciplines and acknowledge their uncertainties.

Gray’s testimony was met with sharp criticism from Sen. Sheldon Whitehouse (D-RI), who is leading an inquiry into political interference in science at the EPA. He questioned Gray specifically about a recent Union of Concerned Scientists (UCS) survey in which 889 scientists at the EPA (out of 1,586 respondents) stated that they had been subject to political interference of at least one kind, as defined in the survey. Gray argued that individual opinions could not provide a basis on which to judge the actions of the agency as a whole, but acknowledged that such a number was still unacceptable. Nonetheless, he said that the agency’s scientific work is transparent and again defended its peer-review process.

Boxer said that Gray lost credibility when he stated that he believed in the transparency of the EPA’s process despite a Government Accountability Office report citing evidence that interagency comments on EPA regulatory decisions, notably ones from the Department of Defense and Office of Management and Budget, were kept secret.

The Senate hearing coincided with “Whistleblower Week,” a series of events organized by the nonprofit Government Accountability Project (GAP) to evaluate scientific freedom within the government and the effects of political interference on the conduct of science. On May 12, the GAP and the UCS co-hosted a panel that included two whistleblowers from the federal government who left their positions because of alleged political interference with their work, and a UCS scientist who is tracking such activity.

David Ross, a former Food and Drug Administration (FDA) scientist, described a pattern of political interference in his work that culminated in his resignation after the approval of the antibiotic drug Ketek. The agency approved the drug despite his findings that it could cause dangerous side effects and his recommendation that it be kept off the market. Ross said his supervisors told him to “soften” the wording in his reports on Ketek, and they forbade him to publicly disclose employee findings.

Other FDA scientists have also complained about political interference in recent years. In a 2006 UCS survey, nearly one-fifth of 997 surveyed scientists reported experiencing interference, meaning specifically that they had “been asked, for non-scientific reasons, to inappropriately exclude or alter technical information or their conclusions in a FDA scientific document.” Similar findings have surfaced in surveys of other agencies, including the Fish and Wildlife Service, the National Oceanic and Atmospheric Administration, and the EPA.

Bill introduced to close FACA “loopholes”

Two House members have introduced legislation aimed at closing “loopholes” in the Federal Advisory Committee Act (FACA). The bill was introduced by Rep. Henry Waxman (D-CA), chairman of the House Oversight and Government Reform Committee, and Rep. Wm. Lacy Clay (D-MO), chairman of the Subcommittee on Information Policy, Census, and National Archives.

At an April hearing, Clay said that “some appointments to scientific and technical advisory committees have generated some controversy due to the perception that appointments were made based on ideology rather than expertise or were weighted to favor one group of stakeholders over another.” In his opening statement, he specifically cited “Vice President Dick Cheney’s infamous Energy Task Force that was stacked with industry executives.”

Testifying at the hearing, Sidney Shapiro, associate dean for research and development at the Wake Forest School of Law, cited several loopholes in FACA, which he argued allow the work of advisory committees to be completed in settings not subject to FACA regulations. He mentioned a “contractor loophole,” which allows agencies to hire private companies to organize advisory committees that are not subject to the FACA regulations; a “subcommittee loophole,” which allows agencies to divert the substantive committee work to subcommittees not subject to the regulations; and a “non-voting participant loophole,” which allows non–committee members to be involved in the work of the committee as long as they do not vote. FACA requires that members of federal advisory committees be designated as “special government employees” and thus be subject to official conflict of interest guidelines. Subcommittee members at the hearing expressed concerns about the last loophole, arguing that nonvoting members are able to strongly influence the work of committees without being subject to FACA regulations.

Climate impact on oceans examined

On April 29, the House Select Committee on Energy Independence and Global Warming held a hearing on the impact of climate change on the world’s oceans, a relatively new topic of exploration for the nation’s policymakers. The four scientific experts testifying at the hearing provided varying perspectives on the oceans’ vulnerability to the mechanisms of climate change, but all agreed that the policies that affect the oceans urgently need to change.

The witnesses said they were alarmed by the warming of the oceans worldwide, which contributes to sea-level rise and reduces the ocean’s capacity to absorb carbon dioxide because the gas is less soluble in warmer water. In addition, higher concentrations of carbon dioxide in the atmosphere have led to a buildup of carbonic acid in seawater. These higher acidity levels negatively affect nearly all sea life, the scientists said.

Joan Kleypas of the National Center for Atmospheric Research discussed the “irreparable damage” to the world’s coral reefs caused by warming seas, as did Jane Lubchenco of Oregon State University, who also said that the accelerating pace of change in the oceans is contributing to numerous cases of ecosystem stress, including the development of “dead zones”—oxygen-deprived areas that can’t sustain marine life.

Committee Chairman Edward Markey (D-MA) asked the witnesses how the problem could be best approached and its urgency conveyed to the public. The scientists said that policy solutions should include efforts to educate the public on how oceans are affected by climate change, further funding for oceanic research, more regulation of fishing practices, better control of pollution from land, and protection of coastal habitats.

Vikki Spruill of the Ocean Conservancy discussed three pieces of legislation in various stages of development that could benefit the study and protection of the oceans. They include the Oceans Conservation, Education, and National Marine Strategy for the 21st Century Act (H.R. 21), which would create a National Oceans Advisor and establish a new method of oceanic ecosystem management in U.S. coastal waters. The others are the National Marine Sanctuary Act and the Coral Reef Conservation Act, both of which were passed in 2000.


“From the Hill” is prepared by the Center for Science, Technology, and Congress at the American Association for the Advancement of Science (www.aaas.org/spp) in Washington, D.C., and is based on articles from the center’s bulletin Science & Technology in Congress.

Fixing the Parole System

About 600,000 felons will be released from prison this year in the United States and begin some form of official supervision, usually parole. But the nation’s system for managing them in the community is inept. Turning this situation around requires paying attention to one simple idea: When it comes to changing behavior, swiftness and certainty of punishment matter more than severity. Under a reformed system, parolees would be closely monitored for compliance with parole conditions, and any detected violation would be met with immediate and predictable consequences.

There is now experimental evidence from probation that this idea can be put into practice, with dramatic effects. But organizing interlocking public agencies to be able to deliver swift and certain sanctions may pose a larger challenge than getting offenders to comply once such sanctions are in place.

Crime rates in the United States have fallen by nearly half since their highs in the early 1990s but remain substantially above the levels of the 1950s. In homicide, the United States retains the undesirable distinction of having the highest rate in the developed world. Every year about eight times as many U.S. residents are deliberately killed by one another as were killed by foreign terrorists on September 11, 2001.

Crime, and especially the most serious kinds of violent crime, remains heavily concentrated among low-status groups. Criminals and victims alike are far more likely than average to be poor, poorly educated, and black. The disparities are greatest in the case of gang violence, which accounts for a rising share of the total. Even after the recent crime decline, the fear of crime continues to drive the location decisions of households and businesses, contributing to the concentrations of poverty that, in turn, help maintain high local crime rates.

Although crime has declined substantially, the rate of incarceration has continued to grow at approximately 3% per year. The United States now has 2.3 million people—nearly 1% of the adult population—behind bars, several times the rate of any other nation in the Organization for Economic Cooperation and Development. The failure of parole and other forms of post-incarceration supervision contributes to crime and increases the size of the prison population. More effective parole could enable the nation to have less crime and less incarceration.

The number of felons released on parole will continue to grow, because ever more convicts are being sent to prison and because crowded prisons are being forced to release some of those convicts early in order to meet budget limits and population caps. The condition of those ex-prisoners after return to civilian life will be, on average, terrible. Many of them will be homeless (and will be ineligible for shelter space until they have spent at least one night on the street). Most of them will have untreated physical illnesses, mental diseases, addiction disorders, or a combination of these. Two-thirds of them will be back behind bars within three years.

In addition to their own suffering, released prisoners often also cause suffering to others, most notably the victims of their future crimes. Public agencies have tried a variety of service-delivery approaches to improve their condition and behavior, none with especially striking success. Among other problems, felony criminal histories render many parolees hard to employ and ineligible for a range of social service programs.

Less attention has been paid to the role of supervision (as opposed to services) in improving the lives of parolees and the communities to which they return. Most current systems of supervision perform poorly as measured by the condition and behavior of those subject to them.

Not only does parole markedly fail to control the behavior of its clients, it also contributes heavily to the prison-crowding problem by sending so many of them back. And the high recidivism rate among parolees, while casting doubt on the capacity of incarceration to achieve either deterrence or rehabilitation, also complicates the task of reducing the number of people behind bars: It is harder to make the case that large numbers of prisoners don’t need to be there when they have such a hard time staying out after they are released.

Deterrence dynamics

If offenders were perfectly rational in economic terms—if they acted so as to maximize the expected utility of the results of their actions, appropriately adjusted for risk and delay—then the nation’s current criminal justice system would provide better-than-adequate deterrence for most street crimes. Although the vast majority of offenses result in no punishment, a tiny proportion of them, chosen almost at random by the accidents of the law enforcement process, lead to years of incarceration. The expected present value of the punishment for street drug dealing or residential burglary, using any reasonable set of valuations and discount rates, far exceeds the quite modest financial rewards: A residential burglar, on average, receives less than $10 in illicit gains per expected day spent behind bars. In some markets, retail crack dealers earn less than the minimum wage.
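
To make the arithmetic behind that claim concrete, the following sketch works through a stylized expected-value calculation in Python. Every number in it (the average take per burglary, the chance that a single burglary leads to incarceration, and the sentence length) is a hypothetical assumption chosen only to illustrate the form of the calculation, not a figure drawn from this article.

# A stylized sketch of the expected-value arithmetic behind deterrence claims.
# All parameter values below are illustrative assumptions, not data.

take_per_burglary = 200.0   # assumed average illicit gain per burglary ($)
p_incarceration = 0.02      # assumed chance a given burglary ends in prison
sentence_days = 3 * 365     # assumed sentence length if incarcerated (days)

# Expected days behind bars attributable to one burglary.
expected_days = p_incarceration * sentence_days        # about 21.9 days

# Illicit gain per expected day of incarceration.
gain_per_day = take_per_burglary / expected_days       # roughly $9

print(f"Expected days per burglary: {expected_days:.1f}")
print(f"Illicit gain per expected day behind bars: ${gain_per_day:.2f}")

On assumptions like these, the implied "wage" of burglary is trivial relative to the punishment risk, which is precisely the puzzle the next paragraph addresses.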

Accordingly, the criminally active population overrepresents not only those with poor noncriminal opportunities, but also the strongly present-oriented, reckless, and impulsive. This latter group has exaggerated versions of the normal human tendencies (of the sort studied by psychologists and behavioral economists) to give undue weight to the immediate future over the even slightly longer term, and to underweight small risks of large disasters by comparison with high probabilities of small gains. Thus, efforts to control crime by increasing the severity of punishment will quickly hit the point of diminishing returns.

The idea that swiftness and certainty are more important determinants of deterrent effectiveness is at least as old as the founding document of criminology, Cesare Beccaria’s 18th-century On Crimes and Punishments. But putting that insight into practice requires more capacity for detecting crime and faster-acting justice mechanisms than the nation currently has or is likely to acquire. The tension between legal due process and the demand for swift justice is not easily resolved, and the more severe the punishment, the slower the requisite process is likely to be, as the glacial pace of death-penalty litigation illustrates.

Unfortunately, the community-corrections system—parole or supervised release for those let out of prison before the expiration of their terms, and probation for those not incarcerated at all or incarcerated only briefly in a jail as opposed to a prison—reproduces the flaws of the broader criminal justice system. Probationers and parolees are subject to a variety of rules specific to them, in addition to their obligation to obey the laws that apply to all. Yet with caseloads in community-corrections agencies ranging from scores to hundreds of offenders per officer, the probability of detection of any given violation (whether a “technical” violation, such as missing an appointment, or a new crime) is tiny. The penalties for violation can be severe: months or even years behind bars. But even a detected violation is unlikely to lead to a sanction, and even if it does, the process typically takes weeks, if not months. And even in the extreme case in which a probationer simply walks away (“absconds”) from supervision—in a typical big-city probation office, 10% or more of the nominal caseload consists of absconders—it’s still true that nothing is likely to happen. If the absconder is reported to the court, a bench warrant for his arrest may be issued, but most law enforcement agencies give a low priority to the service of bench warrants, so it is unlikely that anyone will actually pursue the absconder. Instead, the warrant is likely to remain dormant until the probationer is arrested for something else.

The contrast between the low-violation and the high-violation equilibriums can be illustrated by imagining two different classrooms. If a teacher faces a class of mostly well-behaved students, when Johnny starts throwing spitballs, the teacher can call him to order, making him less likely to misbehave again and reminding other students not to imitate him. But now consider the same teacher facing a classroom where Johnny is throwing spitballs, Judy is passing notes, Jane is doodling in her textbook, and Jim and Jerry have started a fistfight. Overwhelmed by the sheer volume of misconduct, the teacher likely will deal first with the fistfight, ignoring the other violations of the rules. But this action conveys to those miscreants and others that misconduct does not lead to sanctions. That disorderly classroom, which has a strong resemblance to the current community-corrections system, will have not only more violations but more punishments than the orderly classroom.

Analytically, the problem of rule enforcement is described by the “tipping” model first developed by economist and Nobel laureate Thomas Schelling. The effectiveness of any deterrent threat in enforcing a rule depends in part on how likely it is that someone who breaks that rule will actually be punished. The probability of punishment, in turn, depends on the availability of enforcement resources and the frequency of violation. Thus, as in the classic tipping scenario, both high violation rates and low ones tend to be self-sustaining, because high violation rates generate small risks of punishment, whereas low violation rates generate large risks.

That helps explain why violations tend to be concentrated both geographically (in hot spots) and temporally (in crime waves): crime-control resources are limited and do not automatically rise in step with the violation rate. For a given sanctioning capacity, a low-violation community can deliver a high dose of sanction per violation. If for some reason the rate of violation increased, the sanction rate per violation would fall, and offenders facing a lower effective risk of punishment would be encouraged to commit further violations. The induced violations lead to an even lower punishment-per-offense ratio, and the cycle continues.

Thus, high violation rates may become self-sustaining as the large number of violations outstrips the capacity of the enforcement system to deliver reliably on the threat of punishment, and the reduced risk of punishment encourages still higher rates of violation. The result can be a “social trap” in which violation rates in some times and places are high and the punishment risk per violation is low. That leaves enforcement agencies with the unpleasant choice between further escalating the level of punishment in an attempt to restore a punishment-per-offense level that would be an effective deterrent, or instead cutting back on punishment and risking a further escalation of violation rates.

In principle, there is an escape from this trap: Even a temporary increase in sanctions capacity, if it brings the sanctions risk per offense above the tipping point of the system long enough to produce a behavioral response among potential violators, has the potential to move the system from high violation, low punishment risk, to low violation, high punishment risk. Once that situation is reached, even the original pre-enhancement sanctions capacity may be adequate to maintain it.
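
The dynamics described above can be illustrated with a toy simulation. The sketch below is a minimal model, not an empirical one: the population size, the assumed behavioral response of violators to the perceived risk of punishment, and all parameter values are invented for illustration. It shows how the same sanctions capacity can sustain either a high-violation or a low-violation equilibrium, and how a temporary surge in capacity can tip the system from the first to the second.

def simulate(initial_violations, capacity, periods=30, surge=None):
    """Toy feedback loop between violation rates and perceived sanction risk.

    surge: optional (start, end, extra_capacity) describing a temporary boost.
    """
    population = 1000                       # offenders under supervision (assumed)
    violations = float(initial_violations)
    history = []
    for t in range(periods):
        cap = capacity + (surge[2] if surge and surge[0] <= t < surge[1] else 0)
        # Risk of sanction per violation: capacity spread over current violations.
        risk = min(1.0, cap / max(violations, 1.0))
        # Assumed behavioral response: the higher the perceived risk, the fewer violators.
        violations = population * (0.6 * (1.0 - risk) + 0.05)
        history.append((t, round(risk, 2), int(violations)))
    return history

# Without a surge, the system settles into the high-violation, low-risk trap.
print("no surge:   ", simulate(600, capacity=60)[-1])
# A temporary capacity surge (periods 5-9) tips it to the low-violation,
# high-risk equilibrium, which persists after capacity returns to normal.
print("with surge: ", simulate(600, capacity=60, surge=(5, 10, 400))[-1])

In the sketch, the very same baseline capacity of 60 sanctions per period supports a violation total of roughly 590 or roughly 50, depending on which side of the tipping point the system starts on or is pushed to.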

But that leaves the problem of where the temporary increment to sanctions capacity is to come from. One answer is concentration: A level of sanctions capacity that produces nothing but futile punishment if scattered broadly may be sufficient to get some part of the problem—a group of offenders, a specific offense type, or a geographic region—past its tipping point. If that can be done, reduced violation rates in the area of concentration will then free up sanctions capacity to be concentrated elsewhere. Thus, a situation that seems intractable if addressed all at once may yield to piece-by-piece tactics.

Using scarce punishment capacity more economically by stressing certainty over severity, increasing its efficacy by shortening the time between violation and response, and directly communicating the deterrent threat (and its concentration) to potential violators can all tend to reduce the critical value of sanctions capacity and minimize the cost of moving from high violation, low punishment risk, to low violation, high punishment risk.

If the nation can learn to put these ideas into practice, it may be possible to drastically change the terms of the tradeoff between crime and punishment. This is among the conclusions of a National Research Council workshop report Parole, Desistance from Crime, and Community Integration, released in late 2007. The report also determined that the community-corrections system offers a proving ground for projects aimed at “getting deterrence right.”

Reform in Hawaii

Five years ago, the probation system in Honolulu was typical. Hawaiian probation officers were better trained than average, but like probation officers everywhere they found themselves overwhelmed by the sheer volume of rule breaking by probationers. Of a randomly drawn group of 100 probationers ordered to meet with their probation officers and submit to drug testing, about 10 would fail to appear and another 20 would test “dirty” for one or more illicit drugs, even though the appointments were announced far enough in advance that probationers could escape detection merely by abstaining from drug use. Probationers ordered to enter and remain in outpatient drug treatment programs complied with those orders only sporadically.

Such drug treatment problems are common nationwide. A typical drug-diversion program, in which offenders are supposed to accept treatment in order to avoid incarceration, has rates of treatment entry of less than 70% among those ordered into treatment and rates of completion of about 30%. Diversion clients who fail to show up for treatment or who drop out before completing the prescribed course are very unlikely to face any sanction, even if the treatment provider reports nonattendance to the probation officer and the probation officer in turn reports that to the sentencing court. Here again, high violation rates and low sanctions rates, conditional on violation, are mutually reinforcing.

The drug most abused by Hawaii’s felony probationers is methamphetamine, with alcohol (often in combination) second; opiates are rarely encountered. Methamphetamine abuse, although treatable in the sense that all drug abuse disorders are treatable, tends to have low treatment-retention rates and poor outcomes, especially compared with opiate abuse, where substitution therapies reliably retain 75% or more of clients, reduce their crime rates, and improve the personal condition of those who stay in therapy.

There were not enough hours in a Honolulu probation officer’s workday to prepare the paperwork to start the sanctions process for more than a tiny fraction of missed and dirty drug tests and failures to comply with treatment orders. The usual response to a missed appointment or a positive drug test was a warning. Only after a long series of violations would the probation officer admit defeat and spend the time to write up a motion to revoke probation, potentially (but not, in practice, usually) leading to the imposition of a prison term by the sentencing judge.

The threat of a possible sanction sometime in the indefinite future had little deterrent value. Levels of noncompliance tended to rise sharply over the course of any individual’s probation term, as clients learned that they could get away with drug use almost all of the time. Still, a substantial number eventually accumulated sufficiently long records of noncompliance to lead to revocation.

One judge, Steven Alm, recognized the problems and decided to try something different, implementing a pilot program called Hawaii’s Opportunity Probation with Enforcement (HOPE). After long negotiations with the probation department, police, and jail administrators, Alm selected a few dozen probationers whose records of noncompliance put them at imminent risk of having their probations revoked. They were called in for a new court procedure, dubbed by Alm a “warning hearing,” at which they were formally put on notice that each and every subsequent missed appointment or positive drug test would lead to an immediate jail stay ranging from two days to a few weeks.

To make it possible to carry out that threat, the probation department developed a fill-in-the-blanks violation-reporting form. Because probation was being “modified” instead of revoked and because the focus was a single, easily verified, recent violation, it was possible to drastically curtail the hearing process; a probationer who gave a dirty urine specimen in the morning would find himself in jail that evening. Subsequent violations led to longer jail stays and eventually to a choice between long-term residential treatment and prison.

To induce probationers to appear for testing even when they expected to be found to have used drugs, the program provided for more severe sanctions for nonappearance than for testing dirty. To make that threat effective, Alm arranged with federal and local authorities to have officers available to promptly arrest those who failed to appear. Only rarely have probationers absconded, so the demand placed on the fugitive-tracking system has been modest.

The warning hearings have proven strikingly effective. Of probationers warned (all chronic noncompliers), fewer than half were referred for an actual sanction, and most of those referred once and briefly jailed were never referred again. This happened despite the fact that the drug-testing regime for probationers subject to the new program was drastically tightened. Instead of infrequent testing by advance appointment, HOPE probationers called a hotline every weekday to learn whether they were required to come in for testing that day. Initially, they were tested six times a month, with decreasing frequency offered as a reward for obeying the rules.

At first, only probationers under Judge Alm’s supervision were eligible for HOPE. That made it possible to assemble a comparison group of equally noncompliant probationers in other courtrooms—not exactly a true random selection, but a quite robust natural experiment. Compared with the three months before being put in the program, HOPE probationers reduced no-show and positive test result rates by more than 90%, and their behavior improved over time. In contrast, violation rates for the non-HOPE sample grew steadily worse over time, with 37% eventually having their probation revoked, compared with fewer than 5% of the HOPE group.

The HOPE pilot program has now been expanded to more than 1,000 probationers, about one-eighth of all felony probationers on Oahu, and to the calendars of all 10 felony judges. So far, the results of the expanded program match the results of the pilot. A controlled trial with true random assignment between HOPE and business-as-usual probationers is currently under way. Evidence to date suggests that reductions in criminal-justice expenditures due to improved compliance pay several times over for the HOPE program’s rather modest costs (about $1,000 per probationer per year, most of that for drug treatment), with reductions in crime and improvements in probationers’ welfare and conduct. The shrinkage of illicit drug markets due to the removal of active customers is a bonus benefit.

There also appear to be ways to make programs such as HOPE work even better. In one important respect, HOPE does not comply with the principles of behavioral change discovered by psychologists: Its focus is entirely on punishment, whereas the literature makes it clear that reward often can be a more potent force in shaping conduct. For example, researchers led by Stephen T. Higgins of the University of Vermont have shown in pilot programs involving cocaine and methamphetamine users that providing small financial rewards for “clean” urine tests can greatly increase compliance among individuals who want to quit and have sought help in doing so. Whether the same would be true for probationers is not clear. Moreover, positive incentives may be hard to integrate into community corrections, if only for political reasons; the citizen outrage at a proposal to pay criminals to stop committing crimes is easy to imagine, even if it could be shown that doing so is a cost-effective means of crime control. One possible way to deal with such political hurdles might be to cast rewards as remissions of previously assessed fines.

Still, the question of whether drug-involved probationers can and will reduce their drug use in the face of predictable sanctions has now been answered. The remaining open question is whether community-corrections agencies outside Hawaii can organize themselves and secure the necessary cooperation from the courts, police, jails, and drug treatment providers to actually make and deliver on that threat.

The California experiment

Although felony probationers are rarely Rotarians, parolees on average behave worse and have bigger problems. They tend to be older, with longer spells of drug abuse and longer and more serious criminal histories, and have much higher rates of rearrest. Even before their most recent prison stay, they are more likely than probationers to have been jobless and homeless. Their high rates of return to incarceration also make them expensive.

Consider California, which now spends an average of $43,000 per year for each of its prisoners and also has the highest rate of prison overcrowding in the country. The total number of inmates in the state’s institutions exceeds 200% of design capacity, and such overcrowding poses health and safety risks to inmates and prison staff. Because of the conditions in the state’s prisons, Gov. Arnold Schwarzenegger has proclaimed a state of emergency, and the federal courts are considering imposing a cap on the state’s prison population on the grounds that the degree of crowding converts imprisonment into unconstitutional “cruel and unusual punishment.”

The governor also has declared a fiscal state of emergency, which will force a trimming of the state’s corrections budget. Parolees returning to the state’s prisons are significant contributors to overcrowding problems and overall system costs. California has the highest parolee return-to-prison rate of any state, with more than a third of parolees returning for drug crimes. Given the poor performance of the status quo, the state’s policymakers will have no choice but to look for new approaches.

Enter COPE, the California parole version of Hawaii’s HOPE program. Negotiations are under way to test a supervision model for California parolees that would mirror the key elements of HOPE. The program would start small, with a limited number of parolees and probationers in a limited number of counties (a refreshingly prudent approach for a state with a history of rolling out untested programs en masse). This approach will enable COPE to be tested and tailored to the California system and tweaked to meet the needs of specific parolee populations. If outcome improvements in California are even a fraction of those observed in Hawaii, Californians can expect to see substantial savings in corrections costs and crime.

The plan in California calls for a three-armed randomized controlled trial, with one group getting business-as-usual treatment, one getting an aggressively therapeutic approach, and the third getting COPE. If the pilot study shows improved parolee outcomes with significant reductions in recidivism, the combination of that finding with what are expected to be strong evaluation results from Hawaii may create administrative and political conditions in which the innovation can spread, although it will always require appropriate adaptation to local conditions and procedures. A small federal grant program to fund HOPE-like experiments has already made it into law.

Extending the HOPE model

Illicit drug use is an important form of behavior to control and an easy one to monitor. That makes it a natural focus of HOPE-style programs. But nothing about the idea of close monitoring and swift, predictable, and measured sanctions is specific to drug use. HOPE also has worked well with domestic violence offenders, where the behaviors being monitored are attendance at treatment sessions and compliance with restraining orders, and with sex offenders, where the issues are attendance at treatment sessions and observance of precautionary rules such as staying away from playgrounds and schoolyards.

Any behavior that relates closely to a probationer’s risk of reoffending or chances of establishing a law-abiding lifestyle and that can be monitored with reasonable accuracy at acceptable cost is a candidate for incorporation into a HOPE-style community-corrections regime. For example, insofar as it becomes technically feasible to monitor a probationer’s alcohol use (perhaps with a skin patch that detects alcohol in perspiration) or location (with cell-phone or global positioning system technology, or a combination of the two), then a HOPE-style program could require abstinence from alcohol and observance of time-and-place rules (such as a curfew, being at work during work hours, avoiding drug-dealing areas, or obeying “stay-away” orders).

The more parole and probation systems develop the capacity to punish law breaking and prevent reoffending without physically confining offenders, the more they will become true alternatives to incarceration and the better will be the terms of the social tradeoff between crime rates and incarceration rates.

To be sure, it is possible to imagine overshooting the mark and making the community-corrections system too intrusive; the idea of having public agencies continuously monitoring the whereabouts of millions of individuals has an Orwellian ring. When and if the technical and operational capacities of community-corrections agencies reach that stage, there will be a need for moderation in their use. Only those with records of persistent or outrageous offending should be put on position monitoring for periods of years. Cost pressures and the principles of incentive management will dictate that the reward for sustained compliance should be loosened restrictions and reduced monitoring.

But the problem of overintrusiveness, if it arises, is somewhere off in the future. Today’s problem is the failure of parole and probation to substitute for incarceration, giving rise to the unpleasant combination of high crime rates and large prison populations. That is a problem for which the HOPE model may point the way to a solution.

Already, the Hawaii experience is drawing considerable attention from federal agencies, foundations, and other jurisdictions. Such interest reflects the high level of discontent within the field about the performance of the current community-corrections process. The field of offender rehabilitation is in dire need of innovation; widespread change in community supervision practices along these lines would rank among the most significant reforms in corrections policy to date.

But Hawaii had some important advantages when launching its HOPE program: a collegial relationship across state agencies, the absence (despite the state’s high overall crime rate) of large crime-blighted or gang-dominated neighborhoods, and the extraordinary public management skill of the innovating judge in securing the cooperation of the many key players whose buy-in was essential to the successful implementation of swift and certain sanctions.

It remains to be seen whether California and other states, with very different institutional arrangements and public-sector cultures, will be able to find the means required to make this collaborative approach work. But if, as U.S. political scientist and policy adviser Richard Neustadt once said, “A crisis is a moment at which it is possible to do something different,” then the prison-crowding crisis, and the dramatic failure of parole as currently practiced, may create a moment ready for HOPE.

Building a Wider Skills Net for Workers

The skills of workers in the United States are critical to their own economic performance as well as to that of society at large. But today, despite the nation’s generally healthy economic growth in recent decades, workers face serious challenges. Less-educated workers have seen their wages stagnate or decline. The share of workers covered by pensions and health insurance has declined. Immigration, outsourcing, and the expanding labor force in India, China, and other less-developed countries have increased the competition for jobs in a world labor market.

Yet to right these workplace problems, policymakers are, for the most part, looking in the wrong direction. They should be paying more attention to what skills workers really need to succeed, rather than focusing on an assumed set of skills that may not be so critical after all.

Economists widely see the job market as generating a rising demand for skills that is outpacing the supply, thereby widening wage differences between the skilled and unskilled. They favor expanding the share of skilled workers in order to improve economic growth, increase the number of workers obtaining good-paying jobs, and lower wage differentials. In translating a skills strategy into concrete actions, policymakers have focused almost exclusively on adding schooling, partly because the common definition and measure of skills is educational attainment, sometimes supplemented by reading and math test scores.

As evidence for their case, policymakers often cite statistics showing that U.S. workers no longer lead the world in formal education. They also note that employers report difficulty in recruiting workers with adequate skills. Over half of manufacturing firms report that a shortage of available skills is affecting their ability to serve customers, and 84% say that the K-12 school system is not doing a good job of preparing students for the workplace. In response, federal and state governments have increased spending on elementary, secondary, and postsecondary schools, mandated test-based performance measures that hold schools accountable for student performance, expanded competition and school choice, and offered subsidies to help students attend college.

But the current approach ignores the multiplicity of skills required for successful careers. Although reading, math, and writing capabilities are in high demand and are relevant to most jobs, so too are occupation-specific and other generic skills, including communication, responsibility, teamwork, allocating resources, problem-solving, and finding information. Unfortunately, it is difficult to conceptualize and measure the broad array of skills critical to career success. When policymakers, educators, and the public lack information on critical occupational and generic skills, they find it difficult to diagnose trends, identify skill gaps, learn about the skill limitations of different subgroups, understand the skills in most demand, and determine the best mechanisms for teaching skills.

The problem is compounded when it comes to projecting future skill needs. The evidence indicates that skill requirements will remain heterogeneous and that most job openings will be in careers that do not require a bachelor’s degree or higher. In defining high-, medium-, and low-skill positions, the U.S. Bureau of Labor Statistics relies on formal educational requirements, the share of workers at each educational level, and the amount of specialized training required for a given occupation. By these criteria, the past two decades have witnessed somewhat higher growth in high-skill and low-skill occupational categories than in middle-skill jobs. Still, middle-skill occupations account for nearly half of all jobs, with high-skill categories at 35% and low-skill fields at 16%.

Changes in specific occupations illustrate this pattern. At the high end of the schooling distribution, examples of growing occupations include teachers, financial managers, health managers, and accountants and auditors. Many middle-skill jobs, such as registered nurses and health technicians, have shown rapid increases in employment as well. Jobs in construction occupations, many of which require substantial classroom and on-the-job training, have expanded by about 4 million since 1986.

In the coming decade, according to projections by the Bureau of Labor Statistics, 47% of all job openings will be in heterogeneous middle-skill positions, whereas high-skill occupational categories will account for about one-third of job openings. Past patterns and current expectations suggest that most jobs in the foreseeable future will not require a bachelor’s degree but that half will demand a varied array of skills generally gained through community colleges, occupational training, and work experience.

What skill strategies will work best to prepare these and other workers? Economists commonly distinguish between general skills (capabilities that increase a worker’s productivity in a range of firms) and specific skills (capabilities that increase productivity within one firm). Firms are less likely to pay to expand general skills because they are unlikely to recoup their investments, since competing firms will bid up the wages of newly trained workers to match their enhanced productivity. Partly for this reason, the burden of financing general skill development falls mainly on governments and individuals. In contrast, firms can earn a return on investments in specific training because the added skill will be useful to the current firm but will probably not command higher wages from other firms.

Though useful and effective in predicting a range of outcomes, the human capital perspective ignores some motivational factors that affect the accumulation and effective use of skills. Not all learning is for instrumental purposes. People often learn in order to satisfy their curiosity or to gain a sense of accomplishment. The ability to learn a skill conveys a sense of pride, and the effective use of skills in an occupation often brings workers a sense of identity. Skills rarely raise productivity in isolation; increases in productivity typically result when workers use their skills to complement the work of others in an appropriate setting within the organization.

The human capital framework offers little guidance about which general, specific, or occupational skills are valuable in any given labor market. One approach is to use educational attainment as a proxy for skills. Another is to estimate the gains in earnings associated with specific cognitive skills. Although economists have found that years of schooling and scores on math and verbal tests are positively correlated with earnings, these measures account for only a modest amount of the variation in earnings among workers, thus indicating that other attributes or skills are relevant to job performance. To identify these attributes, a commission sponsored by the U.S. Department of Labor studied what effective workers require to succeed on specific jobs. The commission’s report documented many important skills not directly taught in school, such as the ability to allocate time and resources, to acquire and evaluate information, to participate effectively as a member of a team, to teach others, to negotiate differences, to listen and communicate with customers and supervisors, to understand the functioning of organizational systems, to select technology, and to apply technology to relevant tasks. Context also matters. Responsibility and attention to detail are necessary for mowing lawns and for nursing as well, but the required levels differ enormously.

Employer hiring practices are consistent with the commission reports. Surveys find that employers view such personal qualities as responsibility, integrity, and self-management as being as important or even more important than basic skills. In a survey of over 3,300 businesses, employers ranked attitude, communication skills, previous work experience, employer recommendations, and industry-based credentials above years of schooling, grades, and test scores. Other research shows that workers gain high wage returns from occupation-specific and industry-specific work experience.

Many skills must be learned in the context of a work environment or through joining experienced workers in a “community of practice,” or both. Workplaces require not only formal knowledge—facts, principles, theories, and math and writing skills—but also informal knowledge, embodied in heuristics, work styles, and contextualized understanding of tools and techniques. A revealing study found that auto repair workers needed social skills to succeed in learning informal knowledge, as captured in stories, advice, and guided practice. Other studies of the schooling and job market experience of a cohort of workers showed that, except for college graduates, noncognitive skills (as measured by indices of locus of control and self-esteem) have at least as great an effect on job market outcomes as cognitive skills do, and probably a greater one.

The importance of noncognitive skills does not mean that verbal, math, and writing skills are irrelevant or unnecessary for the vast majority of positions. When employers emphasize personal qualities, they may be assuming that workers have at least some basic academic skills and that once some threshold level is reached, noncognitive skills become a priority. On the other hand, few workers use many of the academic skills that educators view as vital to success. In a survey of a representative sample of workers, only 9% reported using the capabilities learned in basic algebra, and no more than 13% of workers below the upper-white-collar job level ever write anything five pages or longer.

Nonetheless, schooling remains the nation’s primary skill-development vehicle, with expenditures of nearly $1 trillion on 72 million students, including 17 million in postsecondary programs. Despite the high and rising expenditures per student, national reports, public officials, and the general public have expressed great dissatisfaction with the ability of the educational system to help students gain adequate skills. Still, the economic rates of return from completing high school and college are 13 and 10%, respectively, suggesting that schools are doing something right. However, schools are unable to retain as many as one in four students through high school. Nearly half of the dropouts attribute their school-leaving to boredom and lack of interest in classes. Employers report great dissatisfaction with the quality of high-school graduates. In one survey, firms reported that 60% of applicants with a high-school diploma or GED were poorly prepared for the typical entry job in the firm. Finally, the observed high rates of return from completing high school and college do not reveal whether alternative approaches might be more cost-effective in improving earnings and occupational outcomes.

Boosting career-focused training

Increasingly, state standards are driving school curricula, but not necessarily in directions that best prepare students for careers. At the high-school level, the standards typically require that all students take academic courses that meet college requirements. There is little discussion about how the requirements respond to the heterogeneous skills required for careers. Education and political leaders seem to view “ready for work” as implying that students complete college-prep academic courses, assuming that students will learn occupational and other workplace skills on the job or in community colleges.

Although some of the nation’s skill-building efforts are explicitly career-focused, they have declined relative to formal schooling. Yet many vocational programs remain, including career and technical education programs in high schools and community colleges, occupational programs provided through for-profit proprietary schools, and publicly supported job-training programs. The record of these programs is mixed. Vocational high-school programs, though much maligned, appear to raise the earnings of students sufficiently to yield a solid rate of return. Taking several career and technical education courses leads to substantial gains in employment and earnings about eight years after normal high-school graduation. The gains are especially high among at-risk and minority students. Work-based learning through cooperative education adds to the earnings effects. Career academies are schools within schools that often have an industry focus, such as finance, tourism, or health. An experiment in eight cities indicates that career academies lead to increased earnings for male students, especially those with a high risk of dropping out of high school.

In the case of community colleges, the earnings gains are higher for completing a vocational program than for completing an academic program. Some studies indicate that men achieve no gains in earnings after a year of academic community college work or an academic associate’s degree but that women gain from both programs.

Studies generally find that education programs with close links to the world of work improve earnings. The earnings gains are especially solid for students unlikely to attend or complete college. Cooperative education, school enterprises, and internships or apprenticeships increased employment and lowered the share of young men who were idle after high school. Women unlikely to attend college also achieved earnings gains from the internship and apprenticeship components of high-school programs.

Youth apprenticeships go beyond school internships by providing in-depth work-based learning combined with related course work. In the late 1980s and early 1990s, the federal government sponsored demonstration projects, and some states began youth apprenticeship programs. But by the mid-1990s, government officials administering the School-to-Work Opportunities Act of 1994 were deemphasizing youth apprenticeship in favor of less intensive interventions. Today, youth apprenticeships play a very minor role, though Wisconsin, Georgia, and some other states still have solid youth apprenticeship programs.

Outcomes from other public and private training programs vary widely. Publicly sponsored job-training programs generally provide only short-term training. Although they generate sufficient gains for women to offset their costs, the gains for adult men and for young people are low. Even Job Corps, an intensive program for young workers that yields earnings gains for at-risk youth, fails to pass a standard cost/benefit test.

Many workers learn productive skills through formal employer-led training, informal training, and work experience. Some employer-led training introduces workers to operations, safety aspects of the job, and organizational goals. Other training aims at raising the basic skills of workers and their capability to implement new technologies or organizational methods. Some long-term training improves the occupational skills of workers in diverse professions such as medicine, nursing, law, and plumbing. Employer-led training generally occurs in the context of a work environment.

Much of the employer-led training is short-term but yields high rates of return. One recent study found that 60 hours of employer training increased wage rates by about 5%, indicating rates of return on an annualized basis of at least 40 to 50%. The skill development that takes place informally on the job is productive as well, yielding returns for workers and firms.

One long-term component of employer-led training is apprenticeship, a highly structured approach that combines three to four years of learning on the job with theoretical and practical courses related to a profession. These programs require apprentices to demonstrate mastery of the full complement of skills required for a skilled worker in the relevant occupation. Employers are responsible for documenting the ability of apprentices to use occupational skills in the context of the productive process. The Department of Labor’s Office of Apprenticeship administers the registered apprenticeship system by approving firm-based or industry-based programs, tracking apprentices, providing a certification to completers, and monitoring compliance with antidiscrimination laws. As of 2008, approximately 450,000 workers are registered apprentices. Evidence from national household surveys suggests that substantially more workers are in apprenticeship programs that are not formally registered. The limited research on gains from apprenticeship indicates that completing an apprenticeship yields gains that are substantially higher than comparably estimated gains for individuals graduating with a vocational degree from community colleges.

What are the implications of the evidence on programs and skill requirements for public policy? How can the United States best improve worker capabilities and qualifications for productive careers? Policymakers concerned about these questions generally focus on educational goals, particularly raising verbal, math, and science scores and widening access to college. Less emphasized are policies that help potential workers attain a broad range of skills required in the workplace, that acknowledge the diversity of learning styles and skill requirements, that recognize the importance of retaining and using skills, that limit the costs experienced by students and government, and that increase student motivation.

Consider the widely stated goal of ensuring that “all students should leave high school ready for college or work.” The implicit goal is to prepare everyone for college, as indicated by the fact that state high-school standards increasingly require that all students take a college-prep curriculum. There is little discussion or analysis of what is meant by “ready for work.” Officials setting standards provide little documentation of why a college-prep curriculum should be universally required for work readiness. Although more learning is desirable, not every content standard in literature, math, science, and social science is central to qualifying workers for productive careers. Current academic content standards may prevent students from taking courses that would be more helpful to their careers, may worsen the dropout problem, or both.

Toward sensible reforms

What, then, are sensible high-school reforms that can create a better-qualified work force? A good start is to recognize the diversity of skills and learning styles required for success in the workplace. Reforms should include offering students the option of attaining rigorous occupational qualifications through programs that incorporate academic, occupational, and other workplace skills and that combine school-based instruction with well-structured work-based learning. Students would learn work discipline and make practical use of their reading, writing, math, and science skills in the context of achieving a demanding occupational standard. Many students who drop out because classes are not interesting could be more engaged in programs that provide learning at work, pay, and an occupational certification.

This approach of using workplaces as learning environments is supported by several strands of research. It builds on evidence of the importance of occupational skills and other noncognitive skills. It is consistent with evidence on the effectiveness of sectoral approaches and of employer-based training, including on-the-job training. It offers good options for meeting such youth-development goals as personal autonomy, motivation, and knowledge of vocations. It helps link the supply mix of skills to the composition of demands by employers. It helps students develop an occupational identity, a professional ethic, and self-esteem based on accomplishment.

One initiative that combines high standards, project-based learning, and an occupational focus is Project Lead the Way. The program offers engineering and biomedical science curricula to high-school students in over 1,500 schools, often through career and technical education programs. It emphasizes project-based learning and the application of math and science to subjects such as electronics, civil engineering, and architecture. The project also incorporates noncognitive skills, such as working in and leading a team, public speaking, and managing time, resources, and projects. Career-academy and other technical preparation programs also offer starting points for expanding occupational skills and noncognitive workplace skills.

To help diffuse these approaches among schools, states should expand their educational standards to include the noncognitive skills highlighted in national studies, and they should take steps to encourage the development of career-focused qualifications linked with real careers. One barrier to recognizing these skills is the disparate nature of occupational skill qualifications. States typically have a plethora of standards for certification and licensing requirements, often influenced by current members of the occupation. At one time, the National Skill Standards Board tried to make these standards more coherent. The board failed, but it is time for another try. The federal government and state governments should modernize and broaden their occupational profiles to ensure that individuals obtain qualifications that extend beyond a narrow category of occupations. Once sound standards are in place and schools see themselves and their students judged on the basis of these broader competencies, they may be more receptive to approaches that build these skills.

For adults, public and private activities to increase job-related skills are increasingly turning to employer-led or employer-linked training initiatives. The initiatives often select an industry sector, create coalitions, assess the skill requirements for existing positions, project skills required to upgrade jobs, recruit and target potential trainees, develop training modules, and obtain a mix of public and private funding. The focus on industry needs and close linkages with employers is sound, but so far, the programs are ad hoc arrangements and not a systemic part of the landscape.

Like sectoral programs, apprenticeship training is driven by employer demand, but it differs in having more in-depth, long-duration training in classes and at workplaces. Apprenticeship generates high skills for participants, involves extensive work-based learning, requires few or no foregone earnings on the part of participants, builds in wage progression and job ladders, and offers a respected portable certification. The programs provide academic classroom education that is at least equivalent to that in community colleges, as well as training in the tasks, problem-solving, and social interactions of the occupation. The learner can draw on help from experienced adults and from peers trying to succeed in the same career. Apprenticeship is mainstream in Western Europe and in other advanced economies, providing training for 50 to 70% of young people in Switzerland, Austria, and Germany, and it is expanding rapidly in other countries, including Ireland, Australia, and the United Kingdom.

In the United States, the budget to administer, oversee, and promote registered apprenticeships is only $20 million, a tiny fraction of the cost of training initiatives of similar scale. Added investment in the federal Office of Apprenticeship would likely bear increased fruit. Tripling the current budget would cost only an additional $40 million, and the added funds could go toward expanding outreach and technical assistance, stimulating more employers to offer apprenticeships, funding development and marketing of new apprenticeable occupations, coordinating skill requirements across programs in the same occupations, and conducting research and evaluations. If the expanded funding generated even a 2 to 3% increase in apprentices, it would more than pay for itself. Ideally, apprenticeship programs should work closely with high schools to provide immediate outlets for graduates. These steps could well encourage more students to complete high school and to gain sufficient academic competencies to qualify for the new opportunities.

Expanding apprenticeships is likely to be a highly cost-effective method of skill building. Foregone earnings (and foregone output) are low or zero, depending on the alternative job available to the apprentice. Although no definitive analysis has estimated the returns to apprentices (over, say, high-school graduates with no other certification), the evidence from Washington state indicates earnings gains in the range of $15,000 to $17,000 per year. Given these figures, the social returns from adding apprentices will almost certainly exceed those from adding marginal college students, especially since two-thirds of community college participants do not complete a degree within four years and about 45% of entrants to four-year programs do not complete within six years. Skills learned through apprenticeship are more likely to be retained because they will be used far more often than classroom subjects that are often disconnected from the workplace.
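
As a rough check on the pay-for-itself claim, the sketch below combines figures quoted in this article (the $20 million administrative budget, roughly 450,000 registered apprentices, and Washington state earnings gains of $15,000 to $17,000 per year) with two assumptions that are mine rather than the article’s: that half of the added apprentices complete their programs and that their earnings gains can be attributed to the added spending.

# Back-of-the-envelope arithmetic; the completion share and the attribution
# of gains to the added spending are illustrative assumptions.

added_budget = 40_000_000            # incremental cost of tripling the $20M budget
registered_apprentices = 450_000     # approximate registered apprentices, 2008
assumed_increase = 0.02              # low end of the projected 2-3% increase
assumed_completion_share = 0.5       # assumption: half of added apprentices complete
annual_gain_per_completer = 15_000   # low end of the Washington state estimates

added_completers = registered_apprentices * assumed_increase * assumed_completion_share
annual_earnings_gain = added_completers * annual_gain_per_completer

print(f"Added completers: {added_completers:,.0f}")
print(f"One year of earnings gains: ${annual_earnings_gain:,.0f} vs. added cost ${added_budget:,}")

Even on these conservative assumptions, a single year of earnings gains (about $67.5 million) exceeds the $40 million incremental cost.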

The time has come to base skill-building in the United States on improved measures and more nuanced policies. U.S. residents have long viewed education and training as primary mechanisms for economic mobility. Unfortunately, laudable efforts to promote opportunity have too often become too narrowly focused on raising educational attainment and academic test scores. Years of schooling and test scores certainly are relevant to success in the job market, but so is a range of other skills, including noncognitive skills and occupational qualifications. It is important that the nation develop and use appropriate measures of skills and qualifications not only to track progress and shortfalls, but also to encourage sound workforce policies. When the public and private sectors begin to assess qualifications using more comprehensive measures, policies to deliver skills will become more effective.


Partnership pays off

When General Motors (GM) decided to build a new transmission manufacturing plant in Baltimore, Maryland, the company immediately sought a partner to address its training needs. The company faced the huge challenge of developing and implementing a major customized training program to prepare hundreds of experienced GM employees for work at the new plant. It was crucial that the employees be trained to understand the state-of-the-art equipment they would use to fabricate key components with fine tolerances and assemble them according to customer specifications. At the new plant, employees would use computer numerical control (CNC) machine tools, read and interpret blueprints, and rotate among different jobs as part of 5- to 10-person teams. Their previous experience of up to 30 years in the traditional assembly-line environment of GM’s Baltimore truck plant left them unprepared for this new and totally different culture.

GM needed a comprehensive training program including basic skills (reading and algebra), technical skills (metrics, machine parts gauging, computer use, equipment operation, and blueprint reading), and soft skills for the new organizational culture (team building and situational leadership). All of these components had to be tailored to the unique learning needs of the older employees. These complex requirements represented a significant departure from what the Community College of Baltimore County (CCBC) had done in the past.

The college rose to the company’s challenge. First, the college resource center administered a “learning-gap analysis” to all of the employees, in which they rated their own abilities and discussed them with a counselor. Then, in April 2000, the formal training began. For three months, while the new plant was under construction, about 150 GM employees attended full-time training at the college campus. The college dedicated 10 instructors to the GM employees and also hired guest experts to teach about topics ranging from gauging and geometric tolerancing to conflict resolution. To maintain the trainees’ interest, each day included a mix of activities, beginning with a morning business meeting, followed by classes in basic skills, soft skills, and technical skills.

When the plant opened and launched production, the college provided all employees with specific on-the-job training in the new work environment. In total, 375 hourly and salaried employees were trained. Since that time, CCBC has continued to respond to other plant training needs and to provide personal development courses to employees, many more of whom now take advantage of their union-negotiated tuition assistance benefits than did before.

GM and CCBC have both benefited from the strong commitment each made to the partnership. The company avoided the costs of locating and hiring a qualified trainer for each element of the comprehensive training program and of buying or leasing training facilities. For example, the college offered computer labs where employees could access various curriculum modules. In addition, because the college was located in the neighborhood of the new plant, conducting the training there increased the visibility of the new GM plant, which was good for business.

More important, the partnership enhances the plant’s competitiveness, delivering the additional learning that the workers need in order to provide continuous product and process improvement and to drive safety, quality, and on-time delivery. This success was demonstrated in 2006, when GM selected the Baltimore transmission plant as the site for production of a new hybrid vehicle transmission, investing millions of new dollars and creating more local jobs.

The college has reaped benefits as small and medium-sized manufacturing firms in the area have learned from the partnership and taken a similar approach. The curriculum originally tailored to meet GM’s needs has a common core of skills needed in other manufacturing plants, and the college has established larger partnerships, including GM and other manufacturers, to update the curriculum and deliver these skills. The college enjoys a growing reputation, both within the state and nationally. Maryland’s Department of Economic Development uses the GM-CCBC partnership as a template for other companies with training needs, and GM headquarters sends launch teams who are planning new plants or products to Baltimore to learn from the plant and the community college. The college’s growing success is built on its new approach to working with business developed through the partnership with GM.

Today, the plant is a world-class operation that is prepared to meet the competition thanks to the solid baseline of skills developed through our partnership with CCBC.

It’s about More than Money

Presidential Science Advisor John Marburger’s call for a new science of science policy has led the National Science Foundation to initiate a program to support the development of more rigorous empirical and theoretical foundations for understanding and evaluating U.S. science and innovation policies and programs. Reports from the National Academies and many presentations at the American Association for the Advancement of Science’s annual science and technology (S&T) policy forums have also explored the issues surrounding federal funding of science and innovation. But the government role in science and innovation extends far beyond R&D funding. A much broader approach that explores the data and theory that influence the full range of decisions that make up S&T policy is needed.

Policymakers often talk about a “U.S. innovation system,” within which different sectors are funded to perform specific roles and connect, tightly or loosely, to one another in well-known ways. Policymaking thus tends to focus on the level of funding and its distribution among sectors. But the framework used to study federal policy no longer fits the reality. As expressed in two recent National Academies reports, Measuring Research and Development Expenditures in the U.S. Economy (2004) and Understanding Business Dynamics (2007), there is increased recognition that the categories are outdated. Measuring R&D, for example, notes that the models of innovation that underlie the data are “increasingly unrepresentative of the whole of the R&D enterprise,” omitting factors such as “the growth of the service sector, the growing … role of small firms in R&D, the shift in funding from manufacturing R&D to health-related R&D, changes in geographic location, and the globalization of R&D.” Further, some scholars are describing new modes of “open innovation” occurring outside corporate R&D departments, and indeed often outside firms entirely. These trends transcend traditional vocabulary and concepts. We cannot assess the economic value of federal programs without an up-to-date understanding of how the economy functions and how innovation happens.

The rigorous but narrowly focused studies of the effect of the Bayh-Dole Act on university patenting and licensing, for example, do not examine whether changes in the practices of U.S. universities are leading multinational firms to shift their research support to universities and institutes in other countries. If this happens, it could also reduce the number of foreign nationals who enroll in U.S. universities and often stay as faculty, corporate researchers, or entrepreneurs to make significant contributions to the U.S. innovation system.

National and state science and innovation policies and programs are founded on a grab bag of theories, big and small. Big theories include the contributions that S&T are asserted to make to national defense, economic competitiveness, and human understanding of the physical world. Smaller theories address geographic, institutional, and demographic equity; specific perceived scientific or technological opportunities; and various short-lived policy fads. Although some of these rationales rest on firm empirical evidence, such as the suboptimal levels of private-sector investment in basic research and the contributions of scientific and technological innovation to national and regional economic growth, others hinge on symbolic value or wishful thinking. Some are simply wrongheaded, requiring agencies and performers to torque their activities to meet wasteful, vague, or counterproductive performance expectations and outcomes.

The fundamental problem with these unreliable analytic methods is that they can lead to flawed decisions about R&D policies and programs. For example, the Office of Management and Budget (OMB) relies heavily on the notion of market failure and the supposedly quantifiable contribution that federal R&D can make to economic growth as the measure of the effectiveness of programs, even as contemporary economic analysis finds it impossible to quantify with any precision the effect that a given unit of knowledge will have on economic performance. At the other extreme, other evaluations lack precision altogether because they fail to include explicit statements of program objectives, agreement on what constitutes success, use of comparison or control groups in evaluations, or consideration of the value of foregone alternatives. These observations point to the need to take seriously Will Rogers’s adage that “It isn’t what we know that gives us trouble, it’s what we know that ain’t so.” We need to develop a more rigorous R&D evaluation system built on a foundation of current reliable data and new analytic methodologies.

Calls for research

To experienced participants in U.S. R&D policy debates, Marburger’s 2002 call for a new science of science policy that provided for “a systematic way of ordering the opportunities so finite resources can be invested to best effect” sounds a familiar refrain that dates back at least to the 1960s, when Alvin Weinberg advanced his intrinsic and extrinsic criteria for scientific choice. In effect, many new evaluative and predictive techniques, such as bibliometrics, patent analyses, foresight, and roadmapping, have been designed to provide evidence of past and current performance to better inform prospective actions. Recent calls for even more new models suggest that whatever progress may have been made along these lines over the past 40 years, it is still deemed inadequate, at least at the level of program and budget detail needed for policymaking. For example, the recent National Academies report A Strategy for Assessing Science (2007) concludes that “No theory exists that can reliably predict which research activities are most likely to lead to scientific advances or to societal benefit.”

The absence of a reliable model for predicting the value of R&D investments, at least for the near term, means that answers to the S&T policy questions cited above will likely continue to be made on the basis of a combination of expert judgment and competitive peer-review processes. This observation is not, however, an uncritical reaffirmation of the views expressed in various recent reports that expert review is the most effective means of evaluating federally funded research programs or that peer review is an international gold standard for evaluating science and engineering proposals. A recent stream of empirical research and participant observation has found that peer-review decisions can be skewed by a number of systemic factors, including panelist attitudes toward scientific risk and their cognitive maps concerning the structure of knowledge, small-group dynamics, the number and ordering of criteria, voting rules, instructions provided by funding agencies, and the intervention of program managers. In particular, the limitations of peer-review mechanisms, at least as conventionally implemented, are increasingly evident in at least three important areas: capacity to forecast trends in fundamental research not only within but across fields of science, receptivity to discontinuous or transformative research, and acceptance of interdisciplinary research.

Numerous modifications to peer-review procedures, including changes in the way reviewers are chosen and in the voting rules, have been proposed. Some have even suggested that a little randomness be included. These variations deserve to be tested because we know that the current system is not perfect.

If there is one area of S&T policy where there should be a solid research base, it is human resources. But as one examines the rationales for policy prescriptions in recent congressional testimony and high-profile reports, it becomes clear that there is little empirical evidence for the claims of shortages or surpluses in scientific and technical personnel. Recent legislation, however well-intentioned, reflects a time-honored tradition of making policy on the basis of simplistic analysis and simple solutions.

The enormously influential National Academies report Rising Above the Gathering Storm, which calls for recruiting 10,000 new science and mathematics teachers annually, essentially ignores the impressive database of human resource surveys with roots as far back as the 1950s, as well as subsequent analyses by sociologists and economists who provide important insights into S&T career patterns. Research on the S&T labor force has failed to address the challenge of how the federal government can most effectively pursue national S&T objectives in the context of historic divisions across levels of government for K-12 education as well as distributed responsibilities among agencies.

The unfavorable international comparisons of student math and science achievement that underpin much contemporary U.S. discourse and policy also only scratch the surface of a complicated phenomenon. Many industrialized countries with high-scoring students are facing a science and engineering recruitment challenge that looks much like that in the United States. A reasonable hypothesis is that major structural shifts in the global economy are connected to the recruitment issues; demand for technical talent is increasing in other parts of the world and dampening immigration rates. The S&T policy research communities have barely begun to look at the connections among these trends or to generate policy options under these conditions.

The statistic that most sharply separates U.S. public-sector R&D spending from that of other industrialized economies is the prominence of defense-related spending in the United States. Because of the size of the U.S. defense R&D enterprise, researchers need to devote more attention to its effect on nondefense technological innovation; the competitiveness of U.S. industries; regional growth patterns; and the recruitment, training, and distribution of S&T personnel.

For example, although documented examples exist of major industries, including computers and aerospace, that have started or been sustained by military R&D, efforts of major defense contractors to diversify into the civilian sectors suggest that military requirements often do not stimulate the development of products that can survive in these markets. Relatedly, military installations are spread around the country, but military laboratories and contractors are more spatially concentrated. Does an R&D-intensive military activity produce different effects in local economies than does a more standard installation?

The role of politics

Science policy, to the extent that it addresses specific national objectives and the production of benefits for different constituencies, is manifestly a political process, with the usual battles among the branches of government, various constituencies, and warring ideologies. All this is obvious.

What is less obvious—or at least not systematically understood—is how these actors and factors interact over time to produce specific outcomes. Retrospectively, one can perhaps account for the constellation of influences that led to the doubling of the National Institutes of Health (NIH) budget. But can this help us understand what is likely to become of efforts to “balance” this effort by increasing the budget for the physical sciences and engineering while the NIH budget is held constant? The rich economics and political science literature about the social dynamics shaping the formation, force, and staying power of coalitions of interest suggests that the future will not be explained by anything as simple as a desire for balance. More important, given the increasingly widespread view that sharp discontinuities in the level of federal R&D funding disrupt scientific careers and result in unwise investments in physical infrastructure, what changes in the structures, processes, or criteria by which the executive and legislative branches form budgets might produce a more predictable funding pattern, and thus a more stable and productive scientific enterprise?

Similarly, with a view toward understanding and perhaps predicting the outcomes of the dynamics of political processes on S&T policy, how are coalitions of interest formed to promote or oppose specific initiatives? For example, what coalitions, using what arguments and with what influence, might be expected to form around recently advanced proposals to increase the Small Business Innovation Research set-aside above 2.5%?

Any large-scale federal S&T program can be expected to generate some positive results, be it peer-reviewed publications or improved performance of a technology. Conversely, the inherent uncertainty surrounding R&D, especially basic research undertakings, guarantees that there will be a number of projects that can be described as unsuccessful. Without clearly defined expectations and initial agreements on what S&T programs are designed to achieve, current reviews of federal S&T undertakings, whether by OMB or a congressional committee, are unavoidably subjective.

Scientific research has always been an international activity, and the rapidly evolving geopolitical environment is of critical importance. As recently as a decade ago, U.S. policymakers could talk confidently about broad S&T leadership. A National Academies committee recommended that the United States seek to be a leader in some fields of science and a fast second in others. Yet as the evidence of growing strength elsewhere mounts, the United States clearly needs to plan for a future in which it is among the world leaders, rather than the dominant force, in most areas of science and engineering. Given the high cost of major scientific undertakings, increased patterns of international scientific cooperation, the polycentric location of R&D laboratories by multinational firms, the emergence of new and potentially significant contributors to international science, and rapid diffusion and transfer of scientific and technological knowledge across national borders, what strategic principles should guide U.S. S&T policy in a world increasingly characterized by complex sets of interactions—partly cooperative, partly competitive, partly independent—with other countries?

These questions also spill over to innovation policy and concerns about U.S. economic competitiveness. For example, if scientific leadership shifts to other countries, are U.S. industries equipped to be fast followers? Is their absorptive capacity equal to their innovative capacity? Will U.S. universities be as capable of providing a window on the rest of the world as they have been of providing a window for the rest of the world? Under what conditions will U.S.-based economic activities thrive in the new global economy?

Research on these topics has to move fast to keep up with the changing realities. But existing national industry surveys do not do much to track international patterns, and national data sets on the workforce include scant information on the increasing flows of scientists and engineers among countries and regions. With a few exceptions, the research on international S&T collaboration is not charting this process. Part of that literature focuses on international collaboration in Big Science, a game played mostly by the rich countries. A large part focuses on the movement of technology through foreign aid programs but neglects the greater volume of technology transfer through multinational firms. There is thus considerable scope for new research on the S&T elements of relationships in a developing world economy. Because the dominant theoretical model relies on the idea of science as a self-organizing system, there is plenty of scope for testing the effect of policy interventions in that system. Mapping technology transfer through its public and private routes will be an important part of the agenda, and studying the conditions under which technology transfer builds lasting capacity and continuing partnerships will also be useful.

Not surprisingly, many items on the above agenda are familiar ones. Their inclusion reflects the perennial challenges that researchers and policymakers confront as they try to reduce the uncertainties and complexities surrounding processes of scientific discovery and technological innovation. Given these challenges, we should have modest expectations about the new science of science and innovation policy. Perhaps even more so, modesty may be needed on the part of those advancing claims about new and improved theories, models, or algorithms. As we have demonstrated, serious gaps exist in the knowledge base on which new theories must be built. The success of the current initiative for a science of science policy will depend less on having more pieces of data than on how well the pieces are put together. Increased dialogue between the policy and research communities is a necessary precondition for completing the puzzle successfully.

Research Funding via Direct Democracy: Is It Good for Science?

On November 2, 2004, California voters passed the California Stem Cell Research and Cures Bond Act of 2004, popularly known as Prop 71. Its stated purpose and intent were, among other things, to:

  • “Protect and benefit the California budget: … by funding scientific and medical research that will significantly reduce state health care costs in the future; and by providing an opportunity for the state to benefit from royalties, patents, and licensing fees that result from the research.
  • Benefit the California economy by creating projects, jobs, and therapies that will generate millions of dollars in new tax revenues in our state.
  • Advance the biotech industry in California to world leadership, as an economic engine for California’s future.”

With its passage by 59% of voters, Prop 71 became part of California’s constitution. It created the California Institute for Regenerative Medicine (CIRM), a new state agency to administer $3 billion of state bond–funded stem cell research over 10 years. This affront to the Bush administration’s restrictive policy on embryonic stem cell research received an enthusiastic response from scientists in California and elsewhere in the nation and around the world.

But as is usually the case beyond the initial hype and hope surrounding an emotionally driven public issue, the devil is in the details of designing and implementing good policies. And in this case, much can be learned from the delicate dance in the real world among elected policymakers, advocates for research on specific diseases, public interest groups, and the media. As other states try to emulate California’s example, it is important now, more than three years since Prop 71 was passed and its implementation began, to examine the process and the unintended consequences of this approach to funding.

Public funding of a particular research area through the drafting and passage of a topic-specific proposition poses a set of important challenges for research funding, including the unintended consequences of disruption of traditional legislative discussions of thorny public policy issues; fundamental shifts in the involvement and influence of some stakeholders, in this case disease and patient advocates; the Balkanization of research; and managing the expectations of the voters who fund science, particularly if outcomes are not certain. Those interested in science policy, stakeholders, policymakers, and the public ought to consider these challenges as other states consider imitating California’s approach.

An emerging trend

In states around the nation, research funding decided by direct voter participation is challenging the standard way in which public policy has been made. Traditionally, federal grants flowing from science mission agencies fund most U.S. basic science, primarily via a peer-review system. State funding usually supports state universities’ research infrastructures or programs of particular importance to the state. Changes in federal and state science funding policies in recent years, however, are causing the creation of new models, such as propositions at the ballot box, that warrant a close examination of their intended and unintended consequences. Whether out of frustration at the slow pace of action, impatience with the intrigue of political gamesmanship, or discomfort with compromises required during the political process, special interest groups and influential individuals are turning to direct democracy, rather than representative democracy, to shape public policy in their own interest.

“PROPOSITION 71 IS DESIGNED TO BREAK THE POLITICAL LOGJAM AND TURN THE HOPE FOR NEW CURES AND THERAPIES INTO REALITY USING TAX-FREE STATE BONDS TO SUPPORT STEM CELL RESEARCH AT CALIFORNIA’S HOSPITALS, MEDICAL SCHOOLS AND UNIVERSITIES.” YES ON 71: COALITION FOR STEM CELL RESEARCH AND CURES

In California, direct democracy occurs primarily through propositions, determined by direct vote of the people at the ballot box. Famous examples of direct voter decisionmaking are Prop 13, which limited the rise of property taxes relative to increases in real estate assessments, and Prop 209, which made racial preferences illegal as a basis for admission to public universities. This direct method of influencing policy emerged nearly 100 years ago during the California Progressive Era, when individuals, concerned that large businesses such as railroads were having an inordinate influence in the state capital, developed a mechanism to assert some control over policymaking. Successful propositions generally become part of the state constitution, making them exceedingly difficult to amend or repeal.

Prop 71 made its way to the ballot primarily through the efforts of Robert Klein, a California real estate developer whose teenaged son is afflicted with juvenile diabetes. Klein was frustrated with the Bush administration’s limits on stem cell research and the pace at which the state legislature was moving to fund stem cell research efforts, despite its recently enacted policies intended to draw stem cell research to California. Klein used his considerable wealth and wealthy contacts to underwrite Prop 71’s campaign, “Yes on 71: Coalition for Stem Cell Research and Cures.” Klein’s privately funded effort included drafting the language for Prop 71, securing its inclusion on the fall 2004 ballot through a signature drive, a persuasive multimedia communications effort, flyers mailed to voters, a Web site documenting scores of endorsements from state and national organizations, and strategically timed television ads using celebrities from Hollywood as well as the biomedical research community.

After Prop 71’s success at the ballot box, Klein led the effort to establish CIRM at breakneck speed, as specified in the proposition. Prop 71’s official language (eight-plus pages of fine print that challenges even those with the best eyesight) stated that the work of CIRM was to be overseen by an Independent Citizens Oversight Committee (ICOC) composed of 29 members representing patient advocate groups, major public and private academic medical centers, private research institutions, and commercial life-science companies in California. Klein was named ICOC’s chairman. Prop 71 designated state elected officials to select, within 40 days of the election, ICOC members from specific patient advocacy groups. The governor, for example, selected representatives from Alzheimer’s and spinal cord injury advocacy groups, and the president pro tempore of the Senate appointed a patient advocate for HIV/AIDS. Other elected officials selected members from specified disease advocacy groups. This potpourri of representation that now governs the workings of CIRM was in and of itself a new experiment in research oversight.

Because of all the attention generated by the Prop 71 campaign, the media and public watchdog groups cast a glaring light on CIRM, a new state agency being established quickly from scratch. CIRM was also emerging under the scrutiny of irked legislators, who, by the proposition’s carefully crafted language, were cut out of the process of legislating stem cell research funding, overseeing the formation of CIRM, or revising any of its policies. In fact, Section 8 of Prop 71 prevents the legislature from making any changes until after “the third full calendar year following adoption, by 70 percent of the membership of both houses of the Legislature and signed by the Governor, provided that at least 14 days prior to passage in each house, copies of the bill in final form shall be made available by the clerk of each house to the public and news media.” Legislators, displeased by this tying of their hands, showed their dissatisfaction in various ways in the intervening years by trying to impose more oversight and control through a series of legislative attempts, such as regular auditing of CIRM (in addition to the financial reporting required by Prop 71), rules for securing the eggs to be used in research, and the control of intellectual property (IP) derived from state-funded stem cell research.

When the ICOC held its first mandated meeting in mid-December 2004, CIRM existed in name only. It had no fixed location, executive or administrative staff, or guidelines or policies for its work. It also had no funding stream, because the process of selling bonds to fund the agency had not yet been established. The ICOC’s first order of business was to hire executives who were both experienced in managing biomedical research and up to the challenges of establishing a new, highly visible state agency. Once hired, that inaugural group’s immediate attention turned to the drafting of policies of the three required working groups (scientific and medical research facilities, funding, and medical and scientific accountability standards) for subsequent ICOC approval. Among the issues to be dealt with were conflict of interest, IP rules for recipients of funding from both nonprofit and for-profit institutions, grant submission and peer-review procedures, and a 10-year institutional strategic plan, all of which needed to be drafted, approved, and in place before research grants could be funded. It was no surprise that Klein’s initial goal of awarding initial research grants by May 2005, just six months after the November election, was not met.

In an odd twist of fate, funding was delayed because the state could not sell bonds to fund stem cell research due to a series of legal challenges by anti–stem cell research/antiabortion groups that challenged the constitutionality of CIRM on various grounds. As those cases were consolidated and worked their way through to the state supreme court, CIRM had more time to establish its policies in key areas, and the press, watchdog groups, and legislators had more time to examine every move in detail.

In order to maintain momentum in the face of legal challenges, Klein secured $5 million from private donors to fund CIRM’s operations in the first year; during the next year, he raised another $45 million from private donors for what were termed “bond anticipation notes,” which would be repaid only if bonds were ever sold. Those funds, which were used for training grants in research institutions around the state, elicited another lawsuit. In July 2006, Governor Schwarzenegger lent CIRM an additional $150 million to begin funding research grants. This move by the governor was politically significant because it occurred one day after President Bush vetoed bipartisan legislation that would have relaxed federal restrictions on stem cell research.

In mid-May 2007, nearly 30 months after its formation, the last legal challenge to CIRM’s right to exist and administer $3 billion in research funds ended when the state supreme court refused to hear the appeal of litigation challenging the constitutionality of Prop 71. As a result, the state finally sold the first $250 million in bonds to fund research in October 2007, a full 35 months after the vote that established CIRM. The irony is that in the nearly four years since California’s grand experiment began with the passage of Prop 71, relatively little new public funding has reached stem cell researchers in California, even though nearly $260 million in grants has been approved for research in 22 institutions around the state. More funding will flow later this year, after the approval in May 2008 of $271 million in state funding to build stem cell research facilities in 12 institutions (with $560 million in required matching funding from charitable donations and institutional reserves) and when grants are awarded for new faculty, disease teams, the development of new cell lines, and the creation of new tools and technologies.

Lessons learned

The Prop 71 experience should serve as a cautionary tale for anyone who believes that more direct democracy always leads to better policy. First, direct voter determination of policy without the input of elected legislators can result in unintended consequences by bypassing the traditional bipartisan and bicameral legislative debate about thorny public policy issues.

By excluding legislators from participating in the creation and design of CIRM, the framers of Prop 71 were short-sightedly taunting the state’s most powerful and skillful political players. Some legislators, most notably the then-chair of the Health and Human Services Committee, reacted by using their seniority and media savvy to mobilize often rancorous public attention around specific hot-button financial and ethical issues that, in large measure, were already being addressed by CIRM and the ICOC. CIRM, with its newly hired executive staff (who collectively had scores of years of expertise in research administration in universities, industry, and federal agencies), had rapidly begun developing policies and guidelines in all these areas through an open process that was subject to a high level of public scrutiny and accountability through members of the ICOC and the state’s open meetings laws. Nevertheless, legislators grabbed the attention of the press and numerous special interest groups with well-publicized hearings and proposed legislation. Because this effort involved little participation by scientists, it devolved at times into an unfortunate “us versus them” conflict that made it impossible to produce any useful substantive guidance for CIRM and ICOC. A painful irony is that many of the legislators leading efforts to scrutinize CIRM’s activities were among the state’s earliest and most ardent stem cell research advocates. Given California’s strict term limits, these legislators were reluctant to sit on the sidelines of an activity they cared about deeply.

A related unintended consequence was the circumvention of critical basic policy processes. In particular, public discussions about IP derived from the bond-funded initiative revealed deep misunderstandings by the legislature, the media, and the public about how basic research is conducted and the ownership of new knowledge derived from creative discovery. For many months, public discussions were bogged down in legislative and media rhetoric asserting that “since we Californians are paying for the research, then we own it.” Comments showed up regularly in the media and in hearings that revealed a failure to understand, or an unwillingness to acknowledge, generally accepted federal research grant policies, such as the Bayh-Dole Act, that govern IP ownership and provide the nation’s current framework for technology transfer resulting in commercial products or, in this case, new treatments and therapies. IP policies for nonprofit and for-profit organizations that will receive grants from CIRM were eventually established, accompanied by vigorous and rancorous public discussions that often produced more heat than light.

A second outcome of this exercise in direct democracy is that it concentrated a significant amount of power in a small group of self-designated disease and patient advocates. Those most active in promoting Prop 71 emerged with lasting political power. Whereas a number of disease and patient advocate groups have six- or eight-year terms on the ICOC, there is no formal mechanism for other groups that may have a stake in the advancement of stem cell research to influence policy and direction. Moreover, stakeholders who may have differing opinions, strategies, or priorities are left out of the discussion and have no access other than through highly structured public meetings. The danger of this arrangement is that research priorities are being influenced primarily by their potential applicability to certain disease categories. This is not an effective way to choose basic research projects, and it excludes too many knowledgeable people from the priority-setting decisions.

CAMPAIGN BROCHURE MAILED TO ALL CALIFORNIA VOTERS, FALL 2004: “A RECENT ECONOMIC STUDY … CONCLUDED THAT THE RESEARCH SUPPORTED BY PROP 71 WILL GENERATE UP TO $12.6 BILLION IN NEW STATE REVENUES AND HEALTH CARE COSTS SAVINGS DURING THE PAYBACK PERIOD— PROVIDING A RETURN ON INVESTMENT OF 236% OR MORE.” YES ON 71: COALITION FOR STEM CELL RESEARCH AND CURES

A third unintended consequence is the Balkanization of research. For example, within research institutions, new facilities will be built with Prop 71 funds. Although these new facilities will usually be welcome at public and private research institutions across the state, they also create a problem: institutions may need to build redundant facilities rather than share facilities that receive federal funding, because federal rules do not allow embryonic stem cell research on any cell lines developed after August 2001.

Within states, the funds flowing to stem cell research through the proposition process may pit this area of research against other research interests and needs in the state, such as energy, regional climate change, water resources, land use management, and so on. Among states, the creation of special state interests set in policies may impede research or the transfer of research results elsewhere in the nation or around the world. For example, policies that govern the use of IP by firms within California versus outside of California may potentially limit the easy flow of research results across boundaries. And when conflicts or internal inconsistencies exist between state and federal interests and laws, the federal regulations will prevail.

The fourth, and arguably the most serious, consequence of direct voter participation is the necessity of managing public expectations. The campaign rhetoric in support of Prop 71 created very high expectations. Comments from the public, legislators, public interest groups, and the media after the election exposed unfettered optimism about the pace of creating new knowledge and applying it, reflecting a belief that within 10 years new treatments and therapies would be ready for those with debilitating illnesses. This is an impossibly short time for yielding applications from basic biomedical research. Public comments also revealed the strong expectations that uninsured Californians should have low-cost access to new treatments and therapies developed with any fraction of CIRM funding, and that a percentage of net licensing revenue from them (presumably from a succession of blockbuster products) be returned to the state’s general fund. Those expectations are now incorporated in CIRM’s policies for IP and revenue-sharing requirements.

Implications

Our system of government was intended to be a representative democracy, an admittedly messy and inefficient system, but one well-designed to put single goals in the context of other state needs. Manipulation of the public to bypass state legislators to support the most recent hot research areas is not a good way to make science policy or public policy. Among the unintended or intended consequences are the lack of deliberation around key issues and a failure to reach consensus about the goals of new public policy.

Moreover, creating a process that excludes legislators and reserves powerful positions for the representatives of specific disease groups unnecessarily limits the inputs and drivers of what ought to be an open, public, and dynamic process that can be responsive to new knowledge and opportunities as they emerge. Stakeholders who were not identified early on will continue to have severely limited ability to make contributions to the policymaking process for stem cell research in California.

Finally, the process of selling a research proposition directly to the public necessarily means that important details such as the time it takes to discover and translate new knowledge into new treatments and therapies and the real costs and benefits of research will not be included in the bumper-sticker sales pitch. We know from decades of research on public understanding of science and technology that although the public doesn’t know a lot about scientific or technical issues, in general it trusts the research community. The direct democracy process makes it too easy for scientists to be careless with that trust by becoming participants in campaigns that make unrealistic promises in the quest for research funding.

Although research scientists may not have been the primary drivers of Prop 71, they were willing participants in the campaign to market “stem cell research and cures” to the public. In doing so, they abdicated their responsibility to be sure that the realities of research and innovation were explained to the public.

As for the marketing experts who sold Prop 71 to the public, most have moved on to other nonresearch positions. They will not be held accountable for not meeting the expectations they helped create. When the public becomes disillusioned, it is likely to lose confidence in researchers and research—and not just stem cell research. That is not good for science.

Many in the scientific community regarded Prop 71 as a victory for scientists and research funding. A little reflection will make it clear that making science policy by public referenda is a risky high-stakes game. Basic science research is too important and too dependent on the continuity and stability of public funding to be subjected to the transitory whims reflected in direct democracy.


Connecting Jobs to Education

Contrary to popular opinion, attaining at least a bachelor’s degree is not the only, nor in all cases the best, route to success. Nor is it the norm. Most jobs do not require a bachelor’s degree for entry, and most Americans—including most young adults—do not have a bachelor’s degree.

What makes a bachelor’s (or higher) degree so appealing is not that demand for this level of education is high, but that jobs requiring higher levels of education tend to pay more. In addition, within a job, those with higher education (at any level) tend to earn more; for example, a dental assistant with an associate degree earns about 20% more than a dental assistant who has only a high school education. However, in some jobs, particularly computer-related jobs, other forms of training and experience can substitute for a college degree. For example, although a bachelor’s degree is the entry requirement for computer systems analysts, 35% of younger analysts do not have a bachelor’s degree.

A significant number of sub-baccalaureate jobs offer better than average salary, and many of these are expected to grow. Many of these higher-paying jobs are technical or supervisory, so that although they may not require a bachelor’s degree, they do require job-specific technical training or supervisory skills. How can people get these skills? Numerous education and training opportunities (other than baccalaureate education) exist to help people train for the vast number of jobs that require only moderate amounts of training or higher education. For the sake of students and workers, it is important to acknowledge and encourage these routes to learning and labor market success. They fall into five broad categories:

  • occupational (vocational) education in high school;
  • sub-baccalaureate postsecondary credentials (occupational certificates and associate degrees);
  • post-high-school coursetaking, from a postsecondary institution, employer, professional association, or other organization (sometimes leading to occupational certification or licensure);
  • formal apprenticeship programs (federally sponsored programs that combine on-the-job training with classroom instruction);
  • informal learning activities, such as seminars, web-based tutorials, and mentoring programs.

Obviously, adults can engage in more than one of these learning opportunities over a lifetime, a year, or even during a week or a day; it is not possible to say how many do so. But one hint at the size of this learning enterprise can be gleaned from a national survey of adult education, conducted by the U.S. Department of Education. For 2004-05 this survey found that 31% of all adults age 16 or older had, over 12 months, taken courses for a sub-baccalaureate degree, been in an apprenticeship program, or taken other work-related courses. Most adults also participate in informal learning activities, with estimates ranging from 75% of all employed adults to 96% of workers in establishments with at least 50 employees. Participation trends in these activities are hard to gauge, but participation in the most common of these activities—work-related course-taking—has hovered around 30% (40% for employed adults) since 2001.

Although attaining a bachelor’s degree is often (but not always) a route to higher pay—else why spend the requisite time and money?—other forms of education and training serve a broad swath of the population, including many who go on to earn relatively high salaries. These alternative learning sources should not be overlooked as important contributors to the economic success of individuals and society. Education policymakers, teachers, and guidance counselors should give due consideration to these options for students. Rather than encouraging all students to pursue one type of education, we should encourage in all students a lifelong interest in multiple routes to learning.

Not everyone needs or will need a bachelor’s degree

About 70% of jobs require no college education for entry, and an additional 9% require only postsecondary education below the baccalaureate level. Labor market demand is changing over time, but not very quickly; the distribution of job openings in 2016 will roughly mirror the current labor market, with 69% of openings requiring no college education, and 9% requiring a sub-baccalaureate credential. In other words, the large majority of new job openings will not require a bachelor’s degree.

Job entry requirements, all 2006 jobs; Job entry requirements, job openings 2006–16

Workers young and old eschew excess education

In line with labor market demand, about 70% of adults ages 25 to 29 do not have a bachelor’s degree. Further, only 38% of 18- to 24-year-olds are enrolled in a 4-year college.

Highest level of educational attainment, adults ages 25 and older: 2007; Highest level of educational attainment, adults ages 25–29: 2007

No degree does not mean low pay

Some sub-baccalaureate jobs do offer better-than-average pay. The Bureau of Labor Statistics estimates that 360 occupations fall into this category, including high-demand jobs such as truck driver, repair and maintenance worker, carpenter, and executive secretary/administrative assistant, as well as jobs in the fast-growing health and IT sectors, such as radiologic technician, dental hygienist, licensed practical nurse, and computer support specialist.

Median annual earnings in 2004 for selected jobs in which most workers age 25–44 do not have a bachelor’s degree

Apprenticeships play small but important role

Apprenticeships make up a relatively small but important part of the training system for new workers. Only about 470,000 adults are enrolled in apprenticeship programs, but this number masks the importance of this training for job entry. In 2007, new apprentices comprised 4% of new jobs. Because most apprenticeship programs are in the construction trades, the number of apprentices tends to follow labor market demand in this area. Thus, the number of apprentices has grown in recent years, but may decline in the near future if the building industry contracts.

Number of registered apprentices, 2002–2007

High schoolers value occupational courses

The formal education system serves as a training route for both job entry and lateral or upward job mobility. High school typically provides the first opportunity for job-specific training, through career/technical education offerings in occupational fields (e.g., business support, agriculture, or health science courses). The vast majority (92%) of public high school students take at least one occupational course, with 46% taking at least three such courses.

Percent of public high school students taking occupational courses: class of 2005

Popularity of occupational courses not fading

In spite of pressures from increasing high school graduation requirements and a resulting increase in academic coursetaking, participation in occupational education has remained constant in recent years, in terms of the percentage of students taking occupational courses and the average number of credits earned in these courses.

Average number of credits earned by high school students in each curricular area: classes of 1990, 2000, and 2005

Sub-baccalaureate programs very popular

There are almost as many students enrolled in sub-baccalaureate postsecondary programs as there are in baccalaureate programs. Looking at college entry directly after high school, about 30% of recent graduates enter less-than-4-year postsecondary institutions, whereas 40% enter 4-year institutions. But the sub-baccalaureate sector is particularly likely to serve older adults, so that overall, the ratio of sub-baccalaureate to baccalaureate students is about 9 to 10; about 43% of undergraduates are in sub-baccalaureate programs, compared to 47% in baccalaureate programs. (The remaining 10% are non-degree students.) Similarly, 1,422,000 sub-baccalaureate credentials were awarded in 2005-06, about 90% of the number of baccalaureate credentials awarded (1,562,000).
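
To see how these figures hang together, here is a minimal arithmetic sketch (in Python; the shares and credential counts are simply those quoted above, and the variable names are illustrative, not drawn from the underlying survey files):

    # Quick check of the "about 9 to 10" and "about 90%" comparisons cited above.
    sub_bacc_share = 0.43        # share of undergraduates in sub-baccalaureate programs
    bacc_share = 0.47            # share of undergraduates in baccalaureate programs
    sub_bacc_awards = 1_422_000  # sub-baccalaureate credentials awarded, 2005-06
    bacc_awards = 1_562_000      # baccalaureate credentials awarded, 2005-06

    print(round(sub_bacc_share / bacc_share, 2))    # 0.91, i.e., roughly 9 to 10
    print(round(sub_bacc_awards / bacc_awards, 2))  # 0.91, i.e., about 90%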

Percent of undergraduates enrolled in the sub-baccalaureate sector: 2003–2004

Number of degrees has grown and will continue to grow

The number of sub-baccalaureate (as well as baccalaureate) credentials awarded by postsecondary institutions has increased in recent years, and is projected to increase through 2016-17, as rates of participation continue to rise.

The number of postsecondary credentials awarded and projected to be awarded: 1996–2017

Prominence of sub-baccalaureate degrees holds steady

At least in more recent years, sub-baccalaureate education has held its own as postsecondary participation has increased; the share of undergraduate credentials awarded at the sub-baccalaureate level has fluctuated between 45% and 49% since 1996-97, remaining relatively steady at about 48% since 2003-04. Projections for credential awards in future years are restricted to associate and bachelor’s degrees, and suggest that both degrees will increase in number, although associate degrees will decline slightly relative to bachelor’s degrees, dropping from 32% to 30% of undergraduate degrees. It is less clear what will happen to the market for postsecondary certificates, but these are likely to grow in the health and IT sectors.

Percent of undergraduate credentials that are sub-baccalaureate: 1996 to 2006

Many degree seekers fail to finish

Left out of analyses of credential awards are adults who started but never completed a college program. Over a six-year period, about 35% of college students leave school without a credential. This percentage is higher for students who initially seek a certificate or associate degree than for those who seek a bachelor’s degree. Most of these students do not appear to be leaving because they got what they needed. Among sub-baccalaureate students, only 7% of leavers say they left because they took all the courses they wanted, whereas about half report that they left due to job, family, or financial demands. Similarly, about 45% of baccalaureate leavers report that job, family, or financial demands were the main reason they left. With so many students facing practical constraints on college completion, alternative paths to learning become even more important to ensure that all adults can develop the skills needed to remain competitive in the labor market.

Percent of postsecondary students who left school without a credential, six years after starting: 1995–2001

Community Colleges under Stress

It is now generally recognized that a high-school diploma is no longer sufficient to achieve a family-supporting income in today’s society. Society is increasingly divided by income, and income is highly correlated with education, with higher earners having at least a four-year degree. Real hourly wages for those with only a high-school diploma fell slightly between 1973 and 2005, from $14.39 to $14.14. Wages for those with a college degree but no postgraduate training rose modestly during the same period, from $21.00 to $24.67. Those with postgraduate training did even better.

Individuals have increasingly recognized the benefits of more education, which has led to big increases in applications to colleges at all levels. At the same time, higher education, especially a four-year college degree, is becoming more costly. As a result, more individuals are applying to two-year community colleges, which now enroll almost half of all college students, including disproportionate numbers of minority and immigrant students.

Many community colleges are finding it difficult to deal with this new enrollment onslaught. As publicly funded institutions with limited resources, community colleges must deal more than ever before with the challenge of educating students who are both more disadvantaged and less prepared for college work. They are often asked to fulfill numerous missions, including providing academic, vocational, noncredit, and enrichment courses to their communities, and playing a role in local economic development. Although the colleges differ considerably from one another in terms of the missions they are willing to undertake, there is a core mission shared by virtually all community colleges of enabling low-income students and those with relatively weak academic achievement to continue their education and acquire useful skills. They face three key challenges: unprepared students, financial stress, and high dropout rates.

The remedial education challenge

At most colleges, substantial numbers of incoming students are not prepared for college-level work in at least one of the basic subjects of mathematics, reading, and writing. Students who do not pass placement exams in math and English must first take a remedial class before beginning regular class work. Today, it is a rare institution that does not have a significant share of its student population enrolled in remedial courses, with community colleges bearing the brunt of the problem. In the fall of 2000, 42% of first-year students at two-year public schools enrolled in at least one remedial course, compared with 20% at public four-year schools and 12% at private four-year schools.

Remedial courses are controversial for many reasons. Students don’t like them, because they feel that having earned a high-school diploma, they are ready for college work. The courses take time and money and postpone the earning of college credits and the attainment of degrees and certificates. Placement into remedial courses may result in some students leaving college earlier than they otherwise would have. The public complains that it is paying for students to retake subjects they should have mastered in high school. College faculty and administrators, blaming deficiencies in the K-12 system, are often frustrated by the difficulties of serving an unprepared student population.

There is some basis for student complaints, because placement tests are not a reliable predictor of success in remedial courses. In a 2007 study for the Connecticut Community College System, Davis Jenkins and I found only a weak correlation between placement test scores and success in remedial or “gatekeeper” courses (courses required to earn a degree or to continue on to higher study in a particular subject). Part of this is due to the inability of a single high-stakes test to accurately measure student skills and to predict future outcomes, and part is due to the inconsistency of the curriculum and grading of the same course across different instructors. This argues for greater standardization in curriculum and grading, as well as applying more care to student placement. More attention could be paid to individual students, their motivation, and their records in order to improve placement, but this would be costly.

Moreover, it is not clear that remedial instruction is effective. There has been little rigorous research on this topic. One study of students in Ohio, by Eric Bettinger and Bridget Terry Long, found that those who took remedial courses had better outcomes, including higher levels of retention and degree completion. They noted, however, that remediation is costly: more than $1 billion annually for public colleges alone. In another study, Juan Carlos Calcagno found that remediation increased the likelihood that students would enroll in the subsequent fall term but made no difference in their chances of passing their first college-level courses, completing associate degrees, or transferring to a four-year school.

It is unfortunate that so many students must take remedial courses. How could we reduce the number? In addition to improving the K-12 system, it appears that the best strategy is to improve coordination between high schools and colleges.

Today, it is rare to find much coordination between high schools and community colleges, despite the fact that they typically are located near one another. High schools and colleges operate in their own institutional silos, with limited communication. It would require substantial reform to better coordinate them. All we have currently are some programs that aim to give students a taste of college while still in high school. In the absence of a seamless system that aligns high-school curricula with those of colleges, such programs can at least give students an idea of what is needed for college-level work.

Traditionally, college-level work was restricted to top high-school students, who enrolled in advanced placement (AP) courses. Increasingly, students are taking dual-enrollment courses—college-level courses taken while a student is still in high school that count for both high-school and college credit. These courses, which are often offered to a broader spectrum of students than AP courses, can give students a better sense of what is required to succeed in college and may prompt them to improve their college preparation. They may also cause students to raise their aspirations. A 2007 study by Melinda Mechur Karp and her colleagues found evidence that dual enrollment can boost postsecondary outcomes in some contexts.

Another program currently being tried is the early-college program, in which students combine their work toward a high-school diploma with work toward an associate degree or two years of college-level credits to transfer to a four-year institution. The program aims to raise aspirations and expectations and to offer students who work hard in high school the rewards of reduced time in college and earlier entry to the workforce. The largest such initiative, the Early College High School initiative (www.earlycolleges.org), is funded by a number of large foundations and targets schools with high numbers of poor and minority students. Although these programs have the potential to boost high-school graduation rates in schools where they are relatively low, to improve college preparation, and to improve postsecondary outcomes, there is not yet any hard evidence that they are effective.

Another promising strategy that could potentially improve the efficacy of remediation is the “learning community,” in which groups of students take a number of courses together, with faculty coordinating the teaching. In this arrangement, a remedial course can be coupled with a college-level subject course. A student might take a remedial English course in combination with college-level history and sociology courses. The reading and writing in the remedial course would use materials from the college-level course.

Learning as a group can create a sense of teamwork and connectedness that can improve student motivation and success. In addition, if students see a connection between remedial courses and college-level success, it may motivate them to work harder in the remedial courses. An experimental evaluation by Susan Scrivener and colleagues of learning communities at a community college in Brooklyn, New York, found some positive impacts of the program. Students in the learning community had better outcomes during the semester in which the learning community was implemented and completed the college’s remedial English requirements faster.

Financial stress

Financial pressures are increasingly prompting states to direct more students to community colleges, where costs are far less than at four-year schools. Full-time community college teachers are paid substantially less than their counterparts at four-year colleges. In 2007, according to the Chronicle of Higher Education, the average salary for a full professor at a doctoral-level public university was $106,495, compared with $68,424 at a two-year institution with academic ranks and $50,474 at a two-year institution without academic ranks. Community college teachers also typically have a much heavier teaching load.

Most community colleges, however, lack the funding to handle increasing enrollments, creating pressures for cost cutting. “Nonessential” services such as counseling are often the first to go. The number of part-time faculty members has been increasing, with more of these teachers working at more than one campus. Research indicates that student engagement with a campus can promote retention and positive outcomes; heavy use of part-time faculty who are unlikely to have attachment to any one campus is not likely to promote such engagement.

Community colleges get most of their funding from three sources: local property taxes, state allocations, and student tuition and fees. Revenue from the last of these is limited because tuition and fees are so low, averaging about $2,400 annually compared with about $6,200 at public four-year colleges, according to the American Association of Community Colleges. (In contrast, tuition at elite colleges is much higher; for instance, it is $37,000 at my school, Columbia University, which allows for many more services to students.) State funding fluctuates as economic and political conditions change, and community college leaders increasingly complain that they are not receiving enough state support even to keep up with inflation and enrollment increases.

Unlike elite colleges, community colleges also have little in the way of endowment funds to draw on. The philanthropic community needs to recognize this inequality, although it is an uphill battle, given that much of the elite colleges’ endowments are raised from wealthy alumni.

Major foundations, at least, have recognized the critical role of community colleges in serving less-advantaged populations. They have funded projects such as the Ford Foundation’s Bridges to Opportunity program (www.communitycollegecentral.org), which focuses on state-level initiatives to help community colleges, with a particular focus on improving outcomes for low-income adults. With its grant, the community college system in the state of Washington developed a communications strategy aimed at marketing the system in order to demonstrate its value to key constituencies such as legislators, business leaders, and potential students, with an eye to increasing state support for the system. One result of this strategy was the creation by the state legislature of an “opportunity grant” program, which provides grants to low-income students to help them persist in college.

Poor student outcomes

For a variety of reasons, degree-completion rates at community colleges tend to be relatively low. Take, for instance, California, which, like most states, operates a stratified higher-education system, with the University of California (UC) on top, followed by the California State University (CSU) system, and finally the community college system. Many students have been shunted away from the first two systems and into community colleges, partially as a cost-cutting measure.

In a 2007 study, Nancy Shulock and Colleen Moore found that of public undergraduates in California, 73% attended a community college, 18% attended a CSU campus, and 9% attended a UC campus. Of the community college students, 60% were seeking a degree, with the remainder attending for reasons such as personal enrichment and obtaining job skills. Among all degree-seekers, only 24% were eventually able to transfer to a four-year school or obtain an associate degree or a certificate within a six-year period.

Students have high educational aspirations: Of high-school seniors who participated in the 2002 Education Longitudinal Study, 69% expected to earn a four-year degree or higher; another 18% expected to either earn a two-year degree or at least attend some college, according to the U.S. Department of Education. Actual degree-completion rates are much lower.

Bachelor’s degree–completion rates for students who begin their careers at four-year colleges are relatively high. Three studies (one from 1992–2000, one from 1994–2000, and one from 1995–2001) found that these rates ranged from 60 to 70% within a six- to eight-year period. However, the results for community college students are much less positive. In a 2006 study, Thomas Bailey, Peter Crosta, and Davis Jenkins found a six-year graduation rate for community college students of slightly less than 40%, for a cohort starting in fall of 1996. The graduation rate varies greatly from institution to institution because schools vary in many dimensions.

In addition to being unprepared for college work, community college students often lack knowledge of how to succeed in college. They often do not have adequate study skills. They may be the first in their families to attend college, so they have not learned much about college life from parents or peers. They may not know how to interact with faculty and how to make use of college resources such as the library, the counseling center, the computer lab, or the tutoring center. They may not be well briefed on their college’s programs and how they are connected to careers, and they often do not know what they need to do in order to earn a degree or to transfer to a four-year institution. They often do not know how to balance their school, work, and personal lives. If they are given all of this information, they tend to become more engaged with the campus, which can lead to higher retention and greater student success.

As a result of the many difficulties students face in adjusting to college, many campuses have developed special courses that aim to improve student information. These courses go by various names, such as “student life skills” courses or “student success” courses. An initial examination of courses in Florida, which I conducted in 2007 with Juan Carlos Calcagno and Davis Jenkins, found a positive correlation between taking these courses and outcomes such as degree completion and transfer to a four-year institution, although a causal relationship was not established. Despite these positive results, most community colleges lack the incentive to provide such courses, in part because state funding is awarded on the basis of total enrollments rather than on retention or successful outcomes.

Tracking students and measuring outcomes could help community colleges improve, because by doing so, the colleges could begin to evaluate their programs and engage in a process of continuous improvement, similar to the approach taken in the business world. Yet most institutions track their students only to the extent required for reporting to state agencies and the federal government. The foundation-funded Achieving the Dream (www.achievingthedream.org) project is a promising new program aimed at helping colleges begin the process of tracking students and measuring student outcomes, and then devise strategies for improving these outcomes.

Some states have started performance-incentive programs for community colleges, rewarding them financially for improved student outcomes. These programs have been, to date, quite modest in magnitude and appear to have achieved only modest results, according to a 2006 study by Kevin Dougherty and Esther Hong. Seeing larger effects would probably require stronger incentives, but this is controversial, because community colleges argue that if they were given sufficient resources to do their job in the first place, they would be more successful. The state of Washington is currently putting in place a system with modest incentives, with plans to make them stronger over time.

The road ahead

A number of approaches have been taken to tackle the problems of unprepared students, financial stress, and high dropout rates, but a few stand out. Obviously, if the K-12 system did a better job of preparing students for college, the problem of unprepared students would greatly diminish. Better integration is needed between the K-12 and college systems; they should communicate continuously and increasingly function like a single system.

Limited finances are hobbling the ability of community colleges to fulfill their multiple missions. Although more money is not the solution to all problems, it is clear that it is better, all things being equal, to be less reliant on part-time faculty and to allocate money to support strategies such as tutoring and mentoring, supplemental instruction, individualized student counseling, and programs that help students succeed in college. More financial assistance to students, including help with travel and child care expenses, could help reduce dropout rates. Incentive programs may also be helpful. An experiment in Louisiana, according to a 2006 paper by Thomas Brock and Lashawn Richburg-Hayes, found that scholarships that required students to maintain a certain level of enrollment and performance improved student outcomes.

Finally, more attention needs to be paid to improving student success in college. Performance incentives may be helpful, so that colleges are financed not just on the basis of enrollments but also on the basis of successful outcomes. Obviously, it would be better if such incentives were created with additional funds, instead of cutting into base funding, which is currently insufficient. Tracking of students must be improved, so that it can be determined which programs and which colleges within state systems produce the most favorable outcomes. As with all of the above, continued experimentation and rigorous evaluation of programs are needed to find solutions that work.

Forum – Summer 2008

Controlling health care costs

Robert Louis Stevenson wrote, “These are my politics: to change what we can; to better what we can.” Health care in the 21st century requires a change from old ways of thinking and doing business to better the lives of all Americans.

The articles by Peter Orszag (“Time to Act on Health Care Costs”) and Elliott S. Fisher (“Learning to Deliver Better Health Care”) in the Spring 2008 Issues are just two of the growing number of articles discussing the broken health care system in America. Orszag reports that the runaway costs of the Medicare program are driven by rising costs per beneficiary, not solely by the increasing number of older adults. Fisher rightly states that this increased cost, due to more frequent physician visits and hospitalizations, referrals to multiple specialists, and frequent use of advanced imaging services, varies across the country and does not better the lives of patients. Perversely, higher spending seems to lead to less satisfaction with care and worse health outcomes.

Fortunately, it is possible to improve the health of individuals, families, and communities while controlling costs. Congress is acting now to introduce legislation to improve the care that Medicare beneficiaries receive. For example, it is my hope that Medicare legislation that Congress intends to pass this year will include policies to improve the quality of care patients receive and increase access to health promotion and disease prevention services. Further, the Senate Committee on Finance has set an aggressive agenda of hearings this year to identify additional strategies for health care reform. As part of this series, on June 16, we will convene a full-day Health Summit to bring together health care leaders and Congress to explore viable strategies to improve the health of Americans.

The demand for health care reform goes beyond Medicare and Medicaid. The overall quality and cost of care must be addressed. Public and private payment systems can be tools used to obtain more appropriate health care while controlling costs. Payment should support clinical decisions based on the best available evidence, instead of irregular local practices. The judicious and appropriate use of technology will improve the lives of Americans and stimulate innovation. Because approximately three-fourths of the health care dollar goes to treating the complications of chronic illness, preventing disease and carefully managing chronic conditions are vital, and we should target the most expensive medical conditions first.

In the 16th century, Richard Hooker wrote, “Change is not made without inconvenience, even from worse to better.” This remains true in the 21st century. We know what needs to be done and how to do it. We have the political will to make the necessary changes. It will require continued commitment from all health care providers and payers to meet the expectations of the American public.

Senator Max Baucus

Democrat of Montana


When it comes to health care, we all want the best. We want the latest, most sophisticated care for ourselves and our loved ones. We want the steady, unencumbered march of medical innovation that will bring us tomorrow’s treatments, cures, and preventions. And we want this high-quality, accessible, and forward-moving care at a fair and sensible price. Once a year, most Americans make a price-driven decision about our choice of health insurance. Throughout the rest of the year, we don’t want money to be a factor in the decisions made about our care.

The articles by Peter Orszag and Elliott S. Fisher present thoughtful perspectives in the ongoing debate about medical costs, quality, and access. The authors note the significant variations in health care costs and practices across different regions of the country and point out that certain higher-cost practices do not necessarily translate into better outcomes. They advocate for a system that balances care, efficiency, and costs in part by identifying best practices, sharing them openly, and embracing them willingly. Orszag also demonstrates that the aging of our population is likely to drive inexorable increases in our society’s health care spending.

Health care providers should clearly look to one another to share practices that make the most sense for patients and for society. We need to rigorously study and analyze promising ideas and methods that can bring greater value to our health care system, and we need to adopt these best practices widely and universally. Controlling health care costs will also require stronger effectiveness reviews of drugs, devices, and technology to evaluate whether new products are a good value and what their reimbursement should be. The recent widespread dissemination of the da Vinci robot is a good case study about the consequences of the absence of such a review system. Implementing electronic medical records and expanding information systems that enhance care coordination among providers and support clinical decisions also offer great cost-saving potential. Another important tool in cost control is process improvement, which identifies unnecessary steps and removes waste from the system while enhancing quality and safety.

Adopting best practices and injecting more uniformity into health care should help curb health care costs, but this approach won’t work in isolation. We also must take a hard look at payment reform. It is important, for instance, to find ways to reward those who practice evidence-based medicine, those who offer effective disease management programs, and those who provide ongoing preventive care. In addition, we need to embrace well-designed pay-for-performance programs that provide appropriate incentives to do the right thing for patients, but no more or less.

The road leading to high-quality care, ongoing innovation, and cost control is long and winding. But given the national attention to these issues, now is the time to make choices and decisions. We must act quickly as a nation to develop the necessary policies and programs. Doing so may allow us to trim the unnecessary fat in our health care system and not be forced to cut into the muscle.

PETER L. SLAVIN

President

Massachusetts General Hospital

Boston, Massachusetts


Peter Orszag suggests that the United States could reduce growth in health care costs by pruning low-hanging fruit—unwarranted variation—from the invasive vines of health care spending.

For more than 30 years, John Wennberg, Elliott Fisher, and colleagues at Dartmouth have documented remarkable regional variations in Medicare spending across the United States. Dartmouth Atlas research has shown that the quality of care is actually worse when spending and use of care (more visits and tests) are greater. Fisher says that if all U.S. regions would safely adopt the organizational structures and practice patterns of the lowest-spending regions, Medicare spending would decline by about 30%.

This demonstrates a tremendous opportunity for providers to improve quality and decrease health care costs; in other words, to increase the value of the care we provide to patients. Here are two ways to move the health care industry in that direction.

Coordinate care. Traditionally, physicians have been trained in a competitive environment that rewards knowledge and independence. Yet we know that the highest-value care is delivered in regions where providers work in teams in various organizational models. Patients, particularly those with chronic or complex illnesses, need and deserve coordinated care, in which physicians are team members, working as partners with patients, families, nurses, and other health care professionals.

Pay for value. Because our current reimbursement system rewards piecework (more reimbursement for performing more visits, diagnostic tests, and procedures), it’s natural that U.S. health care is laden with such piecework. To get the value we want, payers should begin to reward those who deliver high-quality care at a lower cost over time. Currently, physicians who offer efficient high-quality care are financially penalized. In addition, our current system does not reward providers who offer coordination of care for patients, who often do not need a physical visit to the doctor’s office.

Orszag suggests that moving from a fee-for-service to a fee-for-value system, in which higher-value care is rewarded with stronger financial incentives, could yield the largest long-term budgetary gains. Fortunately, some standardized data, including the Dartmouth Atlas (measuring cost) and the Medicare Provider Analysis and Review File (measuring mortality), are now publicly available. In fact, Medicare could use these data to change the way it pays by giving fee increases only to those providers that are delivering value to their beneficiaries. Taking that step is not the long-term solution but would definitely create incentives to increase the value of health care.

It’s possible to restrain swelling health care costs, and we don’t have to sacrifice quality to do it. Coordinating care and reforming the way we reimburse providers can help us move along the path toward this goal.

JEFF KORSMO

Executive Director

Mayo Clinic Health Policy Center

Rochester, Minnesota


Middle East economics

Howard Pack’s analysis of East Asia’s economic successes rings true (“Asian Successes vs. Middle Eastern Failures: The Role of Technology Transfer in Economic Development,” Issues, Spring 2008). The importance placed on education; stable macroeconomic policies and strong institutions; high rates of private investment; an openness to trade, investment, and new ideas; and a commitment to compete in the global economy are all factors in this economic success.

Unfortunately, in recent decades, growth in the Middle East and North Africa has not been as uniformly strong, although some countries in the region have done very well. I am encouraged, however, by recent signs of success. Over the past five years, economic growth across the Middle East and North Africa has averaged over 6% per year—the longest sustained growth performance since the 1970s. This has been accompanied by the creation of new jobs for the people of the region and improvements in social indicators.

Oil price rises have been part of the story, but only part. Recent gains are being driven by governments stepping back to make room for a dynamic private sector to introduce new technology and ideas to generate growth, jobs, and opportunities.

Egypt is one country that has embarked on wide-ranging reforms to open its economy, attract investment, and encourage innovation. The World Bank’s Doing Business report suggests that Egypt was the fastest reformer in the world in 2007. Progress can also be seen in other parts of the region: in Saudi Arabia, the Gulf states, Morocco, Tunisia, and Jordan, to name only a few.

Of course, there is a long way to go. The Middle East and North Africa face major challenges in providing jobs for the 80 million young people who will be entering the workforce over the next 15 years. Long-term solutions to address the impact of current food price increases on the poorest people will require improvements in agricultural productivity and efforts to ensure that growth both continues and is broadly based.

Responding to these opportunities and challenges, World Bank President Robert Zoellick has heard from leaders of the region and has committed to making the development of the Arab world one of the bank’s six strategic priorities.

The bank looks forward to working with partners in the region—governments, the private sector, and civil society—to scale up support to help them meet their own development goals. During extensive consultations, there has been a consensus on key priorities: supporting further reform and greater economic integration between the Arab world and the global economy to generate opportunities and jobs; improving the quality of education to encourage innovation and give young people vital skills; drawing more fully on the talents of the entire population, including women; strengthening the management of scarce water resources and overcoming environmental challenges; and providing opportunities for countries that remain in conflict.

I hope that Pack may have the opportunity to extend his valuable analysis of the lessons that can be drawn from the development experience of East Asia and the Middle East and North Africa to other regions, including Latin America.

I applaud the gains made by the East Asian region over the past few decades. At the same time, we should not forget that the countries of the Arab world have a proud history as the place where writing, advanced mathematics, navigation, and the early instruments of global trade were first introduced. They have led the world in the past, and I am hopeful that they can grasp current opportunities by capitalizing on existing and evolving knowledge to position themselves as leaders in fields such as renewable energy, water management, and financial services.

JUAN JOSÉ DABOUB

Managing Director

World Bank

Washington, DC


Homeland security research

“The R&D Future of Intelligence” by Bruce Berkowitz (Issues, Spring 2008) accurately portrays many of the significant technology-related challenges—and opportunities—confronting the intelligence community today and suggests four pragmatic “strategy principles” that could lead to a stronger R&D posture. I would, however, offer a few additional observations.

The major challenges cited by Berkowitz are “two developments—changes in the threat and changes in the world of R&D.” Although this is certainly true, I believe it is the feedback loop that exists between these two developments that is of even greater concern. Taken in combination, ongoing changes in both areas amplify the effects and demand from the intelligence community a degree of agility that is not delivered today. The potential for technology surprise is growing, while the community’s ability to rapidly exploit state-of-the-art technologies is declining.

The remedies proposed by Berkowitz have considerable merit; again, however, I would go a bit further. He first recommends more engagement between the intelligence community and the broader R&D community, suggesting that this would, over time, build a larger pool of cleared scientific personnel. Although necessary, I do not believe this is sufficient. I would augment his recommendation to include the identification of technologies that are today being driven by academic researchers and/or commercial interests and the development of aggressive strategies to engage with those R&D communities in unclassified settings. When separated from mission specifics, many of the technological capabilities needed by the intelligence community are well aligned with the needs of other organizations. This is particularly (but not uniquely) true in areas relating to information analysis, collaboration, and dissemination.

I fully agree with the need to ensure that intelligence community researchers are informed by real-world problems without being captive to those problems. The notion of prudent risk-taking to foster high-payoff innovations is an important one. I am hopeful that the Intelligence Advanced Research Projects Activity will, over time, mature to fill the void that exists in this area today.

I also endorse the point Berkowitz makes several times about the need to align incentives with desired outcomes—from a community perspective. Although some problems are common to virtually all agencies within the intelligence community and would benefit from enterprise-wide solutions, other problems exist at the boundaries between organizations and will require collaboration simply to frame the problem so that a solution can be developed. Unfortunately, the intelligence community today is weak in both dimensions, and these are not issues that will be resolved by simply investing more in R&D.

Finally, I would observe that although Berkowitz’s article is specific to the intelligence community, the challenges and opportunities he identifies are not. Issues spawned by the globalization of science and technology are ubiquitous and, unfortunately, most governmental institutions are not well positioned to either exploit the opportunities or counter the challenges.

RUTH DAVID

President and CEO

Analytical Services

Arlington, Virginia


Nuclear safeguards

In “Strengthening Nuclear Safeguards” (Issues, Spring 2008), among the five factors considered by Charles D. Ferguson to explain why the International Atomic Energy Agency (IAEA) is confronting a crisis in its ability to detect undeclared nuclear activities, I believe the most relevant one is the “limited authority for the IAEA to investigate possible clandestine nuclear programs.” There is no doubt, as he rightfully stresses, that safeguards agreements have loopholes that can be exploited to develop nuclear weapons programs. It is therefore important to identify these loopholes and suggest practical measures to plug the gaps.

One of the main problems is that, contrary to what many experts believe, even when a state has ratified an Additional Protocol to its comprehensive safeguards agreement with the IAEA, agency inspectors still don’t have “access at all times to all places and data and to any person” as provided for under the IAEA Statute.

This was made clear in the agency’s November 2004 report on Iran, which stated that “absent some nexus to nuclear material, the Agency’s legal authority to pursue the verification of possible nuclear weapons related activities is limited.”

It would also be a mistake to believe that the IAEA’s board of governors can “give the safeguards department the authority to investigate any persons and locations with relevance to nuclear programs,” as Ferguson suggests. Indeed, the board’s resolutions are not legally binding; only resolutions adopted by the United Nations Security Council under chapter VII of the UN Charter are mandatory.

Although it is correct to say that the Board “has usually been reluctant to exercise its existing authority to order a special inspection,” special inspections are not a panacea. In the case of the alleged construction in Syria of a clandestine nuclear reactor with help from North Korea, for instance, it is likely that if a special inspection had been requested by the IAEA, Syria would have had time to remove incriminating evidence before IAEA inspectors could access the site.

Under a comprehensive safeguards agreement, “in circumstances which may lead to special inspections … the State and the Agency shall consult forthwith.” As a result of such consultations, for which there is no time limit and which would normally take weeks or months, the agency “may obtain access in agreement with the State.” If the state refuses, the board of governors can decide that an action is essential and urgent, but that procedure can also take weeks.

A special inspection will therefore be useful mainly in cases where nuclear material or traces thereof cannot be removed, as would have been the case in 1993, with the request to access a waste storage facility in North Korea. If Syria had wished to dispel any suspicion that nuclear-related activities were taking place at the al Kibar site bombed by Israel in September 2007, it could have invited the IAEA to conduct a special inspection there immediately, as Romania did in the early 1990s to verify previously undeclared activities. Had Syria had an Additional Protocol in force, agency inspectors would long since have requested “complementary access” to the implicated site.

As underlined by Ferguson, a strengthened safeguards system is crucial to international security. The international community knows what could and should be done to improve the nonproliferation regime. What is missing is the political will to act.

PIERRE GOLDSCHMIDT

Nonresident Senior Associate

Carnegie Endowment for International Peace

Washington, DC


Charles D. Ferguson has covered a complex subject succinctly yet thoroughly. He ably draws out the main contemporary safeguards themes, especially how to improve the capability to detect clandestine nuclear programs, manage an expanding safeguards workload, and ensure that there is political will to deal with violations. In this brief response, I will highlight some key issues for the safeguards system.

Although seen by many as a deterministic system, safeguards are actually about risk management: identifying and addressing proliferation risk with the authority and resources available. The principal proliferation risk has always been from unsafeguarded nuclear programs involving states outside the Non-Proliferation Treaty (NPT) or non–nuclear-weapon states (NNWS) with undeclared nuclear activities (thereby being in violation of the NPT). In the non-NPT states (India, Israel, and Pakistan, and depending on one’s legal view, North Korea), most facilities are outside safeguards. Bringing the nuclear programs of the nuclear-weapon states and the non-NPT states under appropriate verification is the goal of the proposed fissile material cutoff treaty.

In the 1970s and 1980s, the principal proliferation indicator for NPT NNWS was thought to be the diversion of nuclear material from safeguarded facilities. Hence, safeguards developed as a facility-based system with an emphasis on material accountancy. Discovery of Iraq’s clandestine nuclear activities prompted a major program by the International Atomic Energy Agency (IAEA) with member-state experts to redesign the safeguards system, especially to better address the issue of undeclared activities. As a consequence, safeguards are changing to an information-driven, state-level system. The essential foundation for strengthened safeguards is the Additional Protocol (AP), giving the IAEA more extensive rights of access and information. States must do more toward achieving universalization of the AP—it is high time that all nuclear suppliers made the AP a condition for supply.

Current proliferation challenges have come from clandestine activities, not mainstream nuclear programs. More needs to be done to develop detection capabilities for undeclared programs and to redirect safeguards efforts toward this problem. Detection of undeclared nuclear activities is a major challenge; the IAEA cannot be expected to do this unaided, as the agency can never match the intelligence capabilities of a major state. States need to be more willing to share information with the agency, particularly on dual-use exports and export denials and intelligence information. For its part, the IAEA must be more active in using its existing authority, including special inspections and rights under its Statute.

One area that might be given further attention concerns the IAEA’s processes, especially on compliance. Particularly with the Iranian case, politics have intruded into the deliberations of the IAEA’s board of governors. This may be difficult to avoid, since the board comprises representatives of governments, but the board’s work was made more complicated through the agency’s involvement in negotiations with Iran. The IAEA’s Statute is clear: Questions relating to international peace and security are to be referred to the United Nations Security Council. Enabling the agency to deal more effectively with proliferation cases depends on both improving its technical capabilities and refocusing on its technical responsibilities.

JOHN CARLSON

Director General

Australian Safeguards and Non-Proliferation Office

Barton, Australia

John Carlson is a former chair of the IAEA’s Standing Advisory Group on Safeguards Implementation.


Protecting migration routes

Almost all birdwatchers in the United States who have been actively involved in their hobby for more than a decade probably have the sense that there are now fewer migratory birds than there were when they first began birding.

In fact, one birder, Barth Schorre, who photographed birds for 30 years on the Texas coast, recently recounted his concerns about the years from 1977 to 2004, when he observed and photographed spring migrants at a single 3.5-acre site in south Texas: “Over the years I became aware that I was not only seeing fewer species, but also fewer total numbers of birds. Looking back through my log books I can see that on a typical spring day in the 1980s, a list of migrant species filled a page to overflowing. More recently I am logging the observations of three or four days on a single page.”

Concerns about spring bird migration and many of the world’s great animal migrations, including scientific data that support the observations of amateur naturalists, are highlighted by David S. Wilcove in “Animal Migration: An Endangered Phenomenon?” (Issues, Spring 2008). Working as I do for the American Bird Conservancy, an organization dedicated to conserving wild birds and their habitats throughout the Americas, I am naturally engaged in an effort to address the issue that Wilcove so eloquently addresses, the same one that is being witnessed firsthand by birdwatchers across the country.

Fortunately, the issue has also raised concerns among some politicians. Reps. Ron Kind (D-WI) and Wayne Gilchrest (R-MD) have recently introduced legislation to fund efforts to help protect migratory birds. The act, H.R. 5756, reauthorizes an existing law, the Neotropical Migratory Bird Conservation Act (NMBCA), but at significantly higher levels, to meet the growing needs of our migrant songbirds, many of which are in rapid decline.

NMBCA supports partnership programs to conserve birds in the United States, Canada, Latin America, and the Caribbean, where approximately five billion birds of over 500 species, including some of the most endangered birds in North America, spend their winters. Projects include habitat restoration, research and monitoring, law enforcement, and outreach and education. To date, more than $21 million from NMBCA grants has leveraged over $95 million in partner contributions. Projects involving land conservation have affected about 3 million acres of bird habitat.

Under the new bill, the amount available for grants would increase to $20 million by 2015. We believe that this support can make an important difference in reversing the negative trends of many migratory songbirds.

MICHAEL J. PARR

Vice President

American Bird Conservancy

Washington, DC


David S. Wilcove calls attention to a major conservation issue. His frequent mention of migratory fish particularly hit home for me, a biologist who works with native fish in California.

California has the southernmost populations of 13 species of anadromous fishes (6 salmon, 2 sturgeon, 2 lampreys, and 3 smelt). In addition, the salmon, steelhead, and sturgeon have been divided into 22 distinct taxonomic units, most of them endemic to the state. All of these migratory fishes are in decline, and 11 have been listed as threatened or endangered.

The persistence of these fish is astonishing. Southern steelhead still migrate to the ocean from streams as far south as Malibu and San Diego. Four runs of Chinook salmon persist in the great Central Valley, although numbers are down considerably from the 1 to 2 million fish that once returned every year. The Klamath River still supports 10 species of oceangoing fish and multiple runs of salmon and steelhead.

As Wilcove points out, the fish have worth far beyond their harvest value. They are spectacular and iconic representatives of our wild heritage. They are also still part of our ecosystems. Even hatchery-driven runs of salmon coming up channelized rivers can support diverse wildlife, and nutrients from their flesh find their way into the grapes of riparian vineyards.

The abundance of migratory fishes results from a diverse topography, a long coastline, and one of the most productive coastal regions in the world. All of this means that, whether in fresh or salt water, these fish spend most of their lives in or off California. Thus, their rapid decline in recent years is entirely our fault; we have done everything bad imaginable to their watersheds.

Unfortunately, most Californians are unaware of these amazing runs of migratory fish and their status. Awareness comes only when water becomes less available because it is needed to sustain endangered fish. Thus, a federal court has ruled twice (for different species) in the past year that the giant pumps in the Sacramento–San Joaquin delta must pump less water, sending it south for agricultural and urban use. The plight of fishermen also made the news when fisheries were shut down because of the rapid decline of salmon populations.

What can be done? One view is that it is already too late for most migratory fish in the state; therefore the most we can do is maintain a few boutique runs to view in a Disneyland-like atmosphere. I prefer a more optimistic view, relying on the resilience of the fish to respond to major improvements in their habitats. Thus, I am involved in a fast-track effort to bring back Chinook salmon to 150 miles of the now-dry San Joaquin River, an incredibly complex task. Perhaps the biggest return from this endeavor will be for people of the San Joaquin Valley to see salmon in a living stream, a phenomenon that has been absent for over 75 years. I hope that Wilcove’s writings will inspire further efforts along these lines.

PETER B. MOYLE

Department of Wildlife, Fish, and Conservation Biology

Center for Watershed Sciences

University of California

Davis, California


David S. Wilcove provides a rich account of animal migration and an incisive analysis of the attendant conservation challenges. To his credit, he assiduously avoids the doom-and-gloom approach that pervades much environmental literature and alienates most of the public. So, despite my concern that bad news is rarely motivating, I worry that conserving migrations may be more difficult than the author suggests.

The first challenge that Wilcove identifies is the coordination of planning across borders in light of the large distances involved in many animal migrations. And the scale of movement points to a deeper problem. Although Wilcove proposes that “animal migrations are among the world’s most visible and inspiring phenomena,” it seems that many, even most, are invisible to the public, in substantial part because we are not used to perceiving the world at such a scale. People generally know that songbirds “head south” for the winter, but few understand the enormity of the space these creatures traverse. I’d wager that not 1 in 10 Wyomingites knows that their state hosts the third longest overland mammal migration in the world—the movement of pronghorn from the Red Desert to Grand Teton National Park. And returning to Wilcove’s geopolitical worry, when one realizes the challenge of making this pathway, which is entirely within Wyoming, the country’s first National Migration Corridor, the difficulties of protecting pathways that cross national borders become daunting.

The second problem that Wilcove raises is that of protecting animals while they are still abundant. Of course, we know that plentitude is no guarantee against extinction, as evidenced by the Rocky Mountain locust, whose swarms were arguably the greatest movement of animal biomass in Earth’s history. But the deeper problem is that we’re not talking about saving species understood as material collections of organisms. Rather, as Wilcove indicates in the subtitle of his essay, we are proposing to safeguard processes. Ecologists are coming to understand that organisms, populations, species, communities, and ecosystems may best be understood in terms of what they do: A thing is what it does. So an ecosystem is nutrient cycling, soil building, and water purification. Likewise, migrating gray whales, cerulean warblers, and pronghorn are not objects but waves of life, in the same sense that a wave is not the moving water but the energy coursing through a fluid. We can no more conserve a migratory species by keeping a cetacean in a tank, a bird in a cage, or an antelope in a zoo than by storing DNA in a test tube.

My concerns regarding political coordination and philosophical conceptualization are not meant to dissuade anyone who wishes to conserve the endangered phenomena of animal migration. We desperately need clear, intelligent, impassioned, and hopeful voices such as that of Wilcove. But we must fully grasp the nature of the challenges that lie ahead, for humans and our fellow animals.

JEFFREY A. LOCKWOOD

Professor of Natural Sciences and Humanities

University of Wyoming

Laramie, Wyoming


Reforming medical liability

Frank Sloan and Lindsey Chepke are to be commended for calling attention to fundamental problems in America’s medical liability system (“From Medical Malpractice to Quality Assurance,” Issues, Spring 2008). The existing system compensates few patients who have been injured, its deterrent effect is limited, its administrative costs are very high, and it does little to improve patient safety. And, as they point out, the types of medical liability reforms that traditionally have generated political traction, including caps on noneconomic damages, do little to address these shortcomings.

Of course, these problems are not new, and a variety of both incremental and far-reaching reform proposals to address them have been advanced through the years by political leaders, academics, policy advocates, and interest groups. Among these, the nonprofit organization Common Good (of which this author is general counsel) has been active in promoting the concept of developing administrative health courts, with specialized judges, independent expert witnesses, and predictable damage awards. In many ways, the health court proposal bears a resemblance to workers’ compensation, as a structured approach to compensating a particular type of injury, with strong linkages to risk-management efforts intended to reduce errors. With support from the Robert Wood Johnson Foundation, Common Good has worked with a research team from the Harvard School of Public Health to develop a conceptual proposal for how this system might work and to identify opportunities to test this proposal in pilot projects.

As far as implementation is concerned, transformative reform proposals tend to face considerable political challenges. Still, there are a number of ways in which some variant of the health court/administrative compensation proposal might be adopted at the state level. It’s not inconceivable that a state legislature might establish a demonstration program for compensating certain types of injuries outside the tort system. It’s more likely, however, that a hybrid model might be created that linked some sort of error-disclosure program with structured arbitration. It might also include supplemental insurance (purchased by the patient to protect against the risk of injury) and/or scheduled damages. To align incentives between health care professionals and institutions, these initiatives might well be paired with enterprise liability or insurance.

Although these reforms may take different and daedal shapes, they are necessitated by the complex problems of America’s medical liability system. If these enduring problems were easy to fix, policymakers would have done so decades ago. Still, though solutions may be challenging, they’re far from impossible. Sloan and Chepke’s article and the book from which it is drawn will undoubtedly play a significant role in continuing to spur interest in promising reform alternatives.

PAUL BARRINGER

General Counsel

Common Good

Washington, DC

The Path Not Studied

Corporate executives, elected officials, political analysts, leading academics, and the rest of the national elite have formed a chorus of voices proclaiming the value of more and better education for all Americans. The message to the nation, particularly the young and disadvantaged, says in essence: Do what we did and you will have interesting, respected, and financially rewarding careers just like us. What’s more, the nation will be richer, public health will improve, and our democratic political system will function more effectively.

How could anyone object? This is the same advice these luminaries give to their own children as they push them along the path to a four-year college degree and graduate or professional school. The ethos of our meritocratic society holds that the same opportunities are available to everyone, and everyone can benefit from a rigorous academic education. The nation’s leaders have followed this path themselves, many traveling far from humble and difficult beginnings. They know it well, and they know it works.

The only flaw in this vision is that this path is not really open to everyone. And even if everyone did earn a college degree, most would still find themselves in jobs that do not require a college degree but that do require skills quite different from what they learned. The chorus is correct that Americans can benefit from more education and training, and many Americans recognize this. But tens of millions of people who accept this advice do not follow the well-known path through a four-year academic degree. They attend one of the nation’s 1,200 community or technical colleges, take courses at one of the roughly 2,700 for-profit career colleges, enroll in an apprenticeship program, or study for a certificate in an infotech or health care specialty. They make up the majority of young Americans who follow the path not studied.

We in the policy elite, particularly in science, technology, and health, devote enormous attention to the quality of the education that will produce future scientists, engineers, physicians, corporate executives, and entrepreneurs, and there is no doubt that the academic superstars make an invaluable contribution to the nation’s well-being. But we also preach that the nation’s success depends on the skills and contributions of people in every corner of the economy. Yet remarkably little attention is paid to the quality of community colleges or their for-profit competitors.

We endorse the notion that every high school student needs to be prepared to attend a four-year college, even though we know that more than a quarter of high school graduates will go directly to work and another quarter will pursue postsecondary education somewhere other than a four-year college. For the foreseeable future, only about 30% of U.S. jobs will require a baccalaureate degree. In offering advice to young people, it would be wise to learn a little more about the real opportunities that exist, the types of education and training they are actually choosing to pursue, and the quality of the educational options that are available and appealing to them.

Perhaps the nation would be better off if the elites spent a little less time fretting over whether U.S. News gives the higher rank to Yale or Princeton (with their combined 10,000 undergraduates) and a bit more to what the California community colleges are doing for their 1.6 million students or the University of Phoenix is doing for its 330,000 students. There is no doubt that Yale and Princeton are doing an estimable job, but do we have any confidence that the young people who are paying plenty to attend University of Phoenix classes are receiving the education that will benefit them and the nation?

The first challenge in evaluating the quality of community colleges is to identify their mission. We ask community colleges to make up for the deficiencies of the nation’s high schools, to prepare students to transfer to four-year programs, to provide practical career training for a diverse and rapidly evolving job market, to be responsive to the training needs of local companies, and to provide enriching classes in art, literature, music, and a variety of other fields that have no connection to work.

A school that offered popular arts classes and targeted employee training for people not seeking a degree and that had a large number of students who transferred to a four-year college before receiving an associate’s degree could be deemed a failure because of its very low degree-completion rate. Or a school that offered strong basic academic courses and granted many associate’s degrees to students who were primarily interested in a job credential might be hailed as a success. One reason for the popularity of the for-profit career colleges is that they are unambiguous about their mission of offering courses that will be immediately useful in winning a promotion or finding a new job.

Idealistic educators worry about tracking students toward courses that do not prepare them for a four-year college. They argue that everyone can benefit from a rigorous academic education. Although it might seem enlightened to pursue this course, does it really serve the interests of all students? Many school systems have deemphasized career and technical education in their high schools out of concern that this training limits students’ future opportunities. But there are many students who have no desire to attend college and who would be much better off if they received training that would qualify them for a better-paid job with just a high school diploma. Shouldn’t we pay attention to the fact that 92% of high school students take at least one occupational course in high school?

One reason that education is such a popular medicine for national ailments as different as income inequalities and global economic competitiveness is that everyone supports the idea without paying too much attention to the details. Although we might like to believe that the simple educational goal of enhancing the quality of standard academic education will do as much to close the wage gap as to expand the pool of scientists and engineers, it’s not that easy. One size does not fit all.

Enormous differences exist in the social, economic, and intellectual resources of young Americans, and these result in a broad diversity of interests and aspirations. Why would we expect them all to be well served by the same path through school? We do many of these young people a disservice by pretending that they can all follow the accepted route to the upper middle class. We need to be more curious about what these young people want and more honest about the educational options that will serve their needs. Postsecondary education is critically important to all young people and to the nation. But what type of education other than the four-year college?

One objection to a multi-track educational system is that it will reduce social mobility by providing only limited opportunities to some social groups. That could happen, but it need not. Being born into an affluent, well-educated family undoubtedly gives someone a head start on the road to a professional degree, but as the parents in these families know, it is no guarantee that these kids will be academic superstars or particularly motivated in school. The people with the talent and the desire to excel in school can come from any social group. We need to continue to work to improve the quality of schools in low-income neighborhoods to ensure that promising young people do not have their aspirations undermined. Likewise, when students choose a path other than a four-year college, we should make certain that the institutions they attend receive the resources and the attention necessary to provide high-quality training. The problem is that we have not given community colleges or other alternative education paths adequate resources, nor have we devoted enough attention to understanding what type of education and training should occur in these places or how we should ensure its quality.

Fortunately, a few farsighted experts have been curious about this question. You can read what they have found in this edition of Issues. This is no simple chore, and they cannot provide any simple answers. But at least they are looking in the right place and encouraging the rest of us to do the same.

Schools of Dreams: More Education Is Not an Economic Elixir

The idea that education is the key to economic success is widely and rightly popular—for individuals. A person who makes it through high school, then college, and then graduate school is likely to enjoy increased earning power with each new degree.

The idea that education is the key to the success of an economy as a whole is a relatively new concept, and the evidence for it is not especially compelling. In developing countries, many people lack even the fundamental literacy and numeracy skills necessary to participate in the modern economy, and that does hold back economies. But beyond acknowledging the value of achieving that basic level of educational competency, is there evidence that higher levels of education help national economic performance? The conventional economic wisdom has been that a nation’s economic strength is linked to physical capital, not human capital: A U.S. engineer is seen as much more productive than an equivalent engineer in India because the U.S. engineer is supported by an abundance of equipment, management systems, and infrastructure. Has something in the global economic system changed so that the overall education level of a country is now a key determinant of its economic prowess?

One attempt to link education levels to economic success relies on the common logical error known as “the fallacy of composition,” which assumes that what is true for an individual will also be true for large groups or for the society as a whole. To illustrate, the average U.S. software engineer earns about $90,000 per year. A high-school graduate with adequate ability who acquired the requisite training in software engineering could expect to earn that much. But if this year’s entire class of high-school graduates decided to become software engineers, there is no way that they would all earn $90,000 per year. Most of them would be unemployed initially because there are not enough jobs for software engineers, then we would see wages fall for those who could get jobs as software engineers, and eventually most would end up working in other fields.

Yes, more education enables an individual to make more money and be less subject to unemployment, but if everyone had that professional degree, they couldn’t all make more money. The reason, of course, is that wages are set by supply and demand. Those with professional degrees make more money only when there is a lot of demand for their skills relative to the supply available. Consider someone with a Ph.D. in history. Although earning this degree requires intelligence and a great deal of hard work, there are a great many such Ph.D.s relative to the demand for them, and as a result, the degree adds almost nothing to the degree holder’s earning power.

Some education boosters argue that the entry of a large number of well-educated baby boomers into the labor force resulted in tremendous growth in the U.S. economy. That was not the case. The 1970s were famous as the period of stagflation, marked by declines in economic growth accompanied by inflation. My colleague Michael Wachter became well-known among economists in that period for his evidence that the entry of the baby boomers into the workforce actually reduced productivity. Although better educated than older workers, their lack of experience made the boomers less productive on average. We baby boomers remember that in those years, college graduates who were lucky enough to find a job could expect to be underemployed for a long, long time. No one is arguing that the greater education level of the boomers caused the collapse of labor productivity, but rather that greater education is not a sufficient condition for improving economic performance.

Others worry that the movement of the boomers into retirement will create a shortage of educated workers, but that is not the case. The workforce as a whole is not shrinking, nor is the number of young people entering the workforce declining. The largest high-school class in U.S. history will leave school this year, and the class next year will be even larger. The U.S. labor force is projected to keep growing as far into the future as we can predict (not true in all countries, of course); only the rate of increase is expected to slow, trivially, in about a decade. It is impossible to say what effect, if any, that modest decline in the rate of increase will have on the economy. The situation regarding education levels is similar. The rate of increase in the average level of education achieved has slowed, but the average is still rising every year and, according to the National Center for Education Statistics, is projected to keep rising through 2017.

The more important question is whether average education levels affect the economy, and if they do, how. Economies can get into trouble if there is a demand for skills that can’t be met by the labor force, and education can supply some of those skills. Again, we can see this problem in many developing countries such as India and China, where a lack of basic education makes many individuals unemployable. But is there any evidence that the United States will soon face a problem like that? Consider this sobering statistic: The General Social Survey of the United States reports that 30% of adult workers have education levels that exceed the requirements for their current job, a figure that has actually gone up over time. Aside from a small handful of jobs such as nursing where there are infrastructure bottlenecks in training, there is no credible evidence of any overall skill shortage in the United States and certainly no evidence of an education shortfall.

Still, wouldn’t it be good for the economy to raise the level of education in the workforce? Let’s start with a simplifying but clearly wrong assumption that education is free and that it would cost nothing to raise the level of education in the workforce. What would happen to the economy if the average level of education in the workforce went up?

Unintended consequences

One immediate outcome of such a move would be to lower the wages associated with education, effectively lowering the price of educated labor to employers. The economist Richard Freeman described how the returns from education collapsed for baby boomers when their more-educated cohort hit the labor market. In this period, the talk was of “the overeducated American.” Lower wages could allow employers to hire more educated workers for the same jobs, making it possible to maintain an even more overeducated workforce at the same wage rate. Remember, employers today have little trouble finding workers with adequate education at all skill levels. The question is simply what that costs and whether being able to hire educated workers more cheaply would help employers.

How would this move affect the economy? Adding more educated workers will reduce the wages associated with education, and that will allow employers to hire a more educated labor force at the same price. The first point to note here is the obvious one: that expanding the supply of education, at least in the short run, hurts the wages of educated workers even though it will raise the education level in the economy. Will that, in turn, improve productivity? The evidence on this front is not at all encouraging. The key ingredients of increased worker productivity appear to have little to do with education levels. At the level of the company, the biggest improvements in performance come from closing old facilities and opening new ones with better equipment and more efficient production systems. It is not obvious what role, if any, a lower price for educated workers would play in that process, as it seems to have much more to do with physical than human capital. At the level of the establishment, many and perhaps most of the big improvements in productivity come through changes that eliminate jobs altogether: rationalizing assignments and consolidating roles, replacing secretaries with word processing software, etc. Interestingly, making educated workers cheaper actually reduces the incentive to replace them with the technology that would raise productivity.

It is at the level of the individual job where we might expect to see the real contribution of additional education to economic performance. In the years I spent co-directing the National Center for the Educational Quality of the Workforce for the U.S. Department of Education, we scoured research for evidence on the relationship between education and individual job performance. Clearly, additional education allows individuals to take on more difficult and better-paying jobs, but that’s the fallacy of composition argument if we’re talking about overall economic performance. When we look at workers doing exactly the same jobs, additional years of education have very little, if any, effect on improving job performance. It does matter a lot when levels of education are below basic job requirements: Teaching illiterate workers how to read instructions, for example, has a huge payoff. But increasing education beyond the level of the job requirements doesn’t have much effect on job performance.

To see why, consider the following experiment: Let’s take the current employees on a typical assembly line, put them through a Ph.D. program, and then place them back on the line. Should we expect their job performance to go up? Unless job requirements change and the work is redesigned to allow them to make use of their education, there is no reason to believe that more education will have any effect on job performance. The good news in the economy as a whole is that employers are giving employees more scope to contribute and that skill requirements, on average, have risen. However, there is no evidence that the additional requirements are taxing education levels and lots of evidence that education levels still exceed job requirements.

This is the fundamental problem with the “if we build it, they will use it” approach to increasing education levels. It assumes that once employers hire better-educated workers, they will redesign jobs to make full use of them; that they are just waiting for more-educated workers so that they can put in place more productive work systems. It’s an appealing prospect, but not one with any evidence behind it, at least in the developed world. If employers acted in this way, we would not have overeducated workers. There is no evidence from the research on work organization that a shortfall in overall education levels has kept employers from empowering their employees and expanding job requirements. Most employers do not have formal mechanisms to empower workers to contribute their ideas to workplace decisions.

A disheartening best case

For the sake of making the most optimistic case, let’s assume not only that education is free but that employers would be willing to raise job requirements if their workers brought more skill to their job. The next problem is that education per se does not equal work-related skill. No matter how flexible and accommodating the employer is, someone with a master’s degree in astrophysics is unlikely to be able to contribute much to the performance of a typical service worker. This is an obvious point but one that is easy to lose in discussions about overall education levels: Beyond a basic level, well below what we typically think of as postsecondary education, what matters to job performance is not generic education but education specific to the performance of particular jobs.

So let’s make yet another heroic assumption: that our education system can somehow match up in a very close way to job requirements; that we could produce graduates with more occupationally specific education that mapped precisely onto the occupations that were in demand. If we could get candidates with more industrial and mechanical engineering coursework into production jobs, for example, and employers would redesign those jobs to make use of the additional skills, would job performance and overall performance improve? Certainly. But given that the country already has a workforce that is overqualified for the jobs it is doing, the more useful intervention would be to find ways to get employers to redesign current jobs to take better advantage of the skills workers already have, rather than to load the workforce up with even more redundant skill.

Further, if we wanted to raise the average skill level in the workforce, it is not obvious that putting individuals through more academic coursework is the best way to do that. The evidence suggests that other forms of human capital—training and experience—are likely to pay off better in terms of performance and wages in the same job than does more traditional academic education. Anecdotal accounts from employers suggest that their biggest challenge is finding candidates with work-based competencies, the kind that are learned in employer training programs and through experience.

Society benefits in many ways from having a more educated population, as do individuals, and as a society, we have a collective interest in maintaining, even increasing, education levels. And there is no doubt that the country’s postsecondary education system is failing many young people. It is remarkably inefficient at getting students through to graduation. The rapidly rising cost of education is becoming an increasing constraint, exacerbating problems of access and inequality. For the individual, education remains the most important avenue of opportunity in society. One could also make a strong case that there is a segment of the workforce whose level of education is so low that it places them below the requirements of even the lowest-level jobs. The entire economy could improve if additional education could make them employable.

When we move to a discussion about the effects of overall education levels in society on the aggregate economy, however, we need to put a big caveat around the idea that increasing the level of education in the workforce as a whole will improve the overall economy. When employers are asked what factors are limiting improvements in productivity in their organization, they often don’t have obvious responses. If they did, they would have acted on them. That’s why they spend so much time and money seeking help from consultants in the quest to find better ways to manage their operations. In other words, it’s a mistake to assume that they are being held back in important ways from obvious improvements they would like to make. When they are asked specifically to identify workforce concerns, education issues are almost never at the top of the list. Instead, the top complaints focus on work-related behaviors and attitudes such as conscientiousness, motivation, and social skills. Issues associated with academic skills are far down the list, especially when one moves away from lower-level, front-line work.

Beyond the classroom

The best way to tackle concerns about skills is not by adding more years of traditional classroom education. It is to expand work-based education through programs at the workplace or those that attempt to combine work and classroom experiences. The nation will derive much more economic benefit from apprenticeships, school-to-work programs, and the close associations that many community colleges have developed between employers and classroom topics. If there is a skills problem in the United States, it lies in the area of work-based skills: Every employer wants someone who already has three to five years of experience, who already knows how to do the job. The cold reality is that many employers have either abandoned or cut back on training and work-based learning programs. Union apprenticeship programs have dried up so much that the U.S. Bureau of Labor Statistics no longer even reports them. Employers in industries such as information technology now expect to do no training of any kind, to hire just-in-time candidates who have exactly the skills they need precisely when they need them.

As employers have backed away from training and developing their own talent, they are effectively pushing the problem off onto the job candidates, who in turn look to traditional education providers for help. The shift toward more vocationally oriented degree programs among postsecondary institutions is unmistakable and is a clear response to the hiring practices of employers. Consider, for example, the fact that undergraduates majoring in business have increased from about one in eight students in 1970 to roughly one in four in 2005. This development has also pushed the costs of acquiring skills onto job candidates, who now have to pay for them up front, rather than through employer-based programs (the individuals often pay for those as well through lower wages, but at least they don’t have to advance the costs). My colleagues Ivar Berg and Randall Collins described the now famous finding that individuals acquire more educational credentials to signal to employers that they are better than the other candidates, even though the extra expenditure on additional education has little payoff. If everyone has a high-school degree, then I can differentiate myself by getting a college degree. But then everyone does the same thing, and we all end up doing the same jobs we would have done, jobs that require only high-school degrees, even though we have all been through college. Although it may be true that more educated workers are more conscientious and have better work attitudes, we have to ask whether the education creates those positive attitudes or whether finishing a postsecondary degree simply requires that one already have them.

To the extent that postsecondary education is becoming a substitute for employer-provided training and development, education truly has become more important to the economy. But it is not education in the traditional sense that matters. It is the highly vocational programs that we see especially in community colleges and the fast-growing for-profit schools. There are lots of reasons to support education in the United States, especially to focus on improving it at the lower end of the distribution. But there are also risks to overpromising what it can do, particularly when we assert that individuals, paying a lot of the costs themselves, can solve the problems just by getting more education. If the goal is to improve economic performance and workplace outcomes, we should think about all the options, including those that might change what employers do, especially given that education is not free. If we want to increase educational expenditures for the purpose of improving economic performance, the way to do that is through work-based learning, not expanding postsecondary education per se.

Can Science Policy Advice Be Disinterested?

The Honest Broker by Roger A. Pielke Jr., an environmental studies professor at the University of Colorado, has many strengths, including lucidity, refreshing common sense, and a good instinct for the relevant point in a complex discussion. The author is an experienced observer, and on occasion rises to the level of a wise and reflective commentator on the complex scene of science affairs, maybe even qualifying as the honest broker he admires and whose putative virtues he extols in the volume. The book also has irritating and serious flaws that detract from his stated goal of presenting a way of thinking more clearly about how the scientist “relates to the decision-making process.” Pielke takes some stylistic risks in attempting to present his argument in a lively and creative way. Much of the book has the air of a PowerPoint presentation, complete with snappy short (and misleading) definitions; bullets at the start of chapters; plenty of charts (mostly either trivial or incomprehensible); and short wrap-ups, recaps, and conclusions that are unsatisfying. I am reminded of Edward Tufte’s critique of how PowerPoint presentations invariably “dumb down” any discussion of a serious issue.

A key assumption of the book is that what is really wrong with science advising is that scientists lack a clear view of their choices and of what they are doing. Pielke seems to believe that once scientists get this straight, the rest of what needs to happen in the advising process will fall into place. I wish it were that simple.

Whether the scientists do or do not have it right is a small part of the overall problem, for in truth the scientists are bit players in this whole drama. What congressional staffers, civil servants, presidential advisers, journalists, media talking heads, and politicians at all levels do or think is far more significant. Pielke worries that scientists who are not savvy about their role are likely to be used by these other players as “stealth issue advocates.” He shouldn’t fret about this; he should accept it. Politicians and powerful interests use everybody and anybody to advance their causes. Whether scientists think they are honest brokers or issue advocates will matter little to the policymakers, who assume that everyone has an interest and are untroubled by that fact.

Pielke assumes that the scientists’ role is growing in importance as more and more issues are at least partly or highly technical in their content. This is a common conceit among scientists, but it does not hold up under careful scrutiny. Empirically speaking, independent scientists have become less important in recent years in Congress and in the Executive Branch, as more “scientific middlemen” have emerged to interpret scientific findings to policymakers. Many of these have had some technical education and are familiar in varying degrees with developments within science. In the years immediately after World War II, scientists from the universities, industry, and the national labs were almost alone as the dispensers of technical advice and thus found themselves in great demand as government advisers. The whole business of science advice was an elite affair, with a relatively small number of scientists from select private universities in the Northeast and a few companies in California as the major players. That small group of individuals did have influence. Today, there is a cacophony of voices in a much more pluralist, open, and disorderly policy process, and the technical staff capacities within the government have grown in numbers and sophistication. The voice of the individual scientist is lost in the din.

Models of advice

Pielke identifies four major models of science advising—the pure scientist who prefers not to offer advice, the science arbiter who is objective but willfully innocent of policy realities, the science advocate, and the honest broker (his favorite) who is objective but responsive to the practical realities of policymaking—and tries to point out where each is appropriate. He criticizes the United Nations’ Intergovernmental Panel on Climate Change (IPCC) for “stealth issue advocacy”; that is, for sneaking into its analysis assumptions that point to policy recommendations, while pretending to offer only purely scientific observations. Fair enough. One should recognize when one is advocating and do so in a straightforward fashion. One way to guard against stealth issue advocacy, he suggests later, is to have a broadly based committee so that different viewpoints are necessarily represented. But isn’t this exactly what the IPCC is? In Pielke’s defense, it must be said that he does not oversimplify issues such as climate change; fair-minded and cautious, he tries to give the various sides of an issue without caricaturing any of the views.

More generally, however, Pielke’s analytical categories begin to blur rather than enhance clarity, and I believe that they do not take us very far. Does the IPCC try to act as science adviser or honest broker but then slip by mistake into stealth issue advocacy, or are all three roles involved in various aspects of its work? How, in short, would we apply the categories to an actual situation? Pielke leaves no doubt that the honest broker is the rarer bird and the sort of scientist who is most useful, but it is not clear that this ideal can exist in the real world. How then should scientists behave?

Pielke says that scientists should not consider themselves “above the fray,” but the honest broker seems to want to do just that. The honest broker is a kind of Diogenes wandering around with his lantern and looking for an honest man. The honest broker is an individual with no institutional self-interest or agenda or set of predisposing values—in short, a unicorn. Or at least no animal that has ever been sighted inside the Beltway.

The downside of Pielke’s even-handedness is that at times he seems to argue around in circles. For example, he affirms the importance and primacy of democracy (of a certain kind where interest groups are held in check). Then he backtracks, fearing that politics (and those naughty politicians) will not adopt the correct policies, will plunge the nation into gridlock, and will “politicize” science. So science has to ride to the rescue, but in doing so it must be careful not to “scientize” politics or to allow itself to be “politicized.” And how are we to tell good policy from bad policy? Well, for one thing, “disputes over values [have] to be mapped onto debates over science.” In general, “good decisions are those that most reliably lead to desired outcomes.” Committees of distinguished scientists from different fields will normally give one balanced judgments, except when they don’t. The efforts to define good outcomes and decisions are often tautological. Further, whatever one means by “mapping” values onto science, the scientific aspects of decisions on climate policy or questions of war and peace are only some of the factors that must be weighed, and are not always the most significant ones.

So Pielke eventually works his way back to our disorderly democracy and decisionmaking processes, with his various constructs strewn along the path like roadkill. Why, one wonders, does he object so strongly to adversarial processes? The adversarial style is so firmly lodged in our legal and political system that we have to live with it. He acknowledges that there is nothing wrong with advocacy per se. Scientists are citizens and have the right to be advocates. But he is inherently suspicious of advocacy, especially by his scientific colleagues who do not seem to realize they are advocating or else do not care that they are lining up on certain sides of an issue. The reason lies in his assessment that this kind of behavior will tend to “politicize” science. If “science [is seen] mainly as a servant of interest group politics,” then “political battles are played out in the language of science, resulting in policy gridlock and the diminishment of science as a source for policymaking.” Pielke implies that the main rationale for society’s support of science is that scientists provide policy advice. But this is not so, and we should be grateful for that. The future of science would be parlous indeed if this were true. Society supports science because a civilized society values the arts, the sciences, learning in general, museums, archives, and all the other attributes of high culture. Society expects that useful findings will emerge from scientists’ indulging their curiosity (and evidence offers a modest amount of support for this belief). Scientists do not have to give advice to anybody unless they are employed by the government or industry to do so, and most scientists will simply want to go about their work.

Because Pielke is fond of illustrative tales, I will end with a story. At a conference held in Washington, DC, in 2003 at the American Association for the Advancement of Science to discuss the findings of Science and Technology Advice for Congress, a report prepared by M. Granger Morgan and Jon Peha of Carnegie Mellon University, a group of assembled luminaries sat listening to various presentations, including one by the majority and minority staff directors of a major science and technology committee. The two, a Republican and a Democrat, gave a brief presentation and asked for questions. Someone rose (to tell the truth, it was yours truly) and posed this question: Whom would you turn to most frequently for scientific advice: a committee staff member who is tracking a topic, someone from the Congressional Research Service or a Congress-wide staff agency, an outside think tank person, a university scientist, or whom? To the surprise and slight consternation of the assembled colleagues, mostly academics, the two men said without batting an eye that they would turn to their favorite lobbyist. For a thoroughly knowledgeable analysis of the issue, for a fair presentation of both sides, for singling out the central points in dispute that required the Congress to decide, and for a prompt and timely response, the lobbyist won hands down. Lobbyists have to give you accurate information, they said, or their reputations will be ruined. They have experience, they know what you need, and they will give you a pretty good and pretty objective assessment of what you will have to tell your member of Congress.

A New Manhattan Project for Clean Energy Independence

In 1942, President Franklin D. Roosevelt asked Sen. Kenneth McKellar, the Tennessean who chaired the Appropriations Committee, to hide $2 billion in the appropriations bill for a secret project to win World War II.

Sen. McKellar replied, “Mr. President, I have just one question: Where in Tennessee do you want me to hide it?” That place in Tennessee turned out to be Oak Ridge, one of three secret cities that became the principal sites for the Manhattan Project.

The purpose of the Manhattan Project was to find a way to split the atom and build a bomb before Germany could. Nearly 200,000 people worked secretly in 30 different sites in three countries. President Roosevelt’s $2-billion appropriation is the equivalent of $24 billion today. According to New York Times science reporter William Laurence, “Into [the bomb’s] design went millions of man-hours of what is without doubt the most concentrated intellectual effort in history.”

I returned to Oak Ridge recently to propose that the United States launch a new Manhattan project: a five-year project to put America firmly on the path to clean energy independence. Instead of ending a war, the goal will be to enable the nation to deal with rising gasoline and electricity prices, the threat of climate change, challenges to national security, and the need to protect air quality, efforts that will benefit not only the United States but all the world’s countries.

In 1942, many were afraid that the first country to build an atomic bomb could blackmail the rest of the world. Today, countries that supply oil and natural gas can blackmail the rest of the world. By independence I do not mean that the United States would never buy oil from Mexico or Canada or Saudi Arabia. By independence I do mean that the United States could never be held hostage by any country for its energy needs.

Not a new idea

A new Manhattan Project is not a new idea, but it is a good idea and fits the goal of clean energy independence. The Apollo Program to send men to the moon in the 1960s was a kind of Manhattan Project. Presidential candidates John McCain and Barack Obama have called for a Manhattan Project for new energy sources. So have former House Speaker Newt Gingrich, Democratic National Committee chairman Howard Dean, Sen. Susan Collins of Maine and Sen. Kit Bond of Missouri—among others. And throughout the two years of discussion that led to the passage in 2007 of the America COMPETES Act, several participants suggested that focusing on energy independence would force the kinds of investments in the physical sciences and research that the United States needs to maintain its competitiveness.

The overwhelming challenge in 1942 was the prospect that Germany would build the bomb before the United States could and thus win the war. The overwhelming challenge today, according to National Academy of Sciences President Ralph Cicerone, is to discover ways to satisfy the human demand for and use of energy in an environmentally satisfactory and affordable way so that the United States does not become overly dependent on overseas sources.

Cicerone estimates that this year Americans will pay $500 billion overseas for oil—that’s $1,600 for each citizen—some of it to nations that are so hostile that they are bankrolling anti-U.S. terrorists. Sending $500 billion abroad weakens the dollar. It is half the U.S. trade deficit. It is forcing gasoline prices over $4 a gallon and crushing family budgets.

Then there are the environmental consequences. If worldwide energy use continues to grow as it has, between 2000 and 2030 humans will inject as much CO2 into the air from fossil-fuel burning as they did between 1850 and 2000. The United States has plenty of coal to help achieve its energy independence, but there is no commercial way (yet) to capture and store the carbon from so much coal burning, and the country has not finished the job of controlling sulfur, nitrogen, and mercury emissions.

In addition to the need to meet an overwhelming challenge, other characteristics of the original Manhattan Project are suited to this new challenge:

  • It needs to proceed as fast as possible along several tracks to reach the goal. According to Don Gillespie, a young engineer at Los Alamos during World War II, the “entire project was being conducted using a shotgun approach, trying all possible approaches simultaneously, without regard to cost, to speed toward a conclusion.”
  • It needs presidential focus and bipartisan support in Congress.
  • It needs the kind of centralized, gruff leadership that Gen. Leslie R. Groves of the Army Corps of Engineers gave the first Manhattan Project.
  • It needs to “break the mold.” To borrow the words of J. Robert Oppenheimer in a speech to Los Alamos scientists in November of 1945, the challenge of clean energy independence is “too revolutionary to consider in the framework of old ideas.”
  • Most important, in the words of George Cowan as reported in the excellent book edited by Cynthia C. Kelly, “…The Manhattan Project model starts with a small, diverse group of great minds.”

I said to the National Academies when a group of members of Congress first asked for their help on the America COMPETES Act in 2005, “In Washington, D.C., most ideas fail for lack of the idea.”

There are some lessons, too, from America COMPETES. Remember how it happened. Just three years ago—in May 2005—a bipartisan group in Congress asked the National Academies to tell Congress in priority order the 10 most important steps policymakers could take to help the United States keep its brainpower advantage. By October, the Academies had assembled a “small diverse group of great minds” chaired by Norm Augustine, which presented to Congress and to the president 20 specific recommendations in a report called Rising Above the Gathering Storm, and a number of other organizations contributed valuable proposals.

Then, in January 2006, President Bush outlined his American Competitiveness Initiative that over the next 10 years would double basic research budgets for the physical sciences and engineering. The Republican and Democratic Senate leaders and 68 other senators sponsored the legislation. It became law by August 2007, with strong support from Speaker Nancy Pelosi and the president.

Combining the model of the Manhattan Project with the process of the America COMPETES Act has already begun. The National Academies have under way an “America’s Energy Future” project that will be completed in 2010. In the meantime, Cicerone has agreed to sit down with a bipartisan group to discuss what concrete proposals we might offer to the new president and the new Congress. Energy Secretary Sam Bodman and Ray Orbach, the Energy Department’s undersecretary for science, have said the same.

The presidential candidates seem ready. There is bipartisan interest in Congress. Rep. Bart Gordon (D-TN), chairman of the House Science and Technology Committee and one of the original four signers of the 2005 request to the National Academies that led to the America COMPETES Act, joined me in Oak Ridge to offer his ideas, as did Rep. Zach Wamp (R-TN), a senior member of the House Appropriations Committee who played a key role in the America COMPETES Act. I have talked with Sens. Jeff Bingaman (D-NM) and Pete Domenici (R-NM), the chairman and senior Republican on the Energy Committee who played such a critical role in America COMPETES, and with Sen. Lisa Murkowski (R-AK), who likely will succeed Domenici as the senior Republican on the Energy Committee.

Some say a presidential election year is no time for bipartisan action. I can’t think of a better time. Voters expect presidential and congressional candidates to come up with solutions for $4 gasoline, clean air, and climate change, and the national security implications of our dependence on foreign oil. The people didn’t elect us to take a vacation this year just because there is a presidential election.

A grand way to begin

Sen. Bingaman’s first reaction to the idea of a new Manhattan Project was that instead we need several mini-Manhattan Projects. He suggested as an example the “14 Grand Challenges for Engineering in the 21st Century” laid out by National Academy of Engineering (NAE) president Chuck Vest, three of which involve energy. I agree with Bingaman and Vest.

Congress doesn’t do “comprehensive” well, as was demonstrated by the collapse of the comprehensive immigration bill. Step-by-step solutions or different tracks toward one goal are easier to digest and have fewer surprises. And, of course, the original Manhattan Project itself proceeded along several tracks toward one goal.

Here are my criteria for choosing several grand challenges:

  • Grand consequences, too. The United States uses 25% of all the energy in the world. Interesting solutions for small problems producing small results should be a part of some other project.
  • Real scientific breakthroughs. This is not about drilling offshore for oil or natural gas in an environmentally clean way or building a new generation of nuclear power plants, both of which we already know how to do—and, in my opinion, should be doing.
  • Five years. Grand challenges should within five years put the United States firmly on a path to clean energy independence so that the goal can be achieved within a generation.
  • Family budget. Solutions need to fit the family budget, and costs of different solutions need to be compared.
  • Consensus. The Augustine panel that drafted the Gathering Storm report wisely avoided some germane topics, such as excessive litigation, on which they could not agree, figuring that Congress might not be able to agree either.

Here is where I need help. Rather than having members of Congress proclaim these challenges, or asking scientists alone to suggest them, I believe there needs to be preliminary discussion that begins with whether the criteria are correct. Then, Congress can pose to scientists questions about the steps to take to achieve the grand challenges.

To begin the discussion, I’ll offer seven challenges that illustrate the scale and ambition that I would like to see.

Make plug-in electric cars and trucks commonplace. In the 1960s, H. Ross Perot noticed that when banks in Texas locked their doors at 5 p.m., they also turned off their new computers. Perot bought the idle nighttime bank computer capacity and made a deal with states to manage Medicare and Medicaid data. Banks made money, states saved money, and Perot made a billion dollars.

Idle nighttime bank computer capacity in the 1960s reminds me of idle nighttime power plant capacity in 2008. This is why:

  • The Tennessee Valley Authority has 7,000-8,000 megawatts, the equivalent of seven or eight nuclear power plants or 15 coal plants, of unused electric capacity most nights.
  • Beginning in 2010 Nissan, Toyota, General Motors, and Ford will sell electric cars that can be plugged into wall sockets. FedEx is already using hybrid delivery trucks.
  • TVA could offer “smart meters” that would allow its 8.7 million customers to plug in their vehicles to “fill up” at night for only a few dollars, in exchange for the customer paying more for electricity between 4 p.m. and 10 p.m., when the grid is busy.
  • Sixty percent of Americans drive less than 30 miles each day. Those Americans could drive a plug-in electric car or truck without using a drop of gasoline. By some estimates, there is so much idle electric capacity in power plants at night that over time Americans could replace three-fourths of their light vehicles with plug-ins. That could reduce the nation’s overseas oil bill from $500 billion to $250 billion, and do it all without building one new power plant. (A rough back-of-the-envelope version of this arithmetic appears just after this list.)
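
To make those figures concrete, here is a minimal back-of-the-envelope sketch in Python. The 7,000 to 8,000 megawatts of idle TVA capacity and the 30-mile daily drive come from the numbers cited above; the eight-hour charging window and the 0.3 kilowatt-hours per mile of vehicle efficiency are illustrative assumptions, not TVA or manufacturer figures.

    # Rough sketch of the idle nighttime capacity argument.
    # From the text: ~7,000-8,000 MW of idle TVA capacity and 30 miles of daily driving.
    # Assumed for illustration: an 8-hour charging window and ~0.3 kWh per mile.

    idle_capacity_mw = 7_500          # midpoint of the 7,000-8,000 MW cited
    charging_window_h = 8             # assumed overnight charging window
    idle_energy_mwh = idle_capacity_mw * charging_window_h    # ~60,000 MWh per night

    miles_per_day = 30                # daily driving figure cited in the text
    kwh_per_mile = 0.3                # assumed plug-in vehicle efficiency
    kwh_per_vehicle = miles_per_day * kwh_per_mile             # ~9 kWh per vehicle per night

    vehicles_per_night = idle_energy_mwh * 1_000 / kwh_per_vehicle
    print(f"Idle nighttime energy: {idle_energy_mwh:,} MWh")
    print(f"Vehicles chargeable per night in the TVA region alone: {vehicles_per_night:,.0f}")

On these assumptions, TVA’s idle capacity alone could charge several million vehicles each night, which is the intuition behind the estimate that much of the nation’s light-vehicle fleet could eventually plug in without new power plants.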

In other words, we have the plug. The cars are coming. All we need is the cord.

Too good to be true? Haven’t U.S. presidents back to Nixon promised revolutionary vehicles? Yes, but times have changed. Batteries are better. Gas is $4. We are angry about sending so many dollars overseas, worried about climate change and clean air. And consumers have already bought one million hybrid vehicles and are waiting in line to buy more, even without the plug-in. Down the road is the prospect of a hydrogen fuel-cell hybrid vehicle with two engines, neither of which uses a drop of gasoline. Oak Ridge is evaluating these opportunities.

Still, there are obstacles. Expensive batteries add $8,000-$11,000 to the cost of an electric car. Smart metering is not widespread. There will be increased pollution from the operation of coal plants at night. We know how to get rid of those sulfur, nitrogen, and mercury pollutants (and should do it) but haven’t yet found a way to get rid of the carbon produced by burning so much coal in power plants. And that leads to a second grand challenge.

Make carbon capture and storage a reality for coal-burning power plants. This was one of the NAE’s grand challenges, and there may be solutions other than underground storage, such as using algae to capture carbon. Interestingly, the Natural Resources Defense Council argues that, after conservation, coal with carbon capture is the best option for clean energy independence because it provides for the growing U.S. power needs and will be easily adopted by other countries.

Make solar power cost competitive with power from fossil fuels. This is a second of the NAE’s grand challenges. Solar power, despite 50 years of trying, produces 0.01% of U.S. electricity. The cost of putting solar panels on homes averages $25,000-$30,000, and the electricity produced, for the most part, can’t be stored. Now, there is new photovoltaic research as well as promising solar thermal power plants, which capture the sunlight using mirrors, turn heat into steam, and store it underground until the customer needs it.

Safely reprocess and store nuclear waste. Nuclear plants produce 20% of U.S. electricity but 70% of its carbon-free electricity—electricity that does not pollute the air with mercury, nitrogen, sulfur, or carbon. The most important breakthrough needed during the next five years to build more nuclear power plants is solving the problem of what to do with nuclear waste. A political stalemate has stopped nuclear waste from going to Yucca Mountain in Nevada, and $15 billion collected from ratepayers for that purpose is sitting in a bank. Recycling waste could reduce its mass by 90%, creating less material to store temporarily while long-term storage is resolved.

Make advanced biofuels cost-competitive with gasoline. The backlash against ethanol made from corn because of its effect on food prices is a reminder to beware of the law of unintended consequences when issuing grand challenges. Ethanol from cellulosic materials shows great promise, but there are a limited number of cars capable of using alternative fuels and a limited number of places for drivers to buy them. Turning coal into liquid fuel is an established technology, but it is expensive and produces a great deal of carbon.

Make new buildings green buildings. Japan believes it may miss its 2012 Kyoto goals for greenhouse gas reductions primarily because of energy wasted by inefficient buildings. Many of the technologies needed to make buildings efficient are already known. Figuring out how to accelerate their use in a decentralized society is most of this grand challenge.

Provide energy from fusion. The idea of recreating on Earth the process by which the Sun creates energy and using it for commercial power is the third grand challenge suggested by NAE. The promise of sustaining a controlled fusion reaction for commercial power generation is so fantastic that the five-year goal should be to do everything possible to reach the long-term goal. The failure of Congress to approve the president’s budget request for U.S. participation in the International Thermonuclear Experimental Reactor is embarrassing.

This country is a remarkable place. Even during an economic slowdown, this nation with 5% of the world’s population will this year produce about 30% of all the wealth. Despite the gathering storm of concern about U.S. competitiveness, no other country approaches its brainpower advantage or its unmatched collection of research universities, national laboratories, and private-sector companies.

And this is still the only country where people say with a straight face that anything is possible—and really believe it. These are precisely the ingredients that the United States needs during the next five years to place itself firmly on a path to clean energy independence within a generation. In doing so, it will make jobs more secure, help balance the family budget, make the air cleaner and our planet safer and healthier, and lead the rest of the world to do the same.

Archives – Summer 2008

VIK MUNIZ, Carcere VII, The Drawbridge, after Piranesi, Cibachrome photograph, 42 × 32 inches, 2002.

Carcere VII, The Drawbridge, after Piranesi

Brazilian-born artist Vik Muniz, who often uses common but ephemeral materials, meticulously recreated etchings from Giovanni Battista Piranesi’s (1720-1778) prison series using metal pins and thread. By presenting installation photographs of his sculptural reconstructions such as the one depicted here, Muniz removes the viewer from the original objects created by Piranesi through a series of reproductions of reproductions. This is not done with the intention to mimic or even to improve upon the original, but to encourage the viewer to revisit and look harder at the original while pondering the process by which we embed meaning into such icons.

Both the original Piranesi prints and Muniz’s reinterpretations were exhibited next to each other in 2004 at the National Academy of Sciences and later in 2007 at the National Gallery of Victoria, Australia.

Image Courtesy of Vik Muniz

Strategies for Today’s Energy Challenge

The energy challenge the United States faces today is different from and more encompassing than what it encountered even a few years ago. Until fairly recently, at least in Washington, the energy challenge was seen largely as the need to reduce dependence on foreign oil. For the past quarter century, the country has seen its oil imports grow, and although relatively little has been done during these years to reverse that trend, that issue has dominated the energy debate.

Dependence on foreign oil remains a major concern, but today the energy challenge is larger than that and in many ways very different. Different in nature, different in scale, and much more urgent.

Today’s energy challenge is global rather than national. It is to change the way the world produces, stores, distributes, and uses energy so as to reduce greenhouse gas emissions. It is to shift not just the U.S. economy but the global economy from dependence on the combustion of fossil fuels to the use of non-emitting energy sources. With the concentration of greenhouse gases in the atmosphere on a trajectory to unacceptable levels, the sense of urgency to take action has risen as well. Simply stated, it is not enough to commit to reducing greenhouse gas emissions beginning in 2025. We must act, and we must act now.

The scale of the challenge is immense. The United States and the other nations of the world will need to overhaul the existing energy infrastructure on which we all depend. That infrastructure did not develop overnight. Two hundred years ago, the combustion of fossil fuels, primarily coal, produced the steam that turned the turbines that powered the Industrial Revolution. Today, our planet has more than 50,000 coal-burning power plants, accounting for nearly one-third of greenhouse gas emissions worldwide. The normal rate of turnover for this infrastructure is at least 40 to 50 years. One hundred years ago, the decision was made to power our transportation sector by burning petroleum-based fuels in an internal combustion engine, rather than through the use of electric motors and batteries. Today, we have over six hundred million vehicles using some version of that internal combustion engine, producing 14% of greenhouse gas emissions worldwide.

But our challenge is not limited to just the power plants and vehicles we have today. We live in a world of growing demand for energy as billions of people are rising out of poverty. As that demand for energy grows, it will require new energy production capacity. Today, that new capacity generally consists of coal-fired power plants with the same high CO2 emissions as our current energy infrastructure. Just a couple of weeks ago, India announced that it is building a new four-gigawatt coal-burning power plant complex. These plants will emit more than 23 million tons of CO2 a year. The justification? That the need to bring electricity to one of the world’s poorest regions is more pressing than the need to limit CO2 from burning fuel, and this is the least expensive way to do it. It is difficult to argue against such a statement, when most of us here have never known a life without electricity.
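
As a rough consistency check on the Indian example, consider a minimal sketch, assuming typical utilization and emissions for coal-fired generation; the 75% capacity factor and 0.9 tons of CO2 per megawatt-hour are illustrative assumptions, not figures from the announcement.

    # Rough consistency check on the "more than 23 million tons of CO2 a year" figure.
    # From the text: a 4-gigawatt coal-burning complex.
    # Assumed for illustration: ~75% capacity factor, ~0.9 t CO2 per MWh of coal power.

    capacity_gw = 4.0
    capacity_factor = 0.75            # assumed average utilization
    hours_per_year = 8_760
    tons_co2_per_mwh = 0.9            # typical emission rate for coal generation (assumed)

    annual_mwh = capacity_gw * 1_000 * capacity_factor * hours_per_year
    annual_tons_co2 = annual_mwh * tons_co2_per_mwh
    print(f"Annual generation: {annual_mwh:,.0f} MWh")
    print(f"Annual CO2: {annual_tons_co2 / 1e6:.1f} million tons")

On those assumptions, the complex would emit roughly 24 million tons of CO2 a year, in line with the figure cited above.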

As we struggle to develop alternatives to our current energy infrastructure, we must recognize that in order to achieve sustainable use of those alternatives worldwide, they must become cost-competitive so that they are the option of first resort. To accomplish all of this, we will need both a revolution in technology and major changes in our economy. Our past technological choices are inadequate for our future. The solutions we need can only come from new technologies. And if the challenge of developing those new energy technologies, and implementing them worldwide, is immense, so too are the opportunities afforded by tackling this problem the right way. If the United States sees its most pressing environmental problems as an opportunity to reassert its leadership in science, technology, and innovation, it has the potential not only to resolve those problems, but also to revitalize its R&D enterprise and rebuild its manufacturing base.

But how can the country accelerate the development and widespread use of new technologies to address the energy challenge? One promising place to start is to adopt policies that put a price on emitting CO2 and other greenhouse gases. Levying a cost on putting greenhouse gases in the air will accelerate the private-sector development and use of technologies that avoid and minimize greenhouse gas emissions.

In the Senate, we are working to design a regulatory framework in the form of a cap-and-trade system that will recognize the real costs of continued emission of greenhouse gases and shift development toward low-carbon energy production. In the past few years, we have seen a dramatic increase in private-sector entrepreneurs who want to develop clean energy technologies. Putting a price on the emission of greenhouse gases will stimulate that private-sector involvement even more.

The proper design of a cap-and-trade system for greenhouse gas emissions is not a simple matter. Having been in the Senate for 25 years, I can assure you that we in Congress have the ability to design and enact a totally unworkable system. Without the help of this country’s best minds, we could wind up doing just that.

Although putting a price on CO2 emissions is an essential part of the solution, it is not the only tool that should be used to resolve this problem. Changing the way that the country pursues technology development and deployment will also be essential, and that is what I want to address.

Spurring innovation

U.S. policies to support technology development and use have fallen short in key areas. The government cannot afford to sit on the sidelines at this critical time. It must understand how it has failed to fulfill some key responsibilities and take action.

Lack of support for the nation’s basic scientific and engineering enterprise. The best recent analysis of this problem was in the National Academies report Rising Above the Gathering Storm. The report was a significant and well-supported wake-up call for policymakers about the need for major sustained support of the basic sciences. We in Washington are beginning to respond. Although we don’t have major progress to report as yet, I believe we will make progress in the months and years to come. One aspect of our anemic and unreliable support for the basic science and engineering enterprise in this country has been the anemic and unreliable support for energy-related science and technology development.

Failure to set priorities among the promising energy technologies that would lower our greenhouse gas emissions. You can find government reports on climate change technologies such as the Department of Energy’s 2006 Strategic Plan, but these reports are basically only shopping lists of viable technologies. They lack concrete goals, roadmaps for making progress, and timelines for development. Such reports are not entirely without value, but what we have nationally now is far from being a strategy. And it is far from adequate to address the challenges before us. We need to formulate a strategic R&D plan that maps out a prioritized set of technological goals, the steps needed to achieve those goals, and the time in which those goals should be met. I am not talking about a document that would limit scientific and technological exploration, but a roadmap with broad highways along which we could ensure that science and technology would be supported. Any energy R&D roadmap we design will need plenty of on- and off-ramps to incorporate the new knowledge, understanding, and breakthroughs that will inevitably occur.

Japan has recently begun to move along the path of developing such a strategic plan with the release of its Cool Earth—Innovative Energy Technology Program. This document identifies 21 areas of technology development that meet two criteria. First, each is expected to deliver substantial reductions in CO2 emissions in the world by 2050. Second, each is a technology area in which Japan believes it can lead the world. Technology roadmaps are being formulated for each of the 21 technologies, giving R&D direction and milestones for measuring performance, with timelines toward long-term goals.

Perhaps the closest parallel we have to the Japanese priority-setting effort is a National Academy of Engineering project that identified the Grand Challenges for Engineering in the 21st Century. Among the challenges are 2 of the 21 technology areas covered in the Japanese innovative technology program: making solar energy economical and developing carbon sequestration methods. Although there is a significant effort under way at the National Academies to determine U.S. R&D needs in the energy area, it is clear that the systematic setting and maintenance of priorities for energy technology development is not something we have committed to at the highest levels of our government. The time has come for the government to act.

The first step is to establish overall responsibility at the highest levels of government. The Department of Energy is already supporting research in most of the key energy technologies, and other agencies such as the Department of Commerce are also funding necessary research. But even the secretary of Energy and the secretary of Commerce have a difficult time acquiring the research funding they need out of the White House budget process, which is run by the Office of Management and Budget. So I believe that the president’s science advisor needs to be given a stronger hand by also being made a deputy director in the Office of Management and Budget. This would ensure that the same person with responsibility for overall science and technology policy in government has some real authority to ensure that the funds to support science and technology make it into the federal budget.

As a second step, the president’s science advisor, armed with his enhanced authority, should work with the key departments and the National Academies to come up with a manageable set of energy technology areas that promise to help meet energy needs and substantially reduce greenhouse gas emissions in coming decades. Some of these will be technology areas that the Japanese or others have chosen as well. Others will be new to the list.

As a third step, in each of the chosen technology areas, a working group of academic, government, laboratory, and industry representatives should be convened and a broad roadmap developed to chart the way forward. Responsibility for pursuit of the roadmap in each technology area should be assigned to a particular government department or agency.

Fourth, to ensure an adequate degree of sustained focus and an adequate level of funding, the president should be required to submit to the Congress with his annual budget proposal a separate document detailing the funds being requested in support of each energy technology area across the agencies of the government.

And finally, to ensure that the areas being pursued continue to be those that hold the greatest promise, the National Academies should be directed to prepare an updated analysis of energy technology priorities every five years. This is similar to what government does for military technology in the Quadrennial Defense Review.

Sustainable policy

But as we have learned from hard experience, it is one thing to set priorities and begin pursuing them and quite another to sustain the effort. This brings me to the third major policy failing on my earlier list.

The U.S. record of sustaining its efforts in critical technology development has been poor. Once we set the course, why can’t we stay on it? One obvious problem is that each new administration feels a need to pursue something new. Instead of sticking with the difficult blocking and tackling required to move the ball down the field, the nation’s leaders allow their attention and effort to be deflected and then comfort themselves with the notion that some Hail Mary pass will nevertheless allow them to score the touchdown.

The nation’s numerous discarded efforts to improve vehicle technology are a frustrating example of this tendency. On February 10, 1970, President Nixon announced the following in a special message to the Congress: “I am inaugurating a program to marshal both government and private research with the goal of producing an unconventionally powered pollution-free automobile within five years.” In 1977 President Carter announced his program for “reinventing the car,” in 1993 President Clinton announced his Partnership for a New Generation of Vehicles, and in 2003 President Bush announced his push for the Freedom Car.

Identifying the priority is obviously not enough. It is also necessary to develop a consensus on how to proceed—a consensus that will survive from one administration and one Congress to the next. The development of a national strategic plan for energy technology development, together with regular updating of that plan, will go a long way toward avoiding the stop-and-start approach that has plagued us in the past.

The fourth major failing is the absence of long-term regulatory and tax policies to promote development, manufacture, and widespread use of new technologies. As Germany has shown in the areas of wind and solar technology, providing such long-term policies can create a booming renewables industry. A very different story has played out in the United States. Utility regulation and rate setting have historically been the job of public regulatory commissions at the state level. Although some states have enacted progressive policies such as renewable portfolio standards and net metering, many have not.

We have tried for the past three Congresses to enact a renewable portfolio standard at the national level, but those efforts have met strong resistance from utilities and from the current administration. Similarly, in the area of tax incentives for increased efficiency and renewable technologies, our record has not been stellar. Congress has enacted some renewable tax incentives, but for budgetary reasons they were enacted for only short periods of time. And often they were allowed to expire.

As an example, the most significant tax incentive Congress has enacted to encourage alternative energy development is the Renewable Energy Production Tax Credit. In the case of wind energy, this credit pays nearly 2 cents per kilowatt-hour of electricity produced by a wind turbine for a full 10 years after the turbine is put into service. The problem has been that the windows during which a turbine must be placed in service to qualify for the credit have been relatively short.
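To give a sense of the credit’s scale, here is a minimal back-of-the-envelope sketch. The 2-cents-per-kilowatt-hour rate and the 10-year term are from the text above; the turbine size and capacity factor are illustrative assumptions only.

```python
# Rough value of the production tax credit for a single wind turbine.
# The ~2 cents/kWh rate and 10-year term are from the text; the turbine
# size and capacity factor are illustrative assumptions only.
capacity_kw = 1500          # hypothetical 1.5-MW turbine
capacity_factor = 0.33      # assumed share of the year it runs at full output
credit_per_kwh = 0.02       # nearly 2 cents per kilowatt-hour
credit_years = 10           # credit runs for 10 years after entering service

annual_kwh = capacity_kw * 8760 * capacity_factor   # ~4.3 million kWh per year
annual_credit = annual_kwh * credit_per_kwh         # ~$87,000 per year
total_credit = annual_credit * credit_years         # ~$870,000 over the credit period

print(f"Annual credit: ${annual_credit:,.0f}")
print(f"Credit over {credit_years} years: ${total_credit:,.0f}")
```

Under these assumptions, the credit is worth on the order of $90,000 a year per turbine, which is why uncertainty about its expiration weighs so heavily on project financing.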

This problem is clearly illustrated in the history of U.S. wind capacity expansion. In years when the production tax credit was fully available, there was robust development. In years when the tax credit was scheduled to expire, financial institutions were reluctant to invest in projects that were not certain to be producing before the expiration of the credit. The result was a recurring boom-and-bust cycle in annual wind capacity additions. Clearly, a more consistent tax policy would have moved the country much further along in its development and use of wind power. Government-driven boom-and-bust cycles send the wrong message to entrepreneurs.

The government needs to support long-term market stability for renewable electricity production. One way to do that is to provide a long-term extension of the tax credits for renewable electricity. I believe that Congress will, next year, with a new administration in office, finally pass a much longer-term extension of these tax credits.

Finally, the country must devise a way to capture all the economic benefits from clean-tech manufacturing. Policymakers first need to acknowledge that it is possible, at least in theory, to meet the energy challenges I have outlined without creating the domestic manufacturing capability and the domestic manufacturing jobs that ought to go with it. To use the current buzzword, we unfortunately could wind up “outsourcing” that manufacturing, particularly through inaction. Advanced energy storage devices, thin-film photovoltaic cells, and highly efficient light-emitting diodes will all be needed for clean, efficient energy production and use. But there is no assurance that these products will be produced in the United States. In fact, some would argue that unless the country adopts substantial changes in the way it does business, it is more likely than not that the United States will buy these products from abroad.

In their 1990 book The Breakthrough Illusion, Richard Florida and Martin Kenney argue convincingly that “Although the commonplace impression that breakthrough innovations create permanent advantage for American companies may once have been true, it is just not the case anymore. A new reality is upon us: the U.S. makes the breakthroughs, while other countries, especially Japan, provide the follow-through.” Now, 18 years after that was written, I believe it is truer than ever, and the other countries include many besides Japan.

The nature of the problem is evident in the history of world production in photovoltaic cells since 1995. Until 1998, the United States was holding its own against other countries. In the past decade, though, while production in other countries has soared, the U.S. photovoltaic industry has remained stagnant. This failure to compete well in growing markets is consistent with a worrisome trend in the entire U.S. manufacturing sector, which is experiencing a steady decline in jobs.

A strategy to revitalize U.S. manufacturing is a topic for another day. Such a strategy will require developing a consensus on changes in tax, procurement, trade, and probably health and education policy as well. The United States has a real opportunity to grow a high-tech renewables manufacturing base if it commits to the right policies. The nation has the knowledge, the technology, the workforce, and the drive to make it possible. Germany has proven that such a transformation can occur in an advanced economy. Nearly 250,000 renewable energy jobs have been created in Germany, and it is expected that over 400,000 people will be employed by 2020. Imagine what is possible in the much larger U.S. economy.

Tackling the policy challenges in the five major areas I have discussed is important to all Americans, and I believe it provides an exciting opportunity for the nation’s young people, particularly those now studying science and engineering in college. Young people equipped with knowledge, ability, and persistence will likely emerge as the leaders in meeting this global challenge. Reengineering the way the world produces, stores, distributes, and uses energy may in fact be the greatest challenge that we as a global community must face together. And to my mind, it is a worthy calling.

Addressing the energy challenge will require government, industry, scientists, and engineers to work together. Some may choose to make contributions through government service; many others will make a mark on our future energy system through direct research and innovation. As Vannevar Bush, a scientific advisor to Presidents Roosevelt and Truman, said, “Without scientific progress, no amount of achievement in other directions can ensure our health, prosperity, and security as a nation in the modern world.”

The Crisis in Adult Education

During the past several decades, a dramatic increase in the educational attainment of the U.S. labor force has helped boost worker productivity and fuel national economic growth. However, the demographic forces that produced this increase have ended. Unless the United States makes some fundamental adjustments in its national strategies for the education of adults, labor force attainment will stagnate, productivity will lag, and economic growth will suffer.

The historic increase in educational attainment was driven by the fortunate confluence of two factors. First, the baby boomers, huge numbers of them, began working. From 1960 to 2000, the number of workers in their prime productive years (ages 25 to 54) increased by more than 120%, from about 45.5 million to 100 million workers. Second, these new workers were much more highly educated than their elders. For example, in 1960 only 60% of workers in the 25-to-29 age group had a high-school diploma or better and fewer than 8% had a bachelor’s degree or higher. But by 1990, 84% of this group of younger workers had a high-school diploma and 22% had a bachelor’s degree.

Just as successively larger new cohorts of these better-educated workers joined the labor force in the 1960s and on through the 1980s, less-educated older workers were leaving the labor force. As a result, the overall educational attainment of the workforce increased dramatically, especially between 1970 and 1990.

The increase in the educational attainment of the labor force made a substantial contribution to economic growth and rising productivity—as much as 20 to 25% of overall labor productivity growth, according to some estimates. The indirect contribution of rising educational attainment in fueling innovation and technology growth may have been even greater.

However, this long-term increase in labor force educational attainment is now over. Predictably, the labor force impact of the baby boom peaked during the 1990s, and from 1990 to 2000, the number of workers aged 25 to 34 actually fell. Similarly, but less predictably, the increase in educational attainment leveled off. The percentage of younger people entering the workforce in the 1990s with at least a high-school diploma was no higher than in the 1980s, and it has not increased in the current decade. The percentage of 25-to-34-year-olds with a bachelor’s degree began to level off even earlier, from 1980 to 1990.

Future demographic trends are unfavorable to rising educational attainment in the workforce. During the next several decades, the older workers leaving the workforce (the aging baby boomers) will be as well educated or better educated than the new workers coming in. The next generation of workers is far more racially and ethnically diverse than in the past and has greater representation of groups that historically have not been well served in either K-12 or postsecondary education. In 2000, whites were twice as likely as African Americans and three times as likely as Hispanics to earn a bachelor’s degree. By 2020, the proportion of whites in the workforce will drop to 63%, from 82% in 1980. The proportion of Hispanics will nearly triple. We can hope that the rates of high-school completion and college readiness among African Americans and Hispanics will significantly increase during the next decade or two, but there currently is no evidence that this will happen.

Moreover, it seems unlikely that college entrance rates, which have remained relatively flat during the past several years, will somehow increase enough to offset the decline in the rate of population growth. Additionally, the college graduation rate for two-year and four-year colleges has actually decreased during the past 20 years, and although that trend could conceivably be reversed, it would take a stunning increase to make an appreciable difference in the face of the other negative demographic trends.

During the next few decades, the demographic trends yielding a much smaller rate of increase in the younger segments of the labor force and the postsecondary attainment trends reflecting the leveling off of college entrance and completion rates will come together. Even if the college continuation rate for young people increases modestly in the next few years and even if the graduation rate for traditional students picks up, the decreasing relative size of the younger age cohorts, coupled with the movement of older well-educated workers out of the workforce, means that the percentage of people in the labor force with postsecondary credentials will not rise; in fact, it is likely to decline.

This will be a huge drag on productivity and economic growth. In addition, postsecondary educational attainment is also the most important predictor of personal economic success and intergenerational mobility—more important than race, health, location, or family assets. Table 1 shows a weekly earnings premium of 21% for an associate’s degree holder over an individual with only a high-school diploma and a premium of 33% for a bachelor’s degree over an associate’s degree.

The only way out of this serious problem is to act immediately to help adults now in the workforce find their way to success in postsecondary education. Unfortunately, moving these adults into and through postsecondary study to a credential is not going to be easy. The United States does not offer effective ways for adults already in the labor force to increase their educational attainment. The nation needs a better federal policy to support the postsecondary education of working adults and additional paths to the postsecondary level for those hampered by low literacy or a lack of proficiency in English.

TABLE 1
Education Pays In Higher Earnings and Lower Unemployment Rates

Unemployment rate in 2006    Education attained               Median weekly earnings in 2006
1.4%                         Doctoral degree                  $1,441
1.1%                         Professional degree              $1,474
1.7%                         Master’s degree                  $1,140
2.3%                         Bachelor’s degree                $962
3.0%                         Associate degree                 $721
3.9%                         Some college, no degree          $674
4.3%                         High-school graduate             $595
6.8%                         Less than high-school diploma    $419

Note: Data are 2006 annual averages for people age 25 and over. Earnings are for full-time wage and salary workers.

Source: Bureau of Labor Statistics, Current Population Survey.
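As a quick check, the earnings premiums cited before the table follow directly from the median weekly earnings shown above; only the rounding is mine.

```python
# Weekly earnings premiums implied by Table 1 (2006 medians).
high_school_grad = 595   # high-school graduate
associate_degree = 721   # associate degree
bachelors_degree = 962   # bachelor's degree

associate_premium = (associate_degree - high_school_grad) / high_school_grad
bachelors_premium = (bachelors_degree - associate_degree) / associate_degree

print(f"Associate over high school: {associate_premium:.0%}")   # about 21%
print(f"Bachelor's over associate:  {bachelors_premium:.0%}")   # about 33%
```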

A huge opportunity

In 2006, there were about 120 million adults aged 25 to 64 in the active labor force. Of those, 51 million (42%) had a college degree at the associate’s, bachelor’s, or advanced level. Another 21 million were identified by census surveys as having “some college, no degree.” Perhaps one-third of these actually had a college credential below the associate’s degree (a one-year certificate, for example) or had an industry-recognized certification as a result of an industry-administered examination for which they might have prepared through postsecondary study. The rest attended college briefly after high school but dropped out before gaining a credential. Another 36 million people completed high school or its equivalent but did not attempt postsecondary study. Finally, about 12 million working adults failed even to complete high school.

This means that about 62 million adult workers lack a postsecondary credential of any kind. This presents, in one sense, a huge opportunity. This vast pool of undereducated workers can be seen as a talent reservoir, especially in contrast to the relatively small cadres of about 3 million younger adults leaving high school every year.
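For readers who want to see the arithmetic, the 62 million figure can be reconstructed from the census breakdown in the preceding paragraph. The only assumption is the text’s own estimate that roughly one-third of the “some college, no degree” group already holds a sub-associate credential or certification.

```python
# How the ~62 million figure follows from the breakdown above (all figures in millions).
some_college_no_degree = 21
already_credentialed = some_college_no_degree / 3            # "perhaps one-third" hold a sub-associate credential
some_college_uncredentialed = some_college_no_degree - already_credentialed   # ~14 million

high_school_only = 36      # completed high school, no postsecondary study
no_high_school = 12        # did not complete high school

lacking_credential = some_college_uncredentialed + high_school_only + no_high_school
print(f"Adult workers without any postsecondary credential: about {lacking_credential:.0f} million")  # ~62
```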

Yet many adults are not prepared for college. Findings from the 2003 National Assessment of Adult Literacy indicate that 31 million adults (14%) in the United States have “below basic” prose literacy and 48 million (22%) have “below basic” quantitative literacy. According to the 2000 Census, about 47 million U.S. residents reported that they predominantly spoke a language other than English at home, and over 21 million spoke English less than “very well” (the threshold for full proficiency in English as determined by the U.S. Department of Education). That self-estimate may reflect some personal grade inflation.

Unfortunately, current federal policies designed to ameliorate literacy and language proficiency problems are a dismal failure. For example, only about 2.6 million people were enrolled in federally supported adult basic education (ABE) programs in 2004–2005, and most failed to achieve any significant gain. Fewer than 40% of those pursuing literacy gains advanced even one educational level. (The U.S. Department of Education, which is the source of these statistics, defines six literacy levels, from beginning literacy to high advanced.) Only 45% of those pursuing a diploma or General Educational Development (GED) credential succeeded, and only 45,000 of the 2.6 million participants moved on to any kind of postsecondary education. In fact, the majority of participants in federally supported ABE are not even in the active labor market. Many are not adults at all; almost 40% are young people of high-school age or just older (ages 16 to 24), and about half of those are simply using ABE as an alternative pathway to high-school completion. English language instruction is woefully inadequate. According to a 1998 study reported by the National Center for Education Statistics, only 11% of non–English-speaking adults had participated in even one English as a second language (ESL) class in the 12 months before the study.

A discouragingly low percentage of working adults who do enroll in postsecondary education ever gain a credential. A 2003 study by Ali Berker and Laura J. Horn of MPR Associates examined the six-year persistence and attainment of adults who had entered college for the first time between 1995 and 1999. Overall, only 39% of those adults gained a credential within six years of enrollment. Of those classified in the research as “employees who study,” only 31% gained a credential, and of those classified as “students who work,” 45% gained a credential. The six-year completion rate for traditional students is about 75%.

Other research confirms that older students tend to complete their degree objectives at a lower rate than younger students, part-timers don’t complete at the same frequency as full-timers, and completion rates at community colleges (which enroll more than 50% of older students and more than 60% of “employees who study”) lag behind those of baccalaureate-granting institutions.

If these trends continue, it seems likely that only about 35 to 40% of the 7 million age 24-plus students now enrolled in degree-granting postsecondary institutions will gain any kind of credential within six years of entry. This translates to only about 300,000 to 400,000 per year and will not make even a tiny dent in the problem.

Hurdles in completing schooling

Postsecondary institutions tend to focus their instruction and delivery strategies on very traditional students: recent high-school graduates who have no attachment to the labor force and no major constraints on their capacity to participate in campus-based, course-oriented educational delivery systems. Even at most community colleges, the majority of programs and courses are geared for traditional postsecondary students. They are not offered in ways that meet the scheduling or timing needs of working adults who must fit college around the requirements of full-time jobs and often child-care responsibilities. Students typically are expected to take several unconnected courses over a 15- to 16-week semester, with each course usually requiring two or even three campus visits per week. Schedules for access to student services such as registration, financial aid, career counseling, and even meeting with instructors too often assume that the students have few constraints on their daytime weekday schedule.

To gain an associate’s degree, students typically have to complete between 20 and 25 semester-long courses. Even for those able to attempt a steady pace of, say, two courses at a time, completing an associate’s degree would take 10 to 12 semesters over four years or more; gaining a bachelor’s degree at this pace would take twice as long.
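A minimal sketch of that arithmetic follows; the course counts and the two-courses-at-a-time pace are from the text, while the number of semesters per calendar year is an assumption.

```python
# Time to an associate's degree at two courses per semester.
courses_needed = [20, 25]     # typical associate's-degree requirement, per the text
courses_per_semester = 2
semesters_per_year = 2.5      # assumption: two regular terms plus an occasional summer term

for courses in courses_needed:
    semesters = courses / courses_per_semester
    years = semesters / semesters_per_year
    print(f"{courses} courses -> {semesters:.1f} semesters, roughly {years:.1f} years")
```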

This pace of course-taking is rarely successful. Too much changes in the lives of working adult students over so long a period: jobs and job schedules shift, child-care arrangements and transportation logistics fall through, and other life events intervene. Working adult students too frequently drop out, discouraged by their slow pace, or simply become disconnected from their education.

A very large percentage of working adult students do not even get past remediation requirements, the greatest source of attrition among working adults. Even when remediation is a matter of just completing a single one-semester course to brush up on rusty math or writing skills, it has a significant impact on persistence rates. When students are placed in more than one remediation course, their persistence plummets. A 1998 study by Norton Grubb of the University of California at Berkeley found that of students who needed nine or more credit hours of remedial courses, only about 25% completed all of their remedial courses and only about 4% completed a degree within five years of initial enrollment.

Cost is also a big barrier. Out-of-pocket tuition, fees, books, and other direct costs can easily add $500 to $750 per course at the least expensive community colleges, and more elsewhere. For a family earning less than $35,000 annually, that can be unaffordable.

Financial aid is too often inadequate for low-income working adults. Federal financial aid policies for grants and loans are ill-designed for working adults who struggle to balance the conflicting demands of work, family, and college enrollment. The relatively new federal education tax credits (enacted in 1997) are not much help to working adults. Less than 20% of the credits (of a total of $6.3 billion in 2003) goes to working adults. Of the two tax credits, the more generous one—the Hope Scholarship—is available only to families of more traditional students (those enrolled half-time or more). The Lifetime Learning Tax Credit that was intended for working adults is much less generous and is irrelevant for millions of working adults whose lack of postsecondary education forces them into low-paying jobs where tax credits are of little use.

Only a few states provide grants to students in short-term, intensive, nondegree programs that would not be eligible under federal Pell grants. Almost all states have very early aid-application deadlines (the March or April before the fall semester of intended enrollment), which create a barrier for adults whose work and family obligations discourage long-term planning.

There are several colleges—some four-year schools as well as many two-year schools—that have worked hard to develop programs that work well for working adults and are affordable. Some colleges have created short-term intensive programs with curricula and scheduling formats that can better accommodate the time limitations of working adults. Private and proprietary institutions that are specifically seeking to attract the adult market, such as the University of Phoenix, have led the way in many of these reforms. Regrettably, however, these are exceptions: best practice, not common practice. In terms of cost, program structure, and delivery methods, most higher education institutions are not sufficiently accessible to working adults and do not promote success.

Bolstering employer support

Leveraging employer support for postsecondary education can be a very significant strategy to increase the postsecondary success of working adults. Businesses are important beneficiaries of their employees’ education, and recognizing this, many employers already pay some or all of their employees’ postsecondary education costs. Employer aid may be distributed to individuals entirely at the employer’s discretion, made available to some or all types of employees as a formal employee benefit, or made available to unionized employees as part of a collective bargaining agreement.

Government surveys of employer training and other research studies about employer assistance indicate that employers provide a substantial amount of financial aid to employees. This aid is increasing modestly in terms of both how many employees receive aid and how much they get (as measured by percentage of payroll, number of hours of training, and expenditures per employee). Most employer-paid formal training is done in house, but for external training, community colleges are more popular than four-year colleges and universities. The research also shows that employers with lower rates of employee turnover, higher rates of employment growth, and smaller proportions of part-time employment do more formal training. Importantly, employers spend more to help employees gain bachelor’s and advanced degrees than sub-baccalaureate degrees, and they spend more for more-credentialed and higher-wage employees than they do for less-credentialed and lower-wage workers.

A frequently cited source for information about employer aid is Training magazine, which conducts a periodic survey from which it draws estimates of total employer expenditures on formal training. The survey includes only those training costs that are specifically budgeted for training that takes place off the job and has discrete costs for trainers, materials, facilities, and so forth. In an October 2001 report, the magazine concluded that employers had budgeted $57 billion for formal training in 2001. More recent estimates by the American Society for Training and Development (ASTD) of increases in training expenditures as a percentage of payroll and increases in the average number of hours of training provided on a per-employee basis suggest that employer spending for formal training increased to about $60 billion in 2005.

Of the total spent on formal training, the employer community devotes a small but important fraction to helping its employees with the cost of postsecondary education. Employers report to ASTD that they spend 11 to 13% of their formal training expenditures on tuition reimbursement for college study. That would place employer aid in the range of $6 billion to $7.5 billion. However, information collected from a sample of graduate and undergraduate students via the National Postsecondary Student Aid Study suggests a somewhat smaller investment, perhaps about $4 billion in 2003–2004.

Even the lower estimate is a good deal of money. To put this in perspective, this current level of employer spending is between 30 and 40% of federal spending on the Pell grant program. Even modest percentage increases in the level of employer investment in postsecondary education could make a significant difference in skill development for working adults. If increased spending were accompanied by changes in the structure of these investments in training and education (such as encouraging the development of more general and more portable skills and allocating a greater share toward currently underprepared workers at lower wage levels), employer aid would have enormous impact.

New strategies

The problem of undereducated and underskilled adult workers is getting worse, not better. Current federal policies are not working. Our education strategies have rested on the expectation that the educational attainment and productivity of the workforce would rise almost inexorably as huge numbers of more-educated young labor market entrants crowded out less-educated older workers. That worked for a long time, but it is not going to work anymore. The United States cannot simply grow its way out of this problem of undereducated adult workers.

The nation needs new strategies based on the reality of current labor market demographics and aimed at lifting the attainment of adults already in the workforce. These new strategies will not be cost-free, but if they are well designed, with the incentives in the right places, they are affordable.

A few basic principles can help shape an effective policy response. First, there is a need to develop segmented responses, avoiding a one-size-fits-all approach. As summarized above, the country faces a series of different problems in educating different segments of the adult workforce. Policymakers need to target resources and policies carefully to these segments.

Second, there must be a focus on building and shaping demand for adult education, on the part of both less-educated workers and their employers. Simply putting more resources into the hands of education and training providers is not likely to be very effective. It would be better to work from a demand-side strategy that first asks underprepared individuals and their employers to step up to greater responsibility in investing in adult education and then provides direct incentives and assistance to those who do.

Third, the federal government and the states need to work together on this. The federal government can have a strong impact on changing adult basic education and instruction practices in higher education only by working with and through the states.

Finally, there must be far greater emphasis on new and improved education technology in order to build and articulate demand, to deliver instruction, to measure progress, and to test for competency. Higher education and adult basic education have been very slow to deploy technology, especially in ways that can overcome the problems of time and flexibility that limit working adult access to good education.

Five basic strategies can provide the foundation of policy reform. First, it is essential to create new economic incentives for employers to help finance basic skill training, ESL training, and credentialed postsecondary education for their employees. Specifically, employers should be offered a substantial new tax credit for their educational investments. A framework for this tax credit is already available through an education assistance plan under Section 127 of the tax code. This provides that when employers reimburse their employees for the cost of tuition, books, fees, supplies, and equipment for job or non–job-related education as part of a “qualified educational assistance program,” these benefits may be excluded from income as reported by the employee, up to a limit of $5,250 per year. That’s good for the employee but, as it stands now, not a real incentive to the employer. A tax credit to the employer in the amount of 50% of such benefits, focused on helping employees gain literacy and English skills as well as postsecondary credentials up to a bachelor’s degree, would stimulate new employer investment.
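A minimal sketch of how the proposed credit would layer on top of the existing Section 127 exclusion follows. The $5,250 exclusion limit and the 50% credit rate are from the text; the reimbursement amount is a hypothetical chosen for illustration.

```python
# Employer tuition assistance under Section 127, plus the proposed employer credit.
reimbursement = 4000          # hypothetical annual tuition reimbursement paid by the employer
section_127_limit = 5250      # maximum excludable from the employee's reported income
proposed_credit_rate = 0.50   # proposed employer tax credit on qualifying education benefits

excluded_from_employee_income = min(reimbursement, section_127_limit)  # current benefit (to the employee)
employer_credit = proposed_credit_rate * reimbursement                 # new benefit (to the employer)

print(f"Excluded from the employee's income: ${excluded_from_employee_income:,.0f}")
print(f"Proposed employer tax credit:        ${employer_credit:,.0f}")
```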

Second, it is essential to strengthen existing incentives for individuals to invest in their basic skills and their credentialed postsecondary education. The Lifetime Learning Tax Credit (LLTC) should be expanded to offer tax credits of 50% on qualified expenses up to $2,000 per year, and the credits should be made fully refundable for low-income workers. (The LLTC is currently limited to 20% of the first $10,000 of postsecondary spending and is not refundable to taxpayers with limited tax liability.) For individual spending on adult basic education and ESL instruction, an even more generous credit of 100% on the first $1,000 of qualified expenses and 50% on the next $1,000 should be provided.
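A minimal sketch comparing the current credit with the proposed one for a hypothetical working adult who spends $1,500 on community college tuition in a year. The rates and caps are those stated above (reading the proposal as 50% of up to $2,000 in qualified expenses); the spending figure is an assumption.

```python
# Current vs. proposed Lifetime Learning Tax Credit for one year of qualified expenses.
qualified_expenses = 1500     # hypothetical spending on tuition and fees

current_credit = 0.20 * min(qualified_expenses, 10000)   # 20% of the first $10,000; not refundable
proposed_credit = 0.50 * min(qualified_expenses, 2000)   # 50% on expenses up to $2,000; refundable for low-income workers

print(f"Current LLTC:  ${current_credit:,.0f}")   # $300, and only if the filer owes at least that much tax
print(f"Proposed LLTC: ${proposed_credit:,.0f}")  # $750, payable even with little or no tax liability
```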

With its legislative companion, the Hope Scholarship, the LLTC was enacted in the Taxpayer Relief Act of 1997 to increase college affordability and to encourage lifelong learning. The two credits were designed to complement each other by targeting different groups of students. Whereas the Hope credit may be used only for a student’s first two years of postsecondary education, the LLTC is available for unlimited years to those taking classes beyond their first two years of college, including college juniors and seniors, graduate students, and working adults pursuing lifelong learning. However, the current structure of the LLTC has limited most of the benefit to full-time students who attend higher-cost institutions and who have used up their eligibility for the Hope. The LLTC has minimally benefited students who attend lower-cost institutions such as community colleges and who are enrolled less than full time. These proposed changes would directly stimulate new investment by working adults in their own education.

Third, there should be more effective ways to encourage postsecondary institutions to develop more flexible programs and degree strategies that work for working adults. The federal government should establish a five-year program of matching grants to states that are most committed to helping their public postsecondary institutions create innovative and effective degree and credential pathways for working adults. States would be invited to compete for two-year planning grants that would be followed by two or three more years of implementation grants. Grants would be made to only 20 to 25 states prepared up front to make the strongest commitment to the postsecondary education of working adults. The federal grants would be renewable annually, subject to performance, rather than allocated in one large grant to the participating states. There would be monitoring, assessment, and enforcement mechanisms to keep states on track in implementing the plans that they develop. There would be a reserve for additional allocations to high-performance states, providing incentives for outstanding work as well as sanctions for poor performance. In addition to the state grants, there might be some resources set aside for competitively awarded research grants and some demonstration grants directly to colleges and universities.

Fourth, the United States needs a fresh start on adult basic education, a new strategy centered on the deployment and use of technology to accelerate English language acquisition by non–English speakers and the development of employer-defined basic skills for low-literacy adults. The existing federal adult basic education program should be tossed out to begin anew with a more employment-focused and technology-based program that supports individual and employer investment in basic skills and English acquisition. The centerpiece of this new approach would be a Basic Skills Innovation Fund to encourage private-sector investment in new technologies and instructional designs for basic skill and ESL training that can be delivered through a variety of venues: worksites certainly, but also public libraries, community centers, and community colleges. Rather than just pumping more funding into the current federal adult literacy program, the government should redirect funds toward building new technology tools and toward building employee and employer demand for effective basic skill development. The LLTC and the employer credit would become the primary funding vehicles for adult basic education and ESL instruction.

Finally, there should be a national marketing campaign to help millions of working adults and their employers better understand their shared interest in more and better education and learn about effective ways to plan, finance, and complete that education. This should be an aggressive and targeted campaign aimed at raising awareness about the importance of education for the adult workforce, driving home these key facts: that educational attainment matters greatly for the competitive success of individuals and companies as well as national economic growth; that limited basic skills and low English proficiency will continue to be a major handicap for adults trying to get ahead, but help is available; that the state and federal governments are helping educational institutions build new educational pathways that work for working adults; and that tax incentives and other forms of assistance are available to workers and their employers.

These strategies are bold only in their departure from current policy; in fact, they represent a measured and necessary response to a huge problem. They are market-oriented policy interventions that seek to stimulate and organize effective demand for education rather than simply trying to increase supply-side offerings. They aim to influence, as directly as possible, the ways in which less-educated workers and their employers spend their money, so that they invest more in education. This is a big job that requires unambiguous and substantial economic incentives, unfiltered by intermediating agencies or institutions.

There is low risk associated with this set of strategies, but extraordinary upside potential. To the extent that employers and individuals do not take up the challenge to invest in worker education, there will be little fiscal consequence; the more the program is used, the more tax revenue will be forgone. But if the program works even modestly to lift the educational attainment of current workers, it will have a huge payoff in terms of economic growth. In an era in which increased educational attainment has contributed from one-fifth to one-quarter of overall economic growth and at a time when we know that continued dependence on new labor force entrants will not produce higher attainment, there could be no smarter investment in human capital.

College admissions and exclusions

A nation’s competitiveness in a global knowledge-based economy depends on the education levels of its population. However, the United States no longer leads the world in college completion rates. Inequalities in college access rates across family income classes and racial/ethnic groups have narrowed only slightly over the past 30 years, and inequalities in college completion rates have narrowed even less. The groups in the population that are growing most rapidly are the ones that historically have been underrepresented in higher education. Hence, the issues that John Aubrey Douglass’ book, The Conditions for Admission: Access, Equity and the Social Contract of Public Universities, discusses are very timely.

Douglass is concerned with leading U.S. public research universities. Expenditures per student are higher at public research universities than at public comprehensive institutions, where they are in turn higher than at public two-year colleges. A considerable body of research by economists suggests that a student’s economic gains from attending a higher education institution depend on the level of expenditures per student at the institution. However, in most states, the share of students who come from lower- and lower-middle-income families (as measured by the number of Pell Grant recipients) is higher at the public two-year colleges than at the public comprehensives, and is higher there than at the flagship public research universities. Hence the students for whom public higher education is supposed to serve as a vehicle for social mobility are, for the most part, attending the more poorly funded public higher education institutions. A notable exception occurs in California, where a number of the campuses of what is arguably the nation’s finest public research university system, the University of California (UC), rank near the top of flagship publics in terms of the shares of their students who are Pell Grant recipients.

It is thus fitting that, for the most part, The Conditions for Admission is a historical treatment of the UC system and how the criteria used for admission to it have changed over time. But the book is much more than a discussion of changing criteria; it is about changing social pressures and changing interest groups, and about the roles and interests of university administrators and faculty, members of the board of regents, elected government officials, and the public at large. It is also a book about the key role that state funding plays in setting admission criteria. During periods when state funding and tuition revenue are inadequate to allow all qualified candidates to attend the university, admission policies become a way of rationing scarce positions.

The Conditions for Admission is divided into four sections. In the first section, Douglass describes the founding of the UC at Berkeley around 1870 and the evolution of its progressive policy that any student who met prescribed academic qualifications would be admitted, in contrast to the behavior of private academic institutions, which often used sectarian and racial preferences. Because of this policy, the geographic representation of students was always important at UC and later led to the development of multiple campuses around the state. Likewise, providing educational opportunities for women became an earlier concern there than it did at many private institutions. Whereas private higher education institutions often respond to increases in demand by increasing their selectivity, UC felt more of a responsibility to grow to increase access. However, Douglass describes the tension that arose early in its history between its obligation to grow and the lack of state resources to fund it. Some tensions never change.

Over time, the development of the junior college system in California and then the California State system allowed UC to preserve its role as a selective academic institution. Although many private colleges and universities had overt or implicit religious quotas in the 1920s and used standardized tests (which were believed to be culturally biased) and “holistic” admissions processes to keep students (often lower-income) from immigrant families out, the UC system based admission on achievement in a set of prescribed high school classes. But as early as the 1920s, there were some students who lacked some of the requirements but were admitted “by exception”; this category of admits grew somewhat after World War II to accommodate returning veterans. Early on, the importance of cultural diversity meant exposing California’s students to international students.

The second section covers the period from after World War II to 1990. When grade inflation led some to wonder whether the SAT could provide a better measure of comparative student performance across high schools, the UC faculty committee in charge of admissions policies rejected the use of the SAT because it did not add much beyond high-school grades in predicting student performance in college. With the civil rights movement of the 1960s, race, rather than class, became much more important in admissions policies. During this period, the divergent interests of faculty (often focused on selectivity) and administrators (concerned with the social goals of diversity and expanding enrollments) came into play, and administrators now used holistic review as part of the special admissions process to enroll members of disadvantaged groups rather than to keep them out. Pressure on enrollments led to the use of SAT scores to evaluate marginal students, not because SAT scores enhanced the predictive power of models of college performance, but because their use created the appearance of an objective criterion for rationing entry among the eligible students.

In 1978, the Supreme Court’s Bakke decision prohibited racial quotas but allowed race to be a consideration in admission decisions, and this led to a further use of holistic admissions, as up to half the UC class was now admitted that way. However, the impact of this admission process was to disadvantage Asian Americans, and a relatively new interest group put pressure on the political process and the UC regents to improve their access.

The third section talks about more contemporary experiences. In the 1990s, Board of Regents member Ward Connerly led the campaign to end affirmative action in UC admissions. The board formally voted to ban the use of race or ethnicity in admissions, and in 1996 the voters passed Proposition 209, which formally prohibited the use of race or ethnicity in admissions. To counter this prohibition, UC shifted back to its original concerns about preserving geographic and economic diversity, and the circle was complete. But differences in the wealth levels of communities affected the quality of local schools, and the lack of advanced placement classes at schools in poorer areas limited their students in successfully competing for admission. So consideration of rank in class as an admissions criterion rose, and UC began using holistic admissions criteria to admit a greater share of its entering class.

In the final section, Douglass discusses the very fundamental issues that his historical description and the changing external environment have raised. Who should set the admission standards at public universities: the state, the courts, public opinion and referenda, interest groups, or faculty and administrators? How should merit be defined, and what are public universities trying to achieve? How is the current crisis of funding at public universities, in which public funding is declining relative to the institutions’ true costs of operations, likely to affect the universities? How will the privatization of public higher education that is occurring (with increased tuition levels and increased reliance on external giving) affect admissions processes at public universities? Will we see public universities behaving more like their private counterparts, with widespread alumni influence, for example? As public funding for public universities becomes a progressively smaller share of the universities’ budgets, is it reasonable to expect that these institutions will begin to disregard their public purpose and the reasons why they were originally founded?

Although in places readers may wish that the evolution of UC admissions policies were discussed in a little less detail, The Conditions for Admission is an important and well-written book that anyone interested in higher education should read. It will help the reader to understand that admissions decisions are not clear-cut, that policies disfavored at one time may be favored at another, and that policies adopted for one reason at one time may serve a different purpose at another time. It will also help the reader to understand that the goals of public universities are not always obvious, and therefore the optimal policies that the universities should pursue are also not always obvious. Certainly reading the book has made me less dogmatic about what I believe.

I do have one major quibble with Douglass’ book. He draws a distinction between public universities that have an obligation to behave in the social interest and private universities that have the freedom to unilaterally pursue their own private interests. Although there may be some truth to that view, I would argue that the distinction is not as strong as it may at first seem.

Taxpayers as a whole subsidize private higher education institutions through the favorable tax treatment that the institutions receive: They pay no taxes on their endowment earnings, individuals and corporations that make contributions to them can deduct these from their personal and corporate income taxes, academic institutions’ property used for educational purposes is exempt from local property taxes, and the institutions can borrow money at tax-free interest rates. In return for this favorable tax treatment, private universities are expected to act in the public interest. I have argued that one aspect of this is that they are expected to remain accessible to students from all family-income levels.

Many of the selective private institutions enroll a much smaller share of Pell Grant recipients among their undergraduates than do most other public and private institutions, and at many of them that share has declined in recent years. With tuition rising faster than inflation and endowments soaring to unprecedented levels, is it any wonder that in the summer and fall of 2007 pressure arose from the Senate Finance Committee for them to spend more from their endowments and devote this added spending to grant aid for needy students?

Acting out of a belated understanding of their social responsibility, or out of fear that Congress would mandate higher endowment spending, the nation’s richest private universities such as Harvard and Yale eliminated all loans from their financial aid offers, while their somewhat less rich competitors such as Cornell and Duke converted at least some of their loans to grants. A number have also announced plans to increase the size of their student bodies. So the distinction between public and private universities in terms of their obligation to act in the social interest may not be as strong as the author believes.

Achieving space security

These two books, although very different in intent and style, converge on a fundamental conclusion: If the United States wants to continue to enjoy the benefits of space security, defined by James Clay Moltz as “the ability to place and operate assets outside the Earth’s atmosphere without external interference, damage, or destruction,” there is a “compelling logic to the exercise of military restraint.” Indeed, he suggests, all nations active in space should exercise such restraint “because of their shared national interest in maintaining safe access to critical regions of space.” For example, it is in the common interest to avoid actions that create space debris and threaten the environment of outer space, such as the January 2007 Chinese test of an antisatellite device or the February 2008 U.S. destruction of a reentering National Reconnaissance Office satellite. Both books suggest that the space environment is, as Moltz puts it, “too valuable to be used for war.”

Mike Moore is the former editor of The Bulletin of the Atomic Scientists, a periodical with a strong arms control perspective, and thus it is not surprising that Twilight War is an extended tract arguing against the desirability of U.S. space dominance. This concept is defined as an overwhelming advantage in space capability that would allow the United States to control who has access to outer space and what is done there, and, if it so chooses, to use space as an arena for the projection of U.S. military power. Moore suggests that a coherent group of “space warriors” in the Department of Defense, the Air Force, various think tanks, Congress, the aerospace industry, and, at least during its first term, the Bush administration, believes that the capability to dominate space should be a top-priority U.S. goal. Moore writes that this view “is uniquely in tune with twenty-first century American triumphalism,” defined as the belief that “America’s values, perhaps divinely inspired, ought to be the world’s values.” Indeed, throughout the book he suggests that such exceptionalism, not geostrategic security considerations, is the underlying motivation behind U.S. rhetoric regarding space dominance, which would give the United States the right to define for the rest of the world, not just for itself, what is acceptable behavior beyond Earth’s atmosphere.

Twilight War puts forward two well-articulated lines of argument against this point of view. The first is that achieving U.S. space control is technologically and fiscally extremely unlikely, given the increasing space capabilities of other countries, not only Russia but also China, India, Japan, and the nation-states of Europe. The second is that rather than being consistent with U.S. values, “building and deploying the capability to unilaterally control space and place weapons in space would not square with America’s historic reverence for liberty and the rule of law.” As a self-described “liberal internationalist,” Moore suggests that U.S. interests might be best served if this country “took the lead in developing a new, tougher, and more comprehensive space treaty that would decisively prevent any nation from developing the capability to militarily dominate space.”

Moving toward such a treaty is currently a very unlikely prospect. The most recent U.S. national space policy, approved by President Bush on August 31, 2006, strongly rejects such a treaty-based approach to space security, saying that “the United States will oppose the development of new legal regimes or other restrictions that seek to prohibit or limit U.S. access to or use of space” and that “proposed arms control agreements or restrictions must not impair the rights of the United States to conduct research, development, testing, and operations or other activities in space for U.S. national interests.” The U.S. opposition to binding space arms control treaties has been relatively consistent through the past four presidential administrations. One crucial space policy issue for the next occupant of the White House will be whether to maintain this unilateralist approach to ensuring U.S. freedom of action in space.

Moore marshals various examples of “overheated warrior rhetoric” to make his case that space dominance is indeed the motivating force behind the development of many U.S. national security space capabilities, but he is honest in acknowledging that reality has not matched that rhetoric and that in recent years actual spending on space-dominance capabilities has been relatively modest. Even so, suggests Moore, space warriors, although opposed to creating weapons that would lead to space debris, remain “interested in developing more sophisticated systems that could temporarily disable, damage, or destroy satellites in any orbit, without creating debris.”

Moore’s book is sometimes wordy and rambling. Still, Twilight War is a very useful addition to the growing body of thinking and writing that opposes moves to weaponize space and that advocates multilateral approaches to space security.

In The Politics of Space Security, Moltz, an associate professor in the Department of National Security Affairs at the Naval Postgraduate School in Monterey, CA, notes that his book is intended for a scholarly audience, but he also hopes that the policymaking community, the media, and the general public will find it of interest. Although the author draws from the academic literature in setting up his framework for analysis, most of the well-written study is blessedly free from jargon. This is a particularly valuable contribution to the space security debate, given both the wide range of the historical dimensions of the study and the incisive quality of its analysis.

The Politics of Space Security is actually two books in one. The first and last sections of the book contain a thoughtful analysis of various perspectives on the concept of space security and apply them to the current situation and future prospects. Moltz’s first two chapters look at how other analysts have understood space security and set forth an alternative explanation that stresses a growing awareness of the environmental consequences of actions such as the testing of nuclear weapons in outer space in the early 1960s, which created electromagnetic effects that interfered with satellite operations in both the short and potentially the longer term, and the kinetic destruction caused by antisatellite weapons, which could create long-lived debris in heavily used orbits. Moltz writes as his “main conceptual argument” that “environmental factors have played an influential role in space security over time and provide a useful context for considering its future [emphasis in original].”

The middle chapters of the study contain a detailed history of U.S.-Soviet interactions in space, particularly in the national security arena, from the origins of the two countries’ space programs early in the 20th century to the end of the Clinton administration. This historical account is worthwhile on its own terms, because there has been nothing comparable in the past 20-plus years, since Paul Stares’ 1987 study Space and National Security. Moltz sees U.S.–Soviet/Russian interactions in space between 1958 and 2000 as alternating between “military-first” phases, during which the emphasis in both countries was on developing military space capabilities, and “diplomacy-first” phases, during which the emphasis was on creating bilateral and multilateral agreements on good behavior in space, including a number of space-related treaties. He characterizes the fundamental relationship as one of “strategic restraint.” Moltz concludes, supporting his emphasis on environmental security, that over this 40-plus–year period “the two sides gradually accepted mutual constraints on deployable weapons in return for safe access to the space environment for military reconnaissance, weather forecasting, tracking, early warning, and a range of civilian uses.”

The final part of The Politics of Space Security returns to the questions raised at the start of the study regarding alternate paths to space security. Moltz argues, echoing the main argument of Twilight War, that the early years of the Bush administration saw a rejection of strategic restraint in space and a move toward a unilateral approach to space security, one that, in the words of the 2006 National Space Policy, gave the United States the right to “deny, if necessary, adversaries the use of space capabilities hostile to U.S. national interests.”

This most recent military-first approach to space security has produced a number of reactions in the United States and around the world. One is the rise of a vocal community opposed to recent U.S. space policy, particularly because that policy is seen as leading to the weaponization of space; Moore’s book is just one example. Another is the possibility that some states, most notably but perhaps not only China, deem it increasingly necessary to develop their own ways of countering U.S. moves toward space control; some interpret China’s development of a kinetic-kill antisatellite weapon in this context. Even though it was China that carried out an antisatellite test, the United States has increasingly come to be seen around the world as the primary opponent of any limitations on space weaponization.

Like Moore, Moltz argues that the actions of the Bush administration have not matched its rhetoric, although Moltz (also like Moore) notes the continued development of counterspace capabilities that can operate without lasting harm to the space environment. He points out an important new development in recent years. Whereas previously the space security debate within the U.S. government was limited to the national security space community, there has been an increasing engagement of commercial interests, who after all have their own strong interests in a secure operating environment. He concludes that a new discussion about space security, involving new actors, is beginning.

With a new president taking office in January 2009, the direction that such a new discussion might take and its outcome cannot be forecast with any certainty, but space security is clearly the most important emerging issue in the space sector. Moltz believes that the stage has been set for another phase of diplomacy-first activity, and Moore would certainly hope that he is correct. In recent years, the center of gravity in the space security debate has shifted from attempts to negotiate a comprehensive treaty-based regime banning space weapons to a more bottom-up approach that emphasizes limited concrete actions to enhance space security for all. These include the adoption by the United Nations of guidelines regarding minimizing space debris, discussions of some form of multilateral cooperation in improving space situational awareness, increasing acceptance of the concept of a code of conduct setting out best practices for spacefaring states, and perhaps moves toward space traffic management as more users become active in space.

Moltz writes that “the future course of international relations in space remains unwritten.” It is his hope that the combination of collective learning about the consequences of conflictual behavior in space and the involvement of more national and commercial actors in the space security debate will see a return to the kind of mutual restraint that characterized U.S. and Soviet behavior, but this time on a global scale involving the newly emergent space powers. He points out, however, that this outcome is still contingent on “human initiative” and a willingness on the part of key states to support multilateral cooperation, neither of which can be taken for granted.

In his book, Moore lays out a somewhat overstated version of the perspectives that have guided U.S. national security space policy during the past eight years. Moltz reminds us that these perspectives are not in any way immutable, and his analysis gives hope that it is possible for leaders, informed by experience and desiring to maintain space as an environment in which all can carry out beneficial activities, to find a path to achieve that goal. Both these books are reminders that the ability to operate in space free from threats of disruption is essential to the modern world, and both provide an ultimately optimistic assessment that it is possible to get to such a state of affairs.

Time to Act on Health Care Costs

Popular discussions of the long-term fiscal challenges confronting the United States usually misdiagnose the problem. They typically focus on the government expenses related to the aging of the baby boomers, with lower fertility rates and longer life expectancy causing most of the long-term budget problem. In fact, most of the long-term problem will be driven by excess health care cost growth; that is, the rate at which health care costs grow compared to income per capita. In other words, it is the rising cost per beneficiary rather than the number of beneficiaries that explains the bulk of the nation’s long-term fiscal problem.

One can see this phenomenon manifesting itself even in the next decade: Figure 1 shows the Congressional Budget Office’s (CBO’s) projections for spending on Social Security, Medicare, and Medicaid through 2017. As Figure 1 shows, Social Security rises from 4.2% to 4.8% of gross domestic product (GDP) over that period. Spending on Medicare and the federal share of Medicaid rises from 4.6% to 5.9% of GDP, an increase of 1.3 percentage points, or roughly twice the increase for Social Security.

If one looks further into the future, the basic point is accentuated. Figure 2 portrays a simple extrapolation in which Medicare and Medicaid costs continue to grow at the same rate over the next four decades as they did over the past four decades. (Fortunately, even with no change in federal policy, there are reasons to believe that this simple extrapolation overstates future cost growth in Medicare and Medicaid. The CBO has recently released a long-term health outlook that presents a more sophisticated approach to projecting Medicare and Medicaid costs under current law, but this simple extrapolation is adequate to illustrate the key point.) Under this scenario, Medicare and Medicaid would rise from 4.6% of the economy today to 20% of the economy by 2050. To appreciate the scale of this increase, all of the activities of the federal government today make up 20% of the economy.

FIGURE 1
Spending on Medicare and Medicaid and on Social Security as a percentage of GDP, 2007 and 2017

The most interesting part of Figure 2 is the bottom line, which isolates the pure effect of demographics on those two programs. The only reason that the bottom line is rising is that the population is getting older and there are more beneficiaries of the two public programs. The increase between today and 2050 in that bottom dotted line shows that aging does indeed affect the federal government’s fiscal position. But that increase is much smaller than the difference in 2050 between the bottom line and the top line. In other words, the rate at which health care costs grow—whether they continue to grow 2.5% per year faster than income per capita, or 1%, or 0.5%—is to a first approximation the central long-term fiscal challenge facing the United States.

Conventional wisdom tells us that the sooner we act, the better off we are, and convention certainly has it right in this case. Figure 3 shows that if we slowed the excess growth of health care costs from 2.5% to 1% per year starting in 2015 (which would be extremely difficult if not impossible to do, but is helpful as an illustration), federal Medicare and Medicaid expenditures in 2050 would account for 10% rather than 20% of GDP.
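To see the compounding arithmetic behind such projections, here is a minimal sketch that holds the excess growth rate constant and ignores demographics entirely; the starting share and dates are illustrative assumptions of my own, and this is emphatically not the CBO’s methodology.

```python
# Minimal sketch (not the CBO's method): compound a program's share of GDP
# forward under a constant "excess" growth rate relative to the economy.
# Demographics are ignored, so the outputs are illustrative only.

def projected_share(start_share: float, excess_growth: float, years: int) -> float:
    """Share of GDP after `years` if program spending grows `excess_growth`
    per year faster than GDP."""
    return start_share * (1 + excess_growth) ** years

start_share_2015 = 0.059   # assumed Medicare + federal Medicaid share in 2015 (illustrative)
for excess in (0.025, 0.01, 0.005):
    share_2050 = projected_share(start_share_2015, excess, 2050 - 2015)
    print(f"excess growth {excess:.1%}: about {share_2050:.1%} of GDP in 2050")
```

The point of the exercise is simply that small differences in the assumed excess growth rate compound into very large differences in the 2050 burden.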

On its face, this challenge looks pretty daunting. And it is further complicated by the fact that it is implausible that we will slow Medicare and Medicaid growth unless overall health care spending also slows. The reason is that if all one did was, say, reduce payment rates under Medicare and Medicaid, and then tried to perpetuate that over time without a slowing of overall health care cost growth, the result would probably be that fewer doctors would accept Medicare and Medicaid patients, creating an access problem that would be inconsistent with the underlying premise and public understanding of these programs. One therefore needs to think about changes in Medicare and Medicaid in terms of the impact that they can have on the overall health care system.

FIGURE 2
Total federal spending for Medicare and Medicaid under assumptions about the health cost growth differential

FIGURE 3
Effects of slowing the growth of spending for Medicare and Medicaid

From that perspective, this long-term fiscal challenge appears to present a very substantial opportunity: the possibility of taking costs out of the system without harming health. Perhaps the most compelling evidence underscoring this opportunity is the significant variation in health care spending across different parts of the United States that does not translate into differences in health quality or health outcomes, as explained by Elliott Fisher in the accompanying article.

The question then becomes, why is this happening? To me, it appears to be a combination of two things. The first is a lack of information about what works and what doesn’t. The second is a payment system that gives neither providers nor consumers an effective incentive to eliminate low-value or negative-value care.

On the consumer side, and despite media portrayals to the contrary, the share of health care expenditures paid out of pocket, which is the relevant factor for evaluating the degree to which consumers are faced with cost sharing, has plummeted over the past few decades, from about 33% in 1975 to 15% today. All available evidence suggests that lower cost sharing increases health care spending overall. The result is that collectively we all pay a higher burden, although the evidence is somewhat mixed on the precise magnitude of the effect.

This observation leads some analysts to argue that the way forward is more cost sharing and a health savings account approach, and that can indeed help to reduce costs. But two things need to be kept in mind in evaluating this approach. The first is that existing plans already involve a significant amount of cost sharing. Moving to universal health savings accounts would thus not entail as much of an increase in cost sharing, and therefore as much of a reduction in spending, as one might think. The second is that there is an inherent limit to what we should expect from increased consumer cost sharing, because health care costs are so concentrated among the very sick. For example, the most expensive 25% of Medicare beneficiaries account for 85% of total costs, and this concentration of costs among a small share of the population is replicated in Medicaid and in the private health care system. To the extent that we in the United States want to provide insurance, and insurance is supposed to protect against catastrophic costs, the fact that those catastrophic costs account for such a large share of overall costs imposes an inherent limit on the traction that increased consumer cost sharing can provide. In sum, increased cost sharing on the consumer side can help to reduce costs, but it seems very unlikely to capture the full potential to reduce costs without impairing health quality.
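A back-of-the-envelope calculation shows why this limit binds. The 25%/85% concentration figure comes from the discussion above, but the catastrophic cap and the normalization are assumptions of my own, so the result is only illustrative.

```python
# Back-of-the-envelope sketch: how much spending can consumer cost sharing
# even reach when costs are concentrated among the very sick?
# The 25%/85% split comes from the text; the cap is an illustrative assumption.

beneficiaries = 100.0          # normalize to 100 beneficiaries
total_spending = 100.0         # and 100 units of spending (average 1 per person)

high_cost_count = 25.0                        # sickest 25% of beneficiaries...
high_cost_spending = 0.85 * total_spending    # ...account for 85% of costs
low_cost_spending = total_spending - high_cost_spending

cap = 0.5   # assumed out-of-pocket exposure per high-cost beneficiary,
            # i.e. half of average annual spending per person (illustrative)

# Spending by the healthier 75% is below any catastrophic threshold, so all of
# it is exposed to cost sharing; for the sickest 25%, only spending up to the
# cap is exposed, and insurance pays the rest.
exposed = low_cost_spending + high_cost_count * cap
print(f"spending exposed to cost sharing: about {exposed:.0f}% of the total")
# -> roughly 28% under these assumptions; the rest sits above the cap,
#    where insurance, by design, pays.
```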

That leads us to the provider side, where the accumulation of additional information and changes in incentives would be beneficial. There is growing interest in comparative effectiveness research, and the original House version of the State Children’s Health Insurance Program legislation had some additional funding for comparative effectiveness research. Policymaker interest in expanding comparative effectiveness research is encouraging, but we need to ask some hard questions about what we mean by comparative effectiveness research and how it would be implemented.

The first issue is what kind of research is undertaken and what standard of evidence is used. As Mark McClellan, the former administrator of the Centers for Medicare and Medicaid Services, has noted, comparative effectiveness research will very probably have to rely on nonrandomized evidence. The reason is that it seems implausible that we could build out the evidence base across a whole variety of different clinical interventions and practice norms using randomized control trials, especially if we want to study subpopulations. On the other hand, economists have long been aware of the limitations of panel data econometrics, where one attempts to control for every possible factor that could influence the results and typically falls far short of perfection. There is thus a tension between using statistical techniques on panel data sets (of electronic health records, insurance claims, and other medical data), which seems to be the only cost-effective and feasible mechanism for significantly expanding the evidence base, and the inherent difficulty of separating correlation and causation in such an approach.
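To illustrate the correlation-versus-causation concern concretely, the following small simulation is my own illustration, not taken from any study cited here: an unmeasured severity variable drives both treatment assignment and outcomes, so the naive observational comparison is badly biased, while a randomized comparison recovers the true effect.

```python
# Toy simulation of why nonrandomized comparisons can confuse correlation with
# causation: sicker patients (unobserved severity) are more likely to receive
# the treatment, so a naive comparison makes a beneficial treatment look harmful.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
true_effect = -1.0                         # treatment lowers the outcome score by 1 unit

severity = rng.normal(0, 1, n)             # unmeasured confounder

# Observational assignment: sicker patients get treated more often.
p_treat = 1 / (1 + np.exp(-2 * severity))
treated_obs = rng.random(n) < p_treat
outcome_obs = 2 * severity + true_effect * treated_obs + rng.normal(0, 1, n)
naive_estimate = outcome_obs[treated_obs].mean() - outcome_obs[~treated_obs].mean()

# Randomized assignment: treatment independent of severity.
treated_rct = rng.random(n) < 0.5
outcome_rct = 2 * severity + true_effect * treated_rct + rng.normal(0, 1, n)
rct_estimate = outcome_rct[treated_rct].mean() - outcome_rct[~treated_rct].mean()

print(f"true effect:          {true_effect:+.2f}")
print(f"naive observational:  {naive_estimate:+.2f}")   # biased: treatment looks harmful
print(f"randomized:           {rct_estimate:+.2f}")     # close to the truth
```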

In terms of the budgetary effects of comparative effectiveness research, much depends on both what is done and how it is implemented. If the effort involves only releasing the results of literature surveys, the effects would probably be relatively modest. If new research using registries or analysis of electronic health records is involved, there may be somewhat larger effects. The real traction, though, will come from building the results of that research into financial incentives for providers. In other words, the effects would be maximized if we moved from a “fee-for-service” to a “fee-for-value” system, in which higher-value care is rewarded with stronger financial incentives and low- or negative-value care is discouraged with weaker incentives or even penalties. Such a system is very complicated to design and difficult to implement, but that is where the largest long-term budgetary savings could come from.

My conclusion is that the combination of some increased cost sharing on the consumer side with a substantially expanded comparative effectiveness effort linked to changes in the incentive system for providers offers the nation the most auspicious approach to capturing the apparent opportunity to reduce health care costs with minimal or no adverse consequences for health outcomes.

A New System for Preventing Bridge Collapses

On August 1, 2007, the eight-lane Interstate 35W bridge in Minneapolis, Minnesota, collapsed catastrophically during rush hour, killing 13 people and severing a crucial connection across the Mississippi River. Before an investigation even had time to get started, the Minneapolis bridge failure had reignited an old debate about whether the United States was investing enough in overseeing and maintaining its infrastructure.

In general, the primary cause of bridge failure is not sloppy maintenance or deviation from design standards; bridges fall down because extreme events are too much for them to bear. A bridge designed to current standards and properly maintained can still fail. That is because we still do not know enough about how exceptional stresses, and especially the interaction of exceptional stresses, can compromise a bridge’s integrity. This ignorance prevents the United States from setting design and engineering standards that can cope with the extraordinary “loadings” caused by weather and other extreme events.

We need to revisit current standards with the benefit of much more extensive research. We need to install monitoring devices to record stresses on bridges. Of course, we need to examine what happens during catastrophic failure such as the Minneapolis bridge collapse, but we also need to study and learn from incidents of lesser damage and close calls.

In the case of the Minneapolis bridge collapse, speculation in the initial weeks focused on fatigue cracking and corrosion (implying inadequacies of inspection and maintenance). There was also speculation about the adequacy of steel plates called gussets, which connect a truss’s structural members at a joint (implying problems in design, construction, or maintenance). We will not know for sure what the cause was until the National Transportation Safety Board reports its findings, which will not happen before summer 2008.

However, most catastrophic bridge failures occur because of, or are significantly exacerbated by, factors outside the bridge structure itself, especially extreme events. There are many kinds of extreme events, including the force of a surging river current; the collision of fuel-carrying vehicles with a bridge support; large ground acceleration in an earthquake; a group of trucks carrying more weight than the law allows; and unprecedented events such as a terrorist blast attack on piers or support cables.


An extreme event imposes load effects on the bridge; these add to the loads exerted by the bridge’s own weight and by moving vehicles. Bridge designers seek to ensure that the bridge provides more than sufficient capacity to resist the loads likely to be applied to it during its life. When a bridge fails under loads that it should have resisted under accepted standards of professional practice, the failure is said to be one of resistance capacity. Otherwise, the bridge fails because loads imposed by external events exceed those against which standard codes and practices are expected to provide protection. The latter are the more common, so we focus on them here.

Bridge design and engineering advanced significantly in the latter half of the past century because bridge designers and engineers applied lessons learned from a series of bridge failures. Despite these advances, however, the United States has continued to experience major bridge collapses. Those failures happened because we don’t know enough about the dynamic effects that extreme loads have on structures. That is why the United States, in addition to needing to increase spending to maintain its infrastructure, also needs to invest in the research and information systems that will tell engineers how to avert future disasters.

Researchers need to examine both the failure of materials at the microscopic level and the progressive failure of entire bridge systems. They should further develop computer modeling of nonlinear dynamics during bridge failure. Working with economists and risk analysts, they should seek better understanding of methods for minimizing costs during the life of the bridge. But these studies would largely be extensions and developments of research that is already proceeding.

We argue that the United States needs to invest in a new national R&D initiative, one that monitors and reports on bridge performance and bridge failures electronically through smart sensor networks incorporated into bridges. The nation needs to move its transportation systems into the digital age.

A history of failure

Historically, the most infamous bridge failures include the Ashtabula bridge (Ohio, 1876), the Tay Bridge (Dundee, Scotland, 1879), the Quebec Bridge (St. Lawrence River, 1907), and the Tacoma Narrows Bridge (Tacoma, Washington, 1940), which achieved notoriety because its catastrophic collapse was captured on an amateur’s movie camera. These bridges all collapsed because of design or materials flaws that have been rectified by present-day practice.

However, better design and engineering have clearly not eliminated the risk of bridge collapse. Aside from the Minneapolis case, important examples in recent decades include the Schoharie Creek bridge on the New York Thruway in 1987 during intense rain, the San Francisco-Oakland Bay Bridge in 1989 during an earthquake, and the Walnut Street Bridge in Harrisburg, Pennsylvania, in 1996, from scour due to flooding. Several other failures occurred during flooding on the Mississippi River in the 1990s, during the earthquakes at Northridge (1994) and Kobe, Japan (1995), and along the Gulf Coast during Hurricanes Katrina and Rita in 2005.

Data compiled for 1989-2000 by researchers Kumalasari Wardhana and Fabian Hadipriono and published in the Journal of Performance of Constructed Facilities show a continuing record of U.S. bridge failures, even before the resurgence of concern in the wake of Katrina. They obtained their data from a national database prepared by the New York State Department of Transportation (NYSDOT) after the Schoharie Creek tragedy of 1987. Over the study period, the authors identify 503 cases of bridge collapse, with a peak of 112 in 1993.

The highest number of bridge failures (85) occurred in Iowa and the second highest (64) in New York. Iowa’s high figure may derive in part from the effects of the 1993 floods on the Mississippi and Missouri Rivers, and New York’s perhaps from more complete in-state data available to NYSDOT. As a proportion of the number of bridges in these two states, the failure rate for the 12-year study period was 0.33% for Iowa and 0.29% for New York. In the United States in general, there was an average of 42 bridge failures every year. If for no other reason than the aspiration for excellence in public works, bridge safety planning needs attention.

Going to extremes

In most cases, the causes of bridge failure are difficult to diagnose because they arise from complex interactions between resistance and load. Failures attributable to resistance may occur because of flaws in the structure’s original design, flaws in detailing documents submitted by contractors with engineers’ approval, errors in construction practice, materials deficiencies, environmental deterioration over time, or inadequate inspection and maintenance. Failures are otherwise attributable to excess loads. For any particular case of failure, a forensic study would be needed to establish scientifically whether a failed bridge did or did not fulfill required codes, design specifications, and the expectations of the profession, and whether, by accepted standards, problems should have been recognized and the bridge closed or retrofitted. Without consistent forensic study, statements about the cause of failure are suspect.

TABLE 1
Type and Number of Bridge Failure Causes

Failure causes and events                          Number of occurrences    Percentage of total
Hydraulic (flood, scour, debris, drift, others)            266                    52.88
Collision (auto, barge, etc.)                                59                    11.73
Overload                                                     44                     8.75
Deterioration                                                43                     8.55
Fire                                                         16                     3.18
Construction                                                 13                     2.58
Ice                                                          10                     1.99
Earthquake                                                   17                     3.38
Steel fatigue                                                 5                     0.99
Design                                                        3                     0.60
Soil*                                                         3                     0.60
Storm/hurricane/tsunami                                       2                     0.40
Miscellaneous/other*                                         22                     4.37
TOTAL                                                       503                   100.00

Source: Wardhana and Hadipriono, 2003

Note: Italics indicate external events. An asterisk (*) indicates uncertainty about whether a cause was an external event.

Nevertheless, the information available from databases does provide a suggestive list. Table 1, obtained from Wardhana and Hadipriono, lists both internal (resistance-side) causes and external (load-side) causes. The failures that can be readily labeled external make up just over 82% of the total.

Bridge failures during the Gulf Coast hurricane disasters of 2005 serve as a further wake-up call. According to the National Institute of Standards and Technology, three out of four major bridges from New Orleans to areas north of Lake Pontchartrain underwent catastrophic failure, and the fourth was reduced to partial service. Major bridges also failed over Bay St. Louis, Biloxi Bay, Back Bay, and the Pascagoula River, all in Mississippi, and over Mobile Bay, Alabama. In addition, several movable bridges became inoperable because electrical or mechanical chambers were flooded. In fixed bridges that underwent structural failure, the causes included the lateral forces of waves on bridge superstructure and substructure along with buoyant uplift of inundated bridges. They also included impacts from barges and other debris, such as floating vehicles, shipping containers, logs, boats, and large appliances, plus the undermining of foundations through scour.

The Gulf Coast’s vulnerability may not be unique at all. Massive land-use changes have exacerbated the propensity for flooding in other places as well. Even New York City is among the coastal cities at risk from hurricanes and coastal surge.

The threats from terrorism

The possibility of bridge failure from terrorism is also a concern in the aftermath of 9/11. Long-span signature bridges that have high symbolic significance may be especially at risk. In contrast with natural and accidental hazards, terrorist threats are likely to focus on high-visibility, high-service bridges of strategic economic importance.

In some ways an attack against a bridge, any bridge, is less threatening to life and safety than an attack against a building because bridge users are traversing the structure and not (as in a building) occupying it. If the bridge and its approaches are not congested with traffic, then simply preventing additional vehicles from entering will quickly clear the bridge. Bridge landings provide easier and faster egress than do the doors of a high-rise building. With proper protocols, bridges can be closed early in response to suspicious activity.

In other respects, bridges are more vulnerable than buildings. Bridges are subject to constant flow of vehicles, which as a practical matter cannot be inspected for explosives. Because in a bridge (as compared to a building) the structural members are exposed, a malicious but trained observer may more easily estimate the most vulnerable points where explosives can undermine the structure’s integrity catastrophically. Long-span bridges generally have less structural redundancy than buildings do. Suspension bridges are especially vulnerable because the rupture of one main cable might cause rapid, progressive, and unstoppable catastrophic failure. Cable-supported bridges are also often architecturally dramatic and would therefore attract attackers searching for high-visibility targets.

Several news reports have suggested that bridges have been targeted by terrorists. According to news reports in 2004, al Qaeda documents captured in Pakistan indicated that the Brooklyn Bridge was reconnoitered as a possible terrorist target. Other news items reported the arrest of a suspect, wanted on other terrorism-related suspicions, for close-up filming of details of the Chesapeake Bay Bridge. In California, there was public debate reaching the gubernatorial level on the ability of the San Francisco–Oakland Bay Bridge to withstand an explosive attack.

Even in the absence of actual incidents, bomb threats and fears of terrorism can cause interruptions of service. The National Cooperative Highway Research Program estimates that there are 1,000 “critical bridges” at which special anti-terrorism precautions may have to be taken, even though there is no certainty that attackers would target these particular bridges.

Multihazard events

Bridge safety is receiving renewed attention because there is a growing appreciation of how separate load effects can combine to increase hazards. For example, the surge and strong winds of Hurricane Katrina imposed lateral wave forces, buoyant uplift, debris impact, and scour. During a flood, the bridges that remain in service may be subjected to unusually heavy traffic loads. Such multihazard events may not even be exceptional; they may be at least as common as, and perhaps more common than, the simplest scenario in which a single extreme event exerts a single type of load.

We have identified five categories of extreme events, listed in order of approximate likelihood:

  • A simple event in which one extreme event exerts a single load.
  • A combined multihazard event in which a single extreme event exerts multiple load effects, such as an earthquake that sets off ground shaking, ground faulting, and soil liquefaction.
  • A consequent multihazard event, which is a single extreme event exerting one or more initial load effects that set off secondary events, each of which exerts additional forces on the structure—such as flood and wind that cause barge collisions or earthquakes that cause vehicles to collide and catch fire.
  • A subsequent multihazard event, two unrelated extreme events separated in time, perhaps even years, with the second event affecting a subsystem weakened in the first event—for example, a barge colliding with a pier a few months after the pier is weakened by scour.
  • A simultaneous multihazard event in which two unrelated extreme events coincide; for example a heavily loaded truck convoy crossing a bridge during a flood, or an earthquake in Alaska during the winter, when there is extreme cold or pressure from an ice jam.

Of these, combined and consequent events are the most likely multihazard events except for scour, which can cause long-term subsequent events. In practice, it may be difficult to distinguish between combined and consequent events. The critical lesson is that bridge design and planning must increasingly take into account such combinations of events.

Comprehensive monitoring

Bridge experts are becoming aware that bridges should be situated, designed, retrofitted, and maintained with a view to an ever-widening range of extreme loads and their combinations, with terrorism posing a new challenge. It is appropriate that, as the field of engineering progresses, its practitioners should refine their work to improve quality, cost-effectiveness, and public safety. If the United States is going to increase investment in infrastructure, it is essential that it have reliable information on what makes bridges safer.

We can make progress in bridge safety by modeling the reliability relationship between loads (including multihazard loads) and resistance and by developing improved structural modeling and materials. Both kinds of progress depend on better data, but obtaining better information is no easy task.
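To make the load-resistance relationship concrete, the standard formulation from structural reliability theory (a general textbook sketch, not notation taken from this article) defines a limit state

\[
g = R - S, \qquad P_f = P(g \le 0) = P(R \le S),
\]

where R is the bridge’s resistance (capacity), S is the total load effect (including multihazard combinations), and P_f is the probability of failure. If R and S can be treated as independent and normally distributed, the reliability index is

\[
\beta = \frac{\mu_R - \mu_S}{\sqrt{\sigma_R^2 + \sigma_S^2}}, \qquad P_f = \Phi(-\beta).
\]

Better data on extreme and combined loads sharpen the statistical description of S, which is exactly where the monitoring program proposed below would help.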


For building or bridge design (as compared to the design of ordinary commercial products), it is prohibitively expensive to put the structure through destructive testing. The National Science Foundation does establish facilities for earthquake study, with shake tables where experimental structures can be tested, but this testing is necessarily conducted on models that are far smaller than the structures of interest. It is also desirable to conduct research during the demolition of an obsolescent or otherwise unneeded bridge, but such research is almost unheard of in the United States, perhaps because of legal constraints.

Additional results may be obtained from forensic engineering studies of failed bridges. A well-developed approach is the post-disaster reconnaissance study, in which teams of engineers and other investigators examine the unique confluence of events that caused a structural failure. But such failures are rare and highly varied in the combinations of hazard types, severities, structure types, soils, and chains of causation that they exemplify. The studies simply do not provide enough evidence to enable generalizations about hazard effects. In addition, the fundamental flaw of forensic studies is that they take place only after a bridge has failed.

We believe that an especially useful supplemental source of information will be incident reporting. Its essential and distinctive feature is this: Reporting is on all incidents, including near-misses, minor mishaps, and significantly stressful events, not just spectacular accidents, disasters, or failures.

With respect to bridges, the challenge would be to measure and document events that severely stress a bridge but result in little or no damage. Data should be collected and coded in a rigorous, consistent manner. The data should encompass both the loads (and the single or multiple hazards that caused them) and the structural impacts. When such incidents, as well as full-fledged accidents, are studied, the resulting database may be large enough to allow for statistical analysis. A practical advantage is that the organizations involved are more willing to share information about small incidents than major ones, where issues of liability and official responsibility arise.

Incident reporting systems are now required for airplane accidents and near-misses and are routinely recommended for the study of medical errors and construction site incidents. To implement such a system for bridges, the United States should develop and widely deploy structural health monitoring on bridges. Such a system would track stresses, strains, and other conditions on the structure and its components and would identify, locate, and measure the effects of loading events. Such systems could have added value beyond data collection, for example for real-time emergency management or to inform bridge operators of conditions during and after extreme events.
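As a purely illustrative sketch of what a unified incident record might capture, the following Python fragment uses hypothetical field names and codes of my own devising; it is not a proposed standard and is not drawn from the article.

```python
# Hypothetical sketch of a unified bridge incident record; field names and
# codes are illustrative, not an existing or proposed standard.
from dataclasses import dataclass, field
from datetime import datetime
from typing import List, Optional

@dataclass
class LoadEffect:
    hazard: str                 # e.g. "flood", "collision", "overload", "earthquake"
    measure: str                # e.g. "flow velocity", "impact force", "ground acceleration"
    value: float
    unit: str

@dataclass
class BridgeIncident:
    bridge_id: str              # bridge identifier (assumed key, e.g. an inventory number)
    timestamp: datetime
    load_effects: List[LoadEffect] = field(default_factory=list)
    peak_strain_microstrain: Optional[float] = None   # from structural sensors
    damage_observed: str = "none"                     # "none", "minor", "partial", "collapse"
    service_interruption_hours: float = 0.0
    multihazard_type: Optional[str] = None            # "combined", "consequent", "subsequent", "simultaneous"

# Example: a near-miss barge impact during high water, reported even though
# no damage resulted, which is exactly the kind of event incident reporting captures.
report = BridgeIncident(
    bridge_id="BRIDGE-0000001",
    timestamp=datetime(2008, 5, 1, 14, 30),
    load_effects=[
        LoadEffect("flood", "flow velocity", 3.2, "m/s"),
        LoadEffect("collision", "barge impact", 1.0, "event"),
    ],
    peak_strain_microstrain=420.0,
    damage_observed="none",
    multihazard_type="consequent",
)
```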

This system should also unify forms and reporting that are now collected in many different ways. At present, hazard data are sought by specialists in meteorology, hydraulics, seismology, highway accidents, fires, maritime accidents, volcanology, hazmat events, and security threats. Coordinated and consistent data collection is essential because even measures of severity are difficult to compare across hazard types. Fire intensity, for example, is not quantifiable in the same way as flow velocity or buoyant uplift from flooding.

Of course, it makes sense that data must be defined and screened by the pertinent scientific disciplines. However, excessively uncoordinated data collection and inconsistent reporting serve as an obstacle to policy insight on the probabilities of harm to various infrastructures and on ways of alleviating this harm.

In view of the resurgent concern about bridge safety, we propose infrastructure incident reporting through a National Bridge Health Monitoring System. To the extent that it focuses only on bridges, the system should be implemented under the auspices of the Federal Highway Administration, in cooperation with disaster preparedness agencies such as the Federal Emergency Management Agency and its state counterparts, state highway departments, and universities. Implementation should follow experimental testing by means of small-scale pilot projects in selected states. Should the tests prove successful, the system should be expanded to the United States as a whole.

While collecting data on structural damage, engineering researchers also must systematically develop damage models, so as to be able to predict failure processes for various kinds of structures and materials under extreme events. For long-term improvements in bridge safety, observations from destructive testing, forensic studies, reconnaissance studies, and bridge-health monitoring have to be integrated in predictive analytic models.

With such improvements, U.S. infrastructure investment would be more cost-effective on a risk-adjusted basis, because bridge decisionmakers would have more accurately accounted for the range of hazards and multihazard effects to which bridges are susceptible. In addition, although such a project may be initially developed for bridge safety, U.S. infrastructure as a whole would derive economies of scope from extending the system to meet broader needs for national protection.

Asian Successes vs. Middle Eastern Failures: The Role of Technology Transfer in Economic Development

In 1960, Korea, Taiwan, Syria, Tunisia, Morocco, Jordan, and Egypt were in roughly the same economic position. Average per capita income was about $1,500 in 1995 U.S. dollars, and none of these countries had significant manufacturing capacity or exports. China and India were much poorer than any of these countries. Today, the discrepancies between the Middle Eastern and more advanced Asian nations are quite striking, and identifying the source of these differences is critical to understanding the dynamics of economic development.

Korea is home to Samsung, LG, Hyundai, and other notable technology-driven firms. Philips, the major European consumer electronics firm, could enter the flat panel TV market only through a joint venture with LG. Taiwan is the base for Acer, which recently acquired Gateway, a major U.S. computer company. Four decades ago, Samsung, LG, and Hyundai were small firms. Acer, Logitech, and other large Taiwanese high-tech firms did not exist. Newer industrializing nations such as China and India are also improving their technological base. China is home to Lenovo, the largest laptop manufacturer in Asia, which now owns IBM’s former personal computer division, and India possesses a thriving software industry. Egypt, the most industrially developed Arab economy, exports largely simple textiles and clothing.

Scholarly understanding of the mechanisms of economic development has shifted over time. One of the earliest influential hypotheses was Alexander Gerschenkron’s assertion of the “advantages of relative backwardness”: the ability of poor nations to benefit from accessing existing, more productive technology from the rich nations. Rather than having to develop technologies de novo through the R&D process, with all of the huge expenses and false starts inevitably encountered, borrowing was much less expensive and risky. Other scholars emphasized the need for absorptive capacity: the existence of a minimum level of domestic institutional and industrial capacity to enable late starters to take advantage of the potential for catching up. This local capability depended on public and private competence: infrastructure, education, the financial system, and the quality of government institutions. Simon Kuznets, a Nobel Laureate, argued that the rapid economic growth in developed nations had stemmed from the systematic application of science and technology to the production process.

The role of technology transfer reentered the mainstream of discussion about the engines of growth in developed nations as a central point of endogenous growth theory set forth in the 1980s. But this welcome improvement largely emphasizes science and innovation on the frontier of knowledge rather than the transfer of existing technology to poorer regions. I will focus on the role of technology transfer in the economic growth of countries, contrasting some of the economies of the Middle East with those of successful Asian nations such as Korea and Taiwan. The divergence in experience between the Asian nations and Egypt, Jordan, or Tunisia as exemplars of non-oil rich Middle Eastern countries can be explained by many factors, including differences in the quality of leadership and economic policy. But a crucial distinction in these cases was the willingness and ability to tap external knowledge to exploit the technology gap.

Since the publication of the World Bank’s East Asian Miracle in 1993, there has been a general consensus about the proximate sources of rapid growth in the Asian economies. These include: high rates of investment in physical capital such as roads, buildings, and machinery; growing levels of education; a stable macroeconomic policy that controlled inflation; and an emphasis on exports that motivated firms to compete in global markets, thus generating a demand for international technology transfer. The policies generating this performance included maintaining stable foreign exchange rates set to provide some mild incentives to export. Underlying these policies was a general consensus that economic growth was a primary goal of government. A competent bureaucracy, insulated from populist pressures, implemented the growth-oriented policies of the political leaders. This insulation reflected the authoritarian nature of the governments, but it is important to note that although not democratic and occasionally harsh, the governments were not the brutal dictatorial regimes that characterized much of the developing world in the 1960s through the 1980s.

The Arab economies generally had limited economic growth and exhibited behavior considerably different from that of their Asian counterparts: less investment; an orientation to the domestic economy rather than openness to foreign trade and international technology transfer; and less effort to build a high-quality education system. A full discussion of the Middle Eastern nations is provided in a recent book that I coauthored with Marcus Noland. Although the Middle Eastern countries began where the Asian countries began in 1960 and possessed many favorable characteristics such as proximity to European markets, their economic growth trailed far behind that of their Asian counterparts from 1960 to 2000 (Figure 1). On the other hand, they were hardly the hopeless cases often depicted in western media. Egypt, Morocco, and Tunisia experienced per capita income growth of 2% for a good part of the period. Oil-rich nations such as Kuwait and Saudi Arabia enjoyed a rapid ascension in the 1970s, when oil prices rose rapidly, followed by a precipitous decline beginning in the mid-1980s, when oil prices fell. The fortunes of the two groups of economies, the oil-rich and the oil-poor, are closely linked by large remittances from workers who emigrate from the latter to the former. And in the current oil boom, the oil-rich Arab countries are investing heavily in their poorer neighbors, in part because of a reluctance to invest in western economies since 9/11. Yet even in the past four years of a new oil price boom, most of the Arab nations lag their lower-middle-income peers in the rest of the developing world.

In 2000, per capita income in the oil-poor countries of the Middle East such as Egypt, Morocco, and Syria was less than 20% of the income in the industrialized countries, about what it was in 1960. But in Korea and Taiwan, per capita income had risen to more than 60% of the levels in the industrialized world (Figure 2). Part of the difference in income growth is explained by better productivity growth in Asia. In turn, this differential is partly attributable to Asia’s greater inflow of international knowledge and the ability to effectively absorb it. Although many factors contributed to productivity growth in the Asian countries, I will focus on measures of technology inflow and the quality of education, where the contrast with the Middle East is particularly pronounced.

Source: Penn World Tables (PWT) and World Development Indicators (WDI).

Source: PWT and WDI.

Technology transfer

For poorer nations with low levels of domestic innovation, new technology is primarily imported. Some of the technology is embodied in the imports of physical goods such as steel with improved characteristics and new machinery that incorporates improvements in speed, quality-control mechanisms, and energy efficiency. Ideas not incorporated into material inputs, usually referred to as disembodied knowledge, are also critical. Modes by which such knowledge is transferred include foreign direct investment (FDI) by multinational corporations, technology licensing agreements that provide access to new products and processes, and the employment of consultants from the industrialized nations. Occasionally, knowledge transfers are the unpaid for and unanticipated byproducts of commercial transactions; for example, knowledge provided by Western retail purchasers of Asian exports.

Other sources of ideas include the use of nonproprietary information or reverse engineering and the advice of foreign consultants, who can suggest improvements in firm and factory organization ranging from quality control to machine settings to accounting systems that improve productivity. These latter modes, although well documented in studies of the history of individual industrial firms, do not lend themselves to easy measurement because by their nature they are not disclosed. All of these vectors of technology represent an attempt to move toward international best practice by assimilating technologies available abroad. Although in principle domestic efforts can generate improved technology, in most poor nations there is limited effort in this sphere.

Countries need not employ all of the potential vectors of new technology, but they need to utilize at least some. During the 1950s, Japan relied heavily on technology licensing while discouraging FDI. In the 1960s and 1970s, Korea also largely excluded FDI but used technology licensing, consultants, and imported equipment and intermediates as sources of technological advances. Countries such as Malaysia and Thailand pursued several paths simultaneously. The overarching orientation in the Asian countries was openness to foreign ideas, some embodied in physical inputs, others conveyed by manuals, blueprints, and know-how. More recently, technology transfer through émigrés who either return home or collaborate with former colleagues has become an important channel, but this is obviously dependent on a prior “brain drain” of university graduates from the home country.

Identifying, modifying, and absorbing foreign technology requires a highly educated domestic labor force. Such workers may, in addition, generate purely local innovations through their own R&D. Technology inflows and domestic absorptive capacity are complementary, a feature noted four decades ago by Richard Nelson and Edmund Phelps. More recently there has been considerable analysis of why new technologies are complementary to skilled labor. During the industrial revolution, it was skilled artisans who most often opposed the introduction of new technologies. Now it is their modern counterparts, computer scientists, electrical engineers, and MBAs, who benefit from the skill-intensive nature of new technologies. The productivity of education in industry and advanced services is increasing because operating entirely new manufacturing processes and producing new products requires well-trained workers and managers. Although the highly educated labor force within a country may generate some purely indigenous innovations, it can be more productive when applying its talents to a proven body of knowledge that is being introduced into the country for the first time. Local R&D inevitably has failures, whereas gaining mastery of technologies that have proven their effectiveness in other countries has few dead ends.

In contrast to the Asian experience, most Middle Eastern nations have been refractory to involvement in international technology transfer through any of these mechanisms. This stance reflects their more general insularity from the world economy except in energy exports. Despite the huge increase in international trade in manufacturing in the period that etched “globalization” into general consciousness, most Middle Eastern nations barely participated in this growth (Figure 3). This reflected the adoption of economic policies that discouraged imports and had the unanticipated consequence of simultaneously reducing exports. Given no need to compete in export markets, firms could ignore those potential technology transfers that facilitated gains in productivity that were so avidly exploited by the Asian nations. Not only did the Asian nations increase their manufactured exports to an extraordinary degree, they increasingly moved to production of high-technology goods (see Table 1). This has augmented the demand for imported equipment and know-how that is necessary to produce goods of the requisite quality for advanced country markets.

Source: WDI.

We thus have a simple guide to some of the key dimensions that distinguish the Asian from the Middle Eastern countries. In the latter, investment rates were lower; education, though improving, was inferior to that in East Asia; economic policies did not encourage firms to enter international trade; and one consequence of the economies’ domestic orientation was neglect of the technological knowledge available from other countries.

TABLE 1
High technology exports (billions of U.S. dollars)

1990 1995 1998 2001 2004
Egypt 6 2 12 15
Korea, Rep. 10.8 29.6 30.6 40.0 75.7
Indonesia .1 1.7 2.2 4.4 5.8
Malaysia 6.0 25.4 31.6 40.9 52.9
Thailand 3.0 10.1 13.5 15.2
Tunisia .05 .07 .1 .2 .4
Morocco .01 .4 .4 .7

Source: World Bank, World Development Indicators, 2006

A closer look

Although not all information on technology transfer is easily available, numerous indicators provide evidence of the paucity of technology transfer activities by the Middle Eastern countries in contrast with the variety of actions taken by high-performing Asian countries. Current measures of the disparities in technology transfers and education levels for the two regions reveal occasionally startling differences. Some of the disparities are a result of earlier growth in income, and causality is not easy to demonstrate statistically. It is not always possible to extend these indicators backward in time to ascertain their evolution. Where it is possible, it is clear that the Asian nations were initially at low levels in many of the relevant dimensions of technology transfer but achieved dramatic growth, and this growth was the handmaiden if not the ultimate source of growing productivity.

On any one or two measures, some of the Middle East and North African (MENA) countries will look almost as good as their Asian counterparts. But as will be seen in the comprehensive picture, the overwhelming pattern is one of an absence of international technology transfer in the Middle East. Moreover, the international interaction of the Asian countries has been a consistent feature for three or four decades, whereas the slight recent improvement in MENA has still left the countries behind where the Asians were in the 1970s.

Clearly, current indices of technology transfer do not permit insights into ultimate causality because the technology inflows reflect many factors. For example, much of the demand for productivity-improving technology in Asian countries stemmed from the need to compete in international markets, which was driven by successful export-oriented policies whose results are reflected in Figure 3. This required local firms to improve their efficiency by staying abreast of international best practice by using consultants, licenses, and improved equipment. Exporting generates a demand for technology. But the growing success in export markets could not have occurred without the increasing application of western knowledge. There was a virtuous circle as new technology facilitated further export growth, which in turn provided still further incentives, and the foreign exchange, to seek more foreign technology.

Most of the data provided reflect current international inflows of technology because historical data are very limited. A causal interpretation is supported by innumerable case studies that provide a rich history of many Asian firms; the recent aggregate data are a contemporaneous snapshot of the process those firm-level studies describe, and are consistent with them.

Greater imports of raw and intermediate goods increase the productivity of plants. For example, manufacturers of machinery can import steel that has more appropriate properties than domestically produced steel and thus allows finer tolerances during the production process. Newer imported machinery is faster and safer, allowing greater output per hour. Both intermediate goods and machinery embody large amounts of R&D undertaken by the firms in the industrialized countries, and the research evidence confirms that greater amounts of foreign purchases yield greater productivity in the purchasing nation.

The most comprehensive indicators of interactions that lead to the transfer of technology are imports of intermediate manufactured goods that enter into further processing, measured relative to GDP (MI/GDP), and imports of capital equipment relative to GDP (ME/GDP). The Asian countries generally have MI/GDP ratios about 50% higher than those of the MENA nations. The typical Asian country in 1990 had much higher ratios than the MENA countries exhibited a dozen years later. For the few countries for which data are available, the Asian ratios by 1970 already exceeded those of the Arab nations in 2002.

A similar picture unfolds when imports of machinery relative to GDP are considered. In general, the ME/GDP levels of the MENA countries as late as 2002 are below those of the high-performing Asian economies in 1990. Data for earlier years suggest similar ratios in Korea and Taiwan as far back as the 1970s. Moreover, by 1990 several of the Asian countries had an extensive domestic machine-producing sector manufacturing western-designed equipment, which reduced the need for imports. The absence of this largest category of technology transfer, embodied in intermediate goods and equipment, explains part of the disparity in the two regions’ success in raising productivity.

Disembodied knowledge

A detailed analysis of the transfer of disembodied knowledge through FDI or technology licensing reveals a similar pattern. Multinational corporations (MNCs) setting up plants in developing economies or buying existing firms and revitalizing them import new equipment, implement advanced managerial practices, and provide a marketing network. These skills are particularly valuable because they are difficult to purchase in arm’s-length transactions, although consulting firms can help. Improvements in logistics, manufacturing technology, and information technology in the past quarter century allow multinational firms to disaggregate their production process into separable activities, each of which can be undertaken in a different nation. Although estimates vary, about 70% of international trade among countries in the past decade took place as a result of the activities of multinational corporations, underlining the importance of FDI. Singapore and China have been key participants in this type of trade, but Arab nations have not availed themselves of this option nor have they intensively utilized other modes of transfer.

FDI makes it possible to complement local factors with foreign knowledge and specialized human and physical capital. Although there is no immediate productivity augmentation of local firms, their productivity will increase if foreign firms introduce new technologies or management methods that leak out to domestic companies. For example, workers initially employed by MNCs may be hired by local firms or establish their own enterprises, thus disseminating proprietary knowledge that is not possessed by local firms.

Figure 4 shows the enormous differences in levels of FDI. Singapore is the poster country for the role of FDI as a critical factor in catalyzing otherwise good economic policies into rapid and sustained growth, but more recently China has been a major recipient. The MENA countries have received very little FDI. During the 1990s, Thailand, which is roughly the size of Egypt, received more FDI than all of the MENA countries combined.

A considerable part of the FDI that does occur in MENA countries has been in sectors such as natural resources and tourism in which the transfer of knowledge to other firms is low relative to that in manufacturing or modern services. One cannot easily explain the low levels of manufacturing FDI in the MENA countries. It depends on how the country’s economic and investment climate are viewed by potential investors as well as the receptivity of policymakers and the local business community to FDI. In some countries, such as India, there was a conscious effort for many years to suppress FDI, which stemmed from the reigning view among influential politicians that FDI was a new form of colonialism. Many countries, including Japan, Korea, and recently China and India, abandoned earlier policies that discouraged FDI. Even though Arab intellectuals and policymakers have never advocated an anti-FDI position, FDI in critical industries has been rare in Arab countries, though it has begun to grow slowly.

Figure 4. Source: WDI.

Licensing proprietary technology can serve as a substitute for FDI. If foreign firms cannot export to a country because of its tariff barriers, or if they believe the policy environment is too uncertain to justify major plant investment, they may be willing to license their technology for a set fee or a percentage royalty. This allows the licensor to earn a profit in the local market, but it entails a greater risk of losing control of the knowledge, particularly in countries with less stringent intellectual property rights legislation and enforcement. Although technology licensing may be especially helpful as countries shift to technology-intensive sectors, it can be useful even in the early stages. Firms in Japan and later in Korea and Taiwan used technology licensing in their early industrialization efforts. Yet the MENA countries did not avail themselves of this alternative source of foreign knowledge until the 1990s, when Egypt and Morocco initiated efforts that remain tiny.

Figure 5 provides an estimate of royalty payments for a small group of countries in 2005. Compared with the data on FDI, the data on royalty payments are more uncertain in scope and definition and are available only for shorter periods. Data for 2005 do not provide definitive evidence of a causal relation between technology transfer and growth, insofar as the size of royalties for the Asian nations partly reflects their previous success and their effort to diversify into new product areas that are most easily entered via licensing. Still, the virtual nonexistence of royalties in the Arab nations even in 2005 is evident. Moreover, even in the 1970s and 1980s, Korea and Taiwan already had a large number of technology contracts. For example, in the five-year period centered on 1980, both nations were paying around $90 million per year (roughly $300 million in 2005 prices), and these numbers were growing rapidly. This can be contrasted with Egypt's $180 million and Morocco's $45 million in 2005. Moreover, the current period of increasingly competitive international markets requires greater technological sophistication than did 1980. This difference explains part of the gap in productivity growth between the two regions and is simultaneously an indicator of the very limited shift to new industrial sectors.

Figure 5. Source: WDI.

Finally, case studies suggest that the cultivation of manufacturing outsourcing relationships between local firms and multinationals, with or without a local presence in the market, can also serve as a channel of knowledge flows. Research in Korea and Taiwan points to one mechanism through which productivity growth in manufacturing was enhanced. In both countries, detailed interviews with firms found that considerable knowledge of production engineering and of new production processes came from the foreign companies that purchased their goods. Sears, Roebuck & Co., Kmart, and J.C. Penney supplied to their suppliers in Asia the knowledge of production engineering and quality control prevailing in the United States. They often specified the equipment to be acquired, set out the production process for new products, and provided detailed help on management practices such as quality control. These customers were not altruistic; improvements in the quality and cost of their suppliers had a positive impact on their own profitability. Knowledge about changing product demand or about new products conveyed by importers enabled Asian companies to shift more quickly from products with declining prices to new products still in the early part of the growth cycle. This type of informal knowledge transfer will occur only when a firm is actively exporting, and very few Arab firms have taken this critical first step.

The human dimension

In recent years considerable attention has been paid to the key role of returning nationals in the development of the Indian software sector. Expatriates have also been critical in the development of high technology in Korea and Taiwan. A key ingredient in this process is the presence of strong educational institutions in the home country: before young people can go elsewhere for graduate training or work, they must first acquire a good education at home. The Arab countries have few good engineering institutions and therefore send relatively few young people to work in high-technology companies or to study science and engineering abroad. Although there has been considerable emigration, particularly to Western Europe, few of these emigrants are employed in high-technology sectors in the West. Far fewer Arab than Asian graduate students were studying at U.S. universities in 2000—and that was before 9/11. Thailand and Egypt, with roughly similar populations, had 7,000 and 1,400 students, respectively. There simply have not been enough Arab engineers to create a significant emigration-education-repatriation cycle that could result in technology transfer.

If technology is changing slowly, the payoff to even elementary school education will be low. For example, a Korean cotton spinner in 1960 who was an elementary school graduate but tended spindles not much different in design from those of 1900 would not have benefited much from her education. In contrast, her education would have led to an increase in productivity relative to a less-educated spinner if she had to adjust to the complexities of newly developed open-end spinning. The flexibility and problem-solving abilities conferred by more education yield a reward when technology is changing, but education may have little payoff in the absence of technological change. Thus, the Asian nations derived a huge benefit from the complementarity between their improving education systems and their large technology imports. Technological inflows depend on the ability to identify relevant foreign technologies, to decide how best to access and negotiate for them, and finally to incorporate technologies new to the firm or the nation into the productive routines of local firms. A well-educated populace is a valuable asset for a nation dealing with rapid change.

With respect to higher education, there are no systematic time series on tertiary enrollment and the percentage of those students who are enrolled in science and engineering programs. But data that are available for 1995 indicate the large difference in achievement between Korea, typical of the fast-growing Asian nations, and a number of Arab countries. More than 20% of university-age students in Korea were receiving tertiary education in science and mathematics compared to fewer than 5% in most of the Arab countries. Although some of the technical education observed in 1995 was obviously a response to the huge growth in per capita income and the technological sophistication of the societies, even in the 1980s, Korea, Taiwan, and Singapore had large percentages of students pursuing a technical education. Moreover, such measured differences understate the true differential because many of the Asian universities such as Seoul National, National Taiwan, and National University of Singapore are internationally recognized for their quality, whereas no Arab university is ranked among the 500 top research universities in the world.

It could be argued that young people in the Arab countries were nevertheless acquiring, through more basic education, many of the skills relevant to utilizing the small amount of foreign technology entering those countries. After all, average educational attainment in the Asian nations was far below Western standards in the 1960s, when those countries started making rapid economic progress, and young people in the Arab countries today receive more education than their Asian counterparts did then. But the quality of the education available in Arab countries still leaves much to be desired. In international comparisons of eighth-grade student achievement in science and mathematics, students from Hong Kong, Korea, Singapore, and Taiwan are among the world leaders, whereas Egyptian, Jordanian, and Tunisian students score well below the global mean. It appears that Arab countries are still not giving their young people the cognitive skills necessary to succeed in a modern industrial labor force.

Domestic knowledge generation can partly substitute for foreign technology. Two reliable indicators of domestic innovation activity are R&D expenditures and patenting, but once again the Middle East trails far behind Asia. Arab nations' R&D spending as a percentage of gross domestic product is very low; in fact, they are spending less on R&D than Taiwan did a quarter century ago. It should therefore be no surprise that they receive few patents. Egyptians were granted fewer than six U.S. patents per year on average between 2001 and 2005, whereas Malaysians received 74 per year and Koreans and Taiwanese each earned thousands. And as early as 1981, Taiwanese residents applied to Taiwanese authorities for 5,800 patents, far more than the 264 applications filed in 2005 by Egyptian residents with Egyptian authorities. Whatever the imprecision in these indicators, there is no way to avoid the conclusion that no significant formal domestic innovative activity is taking place in the MENA countries. It is possible, of course, that some efforts toward enhancing productivity are occurring but are not reported in formal statistics. However, there are no case studies, as there were of early efforts in the Asian countries, to suggest that this global statistical picture is invalid.

The role of openness

The data support the view that the Asian nations prospered not solely because of higher investment in physical and human capital but also as a result of their export orientation. Export orientation generated great demand among firms for technology transfer, and that demand was not discouraged by any perceived threat to political, social, or religious interests.

But are there deeper explanations that go beyond economics for the different intensity of technology transfer between the two regions and the industrialized economies? One interpretation would be that the tradition of openness in Asia has a long historical precedent. For example, in the late 19th century Japanese textile manufacturers sent their own engineers to work with British equipment manufacturers to gain knowledge that would help them to design machinery that took account of the local conditions in Japan. Lee Kuan Yew, Singapore’s prime minister from 1959 to 1990, encouraged a favorable attitude to FDI in the 1960s and 1970s when many other poor nations viewed FDI as an extension of the colonial past. Taiwan depended heavily on advice from U.S.-based expatriate Chinese economists in the 1960s and 1970s. In contrast, several recent Arab Human Development Reports have documented how the Middle East has been insulated from international ideas. One telling measure is the tiny number of books in other languages that have been translated into Arabic.

But clearly the Middle East is hampered not simply by an aversion to or hesitance about foreign ideas but also by economic policies that emphasize shielded domestic markets to the detriment of exports. Protected from foreign competition, Arab firms can neglect the advances in machinery, imported material inputs, and licensed technology available from abroad. In turn, the emphasis on domestic markets may reflect fears on the part of governments that more adventurous efforts would expose local firms to competitive pressure, engendering an increase in unemployment that would provide fertile ground for the growth of religious extremism. And FDI in the Arab world is undoubtedly discouraged by the threat of terrorism in a number of the countries as well as by the perceived high probability of violent political change.

Nevertheless, recent changes in some of the oil-rich Gulf states such as Dubai and Qatar provide tentative grounds for hope. New Western-led universities, hospitals, and globalized firms are following the path of successful Asian countries. Whether such innovations can be diffused to the larger resource-poor nations from Syria to Morocco remains to be seen. But unless the latter change their economic policies and political climates to become more open to foreign technology, their growth prospects are not good. And that is hardly good news for the rest of the world.

Animal Migration: An Endangered Phenomenon?

Animal migrations are among the world’s most visible and inspiring natural phenomena. Whether it’s a farmer in Nebraska who stops his tractor on a cold March morning to watch a flock of sandhill cranes passing overhead or a Maasai pastoralist who climbs a hill in southern Kenya and gazes down on a quarter million wildebeest marching across the savanna, migration touches the lives of most people in one form or another. Although animal migration may be a ubiquitous phenomenon, it is also an increasingly endangered one. In virtually every corner of the globe, migratory animals face a growing array of threats, including habitat destruction, overexploitation, disease, and global climate change. Saving the great migrations will be one of the most difficult conservation challenges of the 21st century. But if we fail to do so, we will pay a heavy price—aesthetically, ecologically, and even economically.

The decline of migratory species is by no means a new problem. North America’s two greatest migratory phenomena—the flocks of passenger pigeons that literally darkened the skies during their spring and fall journeys in the East and the herds of bison that once stretched from horizon to horizon on the Great Plains—were snuffed out well over a century ago. (The passenger pigeon vanished completely in 1914; bison held on only because of last-minute conservation efforts.) Even as far back as the American Revolution, colonial leaders were alarmed enough about declines in Atlantic salmon to push legislation banning the practice of placing nets across the complete span of a river in order to catch every salmon heading upstream to spawn.

Yet the rate at which migratory species are declining seems to have accelerated in recent years. Ornithologists using radar to monitor the spring migration of songbirds across the Gulf of Mexico report that the number of nightly flights dropped by nearly 50% between 1963 and 1989. University of Montana ecologist Joel Berger has estimated that 58% of the elk migratory routes and 78% of the pronghorn routes in the Greater Yellowstone Ecosystem have been lost due to development. The American Fisheries Society has tallied more than 100 stocks of salmon in the Pacific Northwest that have been driven to extinction because of dam construction, logging, water diversion, and other human activities. Meanwhile, in Michoacán, Mexico, illegal loggers are destroying the high-elevation fir forests where virtually all of eastern North America’s monarch butterflies spend the winter. These diminishing forests serve as a blanket for the overwintering monarchs, protecting them from cold weather, rain, and even snow.

North America is hardly the only place where migratory animals are in trouble. European scientists are deeply concerned that overgrazing and desertification in Africa’s Sahel are harming populations of songbirds that breed in Europe and winter in northern Africa. (These same birds are also shot and trapped by the tens of millions as they pass through the Mediterranean region during their spring and fall migrations.) In East Africa, the spread of agriculture is severing the migratory routes of many populations of zebra, wildebeest, elephants, and other large mammals. In Finland, wild Atlantic salmon have disappeared from more than 90% of the rivers where they spawned historically; in France, they have vanished from nearly a third of their historic spawning rivers and are endangered in the remaining two-thirds.

To be fair, most of these species are in little danger of disappearing altogether. Few if any scientists are predicting the extinction of the wildebeest, Atlantic salmon, or monarch butterfly. But what is at stake is the continued abundance of these animals as they make their long-distance journeys through an increasingly human-dominated landscape.

Special vulnerabilities

The threats facing migratory species are not qualitatively different from those confronting nonmigratory species. But migratory animals seem especially vulnerable by virtue of the long distances they travel. Their populations can be harmed not only by the loss of breeding habitat but also by changes in their wintering grounds and stopover sites. The cerulean warbler, for example, nests in deciduous forests across a wide swath of eastern North America, from southern New England and southern Ontario west to Minnesota and south to Arkansas and Mississippi. It winters primarily in forests in the foothills of the eastern slope of the Andes, from Venezuela to Peru. By some estimates, the breeding population of cerulean warblers in North America has declined by as much as 80% during the past 40 years. This decline, evident to birdwatchers in the United States and Canada, probably reflects habitat destruction at both ends of the warbler’s migratory route. Mountaintop-removal mining, an extraordinarily destructive practice in which the tops of mountains are scraped away to expose coal seams, has already destroyed hundreds of thousands of acres of breeding habitat in the Appalachians. Meanwhile, much of the warbler’s wintering habitat has been converted to cattle pastures, coffee and coca plantations, and other agricultural uses.

Moreover, many migratory animals aggregate at key places during certain times of the year, a habit that makes them vulnerable to overexploitation. Gray whales in the eastern Pacific largely escaped persecution until the mid-1800s, when whalers stumbled on the shallow lagoons in Baja California where most of the animals gather in the winter to mate and give birth. Within two decades, whaling operations had driven the gray whale close to extinction, although they subsequently rebounded because of protection. All of the world’s sea turtles are imperiled in part because adult females return year after year to the same beaches to lay their eggs; the slow-moving and defenseless turtles and their eggs are easily harvested at their nesting beaches.

Climate change, too, has the potential to disrupt the migratory patterns of a wide range of animals. Rising sea levels could submerge the nesting beaches of sea turtles and shorebirds. Songbirds breeding in the temperate forests of Eurasia and North America depend on a summer flush of insects, particularly caterpillars, to feed themselves and their offspring. In some places, these caterpillars are emerging earlier and earlier in response to rising temperatures. In theory, the songbirds could simply push up their departure from their winter quarters in Central America, the Caribbean, or Africa to catch the earlier flush of insect prey. If, however, the birds are relying on a fixed cue such as increasing day length to decide when to head north, they may be unable to adjust the timing of their migration. Precisely this disruption in the timing of bird migration relative to the emergence of insect prey has been identified as the cause of a decline of 90% in populations of pied flycatchers in the Netherlands. In East Africa, where the movements of wildebeest, zebras, and other grazers are timed to the seasonal rains, any change in rainfall patterns due to global warming will probably produce concurrent changes in migratory routes. As land outside Africa’s existing game reserves is converted to villages and farm fields, it may be difficult or impossible for the mammals to adjust their migratory routes in response to the changes in rainfall. It’s possible, of course, that warblers and wildebeest will find ways to cope with the twin dangers of habitat destruction and climate change. But the opposite could also be true, with declines occurring even faster and deeper than we anticipate.

The decline of the world’s great animal migrations is clearly a major aesthetic loss. But it is also a major environmental and economic problem, given the important ecosystem services these species provide. Consider the case of salmon in the Pacific Northwest. They head for the ocean when they are young and small, taking advantage of the productivity of the seas to grow to full size. They then return to their natal streams, where they spawn, die, and decompose. They are, in essence, self-propelled bags of fertilizer, gathering important nutrients such as nitrogen and phosphorus from the ocean and delivering them to the streams, where these same nutrients can then be taken up by other aquatic species or carried onshore by scavenging eagles, bears, and other animals. As salmon runs across the Northwest have declined because of dams, overfishing, and habitat degradation, so too has the free delivery of nutrients. In the Columbia River, for example, annual salmon runs have dropped from an estimated 9.6 million to 16.3 million fish before the arrival of white settlers to about 0.5 million today. According to one estimate, the weight of carcasses in the Columbia has dropped from nearly 50,000 tons per year to 3,700 tons. Perhaps some of the nutrient deficits caused by the lack of salmon have been erased by fertilizer runoff or other human-created sources. But even if our overuse of fertilizers (with its attendant runoff) has somehow lessened the impact of the salmon shortfall, it has not helped the Northwest’s beleaguered fishing industry, which has lost jobs as a result of the drop in salmon populations. From 1990 to 2005, unemployment rates in British Columbia’s commercial fisheries averaged 17.2%, twice the rate for the province’s economy as a whole.

Migratory songbirds perform their own important ecosystem service by consuming vast numbers of caterpillars that would otherwise eat the foliage of trees and shrubs. As numbers of songbirds drop, one might predict an increase in insect damage to forests or, alternatively, an increase in pesticide use to counteract any increase in defoliation.

Twin challenges

Given the strong aesthetic, environmental, and economic reasons for protecting animal migrations, the question naturally arises: Why have we been so unsuccessful at conserving them? The answer may lie in the fact that conserving migratory animals poses two unique challenges. First, it demands coordinated planning across borders and boundaries that mean a great deal to us but nothing to the animals. A single Swainson’s thrush winging its way from Canada to Brazil may pass through 10 or more countries. Each of these nations must provide safe nesting, wintering, or refueling stops in order for the thrush to complete its journey. Bison in Yellowstone National Park face harassment or even death if they cross an invisible line separating the park from adjacent land managed by the U.S. Forest Service and the state of Montana. The bison need access to lower-elevation rangelands outside the park during harsh winters, when the snowpack prevents them from finding sufficient forage inside the park. However, ranchers in Montana fear that the bison will spread brucellosis, a disease that causes some cattle to abort their fetuses, to their livestock, and they have used their political leverage to force the federal government and the state to curtail the bison migration.

The second key challenge associated with conserving migrations is convincing agencies, institutions, and individuals to agree to protect these animals while they are still abundant. The United States and many other countries have a long tradition of protecting endangered species, usually when the plant or animal in question is teetering on the brink of extinction. But for the reasons cited above, this type of 11th-hour intervention is wholly unsuited to the task of saving migrations, where the goal should be to protect the species while they are still plentiful.

Fortunately, there are a number of examples of successful efforts to conserve migratory animals, and we can look to them for guidance on addressing these problems. By the early 1940s, commercial whaling operations had dramatically reduced populations of the great whales, many of which undertake lengthy migrations through international waters where no one nation has sovereignty. In response to these declines, the major whaling nations signed the International Convention for the Regulation of Whaling in 1946. This treaty created a scientific and administrative body, the International Whaling Commission (IWC), with the power to curtail commercial whaling operations. After many years of stalling, the IWC finally halted commercial whaling in 1982, resulting in increased whale populations. (Japan, Norway, and Iceland continue to hunt several species of whales by exploiting loopholes in the treaty, but at levels well below what prevailed in the heyday of whaling).

The success of the International Convention for the Regulation of Whaling is due in large part to the fact that it created an administrative body with regulatory teeth. In contrast, an even more ambitious treaty, the 1979 Convention on the Conservation of Migratory Species (also known as the Bonn Convention), was designed to protect migratory animals of all kinds, but it lacks a powerful administrative body. Instead, it creates a mechanism whereby groups of nations can come together to address problems facing particular migratory species. The treaty does not specify what conservation measures must be taken, leaving that task to the nations involved in the agreements. Because the Bonn Convention lacks a strong administrative body, it has had relatively few successes thus far.

In a promising new development within the United States, the Western Governors’ Association approved a policy resolution in February 2007 aimed at protecting “wildlife migration corridors.” Alarmed by losses of migratory routes for elk, deer, bighorn sheep, and other animals caused by energy development and sprawl, the governors of the western states have pledged to identify and protect migratory routes in a more aggressive, coordinated manner. They recognize that the administrative barriers among states or among agencies within a state can undermine conservation programs for migratory species.

To address the second big challenge associated with conserving migratory species—protecting these species while they are still common—the institutions charged with managing natural resources will need to embrace the idea that migration is fundamentally a phenomenon of abundance and must be protected as such. To that end, it would be useful to have a standardized early-warning system to identify migrations at risk. One approach would be to develop a threat-ranking scheme for migrations akin to the one now used by the World Conservation Union for endangered species. Under that approach, species are listed as critically endangered, endangered, or vulnerable based on quantitative criteria related to factors such as population size, amount of habitat, trends in population size, and trends in habitat. Similar criteria, emphasizing trends in numbers, could be developed for discrete populations of migratory species, such as runs of salmon, populations of pronghorn, and monarchs wintering in Michoacán, Mexico. A migration that declined by more than a certain percentage over a fixed period of time could be classified as endangered; a slightly lower rate of decline might place it in the less serious category of threatened. Even if the designation did not carry any immediate legal consequences in terms of habitat protection, restrictions on harvest, and so forth, it would nonetheless bring welcome attention to the issue. To some degree, consumers can also play a useful role in protecting migrations by virtue of what they buy or don’t buy. Places where coffee is grown under a canopy of native tropical trees, typically marketed as shade-grown coffee, provide suitable winter habitat for a variety of North American songbirds; places where sun-tolerant coffee is grown in sterile monocultures do not.
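To make the decline-based criterion just described concrete, the following is a minimal sketch, in Python, of how such a ranking might be computed. The threshold values and category labels are hypothetical placeholders chosen only for illustration, not the criteria of any existing scheme; a real system would set its cutoffs and time windows through the kind of expert deliberation used for existing endangered-species lists.

# Illustrative sketch of a decline-based threat ranking for a migratory
# population. Thresholds and labels are hypothetical placeholders.

def classify_migration(baseline_count: float, current_count: float) -> str:
    """Classify a migratory population by its percent decline from a baseline
    census (e.g., a salmon run counted over a fixed period of years)."""
    if baseline_count <= 0:
        raise ValueError("baseline_count must be positive")
    decline = 1.0 - current_count / baseline_count

    # Hypothetical cutoffs: a steeper decline earns a more serious listing.
    if decline >= 0.80:
        return "critically endangered"
    elif decline >= 0.50:
        return "endangered"
    elif decline >= 0.30:
        return "threatened"
    return "not currently at risk"

if __name__ == "__main__":
    # Example: a run that has fallen from 10,000 to 4,000 animals (a 60% decline).
    print(classify_migration(10_000, 4_000))  # -> "endangered"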

Finally, for any conservation program to succeed, it must be adequately funded. The North American Waterfowl Management Plan, a joint agreement among Canada, the United States, and Mexico to regulate hunting and protect the habitats of ducks, geese, and swans, has protected or restored millions of acres of wetlands. This accomplishment was made possible by a stable, secure funding source: a tax on the sale of guns and ammunition in the United States, plus the mandatory purchase of an annual permit to hunt waterfowl. Hunters want waterfowl to remain abundant in order to enjoy longer hunting seasons and bigger bag limits, and they are willing to pay for that goal via the tax and permit. One would hope that the nation’s birdwatchers could be inspired to support a similar tax on binoculars, birdseed, and other tools of their trade, with the revenues going to support habitat protection and restoration programs. However, an attempt to enact such a tax in the 1990s foundered in the face of strong resistance from the affected industries, an antitax (and anticonservation) sentiment in Congress, and too little support from birdwatchers.

If we are successful at saving the world’s great animal migrations, we will have protected natural phenomena that provide us with inspiration, sustenance, recreation, and numerous ecosystem benefits. We also will have learned to take timely, cooperative action to solve a complex environmental problem. It is even possible that efforts to protect migratory animals will inform our efforts to address other environmental and social ills that similarly transcend artificial borders and boundaries. At the very least, we will have ensured that future generations can enjoy some of the same flocks of birds, schools of fish, and herds of mammals that have inspired and sustained us for thousands of years.

Enabling Economic Growth Through Energy

In “Fixing the Disconnect Around Energy Access” (Issues, Winter 2022), Michael Dioha, Norbert Edomah, and Ken Caldeira contrast the tale of two communities in Nigeria to highlight the daunting challenge of bringing universal energy access to low-income countries in a financially sustainable way. Although the article focuses on two communities in Nigeria, it speaks to a broader issue across the African continent.

In a recent World Bank book that I coauthored, Electricity Access in Sub-Saharan Africa: Uptake, Reliability, and Complementary Factors for Economic Impact, we addressed this very issue and laid out ways to think about electrification in sub-Saharan Africa. We reported an example similar to that of Kigbe, one of the authors’ case studies. In this case, the community of Gabbar, Senegal, implemented an off-grid solar energy system to help in producing onions for export to cities across the country. Elsewhere, we have also seen financially strained communities trying to get off a $7 per month installment contract they signed to acquire a solar home system—only to realize that they cannot afford the cost a few months down the road.

We also argued, as do Dioha and coauthors, that all electrification efforts should start by viewing electrification as a means to a greater end rather than an end in itself. This perspective is even more important in poorer countries that may lack the means to plan, fund, and execute rapid electrification. It also requires understanding that although energy is crucial to most modern productive economic activities, it is still an input that needs complementary investments to turn access into impact.

The question is, why is this seemingly straightforward logic broken? Dioha and coauthors provide an excellent diagnostic of the problem, but they do not address the why. Understanding the main reasons this is happening can help pave the way to better global development policies in areas beyond energy. In the mid-1970s, the British economist Charles Goodhart formulated what is now known as Goodhart’s law, stipulating that “When a measure becomes a target, it ceases to be a good measure.” The United Nations Sustainable Development Goals (SDGs), and in this particular case Goal 7, which calls for ensuring access to affordable, reliable, sustainable, and modern energy for all, have fallen prey to Goodhart’s law. Counting the number of households that gained some form of access to modern energy from one year to the next has become an end in itself.

How can this challenge be addressed at the global level? The successor to the SDGs, if any, should focus on fewer targets centered on prosperity and let local contexts determine how to get there. Alternatively, the SDGs should be made much more ambitious. The Modern Energy Minimum produced by the Energy for Growth Hub, listed as a recommended reading by Dioha and coauthors, is an excellent example of rethinking Goal 7. This kind of effort should extend beyond energy to a broader rethinking of how global targets for development are set.

Senior Fellow, Munk School of Global Affairs and Public Policy

University of Toronto

Senior Fellow, Clean Air Task Force

Fellow, Energy for Growth Hub

Michael O. Dioha, Norbert Edomah, and Ken Caldeira highlight that electricity access programs too often fail to deliver “much-needed outcomes in pace, scale, and improvements in quality of life.” Drawing on two Nigerian mini-grid case studies, the authors argue that in order to transform lives, energy access interventions must be paired with economic empowerment. While they focus specifically on community-level interventions, their three core messages also apply to larger, national-scale efforts.

First, Dioha and coauthors argue that community-level energy access programs must focus on more than connecting individual households to electricity; they must be paired with support for broader economic activity. This is equally important at larger scales. The primary international metrics for defining electricity access and success toward eradicating energy poverty focus principally on power consumption at home. These metrics drive much of the global energy development agenda, placing a political premium on achieving universal household access. But globally, 70% of electricity is consumed outside the home, where it powers economic activity and job creation. Energy development efforts, including electrification programs, need to balance connecting households with targeted investments in energy for businesses, manufacturing, and industry. These larger consumers not only power economic activity and job creation, but also serve as anchors for a more diversified and financially sustainable system.

Second, the authors stress the need to consult with affected communities, making the essential point that energy is a social challenge, not just a technical or economic one. At the community level, people gaining access to electricity for the first time need “the opportunity to imagine what they would do with electricity access and how they might use it to change their lives.” This is equally true at the macro level. Efforts to support large-scale energy systems development—especially those driven by outside funders and partners—need to better account for national development plans and industrialization goals. This means, first of all, listening to what communities, states, and nations want to achieve with energy—and then helping figure out how to power it. The reverse approach, of having a technological solution and then looking for a place to sell it, is unfortunately all too common.

Finally, the authors rightly point out that many energy access programs have focused too heavily on electricity supply, rather than on the broader enabling infrastructure that ensures power can be distributed and consumed. At a macro-scale, investing in modern grid infrastructure is crucial and often overlooked. Solving this bottleneck will become even more relevant as countries work to build flexible, resilient systems with greater shares of variable renewable power.

While we do see progress in each of these areas, there is much work left to do. The authors have done a service in highlighting these important issues and recommending a path forward.

Executive Director

Policy Director

Energy for Growth Hub

The Time for Strategic Industrial Policy Is Now

The strong thread that runs through technological leadership, economic success, and national security has never been more evident than it is today, so I read with great interest the article by Bruce R. Guile, Stephen Johnson, David Teece, and Laura D. Tyson, titled “Democracies Must Coordinate Industrial Policies to Rebuild Economic Security” (Issues, April 14, 2022).

The paradigm of free-market capitalism that has driven the global economy forward for the past 200 years is being challenged by a new vision for a managed economy operating at a global scale. Traditional models of international trade are breaking down, and the perils of using trade as a diplomatic tool have been cruelly exposed as nations now rush to rid themselves of dependencies on Russian products. A new approach to global collaboration in trade and commerce—one that preserves and advances the interests and values of liberal democracies—is sorely needed.

Guile and his coauthors are right to focus on industrial policy. The liberal democracies do collaborate well on publicly funded science. Perhaps not as much as they should, but the recently launched James Webb Space Telescope is a great example of a (literally) far-reaching joint endeavor from my own domain of expertise, space.

Similarly, the private sector is very adept at working across international borders in the commercial domain. Large global enterprises with complex international supply chains are now the norm at the head of every major industry sector.

However, for the area in between, where the outputs of science are still being matured into promising new technologies for the future, the situation is quite different. Here the split of responsibilities between public and private is more ambiguous, and nations vary in their approaches. These differing views on how to use public resources to support industry advancement in new technologies—that is, industrial policy—are a source of friction in international trade relations, rather than harmony.

This makes it very hard to collaborate internationally in the maturation of new technologies, except in areas such as the European Union where common models of state aid and subsidy control are in place. Yet this is precisely the area where such collaboration is most needed. The technologies emerging now—beyond 5G communications, autonomous transport, and the commercialisation of space, to name just a few—will go on to define the twenty-first century. And those nations that bring the solutions, and define the standards that the world adopts, will reap the economic and geopolitical rewards.

For all its advantages, the free market is opportunistic and not strategic. The new threat, however, is highly strategic, operating at similar or larger scale, increasingly competent, and underpinned by the state at every stage. If the liberal democracies are to continue with their leadership in technological and economic advancement, then the time for coordinated and strategic industrial policy is now.

Chief Executive Officer

Satellite Applications Catapult

Rethinking Benefit-Cost Analysis

In “New Ways to Get to High Ground” (Issues, Winter 2022), Jennifer Helgeson and Jia Li make a persuasive case for augmenting benefit-cost analysis (BCA) with multidisciplinary approaches when conducting resilience planning under climate change. Among other correctives, they suggest the need for nonmonetary data and the use of narratives as a way for community members to articulate key values. These are appropriate suggestions, but we can go further.

Climate risks are imposed on an already complex social landscape, populated by groups with distinct strengths and vulnerabilities, including differences in their sensitivity to environmental stressors and their access to information and resources. The basis for such differences may track the familiar cleavages of income and ethnicity, but many more specific factors may be at work in shaping vulnerability and resilience, which can best be identified through collaborative inquiry.

Disaster planning in New Orleans before Hurricane Katrina offers a case in point. Few provisions were made for evacuating residents without cars. As David Eisenman and colleagues have documented, lack of transportation prevented many poorer residents from leaving. Other residents with cars felt unable to leave because of economic constraints (insufficient money to pay for meals and lodging, fear of job loss), social constraints (responsibility for extended kin networks), or countervailing risks (health problems, fear of looting if they abandoned their homes).

Awareness of such risk factors, drawing on both quantitative and qualitative data, could have improved the response to Hurricane Katrina. Here an ethnographic approach is particularly valuable. It brings together a cultural account—capturing people’s own interpretations of their circumstances—with a delineation of local social systems, including the relations based on affinity, place, and power that shape people’s lives. Such multidimensional accounts provide a better basis for constructing risk pathways, showing how climate change stressors interact with local socioenvironmental conditions to affect individuals and communities.

Helgeson and Li noted some of the familiar weaknesses of BCA, including the indifference to both equity and the complex character of community resilience. Combining a collaborative approach with ethnographic inquiry can compensate at least partially for these weaknesses. Since BCA presumes that all benefits and costs can be monetized, losses of the rich will almost always outweigh losses of the poor. In contrast, an ethnographic account is well suited to a capabilities approach, in which losses of both rich and poor can be assessed on a common scale, the capacity to maintain a good life (the proper measure of resilience). The flood-driven loss of a modest car may be far more devastating for a poor family than the loss of a luxury car for a rich one. The poor family may lack the capacity to replace the car, and as a result lose access to employment, lose income, and fail to meet kin obligations. The rich family faces an inconvenience, which assets and insurance will soon make good.

Research Professor of Anthropology

University of Maryland, College Park

As our world changes, socially and climatologically, we need tools at hand that can better support decisionmaking reflecting those changes. Jennifer Helgeson and Jia Li do an excellent job laying out the need for benefit-cost analysis (BCA) to evolve.

As a resilience specialist, I appreciated the explicit mention of needing capacity for considering co-benefits and multiple objectives within BCAs. Climate change adaptation can provide an opportunity for communities to proactively think about what they want, from amenities to industries to social justice. To support innovative and transformative adaptation, we need tools just as flexible and multifaceted. Related, Helgeson and Li’s point that equity and justice issues are either ignored or exacerbated by BCAs is critical. Reliance on “standard tools” that emphasize previously made investments is a clear example of systematically maintained inequities and injustices.

I also appreciated their acknowledgement of narratives and storytelling in decision-making. Researchers have been showing data on climate change for over a century, but the reality is that storytelling and emotional connection are how we see and process the world around us. Recognizing the need for BCAs to both accept inputs from these stories and enhance the ability to tell stories is a powerful reminder.

However, I think Helgeson and Li missed providing some important context: that is, the practical aspects of operationalizing BCAs, particularly at the local level. They highlighted the need for more data collection—overall and at the front end of a BCA being conducted. But they do not specify who would do this work, and I think it should be acknowledged that BCAs in their current form are already complex, challenging, and out of reach for most average to small municipalities. I have been working with communities to see if they could use the “plug-and-play” BCA tools that exist and have found that the input data requirements are still an almost insurmountable barrier. Without the time my team dedicated to helping them, it seems unlikely that those BCAs would have occurred. The TurboTax analogy the authors cite was well taken, but the difference is that with TurboTax the input data arrive in the mail, with labeled boxes that can be referenced to identify what data need to be entered and when.

Given this gap, I think an essential next step is building capacity so that more versatile BCAs do not become another source of inequity, where only communities with means can pursue them. The climate and weather enterprise has been building workforce capacity around resilience, focusing on stand-alone resilience positions, and integrating resilience into the skill sets of other professionals. I think BCAs require similar efforts. This includes helping groups in the private sector that support the public sector (e.g., engineering firms) and the referenced boundary spanners understand and develop practical skills for enhanced BCAs. The tools and supporting materials around enhanced BCAs also need to improve so they can be integrated into municipal and state officials’ tool kits without requiring an inordinate amount of time or money.

Coastal Climate Resilience Specialist

Mississippi State University Extension Service and Sea Grant

Focusing on Connectivity

Maureen Kearney’s article, “Astonishingly Hyperconnected” (Issues, Winter 2022), is first and foremost about connections: between the global climate and biodiversity crises, between organisms, and between humanity’s future and that of the rest of the living world. Its focus resonates strongly with the fabric of life on earth, emphasizing humans’ ancient and deep entanglement with all other living organisms, locally and remotely. In Kearney’s words, “Because life is astonishingly hyperconnected on scales much larger than we thought … the fate of any species in the face of environmental change is intertwined with the fate of many others.” Here I add a few reflections on what this hyperconnectivity entails for science and policy.

Connectivity among disciplines. First, I agree on the need for convergent research among the natural sciences. I would add to this the need for more convergence between the biological and physical sciences on one hand, and the social sciences and humanities on the other. This is indispensable because the present environmental crises are manifested in the atmosphere and the biosphere, but their roots, and therefore their potential solutions, are deeply social, economic, and political. The biophysical sciences alone are clearly not enough to deal with them. For researchers, this certainly entails extra layers of difficulty in bridging vastly different methods, categories, and epistemologies. For funding agencies, it involves a mindset shift concerning risk aversion, budget allocations, timetables, and researcher evaluation criteria. For example, what the literature calls “boundary work,” which binds together the different disciplines in an integrated project, needs more time and money. Relatedly, the judging of curricula vitae needs to recognize that most such researchers are venturing into uncharted waters, and to avoid penalizing the lack of an extensive trajectory in the new subject area.

Connectivity among policymakers and researchers. I also agree on the need for policymakers and researchers to work in a more integrated way in addressing global climate and biodiversity challenges. In this, it is important to abandon the complexity-averse attitude that has predominated so far. For example, in the preparation of the new post-2020 Global Biodiversity Framework of the United Nations Convention on Biological Diversity, the emphasis has been on particular aspects of biodiversity, often the easiest to communicate and monitor, yet not necessarily the most effective ones. The fact that the fabric of all life is interwoven and complex does not mean it is intractable, but it does defy “easy” targets such as setting aside X hectares under legal protection or promoting Y species from threatened to nonthreatened status. The new goals for nature need to be clearer and bolder than ever before, but at the same time they must focus on connections and be themselves interconnected in a safety net.

Connectivity among institutions. One key point, probably the most difficult, is that we are indeed astonishingly hyperconnected in many ways, yet astonishingly disconnected in others. The institutions that deal with different knots in the fabric of life are often disconnected or misaligned, each setting its rules, incentives, monitoring indicators, and standards in isolation from, or in contradiction with, the others. This happens, for example, in the regulation of water and wild animal populations across municipal, regional, and international borders. It is also rife among bodies acting at the same spatial scale but on different sectors, such as road maintenance and nature restoration, or urban planning, food sovereignty, and biodiversity protection.

The transformative change being called for by all the recent international assessments has to be not only bigger and deeper than ever before; it also has to shift the focus toward much more connectivity. We have been trying to handle an astonishingly hyperconnected earth with an astonishingly disconnected set of narratives, mindsets, and institutions. We can afford this no longer.

Professor of Ecology

Córdoba National University

Senior Member of the National Research Council of Argentina

Maureen Kearney extends an important conversation about climate change and biodiversity. As context, I have a stack of books on my desk that tackle this fraught topic, most of them dealing with loss of diversity, but some addressing the possibility of recovering species through de-extinction. A sample includes Second Nature by Nathaniel Rich; Thor Hanson’s Hurricane Lizards and Plastic Squid; Strange Natures by Kent Redford and William Adams; Elizabeth Kolbert’s Under a White Sky; and, the most recent arrival, Ben Rawlence’s The Treeline. All add in various ways to the increasingly clear conclusion that climate change is negatively affecting earth’s biodiversity and that we need to think hard about how to mitigate such an outcome.

Kearney agrees with that conclusion. In an important way, however, she goes further, exploring the thesis that biodiversity and climate change are not just connected, but “hyperconnected,” meaning they are inextricably intertwined. Her message is that we cannot solve the problem of declining biodiversity without solving the challenge of our changing climate, which is itself a complex function of earth’s biodiversity. Each influences the other in deep and important ways.

Others have gone down this road, but Kearney makes a strong case for the intersection of these areas, which makes a lot of sense. She first reviews how biodiversity contributes to an array of ecosystem services that benefit humans, making it clear how we rely on the organisms that surround us. And in a review of biotechnology approaches to managing environmental change, she appropriately urges that this possible set of solutions must be approached cautiously in light of possible unintended consequences.

Still, there are some people who flatly reject biotechnology approaches to mitigating climate change, or manipulating the environment in general. It would have added to Kearney’s perspective to comment on this view, and how she feels it can or cannot be incorporated into her call for integrating climate change and biodiversity.

The subtitles of the books mentioned earlier echo Kearney’s arguments. Rich uses Scenes from a World Remade to suggest how humans are altering ecosystems and the responsibilities that follow. Hanson uses The Fraught and Fascinating Biology of Climate Change to emphasize that organisms adapt to climate change and do not just suffer its effects. Redford and Adams use Conservation in the Era of Synthetic Biology to highlight their analysis of how the tools of gene editing will shape a future world along with the responsibilities that accompany such use. For Kolbert, The Nature of the Future invokes the ways in which humans have altered earth’s systems, raising questions along the way about what “nature” will look like in the future. Rawlence’s The Last Forest and the Future of Life on Earth uses the iconic boreal forest as a study system for analyzing the intersection of biodiversity and climate change.

Kearney joins these authors to highlight a topic—the intersection of biodiversity and climate change—that needs our increased understanding. In a discussion of “planetary futures” she emphasizes that “natural systems are climate solutions on par with greenhouse gas reductions and other objectives.” That sounds right. Her call for research on biological complexity to understand much better than we do the reciprocal interaction of climate change and biodiversity is also the right one in an era of global change that will call for adaptation as well as mitigation. It is a call we should join her in pursuing.

Virginia M. Ullman Professor of Natural History and the Environment

School of Life Sciences

Arizona State University

Mathemalchemy

"Mathemalchemy," 2021, mixed media

What happens when a fiber artist meets a world-renowned mathematician? In a word: mathemalchemy.

In 2019, the mathematician Ingrid Daubechies, whom the New York Times dubbed the “godmother of the digital image” because of her work with wavelets and the role it played in the advancement of image compression technology, visited an art exhibit entitled Time to Break Free. The installation, a quilted, steampunk-inspired sculpture full of fantastical, transformative imagery, was the work of fiber artist Dominique Ehrmann. Seeing the installation made Ingrid wonder whether art could similarly bring the beauty and creativity of mathematics to life. She contacted Dominique, and after much discussion, a collaborative project was born. Over several months, many workshops, and the challenges of a pandemic, the collaboration grew to include 24 core “mathemalchemists” representing a diverse spectrum of expertise. The result is a sensory-rich installation full of fantasy, mathematical history, theorems, illuminating stories of complexity, and even a chipmunk or two.

"Mathemalchemy," 2021, mixed media

What happens when a fiber artist meets a world-renowned mathematician?

The artists and mathematicians work in fabric, yarn and string, metal, glass, paper, ceramic, wood, printed plastic, and light; they depict or employ mathematical concepts such as symmetry, topology, optimization, tessellations, fractals, the hyperbolic plane, and stereographic projection. Playful constructs include a flurry of Koch snowflakes, Riemann basalt cliffs, and Lebesgue terraces, all named after mathematicians. Additionally, the exhibition pays homage to mathematicians and mathematical ideas from many different origins and backgrounds, ranging from amateur mathematician Marjorie Rice to Fields Medalist and National Academy of Sciences member Maryam Mirzakhani.

"Mathemalchemy," 2021, mixed media

Mathemalchemy is on display at the National Academy of Sciences in Washington, DC, from January 24 through June 13, 2022. More information about the exhibit and the collaboration can be found at mathemalchemy.org.

From Medical Malpractice to Quality Assurance

Every decade or so, the United States is seized with a fervor to reform medical malpractice. Unfortunately, this zest is typically motivated by circumstances that have little to do with the fundamental problems of medical malpractice, and the proposed changes to the system do not address the true flaws. A well-functioning malpractice system should focus not only on how to compensate patients for medical errors but also on how to prevent these errors from occurring in the first place.

The United States has faced a medical malpractice “crisis” three times since 1970. Each of these crises was precipitated by conditions that created a “hard” market: decreased insurer profitability, rising insurance premiums, and reduced availability of insurance. And each time the crisis became a polarized battle between trial lawyers on one side and organized medical groups and insurers on the other. One side links the crisis to “runaway juries” and “greedy lawyers”; the other blames interest rates and possibly insurer pricing practices. If one attributes the crisis to falling interest rates and bad investments in the stock market, the policy implications are markedly different from those that follow if soft-hearted and cognitively limited juries and ambulance-chasing lawyers are blameworthy.

In the end, calm returns, but the situation of patients is not improved. We are left with a system in which most victims of medical error are not compensated for their losses and in which the overall quality of care is not what it might be.

As a first step in tackling the real problems of medical errors and mediocre quality assurance, we need to debunk the popular misconceptions about the problems with the medical malpractice system. Once these ferocious but ultimately pointless conflicts are defused, we can begin to think about fundamentally reconstructing the system with an eye toward improving the quality of care by giving practitioners effective incentives to deliver the services that people need. There are a variety of options for reform; one of them, called enterprise insurance, has the potential to provide the initiative for systemic change.

Pervasive myths

Many myths about medical malpractice dominate the public discourse. These myths reinforce misinformation and are used to justify statutory changes that benefit certain stakeholders but are not in the broader public interest. Five of the most common are: medical care is costly because of malpractice litigation; only “good” doctors are sued; there are too many medical malpractice claims; dispute resolution in medical malpractice is a lottery; and medical malpractice claimants are overcompensated for their losses.

The high cost of personal health services in the United States is frequently attributed to litigation and the high cost of malpractice insurance. This assumes that premiums and outlays for awards have risen appreciably and constitute a major practice expense. The data, however, do not show appreciable increases over long time periods. Between 1970 and 2000, mean medical malpractice premiums went from 5.5 to 7.5% of total practice expenses. This is not the case for damage awards; payment per claim has increased substantially since the mid-1990s. However, the relationships among medical malpractice premiums, claims frequency, mean payment size, and total payments are complex, and assumptions should not be made on the basis of a single indicator.

Some critics of medical malpractice contend that being at the cutting edge technologically makes a physician more vulnerable to being sued. There is no empirical evidence that being sued is an indicator of superior performance. However, there is evidence that physicians with no claims histories were rated by their patients as being, or at least appearing to be, more understanding, more caring, and more available. Overall, it is untrue that only good doctors are sued, but at the same time, being sued is not a marker of being a bad doctor either.

The myth that there are too many malpractice claims is a bit more complex. There are two path-breaking studies showing that there are both too many and too few malpractice claims. The first of these studies was conducted in California in 1974. The second, the Harvard Medical Practice Study, was conducted in New York in 1984. In both studies, surveys of medical records of hospitalized patients were conducted to ascertain rates of adverse events attributable to provision of medical care to these patients and rates of adverse events due to provider negligence, termed “negligent adverse events.” The California study revealed that of the 5% of patients who experienced an adverse health event while in the hospital, 17% suffered a negligent adverse event. In New York, the corresponding rates were 4% for adverse events, of which 28% were negligent adverse events. The authors found that “invalid” claims, those not matching the study’s determination of liability, outnumbered valid claims by a ratio of three to one. However, they also found that only 2% of negligent adverse events resulted in medical malpractice claims. There were 7.6 times as many negligent injuries as there were claims. Thus, there were errors in both directions: Individuals filed too many invalid claims and not enough valid claims.

The public’s view of juries leads to the inference that outcomes of litigation are often random. Actual data, however, lead to the opposite conclusion: Outcomes are not random. There is a definite relationship, albeit an imperfect one, between independent assessments of liability and outcomes of legal disputes alleging medical malpractice. One study estimated that payment is made in 19% of malpractice claims when there is little or no evidence of errors. In contrast, when the evidence of an error is virtually certain, payment occurs 84% of the time. Based on the results of this study, claims not involving errors accounted for 13 to 16% of the system’s total monetary cost. Whether one views this percentage as substantial or small depends on where one draws the line between error and no error. Unfortunately, the New York study’s conclusions do not stress or even mention that the estimates of error are subject to a very high degree of uncertainty.

Similar to the myth that malpractice claims are decided without regard to evidence of negligence, the myth that most plaintiffs are overcompensated for their injuries is pervasive. However, a comparison between the cost of injuries incurred by claimants and the compensation actually received revealed that medical malpractice claimants on average are undercompensated. In one study, compensation exceeded cost by 22% for claimants who received compensation at verdict, whereas 26% of claimants received no compensation at all. On average, including those cases for which no compensation was received, compensation amounted to about half of monetary loss. Even including compensation for nonmonetary as well as monetary loss, compensation fell far short of injury cost. Nevertheless, this does not eliminate the possibility that compensation was excessive in selected cases.

Reconstructing the system

In principle, medical malpractice should be a quality-assurance mechanism; in practice, it falls far short of achieving this goal. For one thing, there is no empirical evidence that the threat of medical malpractice litigation makes health care providers more careful. Also, meting out compensation is very expensive. Sadly, medical malpractice “tort reform” has aimed to save medical malpractice premium dollars rather than to make the system an effective mechanism for assuring quality and efficiently compensating injury victims. For example, a popular but misguided tort reform, caps on damages, has worked to reduce payments by medical malpractice insurers and to keep premiums below what they otherwise would have been, but caps have not altered the underlying incentives, except perhaps to discourage attorneys from representing medical malpractice plaintiffs, even those with valid claims. If there is a benefit to caps, it is in redistributing income from injury victims and their attorneys to health care providers rather than in improving quality of care or markedly reducing rates of unnecessary tests and health care costs more generally. It seems unlikely that any savings in medical malpractice insurance premiums would accrue to patients as taxpayers and health insurance premium payers. Organized medicine plausibly supports caps primarily as a response to pressures from its constituency for financial relief.

Although the current system has many flaws, there is also a brighter side. First, contingency fees for plaintiffs’ attorneys give patients who are dissatisfied with outcomes a mechanism for addressing their grievances that may not be available through other channels. The regulatory apparatus, which has a responsibility for safeguarding the quality of personal health services, is sometimes controlled or substantially influenced by health care providers, and regulators may be unresponsive to patients’ complaints. Second, the U.S. jury, despite its limitations, gives ordinary citizens a role in the dispute-resolution system. Although jurors are only rarely scientists, physicians, or other health care professionals, they reflect society’s values. Third, even during the crises, when substantial increases in malpractice premiums occurred, premiums remained a tiny component of total health care costs. Viewing long-term secular trends in medical malpractice payments and premiums, rather than the short periods during which premiums grew substantially, reveals that increases in payments and premiums have been rather modest, only slightly higher than changes in prices in general. Finally, the current malpractice system does a good job of identifying some real errors.

However, the current system has serious deficiencies, just not the same ones typically depicted in the media. First, unlike in other fields of personal injury tort, there is no empirical evidence that the threat of medical malpractice lawsuits deters injuries. This is a very serious deficiency, particularly because injury deterrence is typically listed as a goal, perhaps the primary goal, of tort liability. Second, tort liability focuses on the mistakes of individual providers, but errors frequently reflect simultaneous omissions or misjudgments on the part of several individuals. Third, most medical errors do not result in malpractice claims. As a result, the signal that tort sends to health care providers is insufficiently precise or even wrong. Fourth, compensation to injured patients is typically less than what they deserve based on the loss attributable to their injuries. Litigation is an extremely inefficient system for compensating injury victims. Various types of insurance, such as health and disability insurance, are much more efficient in distributing compensation to persons who have incurred a loss from receiving less than appropriate care.

Finally, health care providers in the United States largely reject the view that medical malpractice has a constructive role to play in health care delivery. Providers generally see no link between medical malpractice litigation and the provision of low-quality care. Much commentary in assessments of medical malpractice and patient safety sees medical malpractice as part of the problem rather than part of the solution. This misconception is an important roadblock because malpractice claims often arise from deficiencies in care.

Thus, medical malpractice does badly on injury deterrence, improved patient safety, and compensation of persons with medical injuries. Its strongest features are giving injury victims a day in court and making professionals accountable to ordinary citizens.

Patient safety and medical malpractice are inextricably linked. However, neither market forces nor the threat of tort liability seem to provide sufficient incentives for quality assurance. An important reason that the threat of lawsuits has not improved patient safety is that medical malpractice insurance shields potential defendants from the financial burden of being sued. Such insurance tends to be complete; there are no deductibles or coinsurance, and liability limits of coverage are rarely exceeded. Medical malpractice premiums tend not to be based on a physician’s own history of lawsuits. Thus, a physician with many past lawsuits may pay the same premium as a colleague who has never been sued.

Shortcomings in medical malpractice are not wholly responsible for shortcomings in the quality of health care in the United States. Other sectors achieve safety and quality through other means: U.S. airlines, for example, have implemented very effective safety procedures. In other sectors, market forces provide some guarantee of quality. There is no quality crisis in the hotel market, for example; to the extent that consumers demand high-quality hotel rooms, the market provides them.

However, there are few means for consumers to inquire about the quality of a hospital or doctor, let alone demand high-quality health care. Employers often speak about quality assurance, but with few exceptions, medical care is not their principal business. Given the limitations of market forces in pressuring providers to supply high-quality care, there is indeed a role for government regulation and private regulatory mechanisms, such as peer review and tort liability. These mechanisms are not substitutes for the market but rather complements to it.

Options for reform

Meaningful tort reform should take account of the fact that many medical errors are not simply errors of individuals; they are errors of systems. Also, health care providers must have financial incentives to exercise care and implement meaningful quality-assurance mechanisms.

Overall, what have been called tort reforms have been short-term fixes, which do not improve system performance. In recent years, the reform most favored by physicians, hospitals, and insurers has been caps on damages. Caps have the effect of lowering payments per paid claim and probably discourage some trial lawyers from representing some medical malpractice plaintiffs. But they do not fundamentally change how medicine is practiced.

Scholars, other experts, and some policy analysts have proposed more sweeping reforms of the current system. They include no-fault insurance, health courts, alternative dispute resolution, private contracts, scheduled damages, enterprise liability, and enterprise insurance. Each proposal has advantages and disadvantages, and no one reform provides an exclusive remedy to the problems with the medical malpractice system. Of these options, however, enterprise insurance has, in our view, the potential to initiate systemwide change.

No-fault insurance. No-fault approaches are designed to be substitutes for tort, providing compensation regardless of fault. Currently, no-fault is widely used as a substitute for tort in auto liability and workers’ compensation. Medical no-fault has been implemented in only two states, Florida and Virginia, and for only a few medical procedures. Low administrative expense and faster payment of damages make no-fault insurance an attractive alternative. But in Florida and Virginia, the programs were implemented to achieve savings in medical malpractice premiums rather than to distribute compensation to a larger number of medical injury victims. Revenue for these programs comes from physicians and hospitals. But if the system is truly no-fault, why should physicians and hospitals be the only parties taxed to fulfill a broad social obligation to compensate those with misfortunes? It seems more appropriate to tax the public at large, but no U.S. state has agreed to do this.

At least as interesting an alternative, not under active discussion, is private no-fault insurance. A hospital with an effective quality-assurance program could offer no-fault insurance to its patients for a reasonable premium, with an even lower premium for patients who agreed to forego filing a tort claim in the event of an injury. To the extent that the hospital had an effective quality-assurance program, the savings in premiums could be passed through to its patients.

This type of voluntary no-fault program would offer several important advantages to hospitals and their medical staffs. First, it might relieve providers of the threat of tort. Second, offering no-fault benefits would be a signal to consumers that the hospital has an effective patient-safety program and low rates of medical errors. Third, to the extent that injury victims value quick payments with little involvement of attorneys, this too would increase demand for the hospital’s services.

Although hospitals may anticipate some savings, it is essential that such no-fault coverage extend to a large number of conditions. When exclusions from coverage are necessary, they should be stated in broad terms that patients can easily understand; very narrow thresholds are difficult for patients to assess in advance of injuries. A few very costly procedures may be excluded from coverage, but it would be important that these be listed and described in understandable terms in advance.

Complete substitution of no-fault for tort is infeasible; however, a system in which patients would contract for no-fault coverage well in advance of receiving care at the hospital is more reasonable. Contracting in advance is essential to avoid situations in which a patient is faced with the option of contracting for no-fault at point of service, which could be interpreted as an adhesion contract, a standard form or boilerplate contract entered into by parties with unequal bargaining power. One way to partially avoid the unequal bargaining power is to allow employees to designate whether or not they wish to substitute no-fault for tort when they are choosing their health insurance plans. Surcharges for no-fault (if surcharges are imposed on patients) could then be built into the premium charged. In the case of voluntary no-fault, because insured patients would agree not to sue under tort, the savings in tort payments would offset at least part of the cost of the no-fault plan. No-fault plans would require prior regulatory approval, depending on the applicable regulatory authority. Regulators would pay attention to the method of enrolling persons into the plan, pricing, and issues bearing on plan solvency.

Health courts. Medical care is a technical subject, and proponents of health courts argue that judges and juries are often not well positioned to deal with the complexities. In addition to providing victims with consistent, fast, and relatively easily obtained compensation when warranted, health courts are also intended to reduce cost by streamlining the process, maintaining consistent medical standards, and capping or scheduling damages.

Full-time judges are a major feature of the health court proposals. The judges would deal only with malpractice cases, and there would be no jury. In one proposal, specialized judges would shape legal standards for medical malpractice, creating a body of science-based common law that health care providers could rely on when making treatment decisions. In theory, a body of science-based common law seems valid and useful, but it raises issues of its own. In the context of health courts, standards for medical practice would develop under state law. Yet, without federal regulation, each state would be free to develop its own standards, allowing for variable legal and medical practice standards among states.

Although concerns about the inadequacies of juries to decide technical matters and the inexperience of judges in medical matters provide the main rationale for health court proposals, what is not acknowledged in the policy debate is that the concerns about lay juries and judges apply to the much larger issue of the use of scientific evidence in the courts. Health courts represent only one of several overlapping alternatives for addressing this issue. Other alternatives include use of court-appointed experts, bifurcated trials, use of special masters, specially convened expert panels, blue-ribbon juries, and alternative dispute resolution.

We agree that health courts have attractive features but are reluctant to give this option our enthusiastic endorsement. Preserving juries in some form, even if they are blue-ribbon juries, would provide a broader representation of perspectives and values than would sole reliance on a narrow group of professionals to make judgments on specific cases. Even a judge with health expertise will not be able to be expert on the full range of issues health courts are likely to confront. In the end, it is important that plaintiffs as well as defendants view health courts as legitimate. If the court consists entirely of or is dominated by physicians and other health professionals, buy-in by plaintiffs, and society more generally, seems highly improbable.

Alternative dispute resolution. Dispute resolution under the trial-by-jury system is extremely costly. Thus, alternative approaches that streamline the process seem attractive. Broadly, alternative dispute resolution encompasses any means of settling disputes outside the courtroom. The two most frequently used forms are arbitration and mediation. Arbitration is a simplified version of litigation (there is no discovery, and the rules of evidence are simpler). Mediation uses an impartial third party to facilitate an agreement in the common interest of all the parties involved.

The main advantage of alternative dispute resolution is that the process tends to be speedier than a trial. The advantages of arbitration include lower cost, private proceedings, more flexibility than in a court, and, when the subject of the dispute is highly technical, the appointment of arbitrators with the appropriate expertise. There are also disadvantages. The parties must pay the arbitrators, and arbitrators are not strictly bound to apply the governing law. With binding arbitration, the decision reached is comparable to a jury verdict and can be overturned only if there is evidence of malfeasance in the process of reaching it. Mediation sessions are not decided in favor of one party or another; the parties are not bound to resolve their dispute and may pursue litigation if dissatisfied with the results of mediation. Although speed in dispute resolution and lower cost are advantages, there is some empirical evidence that reducing the cost and time of pursuing a claim actually leads to more lawsuits.

Private contracts. The rationale for private contracts between health care providers and patients, as a substitute for tort, is that tort liability determines compensation based on standards of care that may differ from those that patients might prefer. Private contracts might set out the specific circumstances in which providers would be responsible for compensating injury victims, schedule damages, and specify alternative dispute resolution mechanisms to be used when disputes arise.

The strength of private contracts is that they can reflect preferences of individuals. Individuals with higher willingness to pay for safety pay more for such care. However, individual choice opens the door to adverse selection. That is, persons who are prone to suffer an injury because their health is more fragile may be more willing to pay for contracts offering extra precautions.

Opponents of private contracts point out that the relationship between the patient and provider is not one of equal power. A hospitalized patient or even an outpatient may not be well positioned to negotiate with a physician. Courts have overturned contracts reached at the point of service for this reason. But this is not when contracting would occur. Rather, as with a voluntary no-fault plan, contracts could be options offered to persons at the time they enroll in a health plan. Agreement to a lower standard of care or a less generous schedule of damages would result in a lower premium.

Scheduled damages. Rather than set a limit on the maximum size of an award, a schedule of damages sets payment criteria for all awards, not only the large ones. Because scheduling affects the whole distribution, it is conceptually superior to flat caps on grounds of equity of payment to claimants with very severe injuries relative to those with less severe injuries. The trial bar opposes this approach because it would limit the ability of plaintiffs to make a case for their special circumstances. Such flexibility, however, must be weighed against the vertical inequities of caps; that is, caps limit large awards for relatively serious injuries but do not directly affect payments for more minor ones.

An anticipated objection to scheduled damages is that they limit jury discretion in awarding damages. However, there is a tradeoff between complete individualization of awards and reducing volatility and increasing predictability of awards. It would be appropriate for states to review the instructions that are provided to juries in order to assess whether guidelines for determination of monetary loss should be developed. In the end, however, even though scheduled damages are preferable to caps, the link to quality assurance is at best indirect.

Enterprise liability. Enterprise liability is a means of aligning the incentives of providers and of accounting for the fact that many errors arise from defects in systems rather than in individual providers. Because many medical injuries occur during receipt of hospital care, it makes sense to start the alignment process with hospitals and the physicians who work there. Under enterprise liability, when the receipt of care is in a hospital setting, the hospital would be named as the defendant in medical malpractice lawsuits. Separate suits against individual physicians would not be filed. If the hospital were the only named defendant, it would have a greater incentive to adopt quality-assurance measures, including for outpatient care.

Left unsaid in general discussions of enterprise liability is how the burden of hospital premiums would be shared. It would be advisable that physicians bear some part of the premium burden to provide some incentive to avoid claims. Hospitals could implement their own systems of surcharging physicians with many medical malpractice claims. Of course, hospitals, or medical staffs operating on their behalf, would retain the option of removing from their staffs physicians with adverse claims experience or those who do not comply with hospital patient-safety regimens. In fact, hospitals would have a greater incentive to monitor physician performance and remove physicians with adverse claims experience.

With hospital enterprise liability, the deterrent would be internalized to the hospital, establishing a clear financial incentive for quality improvement and error reduction. These organizations could impose a combination of financial and nonfinancial incentives for individual physicians to prevent injuries, coupled with increased surveillance measures. Also, the hospital and physicians at the hospital collectively would have an incentive to promote patient safety, because the enterprise’s premiums would depend on future anticipated losses from medical malpractice claims.

There are several possible objections to enterprise liability. First, plaintiffs might view hospitals as rich and faceless institutions with deep pockets, thus increasing plaintiffs’ demands for compensation. Of course, under the current system, insurers presumably could be said to have deep pockets as well.

Second, enterprise liability may restrict patient choice of provider. Physicians may have to limit their admissions to the one hospital at which they receive medical malpractice coverage. Physicians frequently have privileges at more than one hospital. This potential concern can be largely remedied by limiting physician coverage to care delivered within the walls of the facility under the hospital’s policy. Thus, if a physician practiced at three hospitals, he or she would be covered under three hospital policies. In addition, the physician would need to obtain medical malpractice insurance for care delivered in the office, but such coverage would be at a greatly reduced premium.

Third, physicians already complain about their growing loss of autonomy, and enterprise liability would probably exacerbate this trend. Physician autonomy is important because it allows providers to use their professional skill and judgment in particular situations. Outsiders, such as hospitals, may not be well positioned to know all the details and considerations of a physician-patient interaction.

Fourth, inpatient care is shrinking as a share of the total personal health care dollar. Because more care is being delivered outside the hospital, using the hospital as the locus of liability may not be ideal. But this concern disregards another trend. Hospital-provided ambulatory care is growing, and hospital enterprise liability would encompass care at all sites at which the hospital organization or system provides care.

Nevertheless, enterprise liability addresses many current deficiencies, especially the insufficient incentives providers have to invest in patient safety. A major barrier to implementation is the lack of a political constituency at the federal and state levels. Health care consumers are not well organized, and providers appear to be concerned with the “deep pockets argument,” as well as the loss of professional autonomy that may accompany enterprise liability.

Enterprise insurance. Another approach, enterprise insurance, does not change the cause of action against physicians and hospitals, nor does it change the named defendants. Rather, physicians who render services to patients in hospitals would obtain their malpractice insurance through the hospital. Large organizations could self-insure for medical malpractice. Because all members of the pool would stand to lose from the provision of substandard care, there would be organizational incentives to monitor quality and implement quality-improving systems of care. For example, a hospital whose obstetric staff is sued repeatedly would have a direct financial incentive to address the causes of the lawsuits.

This approach seems very promising, but it too faces obstacles. In particular, hospital medical staffs in the United States have been largely independent of hospitals, in contrast both to the arrangements in other high-income countries and to professionals in other U.S. industries, such as airline pilots. Physicians have resisted being under the control of hospitals, for financial reasons and out of concern for loss of professional autonomy. Any proposal from the “outside” that would cede control of medical decisionmaking to hospitals is likely to be resisted by many physicians. The key will be to have active physician involvement in hospital-based enterprise insurance. Smaller hospitals would face special challenges because they might be too small to operate a medical malpractice insurance plan on their own. Such hospitals might join regional compacts.

Finally, accountability incentives alone are not likely to provide sufficient motivation for hospitals to create systems for managing medical injuries. Hospitals and physicians have many non-liability objectives and concerns. Implementation of enterprise insurance alone may not lead to optimal levels of patient safety in hospitals. Still, enterprise insurance is an attractive solution because it provides those in the best position to improve care with an incentive to introduce patient-safety measures.

Enterprise insurance creates efficiency by combining patient-safety measures and insurance, including premium setting. Because the insurer, in this case the hospital, is better able to “poke inside” the clinical organization and understand the sources of errors, it may be less inclined to raise premiums dramatically; it has a better sense of what is actually going on. As with enterprise liability, hospitals would have added incentives to be selective about the quality of the physicians they admit to and retain on their medical staffs. In turn, medical staffs would have a much more direct incentive to support the adoption of patient-safety measures in order to reduce medical malpractice losses at the hospital, especially if the medical staff is placed at some risk for losses above a threshold value.

Enterprise insurance has its limitations, but it also has the potential to provide the initiative for systemic change. By combining the function of preventing injuries with that of insuring against losses if and when injuries do occur, it joins the means of preventing injuries with the motivation to do so. Nevertheless, the medical malpractice apparatus, with or without enterprise insurance, should be seen as only part of the quality-assurance process. It cannot do the job on its own.