Forum – Summer 2005
Future oil supplies
As Robert L. Hirsch, Roger H. Bezdek, and Robert M. Wendling themselves point out, their “Peaking Oil Production: Sooner Rather Than Later?” (Issues, Spring 2005) repeats an often-heard warning. Who knows, maybe they are right this time. Right or wrong, however, the best reasons for getting off the oil roller-coaster in the 21st century have little or nothing to do with geology. Rather, national security and global warming have everything to do with the need to act, and the time is now.
The national security problem, as several bipartisan expert groups have recently pointed out, is rooted in the growth of terrorist activity. A terrorist attack at any point in the oil production and delivery system can cause major economic and political disruption. Such disruptions have been a concern for years, of course, because oil reserves are concentrated in the Middle East. What’s new and dangerous is the prospect that terrorists will make good on that threat. Unlike the members of OPEC, terrorist groups have little or no economic incentive to keep oil revenues flowing.
A related problem is that some of the cash we pay for oil gets redistributed to terrorist organizations, thus creating another risk for our national security. Even if paying this ransom persuades terrorists not to disable oil-production facilities in the Middle East, for example, it enhances their ability to cause trouble elsewhere.
Climate change is the other reason to act now. Although some uncertainties remain about the specifics, almost all scientists (and not a few business executives) agree that the likely prospect of climate change justifies taking steps soon to mitigate greenhouse gas emissions. However, it’s hard to capture the carbon dioxide produced by a moving vehicle. Therefore, the only way to reduce greenhouse gas emissions from oil burned in the transportation sector is to use less oil.
In short, even if its production doesn’t soon peak, we should start reducing our dependence on oil now. Fortunately, there is no shortage of good ideas about how to do so. As the recent National Research Council report on auto efficiency standards shows, technology is available to reduce the use of gasoline in existing internal combustion engines. Coupling ethanol produced from cellulose with hybrid engines is a very promising avenue for creating a domestic carbon-neutral fuel. Harder but still worth pursuing as a research program is the hydrogen economy.
This isn’t the doom-and-gloom scenario that Hirsch et al. conjure up, but it runs headlong into the same question: Will our leaders act now or continue to dither? If they are to be convinced, it seems to me that a reprise of the tenuous arguments about limits to the growth of oil production isn’t going to do the job. We can only hope that our national security and the plain risks of climate change are reasons enough to take direct action on oil.
Robert L. Hirsch, Roger H. Bezdek, and Robert M. Wendling have produced an excellent analysis of the peak oil issue, which is attracting increasing interest in many quarters around the world, and rightfully so.
The world is coming to the end of the first half of the Age of Oil. It lasted 150 years and saw the rapid expansion of industry, transport, trade, agriculture, and financial capital, which allowed the population to expand sixfold, almost exactly in parallel with oil production. The financial capital was created by banks that lent more than they had on deposit, charging interest on it. The system worked because there was confidence that tomorrow’s expansion, fueled by cheap oil-based energy, was adequate collateral for today’s debt.
The second half of the Age of Oil now dawns and will see the decline of oil and all that depends on it. The actual decline of oil after peak is only about 2 to 3 percent per year, which could perhaps be managed if governments introduced sensible policies. One such proposal is a depletion protocol, whereby countries would cut imports to match the world depletion rate. It would have the effect of moderating world prices to prevent profiteering from shortages by oil companies and producer governments, especially in the Middle East. It would mean that the poor countries of the world could afford their minimal needs and would prevent the massive and destabilizing financial flows associated with high world prices. More important, it would force consumers to face reality as imposed by Nature. Successful efforts to cut waste, which is now running at monumental levels, and improve efficiency could follow, as well as the move to renewable energy sources to the extent possible.
Public data on oil reserves are grossly unreliable and misunderstood, allowing a vigorous debate on the actual date of peak. Whether it comes this year, next year, or in 5 years pales in significance compared with what will follow. Perhaps the most serious risk relates to the impact on the financial system, because in effect the decline removes the prospect of economic growth that provides the collateral for today’s debt. Whereas the physical decline of oil is gradual, the perception of the post-peak decline could come in a flash and lead to panic. The major banks and investment houses are already becoming aware of the situation, but face the challenge of placing the mammoth flow of money that enters their coffers every day. In practice, they have few options but to continue the momentum of traditional practices, built on the outdated mind-sets of the past, with their principal objective being to remain competitive with each other whether the markets rise or fall. Virtually all companies quoted on the stock exchanges are overvalued to the extent that their accounts tacitly assume a business-as-usual supply of energy, essential to their businesses. In contrast, independent fund managers with more flexibility are already reacting by taking positions in commodities and renewable energies and by trying to benefit from short-term price fluctuations.
Governments are in practice unlikely to introduce appropriate policies, finding it politically easier to react than prepare. It is admittedly difficult for them to deal with an unprecedented discontinuity defying the principles of classical economics, which proclaim that supply must always match demand in an open market and that one resource seamlessly replaces another as the need arises. But failure to react to the depletion of oil as imposed by Nature could plunge the world into a second Great Depression, which might reduce world population to levels closer to those that preceded the Age of Oil. There is much at stake.
As Robert L. Hirsch, Roger H. Bezdek, and Robert M. Wendling eloquently describe, a peak in global oil production is not a matter of if but when. Yet in order to assess how far we are from peak and to prepare ourselves for the moment in which supply begins to decline, we need to gather the most accurate available data. Unfortunately, reserve data are sorely lacking, and the global energy market suffers from the absence of an auditing mechanism to verify the accuracy of data provided by producers. The reason is that over three-quarters of the world’s oil reserves are concentrated in the hands of governments rather than public companies.
Unlike publicly traded oil companies, which are accountable to their stockholders, OPEC governments are accountable to no one. In recent years, we have seen that even public companies sometimes fail to provide accurate data on their reserves. In 2004, Shell had to downsize its reserve figures by 20 percent. Government reporting standards are far poorer. In many cases, OPEC countries have inflated their reserve figures in order to win higher production quotas or attract foreign investment. In the 1980s, for example, most OPEC members doubled their reserve figures overnight, despite the fact that exploration activities in the Persian Gulf declined because of the Iran-Iraq War. These governments, many of them corrupt and dictatorial, allow no access to their field-by-field data. The data situation worsened in 2004 when Russia, the world’s second largest oil producer and not a member of OPEC, declared its reserve data a state secret.
Our ability to create a full picture of the world’s reserve base is further hindered by the fact that in recent years, exploration has been shifting from regions where oil is ubiquitous to regions with less potential. According to the 2004 World Energy Outlook of the International Energy Agency (IEA), only 12 percent of the world’s undiscovered oil and gas reserves are located in North America, yet 64 percent of the new wells drilled in 1995–2003 were drilled there. On the other hand, 51 percent of undiscovered reserves are located in the Middle East and the former Soviet Union, but only 7 percent of the new wells were drilled in those regions. The reason for that is, again, the reluctance of many producers to open their countries for exploration by foreign companies.
These issues require behavioral changes by the major producers, changes that they are unlikely to enact of their own volition. The major consuming countries should form an auditing mechanism under the auspices of the IEA and demand that OPEC countries provide full access to their reserve data. Without such information, we cannot assess our proximity to peak and therefore cannot make informed policy decisions that could mitigate the scenarios described by the authors.
I was greatly impressed by “Peaking Oil Production: Sooner Rather Than Later?” because I felt it gave a very fair and balanced analysis of the issue. However, because I am quoted as projecting peak oil as occurring “after 2007,” I think your readers are entitled to know how I came to this somewhat alarming conclusion.
My analysis is based on production statistics, which although not perfect are subject to a much smaller margin of error than reserves statistics. The world is looking for a flow of oil supply, and in an important sense is less interested in the stock of oil (reserves), except insofar as these can be taken as a proxy for future flows.
By listing all the larger oil projects where peak flows are 100,000 barrels per day or more, we can gain a good idea of the upcoming production flows. The magazine I edit, Petroleum Review, publishes these listings of megaprojects at intervals, the most recent being in the April 2005 issue. We have now done it for long enough to be confident that few if any large projects have been missed. Stock exchange rules and investor relations mean that no company fails to announce new projects of any size.
Because these large projects average 5 to 6 years from discovery to first oil, and even large onshore projects take 3 to 4 years, there can be few surprises before 2010 and not many before 2012. Simply adding up future flows, however, misleads, because depletion has now reached the point where 18 major producers and 40 minor producers are in outright decline, meaning that each year they produce less than the year before.
The buyers of oil from the countries in decline are (obviously) unable to buy the production that is no longer there. Replacing this supply and meeting demand growth both constitute new demand for the countries where production is still expanding. By the end of 2004, just less than 29 percent of global production was coming from countries in decline. This meant that the 71 percent still expanding had to meet all the global demand growth as well as replace the “lost” production. In the next 2 or 3 years, the countries in decline will be joined by Denmark, Mexico, China, Brunei, Malaysia, and India. By that point, the world will be hovering on the brink, with nearly 50 percent of production coming from countries in decline balanced by the 50 percent where production is still expanding.
Once the expanders cannot offset the decliners, global oil production peaks.
If global demand growth (the least predictable part of the equation) averages 2 to 2.5 percent annually (somewhat slower than in the past 2 years), then we find that supply and demand can more or less balance until 2007, but after 2008 only minimal oil demand growth can be met, and by 2010 none at all. Because major oil developments take so long to come to production, new discovery now will be too late to affect the peak.
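To see why the point at which decliners overtake expanders matters so much, here is a minimal back-of-the-envelope sketch of the arithmetic. The 2 percent demand-growth figure follows the letter; the 5 percent average decline rate for post-peak producers is an illustrative assumption, not a number from the text.

```python
# Back-of-the-envelope check of the decliners-versus-expanders argument above.
# The 5% decline rate is an illustrative assumption; the 2% demand growth
# matches the lower end of the range cited in the letter.

demand_growth = 0.02   # assumed annual growth in world oil demand
decline_rate = 0.05    # assumed average decline rate in post-peak countries

for declining_share in (0.29, 0.40, 0.50):
    expanding_share = 1 - declining_share
    # Output the expanding countries must add each year to replace lost
    # production and meet demand growth, expressed against their own output.
    required_growth = (demand_growth + decline_rate * declining_share) / expanding_share
    print(f"decline share {declining_share:.0%}: expanders must grow {required_growth:.1%} per year")
```

Under these stylized assumptions, the expanding countries must grow about 4.9 percent a year when 29 percent of output is in decline, and about 9 percent a year once the declining share reaches 50 percent, which is why the balance becomes so hard to hold.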
The best we can hope for is that the development of known discoveries plus some new discovery can draw the peak out into an extended plateau while human ingenuity races to cope with the new realities.
Worries about the future supply of oil began many years ago, but recently a serious concern has arisen that an imminent peak in world oil production will be followed by decline and global economic chaos. Yet the world and the United States have weathered many past peaks, which have been mostly ignored or forgotten.
World oil production dropped 15 percent after 1980. Cause: high prices from the Iran-Iraq war. Earlier, the Arab oil embargo cut global oil production by 7 percent in 1974. Much earlier, the Great Depression cut U.S. oil demand and production by 22 percent after 1929. And even earlier, U.S. oil production dropped 44 percent after 1880, 20 percent after 1870, and 30 percent after 1862, as prices fluctuated wildly.
In every case, the law of supply and demand worked, and the “oil supply problem” solved itself. The solution wasn’t necessarily comfortable. In the United States, the net effect of the 1973–1980 oil price hikes was a doubling of unemployment, a fourfold increase in inflation, and the worst recession since 1929.
Yet future oil supplies, now as in the past, continue to be underestimated for at least three reasons: the U.S. Securities and Exchange Commission (SEC), technology, and oil price increases. The SEC effectively defines “proved oil reserves” worldwide; it is a decidedly and deliberately conservative definition. The effect, now and historically, is a serious understating of oil reserves. This is shown by a continuing growth in U.S. proved oil reserves. They were 36 billion barrels in 1983. Twenty years later, they were 31 billion barrels, an apparent drop of 5 billion barrels. In the interim, however, the United States produced 64 billion barrels. Simply adding that production back to the 2003 figure shows that the 1983 reserves were understated by a factor of 2.64, or 164 percent.
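For readers who want to trace the 2.64 figure, a short sketch of the arithmetic, using only the numbers given in the letter:

```python
# Arithmetic behind the reserve-understatement claim above
# (all figures in billions of barrels, as given in the letter).
reserves_1983 = 36
reserves_2003 = 31
produced_1983_to_2003 = 64

implied_1983 = reserves_2003 + produced_1983_to_2003   # oil that must have existed in 1983
factor = implied_1983 / reserves_1983                   # roughly 2.64
print(f"Implied 1983 reserves: {implied_1983} billion barrels")
print(f"Understatement: factor of {factor:.2f}, or {factor - 1:.0%}")
```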
Technology is a continuous process, enabling discovery and production from places and by means unimaginable in the past. The most famous of oil-supply pessimists, M. King Hubbert, predicted correctly in 1956 that U.S. oil production would peak in 1970 and then rapidly decline. But the oil potential in the deep Gulf of Mexico and in Alaska and the impact of a host of technology developments assured continued U.S. oil development. Hubbert necessarily ignored all these factors, because he based his projections only on history.
Increasing oil prices also enable the exploitation of resources once thought uneconomical, and they accelerate the development and application of new technologies to find and produce oil, induce conservation by consumers, and drive the development and use of alternative fuel supplies. All help force supply and demand into balance.
Syndromic surveillance
Michael A. Stoto’s “Syndromic Surveillance” (Issues, Spring 2005) catalogs numerous reasons why these early warning systems for large outbreaks of disease will disappoint the thousands of U.S. counties and cities planning to implement them. Besides suffering from high rates of expensive false positive and useless false negative results, syndromic surveillance algorithms only work when very large numbers of victims show up within a short period of time. When they produce a true positive, they don’t tell you what you’re dealing with, so they must be coupled with old-fashioned epidemiology, which trims the earliness of the early warning. Most disturbingly, when challenged by historical or simulated data, they tend to fail as often as they pass. It seems that, like missile defense but on a vastly smaller scale, syndromic surveillance has become yet another public expenditure, made in the name of defending the homeland, on something that is not expected to work. The upside is that epidemiologists will have to be hired to investigate the syndromic surveillance alarms. They might find something interesting while investigating false positives and, if there were a bioterrorist attack, the extra workers would come in handy.
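A quick illustration of why false positives dominate when the event being watched for is rare. The sensitivity, daily false-alarm rate, and attack frequency below are purely hypothetical, chosen only to show the shape of the problem, not to describe any real surveillance system.

```python
# Hypothetical numbers illustrating the false-positive burden of alarm
# systems that watch for rare events; none of these figures come from
# the letter or from any real surveillance system.
sensitivity = 0.95          # assumed chance the system flags a true outbreak day
daily_false_alarm = 0.01    # assumed chance of an alarm on an ordinary day
outbreak_rate = 1 / 3650    # assume one real event per ten years of daily runs

p_alarm = sensitivity * outbreak_rate + daily_false_alarm * (1 - outbreak_rate)
p_real_given_alarm = sensitivity * outbreak_rate / p_alarm
print(f"Share of alarms that reflect a real outbreak: {p_real_given_alarm:.1%}")
# Roughly 2.5%: nearly every alarm triggers a labor-intensive investigation
# of a false positive.
```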
Syndromic surveillance systems monitor pre-diagnostic data (such as work absenteeism or ambulance dispatch records) to identify, as early as possible, trends that may signal disease outbreaks caused by bioterrorist attacks. Michael A. Stoto summarizes some of the main questions being asked about syndromic surveillance and the lines of research currently being pursued in order to supply the answers.
Syndromic surveillance is a relatively young field. It was given a push by the events of 9/11, which generated a huge amount of investment in developing systems designed to detect the effects of bioterrorist attacks. The acute necessity for counterterrorism measures meant that the proof of concept often required of new technologies before public investment is made was initially ignored. This has resulted in the proliferation of surveillance systems, all with similar aims and all now understandably looking to justify several years of investment.
In the absence of any known bioterrorist attacks since 2001, it is instructive to examine the benefits of syndromic surveillance (some mentioned in Stoto’s article), which go beyond bioterrorism preparedness. They include the early detection of naturally occurring infectious disease outbreaks (the majority being viral in nature); the provision of timely epidemiological data during outbreaks; the strengthening of public health networks as a result of the follow-up of syndromic surveillance signals; the identification of secular disease trends; the flexibility to be “tweaked” in response to new threats (such as forest fires, heat waves, or poisonings from new market products); and reassurance that no attack has taken place during periods of increased risk.
There is some discussion in Stoto’s article about the distinction between the generation of surveillance signals and the use of those signals thereafter. This distinction is crucial to the ultimate success of syndromic surveillance systems. A simple and widely accepted definition of disease surveillance is that it is “information for action.” In other words, although syndromic surveillance systems excel at generating signals from noisy data sets, the challenge is how to filter out the false positives and turn the remaining true positives into effective public health action (a reduction in morbidity and mortality). To facilitate public health action, surveillance teams must wrestle with a complex array of jurisdictional, legal, and ethical issues as well as epidemiological ones. A multitude of skills are needed to do this.
Those public health practitioners used to traditional laboratory surveillance may be resistant to, or simply not understand, the new technology. Education and careful preparation must be used to ensure that syndromic surveillance employees communicate effectively with colleagues in other areas, that timely but nonspecific syndromic signals are validated with more definitive laboratory analyses, and that those on the receiving end of surveillance signals know how to respond to them. It is essential that investigations of signals that turn out to be spurious, as well as those representing real outbreaks, be published so that a portfolio of best practice can be developed. Finally, there must be a recognition that the benefits of syndromic surveillance fall not only in infectious disease monitoring and biosurveillance but also in other areas, some potentially not yet realized. The discussion of this topic within this journal’s broad scientific remit is therefore welcomed.
Michael A. Stoto confidently straddles the ambivalent line between those who extol the virtues of syndromic surveillance and those who question the resources dedicated to its development. As a public health practitioner, I find it refreshing to have the concerns articulated so clearly, yet without minimizing a new public health tool that potentially offers actionable medical information.
The author makes several points that are beyond dispute, including that many city, county, and state health agencies have begun spending substantial sums of money to develop and implement syndromic surveillance systems, and that too many false alerts generated from a system will desensitize responders to real events. Our impatience to build systems is rooted in the desire to improve our nation’s capacity to rapidly detect and respond to infectious disease scourges. Although it is difficult to argue with this logic, Stoto appropriately clarifies cavalier statements, such as that syndromic surveillance is cost-effective because it relies on existing data. There is a high cost to these systems, too often without any evaluation of real benefits as compared to those of existing disease detection systems. Protection of patient privacy is another concern. Patient support for these systems could falter if their identities are not protected.
Public health departments need surveillance systems that are strategically integrated with existing public health infrastructure (many systems are outside public health, in academic institutions), and, more importantly, that are accompanied by capacity-building—meaning sufficient, permanent, and well-trained staff that can perform a range of functions. Ignoring these issues may prove to be self-destructive, as fractionated systems multiply and limited public health resources are spent to respond repeatedly to phantom alerts (or health directors begin to downplay alerts).
The offering of a possible supplementary “active syndromic surveillance” approach is appealing and strengthens the agreed-on benefits of syndromic surveillance: new opportunities for collaboration and data sharing between health department staff and hospital staff along with their academic partners, as well as reinforcement of standards-based vocabulary to achieve electronic connectivity in health care.
A stronger message could be sent. Although syndromic surveillance systems are intuitively appealing and growing each year, they do not relieve ambitious developers or funders of the responsibility to build systems that are better integrated (locally and regionally), properly evaluated, and staffed with the skilled people needed to detect and respond to alerts. We must strike a better balance between strengthening what is known to be helpful (existing infectious disease surveillance systems, improved disease and laboratory reporting, and distribution of lab diagnostic agents) and the exploration of new technology.
We wholeheartedly support Michael A. Stoto’s thesis that syndromic surveillance systems should be carefully evaluated and that only through continued testing and evaluation can system performance be improved and surveillance utility assessed. It is also undeniable that there is a tradeoff between sensitivity and specificity and that it is highly unlikely that syndromic surveillance systems will detect the first or even fifth case of a newly emerging outbreak. However, syndromic surveillance does allow for population-wide health monitoring in a time frame that is not currently available in any other public health surveillance system.
The track record of reliance on physician-initiated disease reporting in public health has been mixed at best. Certainly, physician reporting of sentinel events, such as occurred with the index cases of West Nile virus in New York City in 1999 and with the first case of mail-associated anthrax in 2001, remains the backbone of acute disease outbreak detection. However, the completeness of medical provider reporting of notifiable infectious diseases to public health authorities between 1970 and 1999 varied between 9 and 99 percent, and for diseases other than AIDS, sexually transmitted diseases, and tuberculosis, it was only 49 percent. The great advantage of using routinely collected electronic data for public health surveillance is that it places no additional burden on busy clinicians. The challenge for syndromic surveillance is how to get closer to the bedside, to obtain data of greater clinical specificity, and to enable two-way communication with providers who can help investigate signals and alarms.
The solution to this problem is not a return to “active syndromic surveillance,” which would require emergency department physicians to interrupt their patient care duties to enter data manually into a standalone system. Such a process would be subject to all the difficulties and burdens of traditional reporting without its strengths, and is probably not where the field is headed.
The development of regional health information organizations and the increasing feasibility of secure, standards-based, electronic health information exchange offer the possibility of real-time syndromic surveillance and response through linkages to electronic health records. The future of public health surveillance may lie in a closer relationship with clinical information systems, rather than a step away from them.
Michael A. Stoto’s article performs a useful service by subjecting a trendy new technology to evidence-based analysis and discovering that it comes up short. All too often, government agencies adopt new technologies without careful testing to determine whether they actually work as advertised and do not create new problems. Tellingly, Stoto writes that one reason why state and local public health departments find syndromic surveillance systems attractive is that “personnel ceilings and freezes in some states . . . have made it difficult for health departments to hire new staff.” Yet he points out later in the article that syndromic surveillance merely alerts public health officials to possible outbreaks of disease and that “its success depends on local health departments’ ability to respond effectively.” Ironically, because epidemiological investigations are labor-intensive and syndromic surveillance can trigger false alarms, the technology may actually exacerbate the personnel shortages that motivated the purchase of the system in the first place.
Problems with nuclear power
Rather than provide a careful exploration of the future of nuclear technologies, Paul Lorenzini’s “A Second Look at Nuclear Power” (Issues, Spring 2005) rehashes the industry’s own mythical account of its stalled penetration of the U.S. energy market.
Does Lorenzini really believe that the runaway capital costs, design errors, deficient quality control, flawed project management, and regulatory fraud besetting nuclear power in the 1970s and 1980s were concocted by environmental ideologues? The cold hard fact is that the Atomic Energy Commission, reactor vendors, and utilities grossly underestimated the complexity, costs, and vulnerabilities of the first two generations of nuclear power reactors. Indeed, the United States has a comparatively safe nuclear power industry today precisely because “environmentalist” citizen interveners, aided by industry whistleblowers, fought to expose dangerous conditions and practices.
Lorenzini shows little appreciation of the fact that during the past decade, the regulatory process has been transformed in the direction he seeks: It now largely shuts out meaningful public challenges. But even with the dreaded environmentalists banished to the sidelines—and more than $65 billion in taxpayer subsidies—Wall Street shows no interest. It has rushed to finance new production lines for solar, wind, and fuel cells in recent years, but not nuclear. Why? Lorenzini never addresses this key question. He fails to acknowledge the prime barrier to construction of U.S. nuclear plants for the past three decades: their exorbitant capital cost relative to those of other energy sources.
Meanwhile, Lorenzini devotes only one paragraph to assessing weapons proliferation and terrorism risks, conceding in passing that “reprocessing generates legitimate concerns.” But this concern is immediately overridden by his assertion that a key to solving the nuclear waste isolation problem is to “reconsider the reprocessing of spent fuel.” This is disastrous advice. There is no rational economic purpose to be served by separating quantities of plutonium now, when a low-enriched uranium fuel cycle can be employed for at least the next century at a fraction of the cost—and security risk—of a “closed” plutonium cycle.
The best way to determine whether nuclear power can put a dent in global warming is to foster competition with other energy sources on a playing field that has been leveled by requiring all producers to internalize their full environmental costs. For the fossil-fueled generators, that means a carbon cap, carbon capture, and emissions standards that fully protect the environment and public health. For the nuclear industry, it means internalizing and amortizing the full costs of nuclear waste storage, disposal, security, and decommissioning, while benefiting from tradeable carbon credits arising from the deployment of new nuclear plants. For coal- and uranium-mining companies, it means ending destructive mining practices. For renewable energy sources, it means new federal and state policies for grid access allowing unfettered markets for distributed generation.
Reasonable people ought to be able to agree on at least two points. The first is that the problem of long-term underground isolation of nuclear wastes is not an insuperable technical task. It remains in the public interest to identify an appropriate site that can meet protective public health standards.
The second point is that new nuclear plants should be afforded the opportunity to compete in the marketplace under a tightening carbon cap. Whether a particular technology also should enjoy a subsidy depends on whether that subsidy serves to permanently transform a market by significantly expanding the pool of initial purchasers, driving down unit costs and enabling the technology to compete on its own, or merely perpetuates what would likely remain unprofitable once the subsidy ends.
The nuclear power industry has already enjoyed a long and very expensive sojourn at the public trough. No one has convincingly demonstrated how further subsidizing nuclear power would lead to its market transformation.
Without resolution of its waste disposal, nuclear weapons proliferation, capital cost, and security problems, such a market transformation for nuclear power is highly unlikely, with or without the megasubsidies it is now seeking. It’s not impossible, but certainly unlikely on a scale that would appreciably abate the accumulation of global warming pollution. That would require a massive mobilization of new investment capital for nuclear power on a scale that seems improbable, even in countries such as Russia, China, and India, where nuclear state socialism is alive and well.
During the next decade, increased public investment in renewable energy sources and efficiency technologies makes more sense and would have a higher near-term payoff for cutting emissions.
“A Second Look at Nuclear Power” provides an example of why the debate over nuclear power is unlikely to be resolved in the context of U.S. policy any time soon. As someone who agrees with Lorenzini’s cost/benefit arguments for greater replacement of hydrocarbon-based energy sources with forms of nuclear power generation, I find parts of his article convincing.
At the same time, the article shows symptoms of the “talk past” rather than “talk with” problem, caused by the overly crude division of the nuclear power conversation into two warring camps (science versus environmentalists) that afflicts energy policy. Using technological determinist arguments to paint nuclear energy opponents as hysterical Luddites is tired and poorly thought out as a way of promoting nuclear power strategies. It is equivalent to labeling pro-nuclear arguments as being simply the product of lobbyist boosterism and “Atomic Energy for Your Business” techno-lust. In any case, it ignores the current opportunity for progress (which Lorenzini notes), given that several major environmental figures have announced a willingness to discuss nuclear options.
Put plainly, the nuclear power industry has earned a tremendous deficit of public trust and confidence. This is not the fault of mischaracterization of waste or an ignorant desire to return to a mythical nontechnological Eden. Previous incarnations of commercial nuclear power technologies largely overpromised and failed to deliver. Missteps were made in ensuring that the public perceptions of regulatory independence and vigilance would be fostered and that operators could be trusted to behave honorably. There are unfortunate, but rational, reasons to doubt industry commitment to full disclosure of hazards, as well as process fairness in siting past generations of reactors and waste facilities. The industry has also displayed overconfidence in engineering ability and a desire to hide behind the skirts of government secrecy and subsidies.
Until friends of nuclear power decide to enter into vibrant, open debate with those who disagree, I fear this problem of public policy will continue to be diagnosed as a struggle over rationality versus unreason. Are there groups who attempt to exploit emotions to promote their own interests instead of “the facts”? Of course: In the past, both the industry and its opposition have fallen short of the pure pursuit of truth. Lorenzini raises a number of excellent points as to why nuclear power must be part of future energy choices. However, progress depends on its supporters dealing with the real question: For many in our democracy, it does not matter what advantages we claim for nuclear power if they do not trust the messenger; after all, no one cares what diseases the snake-oil medicine salesman promises to cure.
The length to which Paul Lorenzini goes to selectively use data to support his position in “A Second Look at Nuclear Power” is astonishing. Some real numbers on renewables are called for. It is true that the total contribution from renewables has not increased significantly in the United States during the past 30 years, but that is because of the size and stability of the fraction due to wood and hydropower. It is also true that the International Energy Agency has predicted slow growth of wind, solar, and biofuels, but it has also been predicting stable oil and gas prices for the past 7 years, while those prices have been increasing at an average rate of 28 percent annually.
Biodiesel use in the United States has been doubling every five quarters for the past 2 years, and that growth rate is projected to continue for at least the next 3 years. Wind energy has been growing at roughly 30 percent annually for the past 5 years, both in the United States and globally, and those growth rates are expected to continue for at least the next decade. Solar has seen annual global growth of about 30 percent for the past decade, and General Electric is betting hundreds of millions of dollars that that growth rate will continue or increase over the coming decade. The cost of the enzyme cellulase, needed to produce cellulosic ethanol, has decreased by more than an order of magnitude in the past 3 years. The energy balance for corn ethanol currently exceeds 1.7, and that for cellulosic ethanol from waste woody materials may soon exceed 10. Brazil is producing more than 5 billion gallons of ethanol annually at a cost of about $0.60 per gallon, and the annual growth rate of ethanol production there is projected to remain above 20 percent for at least the next 5 years.
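Two pieces of compound-growth arithmetic implicit in these figures may help readers gauge their significance; the growth rates are the letter's, and the conversions below are simple arithmetic, not additional data.

```python
# Compound-growth arithmetic behind the renewables figures cited above.
# The doubling period and the 30 percent growth rate come from the letter;
# everything printed here is derived from them.
doubling_quarters = 5
annual_rate = 2 ** (4 / doubling_quarters) - 1
print(f"Doubling every {doubling_quarters} quarters is about {annual_rate:.0%} per year")

growth, years = 0.30, 10
print(f"{growth:.0%} annual growth sustained for {years} years multiplies output "
      f"by about {(1 + growth) ** years:.1f}x")
```

Doubling every five quarters works out to roughly 74 percent annual growth, and a decade of 30 percent annual growth multiplies output nearly fourteenfold.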
On the subject of nuclear power, only a small minority of scientists would take issue with Lorenzini on the subjects of regulation and waste storage, but spent fuel reprocessing and breeder reactors are not nearly as straightforward as he implies. In fact, despite four decades of worldwide efforts, the viability of the breeder reactor cycle has not yet been demonstrated; and without it, nuclear energy is but a flash in the pan. The International Atomic Energy Agency concludes that the total global uranium reserves (5 million tons) of usable quality are sufficient to sustain nuclear power plants, with a 2 percent annual growth rate, only through 2040. Others have recently concluded that even with near-zero growth, the high-grade ores (those greater than 0.15 percent uranium) will be depleted within 25 years. Moreover, 15 years after the high-grade ores are depleted, we’ll be into the low-grade ores (below 0.02 percent uranium), which may have negative energy balance and result in more carbon dioxide emissions (during ore refining, processing, disposal, etc.) than would be produced by gas-fired power plants. The price of natural uranium has tripled during the past 4 years, and it seems likely that its price will triple again in the next 6 to 10 years, as the finitude of this resource becomes more widely appreciated by those controlling the mines.
Childhood obesity
We have failed our children, pure and simple. Nine million children in America alone are heavy enough to be in immediate health danger. And obesity is but one of many problems brought on by poor diet and lack of physical activity. For reasons woven deeply into the economics, social culture, and politics of our nation, unhealthy food prevails over healthy choices because it is more available and convenient, better-tasting, heavily marketed, and affordable. It would be difficult to design worse conditions.
A few shining events offer up hope. One is the release of Preventing Childhood Obesity by the Institute of Medicine (IOM). “Preventing Childhood Obesity” by Jeffrey P. Koplan, Catharyn T. Liverman, and Vivica I. Kraak (Issues, Spring 2005), who are all key figures in the IOM report, captures much of the strong language and bold recommendations of the report. An authoritative report like this is needed to shake the nation’s complacency and to pressure institutions like government and business to make changes that matter.
Profound changes are needed in many sectors. I would begin with government. Removing nutrition policy from the U.S. Department of Agriculture is essential, given the conflict of interest with its mission to promote food. The Centers for Disease Control and Prevention would be the most logical place for a new home, although substantial funding is necessary to have any hope of competing with industry. Agriculture subsidies and international trade policies must be established with health as a consideration. The nation cannot afford a repeat of the tobacco history, where the federal government was very slow to react because of industry influence. Many other institutional changes are necessary as well, with food marketing to children and foods in schools as prime examples.
Much can also occur at the grassroots, but programs must be supported, nurtured, evaluated, and then disseminated. Considerable creativity exists in communities. Examples are programs to create healthier food environments in schools, community gardens in inner cities, and movements to build walking and biking trails.
In January of 2005, two government agencies released the newest iteration of the Dietary Guidelines for Americans. It is difficult to find a single adult, much less a child, who could list the guidelines. Yet vast numbers could recall that one goes cuckoo for Cocoa Puffs. Such is the nutrition education of our children.
We step in time and again to protect the health and well-being of children. Mandatory immunization, required safety restraints in cars, and restrictions on tobacco and alcohol promotion begin the list. In each case, there is a perceived benefit that outweighs the costs, including reservations about big government and perceptions that parents are being told how to do their job. Each citizen must decide whether the toll taken by poor diet and inactivity warrants a similar protective philosophy, but in so doing must consider the high cost that will be visited on America’s children by failing to act.
State science policy
Knowledge of science and technology (S&T) has become critical to public policymaking. With increasing incorporation of technology in society, there is hardly a public policy decision that does not have some element rooted in scientific or technical knowledge. Unfortunately, at the state and local level, there are very few mechanisms for bringing S&T expertise to the policymaking process. For states that regard themselves as the high-tech centers of the world, this is a contradictory and economically unhealthy situation.
Heather Barbour raises this point in “The View from California” (Issues, Spring 2005) and emphasizes the scarcity of politically neutral, or at least politically balanced, S&T policy expertise in the state government. I could not agree more with the observation, and I join in expressing a serious concern that good policymaking is almost impossible without S&T experts being deeply involved in providing sound advice during the legislative process. Bruce Alberts, in his final annual address to the National Academy of Sciences, remarked that many states are “no better than developing nations in their ability to harness science advice.”
Of all the high-technology states, it is California that is addressing the issue most aggressively and effectively. Currently, the California Council on Science and Technology (CCST) provides a framework for a solution, and CCST could be the solution with sustained commitment from the government.
CCST is California’s analog of the National Academies and for the past decade has been bridging the gulf between S&T experts and the state government. This is not an easy process. The two cultures are so disparate that there is often a fundamental communication gap to cross before any meaningful dialogue can occur. Policymakers want clear-cut decisions that will solve problems during their brief stay in office. Some typical responses to S&T advice are: “Don’t slow me down with facts from another study,” “Just give me the bumper-sticker version,” or “Whose budget will be affected by how much?” S&T experts, on the other hand, look at data, at possibilities, at both sides, at pros and cons. Although often at home in Washington, D.C., they do not usually have experience with the state government. More often than not, when newly appointed CCST council members meet in the state capital, it is their first trip there. In spite of these and other challenges, CCST, with a decade of experience in addressing these issues, has worked successfully with the state government on genetically modified foods, renewable energy, nanotechnology, hydrogen vehicles, homeland security, and the technical workforce pipeline.
A look at the agenda items from the latest CCST meeting (May 6 and 7, 2005, in Sacramento) shows the list of issues that CCST addressed:
- Who owns intellectual property created from research funded by the state?
- What are the ethical and social implications of stem cell research?
- How are we going to produce and retain enough science and math teachers to meet our needs?
- How can our Department of Energy and NASA labs be best used for getting Homeland Security technology into the hands of first responders?
- How can our health care information technology system better serve our citizens?
Already there is, or soon will be, state legislation in the works on all of these topics. Effective, technically and economically sound policies in these areas are important to California and to us all. CCST’s relationship with public policymakers has been excellent overall, but, because of the organization’s limited resources, insufficient to meet the increasing demands of an ever more technological society. This situation has not been missed by the National Academies, who are currently strengthening CCST capabilities while considering CCST counterparts in other states. I agree that the time is ripe for California and other states to build on the constructive steps already taken toward effective and robust S&T advising.
Heather Barbour’s article rehearses an oft-repeated lament about the lack of capacity of state governments—even of the mega-states like California, New York, Florida, and Texas—to grapple with the substantive and procedural complexities of science and technology (S&T) policy. What may have been lost behind the host of Barbour’s generally appropriate recommendations for capacity-building is the more subtle point about policymaking by the ballot box, exemplified by California’s stem cell initiative: that such “[c]omplex issues . . . are not suited to up or down votes.” Out of context, this claim might be taken as critical of any efforts to increase the public role in S&T policymaking. But Barbour’s point is that the stem cell initiative included such a large array of sub-issues, including the constitutional amendment, the financing scheme, the governing scheme, etc., that it was inappropriate to decide all of them with a single vote. Since the landslide, advocates of the initiative have deployed public support as a shield against critics of its governance aspects. But the complex nature of the initiative logically yields no such conclusion. Although we know that the voters of California prefer the entire pro-stem cell menu to the alternative, we have little idea what they would choose if given the options a la carte.
Genetics and public health
In “Genomics and Public Health” (Issues, Spring 2005), Gilbert S. Omenn discusses the crucial role of public health sciences in realizing “the vision of predictive, personalized and preventive health care and community health services.” Here, I expand on three critical additional processes needed to integrate emerging genomic knowledge to produce population health benefits: 1) the systematic integration of information on gene/disease associations, 2) the development of evidence-based processes to assess the value of genomic information for health practice, and 3) the development of adequate public health capacity in genomics.
Advances in genomics have inspired numerous case-control studies of genes and disease, as well as several large longitudinal studies of groups and entire populations, or biobanks. Collaboration will be crucial to integrate findings from many studies, minimize false alarms, and increase statistical power to detect gene/environment interactions, especially for rarer health outcomes. No one study will have adequate power to detect gene/environment interactions for numerous gene variants. Appropriate meta-analysis will increase the chance of finding true associations that are of relevance to public health.
To develop a systematic approach to the integration of epidemiologic data on human genes, in 1998, the Centers for Disease Control and Prevention (CDC) and many partners launched the Human Genome Epidemiology Network (HuGE Net) to develop a global knowledge base on the impact of human genome variation on population health. HuGE Net develops and applies systematic approaches to build the global knowledge base on genes and diseases. As of April 2005, the network has more than 700 collaborators in more than 40 different countries. Its Web site features 35 systematic reviews of specific gene/disease associations and an online database of more than 15,000 gene/disease association studies. The database can be searched by gene, health outcomes, and risk factors. Because of the tendency for publication bias, HuGE Net is developing a systematic process for pooling published and unpublished data through its work with more than 20 disease-specific consortia worldwide.
In addition, the integration of information from multiple disciplines is needed to determine the added value of genomic information, beyond current health practices. The recent surge in direct-to-consumer marketing of genetic tests such as genomic profiles for susceptibility to cardiovascular disease and bone health further demonstrates the need for evidence-based assessment of genomic applications in population health. In partnership with many organizations, the CDC has recently established an independent working group to evaluate and summarize evidence on genomic tests and identify gaps in knowledge to stimulate further research. We hope that this and similar efforts will allow better validation of genetic tests for health practice.
Finally, the use of genomic information in practice and programs requires a competent workforce, a robust health system that can address health disparities, and an informed public, all crucial functions of the public health system. Although the Institute of Medicine (IOM) identified genomics as one of the eight cross-cutting priorities for the training of all public health professionals, a survey of schools of public health shows that only 15 percent of the schools require genomics to be part of their core curriculum, the lowest figure for all eight cross-cutting areas. The CDC continues to promote the integration of genomics across all public health functions, including training and workforce development. Examples include the development of public health workforce competencies in genomics, the establishment of Centers for Genomics and Public Health at schools of public health, and the funding of state health departments.
In a 2005 report, the IOM defined public health genomics as “an emerging field that assesses the impact of genes and their interaction with behavior, diet and the environment on the population’s health.” According to the IOM report, the functions of this field are to “accumulate data on the relationships between genetic traits and diseases across populations; use this information to develop strategies to promote health and prevent disease in populations; and target and evaluate population-based interventions.” With the emerging contributions of the public health sciences to the study of genomic data, the integration of such information from multiple studies both within disciplines and across disciplines is a crucial methodological challenge that will need to be addressed before our society can reap the health benefits of the Human Genome Project in the 21st century.
Public health and the law
Lawrence O. Gostin’s “Law and the Public’s Health” (Issues, Spring 2005) is of significant interest to public health authorities. Gostin presents the framework of public health law, describing the profound impact it had on realizing major public health achievements in the 20th century, before the field of population health “withered away” at the end of the century. Revitalization of the field of public health law is needed now, as we witness unprecedented changes in public health work, particularly in the expansion of public health preparedness for bioterrorism and emerging infectious diseases and in increased awareness of health status disparities and chronic disease burden attributable to lifestyle and behaviors. Fortunately for the public health workforce, Gostin’s concise and salient assessment of public health law and its tools shows promise that once again law can play an important role in improving the public’s health.
The events surrounding September 11, 2001, and the effects of the anthrax attacks (both real and suspected hazards) catalyzed public health. Public health preparedness became a priority for health officials across the country, with an emphasis on surveillance and planning for procedures for mandated isolation and quarantine—measures that have been largely unused during the past 100 years. Not only were public health practitioners unprepared for these new roles, but the laws needed to provide the framework for these actions were outdated and inconsistent and varied by jurisdiction. Gostin eloquently outlines the importance of surveillance and quarantine/isolation while juxtaposing concerns for personal privacy and liberty.
These issues are not just theoretical exercises in legal reasoning or jurisprudence. The recent outbreak of severe acute respiratory syndrome (SARS) in Toronto is a case in point. Individuals with known contact with infectious patients were asked to remain at home (voluntary quarantine). Compliance was monitored and non-voluntary measures were used when necessary. In New York City, the involuntary isolation of a traveler from Asia suspected of having SARS generated widespread media attention.
Unfortunately, our processes to perform these functions are rusty. In reviewing the history of our jurisdiction, Syracuse, New York, we found that the only example of public health orders being used to control an epidemic (other than in cases involving tuberculosis) came during the influenza pandemic of 1918. Even then, the process was not clearly established. When a health officer issued an order to prevent public assembly of large groups, the mayor was asked by the press how it would be enforced. He simply answered, “we have the authority.” Despite the order, public assemblies continued; no record of enforcement action can be found. Almost 100 years later, we are in discussion with the chief administrative judge of the district to again examine the authority available and to draft protocols needed to implement mandated measures should they become necessary because of bioterrorism or the next pandemic flu.
Gostin, in this article and in other work, speaks to these issues. We eagerly await the upcoming revision of his book Public Health Law: Power, Duty, Restraint, which will further detail the complexities of protecting the public’s health through legal interventions.
Reducing teen pregnancy
I appreciate the many issues touched on by Sarah S. Brown in her interview with Issues in the Spring 2005 issue. As a former National Campaign staffer, I think the organization plays a significant role in bringing needed attention to teen pregnancy and the many lives affected by it. In particular, in the area of research, the campaign’s work is stellar.
However, the central policy message as articulated by Brown is that on one side of the policy divide are those who push safer sex and contraception and on the other are those who push abstinence only. This simply is not the case. Those of us who believe in a comprehensive approach do not choose either approach in isolation, but embrace both and encourage young people to abstain, abstain, abstain (emphasis intentional), but when they do become sexually active, to be responsible and protect themselves and their partners by using contraception. As Brown says, “there is no need to choose between the two.”
On the side of those who advocate abstinence only, however, pragmatism and moderation are completely absent from the discussion. When the issues of sexual activity and use of contraceptives are addressed, the worst possible picture is painted. One program tells young people that sex outside of marriage is likely to lead to suicide. Others repeatedly cite a resoundingly refuted study that says condoms fail more than 30 percent of the time, and others compare condom use to Russian roulette. Their bottom-line message: Condoms do not work.
The persistence of this extremism is all the more puzzling when the research about the efficacy of comprehensive programs is taken into account. According to research published by the National Campaign and authored by the esteemed researcher Doug Kirby, several studies of comprehensive programs say that they “delay the onset of sex, reduce the frequency of sex, reduce the number of sexual partners among teens, or increase the use of condoms and other forms of contraception” (Emerging Answers, 2001). In other words, there is no discernible downside to providing instruction that is comprehensive in approach. Every desired behavior is boosted by a comprehensive approach.
The same cannot be said for programs that focus only on abstinence. Eleven states have now conducted evaluations of their programs, and all of them have arrived at a similar conclusion: These programs have no long-term beneficial impact on young people’s behavior. Recently published data on the effect of virginity pledges—a key component of most abstinence-only-until-marriage programs—actually found negative outcomes. Among these outcomes are decreased use of contraception when sex occurs and an increase in oral and anal sex among pledgers who believe this type of “non-sex” keeps their virginity intact. Finally, nearly 90 percent of pledgers will still go on to have sex before they marry.
Sadly, the ongoing debate has much to do with politics and very little to do with the health of our young people. And unfortunately, the central policy position of the National Campaign exacerbates the situation by mischaracterizing the approach of comprehensive sexuality education proponents when, in fact, our message is the same as theirs: “There is no need to choose between the two.”