The Wild Card in the Climate Change Debate
The potential for abrupt, drastic climate changes on a regional scale is being underestimated by policymakers.
The debate on global warming, framed on one side by those who see a long-term gradual warming of global surface temperatures and on the other side by those who see only small and potentially beneficial changes, misses a very important possibility. A real threat is that the greenhouse effect may trigger unexpected climate changes on a regional scale and that such changes may happen fairly quickly, last for a long time, and bring devastating consequences. Yet, U.S. and global programs designed to study human-caused climate change do not adequately address this regional threat. The nation needs to develop a larger, more comprehensive, and better focused set of programs to improve our ability to predict regional climate change.
If emissions of greenhouse gases continue to grow as they have, several regional surprises are possible during this century. Summers may become much drier in the mid-continents of North America and Eurasia, with the potential to devastate some of the earth’s most productive agricultural areas. The Arctic ice cap may disappear, a profound blow to a unique and fragile ecosystem. The Atlantic Ocean currents that warm Europe may be disrupted. The West Antarctic Ice Sheet may collapse, leading to a rise in sea level around the world.
Regional changes such as these are seen in studies that examine the long-term climate effects that would accompany a quadrupling of atmospheric carbon dioxide, projected for the middle of the next century if current trends continue. Although each of these climate scenarios is individually unlikely, the chance that one or more major regional changes will occur is probably quite high. Numerous studies of past climate have shown a tendency of regional climate to shift rapidly from one state to a radically different one. This characteristic behavior of geophysical systems, the tendency to generate abrupt climate changes over limited areas, makes the threat of anthropogenic global change much greater and more urgent than it is currently perceived to be.
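To see why several individually unlikely events can add up to a substantial combined risk, consider the simple calculation sketched below. The individual probabilities are illustrative assumptions chosen for the example, not estimates from the climate literature, and the scenarios are treated as independent for simplicity.

```python
# Chance that at least one of several independent, individually unlikely
# regional changes occurs: 1 minus the product of the chances that each
# does NOT occur. The probabilities below are illustrative assumptions only.
scenario_probs = {
    "mid-continent summer drying": 0.10,
    "loss of the Arctic ice cap": 0.10,
    "Atlantic circulation shutdown": 0.05,
    "West Antarctic Ice Sheet collapse": 0.05,
}

p_none = 1.0
for p in scenario_probs.values():
    p_none *= 1.0 - p

print(f"Chance that at least one occurs: {1.0 - p_none:.0%}")  # about 27%
```

Even when each scenario is given only a 5 or 10 percent chance, the odds that at least one of them comes to pass approach one in four.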
Proclivity for abrupt change
To understand why large, abrupt climate change over limited areas is more likely than uniform gradual change over the whole globe, we need to examine the laws that govern the solids and fluids that envelop the earth. First, the earth's geophysical and biological systems operate in a nonlinear fashion, exemplified by the behavior of the wind itself: An air mass with high winds and cold temperatures will move toward a region of calm winds and warm temperatures. The place where the air masses converge is called a front, which compresses temperature differences that originally extended over 1,000 miles into a zone just 30 miles across. A second key characteristic of the earth's systems is internal feedback. For example, a large area of snow cover is nature's way of generating very low temperatures. Snow is both an excellent reflector of the sun's rays and an excellent radiator of energy away from its surface. Thus, the effect of snow over a significant area is to generate a large decrease in temperature in as little as a couple of days. When the air and ground are too warm for snow, the response to a forcing, such as the seasonal decrease of solar radiation, is gradual. However, a threshold is crossed when the ground and air become cool enough to support snow cover. All at once, much lower temperatures can occur and be sustained over large areas. We see this behavior in the weather every fall, when weeks of warm weather are terminated by a cold front that drops temperatures 30 degrees or more.
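The threshold effect can be captured in a minimal numerical sketch, given below. It is a toy illustration with assumed parameters, not any of the climate or weather models discussed in this article: temperature drifts downward gradually as the seasonal heating weakens, but once it crosses an assumed snow threshold, the reflectivity of the snow pulls the equilibrium down sharply and the change becomes abrupt.

```python
# Toy illustration of threshold-plus-feedback behavior (assumed parameters;
# not a real climate model). Temperature relaxes each day toward an equilibrium
# set by the weakening sun; once it falls below the assumed snow threshold,
# snow cover lowers that equilibrium sharply and the cooling becomes abrupt.
SNOW_THRESHOLD_F = 32.0   # assumed temperature below which snow cover persists
SNOW_DROP_F = 25.0        # assumed extra equilibrium cooling once snow is present
RELAX_RATE = 0.2          # fraction of the gap to equilibrium closed per day

temp_f = 55.0
for day in range(1, 61):
    equilibrium_f = 55.0 - 0.5 * day      # gradual seasonal decline in heating
    if temp_f < SNOW_THRESHOLD_F:         # threshold crossed: feedback switches on
        equilibrium_f -= SNOW_DROP_F
    temp_f += RELAX_RATE * (equilibrium_f - temp_f)
    if day % 10 == 0:
        print(f"day {day:2d}: {temp_f:5.1f} F")
```

For the first seven weeks of this run the temperature falls by only about half a degree per day; in the final ten days, once snow appears, it plunges by more than 25 degrees, the same kind of sudden break that a strong autumn cold front delivers.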
The proclivity for crossing the threshold from gradual to large change is typical of the climate system as well as the weather system, for the same reasons. The Arctic ice cap is a case in point. When spring arrives, the Arctic Ocean is covered with ice. By early summer, the periphery is open water, with breaks in the ice and pools of water on top of some of the ice. Sea ice rejects up to 80 percent of solar heating by reflection, whereas open water absorbs 80 to 90 percent. This is a powerful feedback: The open water captures heat from the continuous summer sunlight, which melts more ice and creates still more open water.
The melting of the Arctic ice may already be well under way. A study by University of Washington researchers found that the cap's average thickness at the end of summer declined from more than 10 feet in the 1950s to about 6 feet in the late 1990s. If the melting were to continue at this rate, we would expect the Arctic Ocean to become open water by about 2060. But as noted above, linear extrapolation almost never works in weather and climate prediction. If feedback effects are causing the current thinning, it is conceivable that the ice could be gone in a few decades. More typically, calculations such as those performed with the Geophysical Fluid Dynamics Laboratory (GFDL) climate model, which may underestimate the feedback effect, require a quadrupling of carbon dioxide and several hundred years to eliminate the ice pack.
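The arithmetic behind that 2060 estimate, and the reason feedback could move the date much closer, is easy to sketch. In the calculation below the endpoint years and the 5 percent annual acceleration of the melt rate are assumptions made for illustration; only the thickness figures come from the study cited above.

```python
# Linear extrapolation of the reported thinning: roughly 10 ft of ice in the
# mid-1950s down to about 6 ft in the late 1990s (endpoint years assumed).
thickness_1955_ft = 10.0
thickness_1997_ft = 6.0
rate_ft_per_yr = (thickness_1955_ft - thickness_1997_ft) / (1997 - 1955)  # ~0.1 ft/yr

years_left = thickness_1997_ft / rate_ft_per_yr
print(f"Linear extrapolation: ice-free around {1997 + years_left:.0f}")  # ~2060

# If the ice-albedo feedback steadily sped up the loss (an assumed 5 percent
# compounding increase in the melt rate each year, purely for illustration),
# the open-ocean date arrives decades sooner.
thickness_ft, year, rate = thickness_1997_ft, 1997, rate_ft_per_yr
while thickness_ft > 0:
    thickness_ft -= rate
    rate *= 1.05
    year += 1
print(f"With an accelerating melt: ice-free around {year}")  # in the late 2020s
```

The point is not the particular dates but the shape of the curve: a modest, compounding feedback cuts the time to an open ocean roughly in half.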
It would be hard to overstate the many ramifications of an open Arctic Ocean. Certainly, people will see advantages in livability (if warmer weather is regarded as better) and in greater opportunities for shipping, while also wondering about the geopolitical implications of Europe, Russia, Canada, and the United States sharing a new open ocean. One thing is certain: The biological makeup of the high latitudes of the Northern Hemisphere would be profoundly changed. Populations of humans, small and large mammals, fish and other ocean dwellers, and birds would face a rate of environmental change unlike any seen since the end of the last ice age. The potential wholesale disappearance of polar habitat and the associated loss of species that are highly adapted to the cold and ice are probably the most important issues.
Another scenario under which abrupt regional climate change could occur is a possible change in the circulation of the Atlantic Ocean. Currently, warm, salty water flows northward along the coasts of the United States and Europe into the far northern Atlantic on both sides of Greenland. Here, the water is cooled to the point that it becomes convectively unstable: The water at the top is denser than the water below and thus sinks deep into the ocean. This deep-water formation is a key to maintaining the northward flow of warm water; cessation of the process would bring the Atlantic conveyor belt to a halt. Such a halt appears to have occurred suddenly 12,000 years ago, resulting in a 15-degree temperature drop in Europe. Some climate models predict that it will happen again as the earth continues to warm. In this scenario, warm water sequestered in the Atlantic off the southeastern United States would warm the adjacent land, while a decrease in warm currents would cool the lands downwind of the North Atlantic (Europe). The conveyor belt's halt could occur, for example, with an average global surface temperature increase of only 3 degrees F, yet be consistent with a much greater regional change. As a result, an area of Europe could be 7 degrees colder than today, whereas an equal area of the United States could be 13 degrees warmer. This particular lose-lose scenario would be devastating for agriculture on both continents.
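The density argument at the heart of this scenario can be made concrete with a linearized relationship between seawater density, temperature, and salinity. The coefficients and the water properties below are rough, textbook-style illustrations rather than figures from this article or from any particular model.

```python
# Linearized equation of state: seawater gets denser as it cools and as it
# gets saltier. Coefficients and water properties are illustrative values.
RHO0 = 1027.0    # kg/m^3, reference density
ALPHA = 2.0e-4   # per deg C, thermal expansion coefficient (approximate)
BETA = 8.0e-4    # per psu, haline contraction coefficient (approximate)
T0, S0 = 10.0, 35.0

def density(temp_c, salinity_psu):
    return RHO0 * (1.0 - ALPHA * (temp_c - T0) + BETA * (salinity_psu - S0))

deep = density(3.0, 34.9)         # cold, slightly fresher water already at depth

# Warm, salty surface water arriving from the south is lighter than the
# deep water, so it rides on top...
print(density(8.0, 35.2) > deep)  # False: no sinking yet

# ...but after strong cooling near Greenland it becomes the denser layer and
# sinks, which is what keeps the conveyor belt running.
print(density(2.0, 35.2) > deep)  # True: convectively unstable, water sinks

# If melting ice and extra precipitation freshen the surface, the same cooling
# no longer makes the water dense enough to sink, and deep-water formation
# can shut down.
print(density(2.0, 34.6) > deep)  # False: too fresh to sink
```

The numbers are schematic, but the logic is the one described above: sinking depends on a delicate balance of temperature and salinity, and a modest freshening of the surface can tip that balance and halt the overturning.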
Some of these regional climate changes could interact with one another. It is worth asking why central Australia is dominated by desert, whereas the North American interior is the richest agricultural land in the world. Australia lies somewhat closer to the equator, beneath subtropical sinking air that increases surface heating and evaporation. The temperatures become so high that the moisture is baked out at the beginning of the growing season. In many of the global warming scenarios, this process would operate in the U.S. interior. For the great agricultural zone that extends from the eastern slope of the Rockies to the Atlantic, the GFDL model predicts a 30 percent reduction in soil moisture for a doubling of carbon dioxide (shortly after mid-century) and a 60 percent reduction for a quadrupling (in the next century). Loss of the Arctic ice cap would change the amount of cool air entering North America, whereas a warmer Atlantic Ocean would increase summer convection adjacent to the eastern half of the United States. Both of these changes would make North America more like Australia. It should be pointed out, however, that not all of the models predict the creation of a permanent dust bowl in the eastern United States; some predict increased precipitation.
I was once told that the 60 percent reduction in eastern U.S. summer soil moisture seen in the GFDL model was not a serious worry. “If it happens,” I was assured, “we’ll just have to irrigate the place.” Others may not take nature’s richest gift to the North American continent so lightly. The prospect of summer dryness, with its associated large impact on U.S. agriculture, should capture the attention of policymakers. And such a change would not be short lived. A reasonable timescale for this new dust bowl would be hundreds to thousands of years.
Currently, there is agreement neither among the models nor the scientific experts about the likelihood of these regional climate changes; they must be regarded as low-probability possibilities. Then again, it is unlikely that there will be a fire in your house in the middle of the night. Yet you protect yourself against this low-probability event by installing smoke detectors. Highly credible climate models could be our global change smoke detectors. The regional changes described above may have a low probability, but we should do everything possible to predict them while we have time to act.
Predicting climate change
Recently the Intergovernmental Panel on Climate Change (IPCC) issued its Third Assessment Report, which projected a global temperature increase of 2.5 to 10.4 degrees F between 1990 and 2100, based on scenarios of greenhouse gas emissions and a number of climate models. My experience as a weather forecaster leads me to believe that human intuition cannot compete with the millions or trillions of calculations that can be applied in a modern climate model. Yet the models produce disparate results, with one group predicting warming of 3 to 4 degrees F and another of 8 degrees. Differences in how the models handle internal feedbacks, such as the cooling caused by increasing cloudiness, are the reason for the divergent projections. With current capabilities, we can't know whether those who say that feedbacks such as clouds will keep global change minimal are correct. Weather predictions have improved over the years because of better observations, more realistic descriptions of the physics of clouds and radiation, and faster computers. A similar approach is the only viable route to the answers we need on global change.
It is my belief that reliable prediction of climate change can be achieved in the early decades of the 21st century. Climate, unlike weather, is not inherently unpredictable beyond a certain period. Weather is unpredictable because a very small change in initial conditions can be shown to result in a large change a short time later (a couple of weeks). Climate, even with its feedbacks, is a forced system that does reach an equilibrium based on the balance of its forcing factors, such as solar radiation. For example, St. Louis has a summer climate that is similar to the year-round climate of Iquitos, Peru, in the Amazon basin. However, it is easy to predict that St. Louis will be much colder than Iquitos in January; the decrease in solar radiation is a highly predictable forcing, augmented by feedback effects such as snow cover. Our regional climate models will be reliable when the estimates of forcing, such as that due to carbon dioxide, and the estimates of feedbacks are properly incorporated. It is both feasible and compelling to design a comprehensive global program to determine the future forcings and feedbacks that will cause regional climate changes.
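A highly simplified way to see the point about forced equilibrium is a zero-dimensional energy-balance calculation: sunlight absorbed in, infrared radiated out, and temperature adjusting until the two balance. The albedo, effective emissivity, mixed-layer heat capacity, and the roughly carbon-dioxide-doubling-sized forcing of 4 watts per square meter used below are standard textbook-style values, not results from the models discussed in this article.

```python
# Zero-dimensional energy-balance sketch: the global-mean temperature relaxes
# toward whatever value balances absorbed sunlight against emitted infrared.
# All parameter values are illustrative, textbook-style numbers.
SOLAR = 1361.0     # W/m^2, solar constant
ALBEDO = 0.30      # fraction of sunlight reflected back to space
EMISSIVITY = 0.61  # effective emissivity of the earth-atmosphere system
SIGMA = 5.67e-8    # W/m^2/K^4, Stefan-Boltzmann constant
HEAT_CAP = 4.2e8   # J/m^2/K, roughly a 100-meter ocean mixed layer
DT = 86400.0       # time step of one day, in seconds

def equilibrate(forcing_wm2, years=50, temp_k=288.0):
    """Step the global-mean temperature forward under a constant extra forcing."""
    for _ in range(int(years * 365)):
        absorbed = SOLAR * (1.0 - ALBEDO) / 4.0 + forcing_wm2
        emitted = EMISSIVITY * SIGMA * temp_k**4
        temp_k += DT * (absorbed - emitted) / HEAT_CAP
    return temp_k

base = equilibrate(0.0)    # settles near 288 K (about 59 F) within a few decades
warmed = equilibrate(4.0)  # adds ~4 W/m^2, roughly the forcing of doubled CO2
print(f"Warming without feedbacks: {warmed - base:.1f} K")  # on the order of 1 K
```

No matter where the temperature starts, it settles at the value set by the balance of forcings; change the forcing and a new, predictable equilibrium follows. The hard part, and the reason the models disagree, is representing the feedbacks (clouds, snow, water vapor) that amplify or damp this bare response.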
Fortunately, the science and technology needed to provide answers are advancing rapidly. Progress will require directed and intensive efforts in three main areas: observations, physical understanding (resulting from research), and modeling. In each of these areas, the sum of global efforts is substantial but far below what the urgency of the threat demands.
The importance of in situ monitoring
There are both strengths and weaknesses in the current global observational system. Since National Aeronautics and Space Administration (NASA) scientist James Hansen's eye-opening congressional testimony about global warming during the hot, dry summer of 1988, the United States and other countries have spent about $3.25 billion per year on research and equipment designed to understand global change. About 60 percent of this has gone into satellite programs. In FY 1999, the United States spent about $1.85 billion on global change, with NASA's earth-observing satellite program funded at $1.1 billion and the National Oceanic and Atmospheric Administration's operational geostationary and polar orbiters funded at $500 million. Satellites have the advantage of perspective: A geostationary satellite continuously scans an entire hemisphere; a polar orbiter looks at the entire earth sequentially. It is eminently reasonable, then, that the political system responded by putting funds into the earth-observing satellite programs. These investments have provided rich rewards, including the continuous tracking of global sea surface temperatures, the ability of true-color satellite sensors to determine ocean and land surface biology over much of the globe, and microwave sensors that can determine average temperatures for deep atmospheric layers and distinguish open water from ice.
The great strength of satellites, their overarching view of the planet, is counterbalanced by their great weakness: They are far from the substances (air, land, water) they are trying to measure. Scientifically, the best combination is often to use the satellite and an in situ sensor (one that is in the air or the ocean), with the satellite painting a broad and comprehensive picture and the in situ sensors providing calibration and necessary detail. For example, the top and horizontal size of a cloud of dust is easy to determine from a satellite, but only an in situ sensor such as an aircraft can determine the depth of the cloud and the size and type of dust particles. In trying to determine the fate of the Arctic ice, only in situ sensors are capable of measuring the most important geophysical parameters: the detailed temperature, humidity, and wind in the boundary layer just above the ice, and the temperature and interaction of the water immediately below the ice.
In recent years, a variety of in situ sensors have been developed, though their use has been funded stingily compared with satellites. In the ocean, tethered surface buoys and autonomous vehicles that cruise below the surface are beginning to be used to measure variables such as temperature, salinity, and currents at depth. In the atmosphere, new unmanned aircraft and balloons that can cruise the stratosphere for months are being deployed to drop instruments at various locations and take measurements in both the air and the ocean below. If used more extensively, these in situ systems could provide a powerful boost to our understanding of the earth's weather, climate, and chemistry.
Although we do have a global system of balloons that take atmospheric measurements, it was designed for weather forecasting, not climate prediction. Nevertheless, it is the best tool we have for detecting climate trends above the earth's surface. However, these measurements have been taken mainly in rich countries, leaving the great bulk of the earth's area (the oceans, the polar regions, Africa, and South America) essentially unobserved. Trying to discern climate trends with the existing network is like a drunk looking for his lost wallet beneath the only lamppost in the mile between his house and the bar. It is now possible to field a global array of stratospheric aircraft and balloons that drop climate-quality instruments at a few hundred locations distributed evenly over the globe. Such a system could be in place by the time of the next polar orbiters, scheduled for late in the decade, although so far it has received minimal support. Development and operation of such a system would cost about $1 billion per year, which could be shared among the leading industrial nations. If we are going to understand regional climate change, this system is imperative. In addition to its value for climate prediction, the in situ system would also significantly improve weather forecasts.
The program discussed above differs greatly from the existing and planned efforts. Currently, many programs to measure regional change are episodic; an expedition is mounted to a geographic area of interest, such as the tropical Pacific or the Antarctic, and the data are collected for a year or so. Although these are certainly worthwhile, they do not capture the key attribute of interest: the change with time of the global state. Nor is it adequate to take measurements only where scientists expect problems; changes may occur where they are least expected. The global system operates as a giant clock, with toothed wheels of many sizes, each physically connected to the others. Thus, prediction of change for the United States will require knowledge of change as it occurs across the globe.
Bolstering research and modeling
Jerry Mahlman, the recently retired director of GFDL, has for years spoken eloquently about the dangers of climate change. One of his most important points bears repeating: The political system seems more willing to invest in hardware than in “brainware.” In other words, support for scientists is often crowded out by the investment in big systems. The investment in climate research, now about $800 million per year, could usefully be doubled. If our goal is much faster and better understanding of global change, it is clear that more support for scientists must be forthcoming.
The final leg of the three-legged stool needed to support prediction of regional climate change is modeling. The exponential growth of computer power has spurred vast improvements in climate models, but even now physical processes are represented far more simply in climate models than in weather models. New efforts that focus on modeling regional change, such as the community efforts led by the National Center for Atmospheric Research, would benefit from substantial increases in resources.
Above all, a directed program of research focusing on regional climate change is essential. Although the U.S. Global Change Research Program has coordinated an excellent suite of programs in a variety of federal agencies, the end result has been something akin to a partially painted wall: Many important things are being left undone because of limits in agency mission, funding, or interest. Research whose goal is to achieve understanding is different from a directed program whose goal is to solve a specific problem. The programs that exist aren't wrong; they are simply inadequate for the new phase we are entering. Excellent approaches to improving climate prediction are presented in the National Research Council report The Science of Regional and Global Change.
The dangers of climate change, when seen as a gradual and mild warming over the coming centuries, fit with the current suite of loosely coordinated, discovery-driven programs. If instead the danger could be closer at hand and more profound than previously appreciated, then new programs commensurate with the threat should be initiated. The obvious solution is to identify within government an organization that would have comprehensive, overall responsibility for long-term climate prediction. Such an entity should be funded to provide a complete and balanced approach: It must ensure that the whole wall is painted. Historically, the route to such a capability has been evolutionary. For example, current progress in making seasonal predictions, such as the El Niño forecast of 1998, is the correct approach to learning how to make credible longer-term prognostications. A strong U.S. program to expedite reliable prediction, complementing the international programs coordinated by the World Meteorological Organization and the United Nations Environment Program, is probably the best action the United States could take at the current time.
It will require far more certainty than now exists for democratic societies to make the large investments needed to switch to carbon-free economies. The most important thing to be done in the next 20 years is to develop a reliable capability to predict in detail how the earth's atmosphere will respond to various scenarios of greenhouse gas emissions. Our current set of programs will not deliver the climate prediction capabilities we will need. The more directed and intensive program described above, with in situ sensing to complement the global satellite system, greater support for research, and a focused modeling effort, can deliver the reliable answers we need in time to act and, if necessary, to change the outcome of the 21st century.