End of the world?
Review of
Our Final Hour: How Terror, Error, and Environmental Disaster Threaten Humankind’s Future in this Century, On Earth and Beyond
by Martin Rees. New York: Basic Books, 2003, 228 pp.
The lights in my office at the University of Maryland blinked once and then went out. It was 4:11 p.m. on Thursday, August 14. The Great Blackout of 2003 had just turned off the lights across the entire northeastern United States and parts of Canada. It was the most extensive electrical blackout in North American history. The state of Maryland, however, lies below the southern boundary of the blacked-out grid, and elsewhere in the state people were going about their normal business, unaware of the chaos farther north. Yet somehow, one finger of the Great Blackout reached beyond the northeastern grid, down to the sprawling University of Maryland campus, and switched off the lights, while outside the campus the lights continued to burn. No one seemed to quite understand how it had happened.
The power grid could be a metaphor for our modern scientific world. Its purpose is clear: unlike other utilities, electric power cannot be stored, so the power company must generate at every instant exactly the amount of power being used, literally responding to every electrical switch that is thrown. Linking separate power companies in a vast grid exploits the statistics of large numbers to smooth out fluctuations in demand, reducing the likelihood of local blackouts.
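That statistical argument is easy to check numerically. The following is a minimal sketch, with load figures invented purely for illustration rather than taken from any real grid: treating each customer as an independent random load, the relative fluctuation of the combined demand falls roughly as one over the square root of the number of customers.

```python
import numpy as np

rng = np.random.default_rng(0)

def relative_fluctuation(n_customers, n_samples=1_000):
    """Relative fluctuation of total demand for n independent customers."""
    # Hypothetical loads: mean 1 kW, standard deviation 0.5 kW per customer.
    loads = rng.normal(loc=1.0, scale=0.5, size=(n_samples, n_customers))
    total = loads.sum(axis=1)
    return total.std() / total.mean()

for n in (10, 1_000, 10_000):
    print(f"{n:>6,} customers: relative fluctuation ~ {relative_fluctuation(n):.4f}")
```

Pooling ten thousand customers makes the combined demand roughly thirty times steadier than pooling ten, which is the whole rationale for the grid. What the statistics do not capture is the failure modes of the network that does the pooling.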
Unfortunately, the power grid, like the human body, is the product of evolution rather than design. And like the body, the grid is burdened with vestigial organs that no longer serve a clear purpose and nerve connections that are no longer relevant. The power grid has become so complex that no one fully understands it. Instead of absorbing a relatively small local disruption in Ohio, the grid let the problem cascade through its entire extent, shutting down state after state. Has modern technology made civilization too complex to manage?
A leading scientist, Sir Martin Rees, Britain’s Astronomer Royal, gives us his sober assessment of the prospects for the survival of modern civilization in Our Final Hour. The more technologically complex the world becomes, the more vulnerable it is to catastrophe, whether from the misjudgments of the well-intentioned, from deliberate acts of terrorism, or from natural disasters and simple human blunders. The irony is that even the elaborate defenses we construct to protect ourselves could become the instruments of our destruction. Edward Teller once proposed a vast armada of nuclear missiles in parking orbits, weapons at the ready, to be dispatched on short notice to obliterate any asteroid that threatened Earth. Most people would rather take their chances with the asteroid.
Rees’ perspective is that of a cosmologist, but he is above all human: “The most crucial location in space and time,” he writes, “could be here and now. I think the odds are no better than fifty-fifty that our present civilization on Earth will survive to the end of the present century. Our choices and actions could ensure the perpetual future of life (not just on Earth, but perhaps far beyond it, too). Or in contrast, through malign intent, or through misadventure, twenty-first century technology could jeopardize life’s potential, foreclosing its human and posthuman future. What happens here on Earth, in this century, could conceivably make the difference between a near eternity filled with ever more complex and subtle forms of life and one filled with nothing but base matter.”
What follows is a set of brilliant essays forming more or less independent chapters that could be read in any order. He does not ignore the continued threat of nuclear holocaust or of a collision between Earth and an asteroid, but we have lived with those threats for a long time. His primary focus is on 21st-century hazards, such as bioengineered pathogens, out-of-control nanomachines, or superintelligent computers. These new threats are difficult to assess because they don’t yet exist and may never exist. He acknowledges that the odds of self-replicating nanorobots or “assemblers” getting loose and turning the world into a “grey-goo” of more assemblers are remote. After all, we’re not close to building a nanorobot, and perhaps it can’t be done. But this, Rees points out, is “Pascal’s wager”: the evaluation of a risk requires that we multiply the odds of the event (very small) by the number of casualties if it occurs (perhaps the entire human population).
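The arithmetic behind that wager is worth making explicit. In the sketch below, both the probability and the population are pure inventions, chosen only to show that a vanishingly small chance of total catastrophe can still carry a non-negligible expected cost.

```python
# Pascal's-wager arithmetic with invented numbers, purely for illustration.
p_catastrophe = 1e-9        # assumed odds per year of the runaway scenario
population = 6_300_000_000  # roughly the world population in 2003

expected_casualties_per_year = p_catastrophe * population
print(f"Expected casualties per year: {expected_casualties_per_year:.1f}")
# -> 6.3: even a one-in-a-billion yearly risk of losing everyone
#    "costs" several statistical lives for every year it is tolerated.
```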
Personally, I think the grey-goo threat is zero. We are already confronted with incredibly tiny machines that devour the stuff around them and turn it into replicas of themselves. There are countless millions of these machines in every human gut. We call them bacteria and they took over Earth billions of years before humans showed up. We treat them with respect or they kill us.
So why isn’t Earth turned into grey-goo by bacteria? The simple answer is that they run out of food. You can’t make a bacterium out of just anything, and they don’t have wings or legs to go somewhere else for dinner. Unless they can hitch a ride on a wind-blown leaf or a passing animal, they stop multiplying when the local food supply runs out. Assemblers will do the same thing. You should find something else to worry about.
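The food-supply argument can be phrased as a toy model. The sketch below is not a claim about real nanotechnology, and every number in it is invented; it simply shows that a population of replicators that must pay for each copy out of a finite local feedstock grows exponentially and then stops dead.

```python
# A toy model of self-replicators limited by a finite local feedstock.
# All numbers are invented; the point is only the shape of the curve:
# exponential growth that halts abruptly when the raw material runs out.
def replicate(population=1.0, feedstock=1_000_000.0, cost_per_copy=1.0,
              rate=2.0, steps=25):
    history = []
    for _ in range(steps):
        wanted = population * (rate - 1.0)              # copies attempted
        built = min(wanted, feedstock / cost_per_copy)  # copies the food allows
        population += built
        feedstock -= built * cost_per_copy
        history.append((population, feedstock))
    return history

for step, (pop, food) in enumerate(replicate(), start=1):
    if step % 5 == 0:
        print(f"step {step:2d}: population {pop:12,.0f}  feedstock left {food:12,.0f}")
```

Real bacteria escape this trap only by hitching a ride to fresh food, which is precisely the point: replication is cheap, transport is not.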
But that’s just my vote. As Rees puts it, “These scenarios may be extremely unlikely, but they raise in extreme form the issue of who should decide, and how, to proceed with experiments that have a genuine scientific purpose (and could conceivably offer practical benefits), but that pose a very tiny risk of an utterly calamitous outcome.” The question of who should decide, I would argue, is the most important issue raised by this issue-filled book.
Rees recounts the opposition to the first run, at Brookhaven National Laboratory, of the Relativistic Heavy Ion Collider (RHIC). The accelerator is meant to replicate, in microcosm, conditions that prevailed in the first microsecond after the Big Bang, when all the matter in the universe was squeezed into a quark-gluon plasma. However, some physicists raised the possibility that the huge concentration of energy achieved in RHIC’s collisions could initiate the destruction of Earth, or even of the entire universe. Every scientist agreed that this was highly unlikely, but that wasn’t very comforting to the nonscientists whose taxes paid for RHIC.
The universe survived, but this sort of question will come up again and again. Indeed, if we try hard enough we can probably imagine some scenario, however unlikely, that could conceivably lead to disaster in almost any experiment. Rees urges us to adopt “a circumspect attitude towards technical innovations that pose even a small threat of catastrophic downside.”
But excessive caution would put the brakes on science, and that too has a downside. The greatest natural disasters in our planet’s history were the mass extinctions produced by asteroid impacts. If astronomers were to discover a major asteroid headed for a certain collision with Earth in the 22nd century, we could, for the first time in history, make a serious attempt to deflect it. Had HIV appeared just a decade earlier, we would have been unable to identify the infection until the full-blown symptoms of AIDS appeared. The AIDS epidemic, terrible as it has been, would have been far, far worse.
“The theme of this book,” Rees concludes, “is that humanity is more at risk than at any earlier phase in its history. The wider cosmos has a potential future that could even be infinite. But will these vast expanses of time be filled with life, or as empty as the Earth’s first sterile seas? The choice may depend on us, in this century.”