Forum – Summer 2011
The climate/security nexus
Richard A. Matthew has published “Is Climate Change a National Security Issue?” (Issues, Spring 2011) at just the right time. I would answer his question with a resounding “yes”; however, his piece clarifies why the emerging field of analysis on the climate/security nexus is in need of fresh thinking.
The critics Matthew cites are correct in saying that the past literature linking climate change and security suffered from weak methods. For those of us analyzing this topic years ago, it was a conceptual challenge to define the range of potential effects of the entire world changing in such rapid (on human time scales) and ahistorical ways.
We relied (too) heavily on future scenarios and therefore on the Intergovernmental Panel on Climate Change’s consensus-based and imprecise scientific projections. Much early writing was exploratory, as researchers attempted simply to add scope to a then-boundless debate. Serious consideration of causal relationships was often an afterthought. Even today, the profound lack of environmental, social, and political data needed to trace causal relationships consistently hampers clear elucidation of climate/security relationships.
Yet these weaknesses of past analysis are no cause to cease research on climate change and security now, in what Matthew accurately describes as its “imperfect” state. Today’s danger is that environmental change is far outpacing expectations. Furthermore, much of the early analysis on climate change and security was probably too narrow, and off track in looking primarily at unconventional or human security challenges at the expense of more precisely identifying the U.S. national security interests at stake. This is reflected in Matthew’s categorization of previous research as focused on three main concerns: climate change affecting national power, diminishing state power, or driving violent conflict. The author accurately describes the past literature, and his categories still resonate, especially in cases in which local-level unrest is affecting U.S. security goals in places such as Somalia, Afghanistan, and Mexico.
Still, the author’s categories are not broad enough to capture the climate change–related problems that today are the most worrisome for U.S. security (a weakness not in Matthew’s assessment, but in past research). Climate change is contributing to renewed interest in nuclear energy and therefore to materials proliferation concerns. Environmental change has already affected important alliance relationships. Natural resources top the list of strategically important issues to China, the most swiftly ascending power in the current geopolitical order. Fear of future scarcity and the lure of future profit from resources are amplifying territorial tendencies to the point of altering regional stability in the Arctic, the South China Sea, and elsewhere.
Matthew’s article makes an important contribution in serving as the best summary to date of what I’d call the first wave of climate change and national security research. I’m hopeful that it will further serve as a springboard for launching a much-needed second wave—one that is more methodologically rigorous and gives greater consideration to conventional U.S. security challenges.
A national energy plan
The nation is fortunate to have Senator Jeff Bingaman, an individual who is both knowledgeable about and committed to energy policy, as chairman of the Senate Energy and Natural Resources Committee. In “An Energy Agenda for the New Congress” (Issues, Spring 2011), Bingaman identifies key energy policy initiatives, and he is in a position to push for their adoption.
Bingaman highlights four initiatives that any informed observer would agree deserve government support: robust energy R&D, a domestic market for clean energy technologies, assistance to speed commercialization of new energy technologies, and support for related manufacturing technologies. However, the recent debate on the 2011 continuing budget resolution, the upcoming deliberation on the 2012 budget, and the discussion of how to reduce the deficit highlight Bingaman’s silence on how much money he believes the government should spend on each of these initiatives and on the extent of the public assistance he is advocating. For example, there is a vast difference in the financial implications of supporting deployment rather than demonstration of new energy technologies, and of supporting the building of manufacturing capacity rather than the development of new manufacturing technologies.
I am less enthusiastic than Bingaman about a renewable energy standard (RES) for electricity generation. The RES would increase electricity bills for consumers but, without comprehensive U.S. climate legislation and an international agreement for reducing greenhouse gas emissions, would do little to reduce the risks of global climate change.
Effective management is essential to realize the benefits of each of these initiatives. Bingaman proposes a Clean Energy Deployment Administration as “a new independent entity within DOE” to replace the current loan guarantee programs for the planning and execution of energy technology demonstration projects. Here it seems to me that Bingaman wants to have it both ways: an independent entity that has the flexibility, authority, and agility to carry out projects that inform the private sector about the performance, cost, and environmental effects of new energy technologies, but one that is also part of the government, with its inevitable personnel and procurement regulations and annual budget cycles, and that remains susceptible to the influence of members of Congress and their constituencies. I advocate instead the creation of a quasi-public Energy Technology Corporation, funded by a one-time appropriation, because I believe it would be more efficient and would produce information more credible to private-sector investors.
But, in my view, something is wrong here. Congressional leaders should not be expected to craft energy policy; that is the job of the Executive Branch. Congressional leaders are supposed to make judgments between different courses of action, whose costs and benefits are part of a comprehensive energy plan formulated by the Executive Branch and supported by economic and technical analysis. The current administration does not have such a national energy plan, and indeed no administration has had one since President Carter. The result is that members of Congress, depending on their philosophies and interests, come forward with initiatives, often at odds with each other, with no action being the probable outcome. Bingaman’s laudable article underscores the absence of a thorough, comprehensive, national energy plan.
A better process for new medical devices
The U.S. Food and Drug Administration’s (FDA’s) public health mission requires a balance between facilitating medical device innovation and ensuring that devices are safe and effective. Contrary to Paul Citron’s assertion in “Medical Devices: Lost in Regulation” (Issues, Spring 2011), applications to the FDA for breakthrough technologies increased 56% from 2009 to 2010, and FDA approvals for these devices remained relatively constant.
In fact, our device review performance has been strong: 95% of the more than 4,000 annual device applications that are subject to performance goals are reviewed within the time that the FDA and the device industry have agreed on.
In the few areas where we don’t meet the goals, our performance has been improving. Part of the problem lies with the quality of the data submitted to the FDA. For example, 70% of longer review times for high-risk devices involved poor-quality clinical studies, with flaws such as failure to meet primary endpoints for safety or effectiveness or significant loss of patients to follow-up. This submission of poor-quality data is inefficient for both the FDA and industry and unnecessarily diverts our limited resources.
Citron attempts to compare the European and U.S. systems. But unlike the FDA, Europe does not report review times, publish the basis for device approvals, provide access to adverse event reports, or maintain a publicly available database of marketed devices, making it difficult to draw meaningful comparisons.
Some high-risk devices do enter the market first in Europe in part because U.S. standards sometimes require more robust clinical data. The FDA requires a manufacturer to demonstrate safety and effectiveness, a standard Citron supports. Europe bases its reviews on safety and performance.
For example, if a manufacturer wishes to market a laser to treat arrhythmia in Europe, the manufacturer must show only that the laser cuts heart tissue. In the United States, the manufacturer must show that the laser cuts heart tissue and treats the arrhythmia. This standard has served U.S. patients well but represents a fundamental difference in the two systems.
The FDA sees data for all devices subject to premarket review and can, therefore, leverage that information in our decisionmaking; for example, by identifying a safety concern affecting multiple manufacturers’ devices. In Europe, manufacturers contract with one of more than 70 private companies to conduct their device reviews, limiting the perspective of individual review companies.
Just as the European Commission has recognized shortcomings of its regulatory framework, so too has the FDA acknowledged limitations in our premarket review programs, and we are addressing them. Earlier this year, we announced 25 actions we will take this year to provide industry with greater predictability, consistency, and transparency in our premarket review programs, as well as additional actions to facilitate device innovation.
The solution is not to model the U.S. system after that of Europe—both have their merits—but to ensure that the U.S. system is both rigorous and timely, striking the right balance between fostering innovation and approving devices that are safe and effective. Ultimately, this will best serve patients, practitioners, and industry.
The device industry’s push for streamlined approval is not about patient welfare. Rather, it is about decreasing testing time and the expense of new products and increasing sales sooner. Yet to state the obvious, that this comes at the expense of first fully proving safety for patients, would be an unacceptable argument for industry to make.
Lax FDA oversight, due to political pressure from multibillion-dollar companies (such as Medtronic, the company from which Paul Citron retired as Vice President of Technology Policy and Academic Relations), has allowed patient safety to take a back seat to profits, at the cost of patient injuries and deaths. Downplaying “occasional device recalls”—as Citron calls them—is thoughtless and belies the truth. They are, in fact, a major problem. The current “recall” of the DePuy ASR hip replacement, unlike a car recall, in which a new part is merely popped in, will result in thousands of revision operations with undisputedly worse results, if not the “occasional” deaths from reoperation on the elderly. The “recalls” of Vioxx and Avandia came only after an estimated 50,000 deaths had occurred.
The expense of testing should burden industry, not patients. Devices should be tested for three to five years, not two. As the joke goes, “it’s follow-up that ruins all those good papers,” though Citron would have us invoke Voltaire’s “the perfect is the enemy of the good” to justify substandard testing for patient safety, based on European experiences. (Does he mean like Thalidomide?) Complications recognized early should be bright red flags, not minimized statistically to keep the process moving. Study redesigns with delays are better than injuring or killing patients. In the past, numerous products with early complications were pushed through, resulting in catastrophic problems and ultimately recalls. This is unacceptable.
The FDA should not cater to industry’s push for shortcuts. FDA advisory panels should be free of industry consultants; nonconflicted experts are not hard to find, and their recommendations should be followed by the FDA. The spinal device X-Stop was voted down by the FDA advisory panel yet was nevertheless approved by the FDA without explanation; its revision rate is now 30% after longer follow-up. Also, devices approved for one use via the 510(k) pathway, such as cement restrictors for hip replacements, continue to be used for other purposes, such as spinal fusion cages, despite specific FDA warnings not to do so; this should not be allowed.
Corporate scandals abound, and Citron’s Medtronic has had more than its fair share of headlines. Such occurrences support the notion that patient safety is not every company’s highest priority, despite the public relations rhetoric.
All should remember that the purpose of the FDA first and foremost is the safety of patients—nothing else—and the process of approval needs to be more rigorous, not less.
To be sure, Citron never contends that the FDA and the regulatory process are unnecessary, illegitimate, or inappropriate. Nor does he claim that “all things and all folks FDA are evil.” His contention is simply that the current state of affairs is too “complex and expensive,” and that this leads to unnecessary delay in getting new devices and innovations to our sick patients. Certainly some demonize the FDA, and just as we should stamp out that type of rhetoric when it is leveled at the “medical-industrial complex,” we should level only constructive criticism at the FDA. However, having participated in the development of many implantable devices, ranging from pacemakers to defibrillators, hemodynamic monitoring systems, left ventricular remodeling devices, and artificial hearts, I agree with
Citron that the regulatory process has become a huge and overly burdensome problem in the United States. This has resulted in the movement of device R&D outside of the United States. Citron’s argument that this is detrimental to our patients in need of new, novel, perhaps even radical, devices as well as to our academic and business community is on target. Citron is not arguing for ersatz evaluation of devices but for a more reasoned development perspective. An approach that gives the utmost consideration to all parties, including the most important one, the ill patient, needs to be developed.
There is a way forward. First, rational, thoughtful, fair, and evidence-based critique of the present system (as Citron has offered), with reform in mind, is mandatory. Next, adopt the concepts and approach to device study and development that the INTERMACS Registry has taken. Though perhaps as the Study Chair for this Interagency Registry of Mechanical Circulatory Support I have a conflict of interest, I see it as an exemplary model of academic and clinical cooperation with our federal partners, the National Institutes of Health/National Heart, Lung, and Blood Institute, the FDA, and the Centers for Medicare & Medicaid Services, to develop a common understanding of the challenge. A constructive and collegial, though objectively critical, environment has been created in which the FDA has worked closely with the Registry and industry to harmonize adverse event definitions, precisely characterize patients undergoing device implantation, and create the high-quality data recovery and management that are essential to decisionmaking during new device development and existing device improvement.
Under the expert management of the Data Coordinating Center at the University of Alabama at Birmingham, and under the watchful eyes of Principal Investigator James K. Kirklin (UAB) and Co-Principal Investigators Lynne Stevenson (Brigham and Women’s Hospital, Harvard University), Robert Kormos (University of Pittsburgh), and Frank Pagani (University of Michigan), about 110 centers have entered data from more than 4,500 patients undergoing insertion of mechanical circulatory support devices (FDA-approved devices that are meant to be long-term and to allow discharge from the hospital).
The specific objectives of INTERMACS include collecting and disseminating quality data that help to improve patient selection and clinical management of patients receiving devices, advancing the development and regulation of existing and next-generation devices, and enabling research into recovery from heart failure. As examples of constructive interaction with the FDA, the Registry has provided device-related serious adverse event data directly to the agency, allowed post-marketing studies to be done efficiently and economically, and created a contemporary control arm for the evaluation of a new continuous-flow ventricular assist device.
Yes, there is a way forward, but it requires commonness of purpose and teamwork. Unfortunately, this is sometimes difficult when industry, academics, politicians, and the FDA are main players. We all too often forget that it is, in the end, simply about the patient. However, I am encouraged by the INTERMACS approach, philosophies, and productivity.
Moving to the smart grid
Lawrence J. Makovich’s most fundamental point in “The Smart Grid: Separating Perception from Reality” (Issues, Spring 2011) is that the smart grid is an evolving set of technologies that will be phased in, with modest expectations of gains and careful staging to prevent backlash. We agree. However, we think that he too strongly downplays the ultimately disruptive nature of the smart grid and some near-term benefits.
First, we stress that the smart grid is a vast suite of information and communications technologies playing very different roles in the electricity system, stretching all the way from power plants to home appliances. Many “upstream” technologies are not visible to the consumer and improve the reliability of the current system; they are nondisruptive and are being adopted slowly and without controversy. “Downstream” smart grid elements, which involve the smart meters and pricing systems that customers see, have been more controversial and disruptive.
Makovich is correct when he says that the smart grid will not cause rates to reverse their upward trend and that near-term benefits do not always outweigh costs. Smart Power (Island Press, 2010) and our report to the Edison Foundation document the factors increasing power rates, such as decarbonization and high commodity prices—factors that the smart grid cannot undo. However, the near-term economics of some downstream systems are not as black and white as Makovich suggests. Our very recent report for the Institute for Energy Efficiency examines four hypothetical utilities and finds that downstream systems pay for themselves over a 20-year time horizon, the same horizon used to plan supply additions.
We also agree with Makovich that the smart grid is not a substitute for a comprehensive climate change policy, including a price on carbon and strong energy efficiency policies. Although dynamic pricing can defer thousands of megawatts (MW) of new capacity (up to 138,000 MW in our assessment for the Federal Energy Regulatory Commission) and smart grid systems enable many innovative energy efficiency programs, a robust climate policy goes beyond these features.
Finally, as argued in Smart Power, the downstream smart grid will ultimately be highly disruptive to the traditional utility business model. Physically, the smart grid will be the platform for integrating vastly more customer- and community-sited generation and storage, sending power in multiple directions. It will also enable utilities and other retailers to adopt pricing much different from today’s monthly commodity tariffs. It isn’t a near-term development, but we think that the downstream smart grid will ultimately be seen as the development that triggered vast changes in the industry.
We agree with Makovich that the United States is not poised to move toward dynamic pricing of electricity any time soon, but we believe that such a change is inevitable; it is just a question of time before dynamic pricing is widely deployed. We have been working with regulatory bodies and utilities in North America at the state, provincial, and federal levels and are optimistic that, once regulatory concerns are addressed, dynamic pricing will be rolled out, perhaps initially on an opt-in basis.
The Puget Sound example cited by Makovich was an early case in which the specific rate design provided very little opportunity for customers to save money. The rate could have been redesigned to promote higher bill savings, but a change in management at the company prevented that from happening. More recently, on the Olympic Peninsula, dynamic pricing has proven very successful. In fact, pilot programs across the country and in Canada, Europe, and Australia continue to show that consumers do respond to dynamic rates that are well designed and clearly communicated. They also show that customer-side technologies such as programmable communicating thermostats and in-home displays can significantly boost demand response. The Maryland Commission has approved in principle the deployment of one form of dynamic pricing in an opt-out mode for both BGE and Pepco and is likely to approve other forms once more data have been gathered.
Lawrence J. Makovich predicts that consumers will eschew the interactive aspects of demand reduction, deterred by fluctuating prices and the overall effort required for relatively small economic gains, thus pushing the benefits of the smart grid largely to the supply side. A fundamental shift from a “business as usual” mentality to dynamic pricing and engaged consumers, he avers, is simply not in the offing.
Although Makovich may be right about consumers’ lack of interest in benefiting from real-time dynamic pricing, we cannot be certain. And while he may also be right about the slower than expected pace of smart grid implementation, there is a bigger point to be made here: that no system should be designed or deployed without an understanding of the human interface and how technology can best be integrated to enhance the human experience in a sustainable, reliable, and efficient manner.
Incentives and motivation play no small role in the adoption and acceptance of new technologies. Research at the University of Vermont, for example, indicates that human beings respond remarkably well to incentive programs. Behavioral studies conducted by my colleagues on smoking cessation for pregnant women show that, in contrast to health education alone, a voucher-based reward system can be highly motivating. Although not exactly analogous to electricity usage, these studies suggest that consumer interest in the electric grid can be evoked by properly framed incentive mechanisms grounded in real-time pricing.
One such incentive might be the environmental benefits associated with consumer-driven load management. Smart meters that allow people to visualize the flow of electrons, and its imputed environmental costs, to specific appliances and activities could raise awareness of the environmental impact of electricity generation. Consumers might even come to view electricity not as an amorphous and seemingly infinite commodity but as an anthropogenic resource with an environmental price tag linked to fuel consumption and carbon emissions. For us as stewards of the planet, this may in fact be one of the smart grid’s most important contributions: making possible a fundamental change in how we view energy.
There is a long and vibrant history of resistance to technological innovation. Indeed, a recent article in Nature (3 March 2011) titled “In Praise of Luddism” argues that skeptics have played an important role in moving science and technology forward. Consider the telephone, which took a technologically glacial 70 years to reach 90% market penetration. It not only required the construction of a significant new physical infrastructure; many people also viewed it as intrusive and at odds with their lifestyles. The cell phone needed only about one-seventh of that time to reach similar market penetration. The land-line telephone was a new technology requiring entirely new infrastructure and behavior and encountering stiff skepticism, whereas the cell phone was an adaptation of a familiar technology, eliciting a behavior change that was arguably welcome and that produced positive externalities (such as more efficient use of time). Presuming that cybersecurity and privacy issues are addressed satisfactorily, smart meters, building on a smart grid that is already partially in place on the supply side, are much more likely to follow a cell phone-like trajectory.
In any case, Makovich is right to acknowledge that the issues are complex. But rather than discount the value of the smart grid to the consumer or assume that its deployment will be slow, we need to better understand how consumers will interact with a smarter grid. A holistic approach is needed that can transcend the technological challenges of grid modernization to include related disciplines such as human behavior, economics, policy, and security. In addition, we must frame the opportunity properly—over what time frame do we expect measurable and significant demand-side change? And what are the correct incentives and mechanisms to catalyze this behavior?
In Vermont, we are working toward just such a goal, forming a statewide coalition of stakeholders, involving university faculty from a range of disciplines, utilities professionals, government researchers, energy executives, and policymakers. Although still in the early stages, this collaborative approach to statewide deployment of a smart grid appears to be a sensible and effective way forward.
Recalling the words of Thomas Edison, who proclaimed that society must “Have faith and go forward,” we too must have the courage to move toward a game-changing electricity infrastructure befitting the 21st century.
Improve chemical exposure standards
Gwen Ottinger’s article “Drowning in Data” (Issues, Spring 2011), written with Rachel Zurer, provides a personal account of a fundamental challenge in environmental health and chemical policy today: determining what a safe exposure is. As Ottinger’s story details, not only are there different standards for chemicals because of statutory boundaries (workplace standards set by the Occupational Safety and Health Administration, for example), but standards are also often based on assumptions of exposure that don’t fit with reality: single-chemical exposure in adults as opposed to chemical mixtures in children, for example. A regulatory standard sets the bar for what’s legally safe. However, most standards are based neither on real-life exposure scenarios nor on the health risks of greatest public health concern, such as asthma, cardiovascular disease, diabetes, and cancer. Who, then, are such standards protecting? The polluters, communities living alongside the chemical industry might argue.
The discordant and fractured landscape of safety standards is reflective of statutory boundaries that in no way reflect the reality of how molecules move in the environment. We are not exposed to chemicals just in the workplace, in our neighborhoods, in our homes, or through our water, but in all of the places where we live and work. Communities living alongside and often working in chemical plants understand this perhaps better than anyone. Similarly, we’re not exposed just at certain times in our lives. Biomonitoring of chemicals in blood, breast milk, amniotic fluid, and cord blood conducted by the Centers for Disease Control and Prevention tells us that humans are exposed to mixtures of chemicals throughout our life spans. What is safe for a 180-pound healthy man is not safe for a newborn, but our safety standards for industry chemicals, except for pesticides, treat all humans alike.
I agree with Ottinger’s call for improvements in health monitoring. I would add that, in addition to air monitoring, personal monitoring devices offer improvements in capturing real-life exposure scenarios. The National Institute of Environmental Health Sciences is developing monitoring devices as small as lapel pins that could provide real-time, 24-hour monitoring.
The frustration of the communities living and working alongside polluting industries detailed by Ottinger is a far too familiar story. It doesn’t take more exposure monitoring data to state with certainty that the regulatory process has failed to adequately account for the exposure reality faced by these communities. So too has chemical regulation failed the rest of the public, who are silently exposed to chemicals in the air, water, consumer products, and food. For some of these chemicals found in food or air, there might be some standard limit to exposure in that one medium. But for thousands of other chemicals, there are no safety standards. It is exciting then to see environmental justice communities joining with public health officials, nurses, pediatricians, and environmentalists to demand change to the way in which we monitor chemical safety and set standards in this country through reform of the Toxic Substances Control Act.
More public energy R&D
The nation’s energy and climate challenge needs new thinking and approaches, like those found in William B. Bonvillian’s “Time for Climate Plan B” (Issues, Winter 2011). Unfortunately, when dealing with any policy area, particularly energy policy, one must necessarily confront conventional wisdom and ideological blinders from the left and the right alike. Case in point is Kenneth P. Green’s response to the Bonvillian article (Issues, Spring 2011), which makes several egregious errors.
First, Green asserts that public R&D displaces private R&D and goes so far as to say that this is “well known to scholars.” Yet if one takes the time to examine the literature in question, the story is much different. Although a full review is impossible here, several studies over the years have found that public R&D tends to be complementary to private R&D rather than substitutive, and can in fact stimulate greater private research investment than would otherwise be expected. One of the most recent is a study published last year by Mario Coccia of the National Research Council of Italy, using data from several European countries and the United States. Others include a 2003 study of France by the University of Western Brittany’s Emmanuel Duguet, multiple studies of German industry by Dirk Czarnitzki and coauthors for the Centre for European Economic Research, and studies of Chilean industry by José Miguel Benavente at the University of Chile. These are just a few from the past decade, and there are many others that predate these. This is not to say that consensus has been reached on the matter, yet the evidence that public R&D complements and stimulates private R&D is strong, contrary to Green’s statement.
Green also argues that there is “plenty of private R&D going on.” But the evidence suggests that greater investment is sorely needed from both the public and private sectors. The persistent long-term decline in private energy R&D is of ongoing concern, as those who follow the issue will know. Princeton’s Robert Margolis, Berkeley’s Daniel Kammen, and Wisconsin’s Greg Nemet have all demonstrated the sector’s shortcomings. And many leading thinkers and business leaders, including the President’s Council of Advisors on Science and Technology and the industry-based American Energy Innovation Council, have called for major increases in R&D investment. Further, Green estimates private energy R&D spending to be about $18 billion—a substantial overestimate, to the point of being outlandish. A study sponsored by R&D Magazine and Battelle reported domestic expenditures at less than $4 billion, a rate of only 0.3% of revenues, far less than the figure Green uses. This fits data recently compiled by J. J. Dooley for the Pacific Northwest National Laboratory, which indicate that private-sector R&D spending has never risen above its $7 billion peak of 30 years ago. So Green is off by quite a bit here.
Green also points to public opposition to high energy costs as a reason not to act. I’d argue that Green’s use of polling data is a selective misreading. Gallup has 20 years of data demonstrating at least moderate levels of public concern over climate change, and polls also show big majorities in favor of clean alternatives to fossil fuels and even in favor of Environmental Protection Agency emissions regulation. Green rightly points out that the public is wary of energy cost increases, but an innovation-based approach would inherently have energy cost reduction as its overriding goal, which is an advantage over other regulatory approaches. Green’s main error here is mistaking public unwillingness to shoulder high energy costs for a public desire for federal inaction.
Green closes his response with, to borrow his own phrase, a “dog’s breakfast” of the usual neoclassical economics tropes against government intervention in any form. Neoclassical thinkers may not care to admit it, but history is replete with examples of the positive role government has played in technology development and economic growth. Public investment is critical to accelerating innovation, broadening the menu of energy technology options, and facilitating private-sector uptake of and market competition for affordable new clean energy sources. Bonvillian’s piece represents a terrific step toward this end.
Helpful lessons from the space race
I very much enjoyed reading “John F. Kennedy’s Space Legacy and Its Lessons for Today” by John M. Logsdon in your spring 2011 edition of Issues. As usual, my friend has turned his sharp eye toward the history of space policy and produced an incisive and provocative analysis. Although I find myself agreeing with much of what he has to say, there is one point on which I would take some exception. Logsdon notes that “the impact of Apollo on the evolution of the U.S. space program has on balance been negative.” This may seem true from a certain perspective, but I think that this point obscures the broader truth about the space program and its role in our society.
For those of us with great aspirations for our space program and high hopes for “voyages of human exploration,” he makes a clear-eyed and disheartening point. I am one of the many people who expected that by the second decade of the 21st century I’d be flying my jetpack to the nearest spaceport and taking Eastern or Pan Am to a vacation in space. The sprint to the Moon and the Nixon administration’s decision to abandon the expensive Apollo technologies as we crossed the finish line certainly crushed the 1960s aspirations of human space exploration advocates. From a 2011 point of view, it is easy to marvel at the folly of the huge financial expenditures and the negative long-term impact of the expectations that those expenditures inspired.
However, I can’t help but think that, from a broader perspective, going to the Moon was far from a “dead end.” Much as it may be hard for any of us to conceive of this now, in the Cold War context of 50 years ago, President Kennedy faced a crisis in confidence about the viability of the Western capitalist system. It was an incredibly bold stroke to challenge the Soviet Union in the field of spaceflight, a field in which the Soviets had dominated the headlines for the four years since Sputnik. Yet by the end of the 1960s, serious discussion about the preeminence of the Marxist model of development (and of the Soviet space program) had evaporated. Instead, human spaceflight had become the icon of all that was right with America. So from a larger geopolitical perspective, the Apollo program was a dazzlingly bold success. Moreover, consider the broader related impacts of the space race on our educational system, technology, competitiveness, and quality of life. Certainly, the ripple effects of our investment in the Apollo program have radically changed our lives, though perhaps not in the ways we had originally dreamed.
Nonetheless, Logsdon is correct when he observes that although we face difficult space policy choices in 2011, we are not (nor seem ever likely to be) in a “Gagarin moment.” President Kennedy’s call for a Moon mission was not about space exploration; it was about geopolitics. We can choose to emphasize the negative impact of that decision on our aspirations, but I am heartened by the prospect that President Obama and our elected representatives might draw a different lesson from the space legacy of President Kennedy. That broader lesson is that investment in space exploration can have an enormous positive strategic impact on our country and our way of life and, most of all, that we should not be afraid to be bold and imaginative in pursuing space exploration.
Why has the Apollo experience never been repeated? It is, as Logsdon repeatedly observes, because the lunar landing and the politics that enabled it were peculiar products of their time and circumstances, not about the Moon, space, or even science. This understanding is important not just as history but as policy. The failure of the United States to come up with human space goals worthy of the program’s risk and cost may indeed be because we have never come to grips with what the Apollo legacy is rather than what we wish it could have been. Certainly NASA has never shaken its Apollo culture or its infrastructure, a problem Logsdon recognizes all too well, no doubt strongly influenced by his service on the Columbia Accident Investigation Board.
I would like to see more attention paid to other parts of Kennedy’s legacy. The space program of the 1960s didn’t just send humans to the Moon; it created a robotic program of exploration that has taken us throughout the solar system with many adventures and discoveries of other worlds.
Logsdon hints at possible alternate histories, and I hope that in his next work he might explore such directions more deeply. What if Kennedy had lived and we had shifted to a cooperative U.S.-Soviet human lunar initiative? Conversely, what if Eisenhower’s lower-key, no-grand-venture approach had been taken and NASA had been built up more gradually? Would we be further out in space with humans by now, or more deeply hidebound on Earth? The space program has brought us many achievements. Is it as good as we should have expected, or should we have expected more?
Logsdon’s history and his links to today’s policy questions should help those in the political system as they try to fix the mess they have created for future space planning. I hope it does.
The uniqueness of Apollo was that it was a marshaling of existing technological skills into an awesome engineering task that would be an unambiguous marker of preeminent high-tech capability, in a world where that status had become uncertain because of Sputnik (an uncertainty reinforced by Gagarin), even as that capability retained profound scientific, commercial, military, diplomatic, and societal significance. That motivation for the space race doomed a priori any significant U.S.-Soviet joint exploration, the dreams of diplomats notwithstanding. And the prestige gained by the United States through the Apollo triumph turned out (exactly as Kennedy and his advisors hoped in 1961) to be an immense multiplier of U.S. strengths in the decades that followed, up to the ultimate collapse of the Soviet Union and its replacement by a government in Moscow with which the United States and its allies could at last genuinely cooperate.
The real benefits of international cooperation in big integrated space projects such as the International Space Station (ISS) were slow to materialize, even as the original promises of cheaper, faster, better all turned out to be delusions. Yet those myths continue to be dogmatically and defiantly touted as motives for future space cooperation, by historians who ought to have learned better and by diplomats who want to constrain the United States from any unilateral space activities unapproved by the international community.
For the ISS, the operational robustness, the mutual encouragement of each nation’s domestic commitment, the reassuring transparency, and the inspirational aspects of the ultimate configuration may indeed validate the project’s expense, along with the gamble that attaining access to new environments almost always pays off immensely, if unpredictably, in the end.
There are conceivable future space projects, including major “crash” projects, that could benefit from an understanding of the significant lessons of the Apollo program regarding leadership (which country is in charge and which partners take on subtasks), team staffing (civil servants in the minority), duration (long enough to accomplish something, short enough to allow the best people to sign on for the duration and then return to their original careers), reaping of outside capabilities and intuition (no “not invented here” biases), creative tension between realistic deadlines and an experience-based (not wish-based) safety culture, resonance with national culture (helping define who we are and our degree of exceptionalism), and a well-defined exit strategy or finish line.
From my personal experience, the U.S. response started with Sputnik: as an 8th-grade “space nut” and inattentive student, I was recruited for an enriched math class within weeks, began Russian classes within months, and within two years was taking Saturday calculus classes—all before Gagarin.
One could quibble over whether Project Apollo “required no major technological innovations and no changes in human behavior.” Many technological developments needed to reach the Moon were under way in 1961, as Logsdon says. Some, such as the J-2 engines that powered the Saturn V rocket’s second and third stages and the integrated circuits in the Apollo Guidance Computer, needed further work. Others, such as orbital rendezvous, had never been tried before.
One change certainly occurred: Project Apollo transformed NASA. In 1961, NASA Administrator James Webb oversaw a collection of relatively independent research laboratories adept at managing small projects. To get to the Moon, he had to alter the way in which people in the agency did their work. Against significant resistance, Webb and his top aides imposed an integrating practice known as Large Scale Systems Management. The technique had been developed by the U.S. Air Force for the crash program to build the first fleet of intercontinental ballistic missiles but was new to NASA.
What similar transformational changes might accompany a new Apollo-type space mission? Most significantly, the mission will not resemble the national big science projects of which government employees have grown so fond. The current period of fiscal austerity and the dispersion of aerospace talent around the world and into the commercial sector preclude that.
Logsdon is right when he identifies the prospect of global cooperation as one of the prime reasons why Kennedy continued to support a Moon landing. That sort of cooperation could provide a rationale and a method for a new Project Apollo. So might commercial partnerships and less costly types of mission management.
Based on the experience of Apollo, one thing is sure. The NASA that completes such an undertaking will not resemble the agency that exists today.