Forum – Winter 2006
The university of the future
In “Envisioning a Transformed University” (Issues, Fall 2005), James J. Duderstadt, Wm. A. Wulf, and Robert Zemsky have once again rung a bell that seemingly has not yet been heard at our universities. I would not term it a wake-up bell, as that was rung in 1994 with the emergence of the user-friendly World Wide Web. Nor do I consider it a fire bell, as one was first heard in 1998 with the dot-com rush that many academics undeservedly mock and deride as a fad or a colossal disaster. And it is decidedly not the bell that should have sounded resoundingly with the decision by Google in 2005 to digitize all published materials into a single interactive learning site. The bell that Google rang is roughly equivalent to an intellectual liberty bell.
Why a liberty bell? What do liberty and knowledge and universities all have in common? With the advent of massive information technology as an enabler of universal customized learning and the imperative need for students to learn quickly and integrate a broad range of disciplines in a rapidly changing world, the monopoly on higher learning once held by universities will vanish. New ways to learn are emerging, new institutions for learning are on the move, and new centers for R&D are springing up in unpredictable places. All of these emerging forces for learning and discovery are being driven by advances in information technology and learning technologies and by changes in knowledge organizations themselves.
Liberty enters here because universities, which in the past could be designed and driven by their own internal social constructs, must now move beyond their historic models of elitism and isolation from society and assume a new role: educating students to be capable of rapidly mastering and integrating a broad array of complex and interrelated disciplines. Our universities must foster intellectual flexibility, creativity, and the capacity for innovation in a global society interconnected by advances in communications technology. Universities must adapt to the emergence of empowered and capable independent learners set free by the Internet.
The new directions in which universities must move will require them to rethink bedrock organizational principles and practices. Decisionmaking must match the competitive pace of enterprise despite the fact that time in the academy is measured in semesters. Learning takes place 24/7, but academic calendars impose arbitrary restraints. The socialization process associated with learning is critical, but living/learning residential options remain limited. Knowledge is inherently interdisciplinary, yet institutions are reluctant to accommodate new departments and research centers constructed across disciplines.
These new directions will engender change and reshape both the academy and society, unleashing a new type of intellectual liberty. Students and faculty alike will be able to exercise the freedom to adjust their learning schedules to correspond with their interests, abilities, and inclinations. Many students who are presently unable to survive our 19th-century system of learning will excel with the freedom to learn in a way that matches their own intellectual and creative gifts, as opposed to the focal learning presently implemented to match the particular interests of a given set of faculty members.
Universities are wonders to behold. They are transformational catalysts for societal change, and they perform a function essential to our collective survival. At present, they engage relatively small numbers of individuals in a very structured learning process. Advances in information technology on the order outlined by the National Academies, and the forces of change those advances will unleash, tell me that universities will soon be free to create learning environments that offer ubiquitous access to all information. How will universities adapt to this new information-liberated world—this new stage of human evolution? Some will adapt and change and bring about a new form of learning, one yet to be designed. Others will stay their course and retain their rigid pedagogy, inflexible practices, and focused, predictable outcomes. Fifty years from now, those universities will appear as removed from the front lines of change as the most remote monasteries of the Middle Ages.
James J. Duderstadt, Wm. A. Wulf, and Robert Zemsky’s article both illuminates and sounds the alarm about issues involving information technology’s impact on the future of higher education. Readers are reminded that faster, less expensive, more accessible, and more powerful technology will continue to change how faculty conduct research and teach, how students learn, how administrators conduct business, and how all of us interact across campuses and throughout the world.
Although the process of change may be in some ways evolutionary, the impact will be revolutionary. Unfortunately, we in higher education are all so busy extinguishing fires and pursuing resources for today that we rarely set aside sufficient time for reflection and envisioning. John Dewey, whose name the authors invoke, wrote that, “The only freedom that is of enduring importance is freedom of intelligence . . . freedom of observation and of judgment exercised in behalf of purposes that are intrinsically worthwhile.” Increasingly, we need to take the time and find the resources to do just that, reflecting not only on technology’s short- and long-term influence, but also on our response in anticipation of both certain and uncertain changes to come.
Tom Friedman has clearly thought about these issues and “gets it.” His best-selling The World Is Flat asserts that one of the most important developments of the new century is “the convergence of technology and events that allowed India, China, and many other countries to become part of the global supply chain for services and manufacturing, creating an explosion of wealth in the middle classes of the world’s two biggest nations, giving them a huge new stake in the success of globalization. And with this ‘flattening’ of the globe, which requires us to run faster in order to stay in place,” Friedman asks whether “the world … [has] gotten too small and too fast for human beings and their political systems to adjust in a stable manner?” Clearly, higher education faces the same challenge, whether we are talking about the impact of technology on the research enterprise, wide-ranging student learning issues, the rapidly changing demographics of today’s students, or students’ reliance on technology in all facets of their lives.
Among the most important issues the authors raise is that of community. Whether focusing on how we conduct research or on how and where students learn, universities will undoubtedly need to continue to think about the meaning of community and about technology’s role in strengthening our communities, with less and less emphasis on geography. There is an important caveat, however: As higher education increasingly embraces “big science,” supported by new technologies and even more highly organized structures for research collaboration funded by national agencies, we mustn’t discourage the creativity of individual investigators or underestimate the importance of human interaction both on and among campuses. The traditional benefits of such interaction will hopefully be enhanced by powerful new technologies, and universities can remain in the business of educating students and building human interaction in our society. In fact, as interdisciplinary collegial research becomes increasingly important, trust among human beings and the quality of relationships will become more important than ever. We in the academy must learn to take the time and dedicate the resources to reflect on, plan for, and adjust to new technologies and to do so in ways that place even greater emphasis on the importance of human relationships. Leadership will be critical: We must reinforce to the community that technology is not a threat but a tool for strengthening our core values.
I read with interest your collection of essays on technology and the university. But as I listen to the battle cry for sweeping change, I find myself musing on an earlier technological revolution: the printing press. Why was it that, when the world’s knowledge became available in bookstalls all over Europe, the university lecture system did not die out? It thrived instead. I think the answer is that available texts and data don’t do away with the need for teachers; quite the contrary. It’s not so easy to “read.” You need guidance, discussion, examples, and analysis. So now, when great banks of data and text are available “at Starbucks,” as Duderstadt et al. note, I don’t see the end of teaching, but a crying need for it. Education teaches us what is there, what can be done with it, and how to think about what it means.
To be sure, the new technology is “interactive,” unlike books. Can it, then, educate students by itself, or at least with great efficiency—say, 50 students to a class, instead of 20? I don’t think so. My evidence is in my own experience, as a longtime teacher of freshmen and sophomores at a community college, and an early adopter—and eventual rejecter—of online instruction. Communication between teacher and student can be rich and nuanced in the classroom, but is usually canned, predictable, and dry online. Online students are excited about a technology that looks efficient, from their point of view. “Look, I can learn composition while I watch the kids and make dinner.” Online students go through the motions. Uninvolved in a human relationship, their attitude is, “What do I have to do, and how fast can I do it?”
I appreciate Susanne Lohmann’s pitch about diversity of learning styles. Indeed, these exist, and some students are happier online than others. (The dropout rate in online courses is prodigious.) But we should not be misled into thinking that educational technology is being introduced in order to help those who are less linear; it is being introduced, as your writers note, because it saves the college money (and profits the vendors, not so coincidentally). Here at the community college, we are putting students online as fast as we can, because we can’t afford to build classrooms and hire faculty.
But most people like to be taught by a person. If I were Harvard, I would not run off to imitate the poorest colleges. In half a generation, a human teacher is precisely what elite and expensive colleges will have to offer. Surely some of the wealth that technology is creating in society as a whole can be captured and dedicated to the time-honored business of passing not facts, but understanding, to the very people who will do the research at the research universities of the future.
James J. Duderstadt, Wm. A. Wulf, and Robert Zemsky are performing a great service by raising awareness of impending major changes in higher education. I agree that the general types and magnitudes of changes described in their article are highly likely to occur. My comments are intended to extend the discussion further.
I think of technology not as a driver, as described in the article, but rather as an enabler. The real drivers for change in universities are the same larger forces that are producing enormous pressure for change in almost every other aspect of our lives. Technology becomes a critical factor only when it enables people and organizations to respond effectively to these larger forces, such as politics, economics, demographics, and nature. Thus, the challenge is not “to think about the technology that will be available in 10 or 20 years” and how the university will be changed by that technology. Rather, it is to imagine how the society of 20 years from now will have changed and how the university can best meet the educational and research needs of that society. Technological innovations that help universities to meet those evolving needs will be critically important, but innovations that do not help to meet the societal needs are not likely to prosper within our institutions.
A major question for future planning is “What really is our business?” A key lesson that corporate America learned over the past two decades is that survival depends on knowing what business one is in; many corporations learned to their dismay that they were not really in the business they thought they were in. Most of us in research universities have not thought seriously about what business we are in or what business society wants us to be in. We generally accept that our business is education, research, and service, but within restricted definitions of each that are appropriate to today’s enablers. For example, we have traditionally educated primarily a set of students with a fairly narrow band of characteristics (academic credentials, age, etc.) who can and will come to our campuses. Enablers such as technology and economic and political alliances will inevitably break many of the constraints that led to this narrow model and allow society and ourselves to reconsider our educational goal and mission. In a world with greatly changed geographic or political constraints, we are likely to have a different conception of whom we have an obligation or a desire to educate.
Finally, much of the educational innovation over the next few decades probably will come from Asia, with its enormous need to provide mass higher education inexpensively, and from the for-profit sector, which sees a huge potential worldwide marketplace. This will be the real “edge of the organization” referred to by the authors, and the resulting disruptive innovation will require all of us to play a truly global role in order to compete effectively.
Cyberinfrastructure for research
In “Cyberinfrastructure and the Future of Collaborative Work” (Issues, Fall 2005), Mark Ellisman presents compelling scenarios for advanced cyberinfrastructure (CI)-enhanced science, highlighting quite appropriately the ground-breaking Biomedical Informatics Research Network (BIRN) project that he directs. CI is a collection of hardware- and software-based services for simulation/modeling, knowledge and data management, observation of and interaction with the physical world, visualization and interaction with humans, and distributed collaboration. CI is the platform on which specific “collaboratories,” research networks, grid communities, science portals, etc. (the nomenclature is varied and emergent) are built. The full “Atkins Report” mentioned by Ellisman is available at http://www.nsf.gov/od/oci/reports/toc.jsp, and resources about the collaboratory movement can be found at the Collaboratory for Research on Electronic Work.
A growing number of CI-enhanced science communities, like BIRN, are becoming functionally complete and the place where the leading-edge investigators in the field need to be. They are not limited to automating past practice to make it faster, better, and cheaper; they are about enabling new things, new ways of working, and potentially broadened participation. The push of technology and the pull of science toward more interdisciplinary, globally distributed, and interinstitutional teams have combined to create an inflection point in information technology’s impact on science and, more generally, on the activities of many knowledge-based communities.
Mounting a potentially revolutionary advanced CI program is very complex; such a program will not emerge solely as a consequence of technological determinism. Real initiative, new resources, and leadership are required, as are the nurturing and synergistic alignment of three types of activity: R&D on the technical and social architectures of CI-enabled science; reliable, evolving, and persistent provisioning of CI services; and transformative use through iterative adoption and evaluation of CI services within science communities. All this should be done in ways that extract and exploit commonality, interoperability, economies of scale, and best practices at the CI layer. It will also require shared vision and collective action among many stakeholders, including research-funding agencies, universities, and industry. An even bigger challenge and opportunity is to mount CI programs in ways that benefit, connect, and leverage research, learning/teaching, and societal engagement at all levels of education and in a broad range of areas, including the humanities and arts.
Arden Bement, director of the National Science Foundation (NSF), is providing much-needed leadership for the CI movement. It is unfortunate, however, that just as there has never been more excitement and consensus among global science communities that advanced CI is critical to their future, there has never been a worse environment for financial investment in NSF and basic research in general. A coordinated and truly interagency approach, leveraged by our research universities, is required to establish clear leadership for the United States in the CI movement—an essential infrastructure for leadership in our increasingly competitive, global, and knowledge-based economy.
Protecting critical infrastructure
In their excellent article “The Challenge of Protecting Critical Infrastructure” (Issues, Fall 2005), Philip Auerswald, Lewis M. Branscomb, Todd M. La Porte, and Erwann Michel-Kerjan raise a number of key points. Because the “border is now interior,” U.S.-based businesses, perhaps for the first time in America’s history, find themselves on the front lines of a global battlefield. And because the economic infrastructure is largely privately owned, its protection depends primarily on “private-sector knowledge and action.” Yet their conclusion that there are insufficient incentives for private-sector investment in security may be premature.
It is instructive to remember that 25 years ago, U.S. business thought of quality as an unaffordable luxury, rather than a core business process with the potential to reduce cycle times and production costs and create competitive advantage. Like the quality inspectors of two decades ago, security managers are often seen as company cops, rather than global risk management strategists. Security is rarely designed into the company’s training programs, engineering, operations, or organization. But as the authors point out, the “Maginot Line” approach to security is both expensive and breachable.
The Council on Competitiveness’s Competitiveness and Security Initiative, led by Charles O. Holliday, chief executive officer of DuPont, and Jared Cohon, president of Carnegie Mellon University, found significant missed opportunities in achieving higher security and higher efficiency together. The initiative is identifying a number of areas from which to calculate a return on security investments: gains in productivity across the entire operation, reductions in losses heretofore tolerated as a cost of doing business, new revenue streams that flow from products and processes that embed security, and enhanced business continuity and crisis recovery. It has also identified less quantifiable but equally critical business benefits from security, including reputational value, shareholder value, and customer value.
Unfortunately, we have also found that most companies are not organized to identify these opportunities or capture a return on investments in security:
Security is not fully integrated into the business functions of strategic planning, product development, business development, risk management, or outsourcing management.
In many sectors, there is little operational business management resident in the security departments to enable a “build it in, don’t bolt it on” approach and little security expertise in the business and engineering units.
Unlike other core business functions, the roles and responsibilities of the security manager vary widely from industry to industry and company to company, often being dependent on personal rapport with the chief executive officer or the board of directors’ awareness of security challenges. There are few metrics to calculate the return—anticipated or real—on security investments.
The authors rightly note that in this new threat environment, “rigid and limited public-private partnerships must give way to flexible, more deeply rooted collaborations between public and private actors in which trust is developed and information shared.” From the Council on Competitiveness’s vantage point, however, what is also needed is an economic value proposition for increased security, new metrics, new organizational structures, and new technological options; and, most important, the visionary business leadership to implement them.
Can we anticipate disasters?
“Flirting with Disaster” (Issues, Fall 2005) by James R. Phimister, Vicki M. Bier, and Howard C. Kunreuther lays out several issues confronting high-hazard enterprises and regulators vis-à-vis the precursor analyses meant to help them ward off operational threats. Underlying those issues is the question of the scope and quality of the event analyses that delineate the precursors.
To develop more robust event analyses in any high-hazard industry, a first step is to recognize that the effectiveness of precursor reporting systems depends on the extent to which boards of directors, executives, and senior managers treat these local efforts as being as necessary to meeting their public responsibilities and their financial goals as any other basic production activity, and allocate intellectual and financial resources accordingly. When those resources are instead understood in accounting terms as “indirect” administrative contributions to production, the tendency is to minimize them, and the scope and depth of event reviews suffer.
A second precondition is to reconsider processes for evaluating precursors’ risk significance. In nuclear power, accident sequence precursors and probabilistic risk analyses rely on global scenarios of meltdown threats; because these cannot account for local component interactions and dependencies introduced by upgrades and maintenance, their value as reference points is limited. Validating precursors’ risk significance requires statistical analyses, which are unlikely to take into account contexts other than those of material, mechanical, and physical systems. The cultural, economic, and political systems that deep analyses find to be the precipitating contexts of major accidents, despite being well recognized locally as “error-forcing conditions,” have not been regarded as risk-measurable. At the least, it is possible to make these measurable and reportable by incorporating so-called “subjective” measures, which are already widely used in risk modeling and are derived from validated methods of eliciting and structuring expert opinion. Formalized reporting and trending systems built up out of aggregated data would then also reflect such substantive evidence.
A third precondition is to reexamine the sources and meanings of “complacency,” a catch-all precursor. The absence of curiosity and doubt that it implies may, however, be a consequence of the well-observed insularity of those in high-hazard industries. Hermetic systems of language and talk, resistance to outsiders’ ideas, consultants and contractors who play “NASA chicken” (not being first to bring up an issue), internal turfs, and judging the credibility of knowledge by organizational rank—all these and others fence out diverse perspectives and new questions.
Locally, “expert opinions” should also include those of credible outsiders. Globally, the rare informal discussions of issues in joint meetings of industry executives, technical experts, and social, political, and behavioral scientists concerned with the many facets of high-hazard enterprises (biotech, chemicals, medicine, nuclear power, security, and transportation) need to be shaped into a permanent forum for amplifying the fund of intellectual capital being drawn on to stay ahead of disaster, for their sakes and ours.
“Flirting with Disaster” raises important issues. Having spent years of my professional career analyzing the Nuclear Regulatory Commission’s systems and working to improve them, I have a few comments.
I would argue, based on the analyses I have done, that the number of reported incidents is a meaningful indicator and that efforts should be made to reduce the incidents that are reported. Every incident that occurs is an alert to the system and a sign of stress that, if not addressed, can lead to cycles of decay.
The aim of all reporting systems should be nearly error-free operation. If the number of reported incidents becomes too high, too many resources get diverted to fighting fires, which leads to negative cycles of more incidents, fewer resources to devote to them, ever more incidents, and so on, until the system itself can become unglued.
I would say that it is not a question of having either a centralized system like nuclear power or a decentralized one like the airlines. Rather, both are needed, and the real character of the systems in both industries has elements of both centralization and decentralization. To catch error and correct it, multilayered redundant systems are necessary. As in the design of the U.S. government, a functional division of labor plus elements of central and local control must be set up to create and ensure various checks and balances.
The system has to be carefully thought out as to how the different parts relate, but it also must be left somewhat open-ended and decoupled in places, because a good incident-reporting system thrives both on a high degree of discipline and order among the parts and on some disorder for dealing with unexpected contingencies. An overdesigned and overdetermined system has many positive attributes and is likely to function better than one that is underdesigned and underdetermined, but some degree of underdesign and underdetermination is still needed to deal with the surprises that inevitably take place. Ideas must flow freely; the system cannot be too controlled.
There also must be multiple means for accomplishing the same purposes: a requisite amount of redundancy and even overlap. To bring up important issues and afford them the attention they are due, some degree of conflict or tension among competing units with opposing missions is also sure to be needed. Other keys to a good system include theories for classifying events and how they can lead to serious accidents, methodologies for analyzing these events, the will to take corrective action and to overcome political obstacles, and a culture geared toward safety and not risk-taking.
More research on this important topic is needed, and “Flirting with Disaster” is a good start.
I very much liked “Flirting with Disaster,” which provides a great deal of useful information in a small space. Here are some reflections that supplement its insights.
Some years ago I got a call from the director of a nuclear power plant, whom I had met at a conference. “Can you tell me,” he asked, “what I can do to make sure we don’t miss the faint signals that something is wrong?” I had no answer for him then, though much of my adult life has been spent working on similar issues. In the months that followed, however, I worked on this question from time to time and eventually felt I could give him a good answer. I finally wrote a paper called “Removing Latent Pathogens” that summed up my thoughts on receiving those faint signals. What follows is built on the conclusions of that paper.
First of all, it is important to secure the alignment of people in the organization. This means that employees feel that they and others are on the same team. This is a fundamental prerequisite for ensuring that there will be a report if an employee sees something amiss. Before the Hyatt Regency disaster, the workmen building the hotel had learned to avoid the walkways that would later prove fatal to so many people. But if they brought their concerns to higher authority, there is no record of it. They didn’t feel it was their duty, or perhaps their place, to comment on it. They weren’t on the same team with the hotel guests. Creating a culture of openness, what I have called a “generative” organization, helps make even the most junior employees feel that the door is open to the observations of anyone in the organization, or even to contractors. You never know who will spot the problem. It goes without saying that treating employees with the utmost respect is a key part of this culture.
Second, we need to be sure that the employee has as big a picture of the organization and its tasks as we can afford. The technicians who were responsible for putting shims in the mounting of the Hubble Space Telescope had no idea that they were creating a billion-dollar problem. When an inspector tried to get into their lab, they locked the door and turned up the music. Encouraging employees to get additional training and familiarizing them with the work of other departments help in building this big picture. Being able to understand the implications of what they see and do makes them better able to spot a potential problem and to report situations that they suspect can cause trouble. It is often the department-spanning employees who see the things that others don’t. A common consciousness of what might constitute a danger is also worth cultivating. Before the “friendly fire” accident in which two U.S. Army Black Hawk helicopters were shot down over northern Iraq, there had been a “dress rehearsal” a year and a half earlier. But after that dress rehearsal, which within seconds produced a “fatal” result, no one picked up the phone and reported the incident. And so in the second case, the American troops aboard two helicopters died.
Managers need to train themselves in the art of being responsive to those who do not seem to be experts. I notice that in the recent attempt to change the culture of NASA, organization members were taught to engage in “active listening.” How successful this was I have no idea, but there is no question of its historical importance. Charles F. “Boss” Kettering once spent time talking to a painter who thought he could see that the propeller on one of the ships bearing a Kettering-designed diesel was off by half an inch. This was a big propeller, and God knows how the painter could see it, but Kettering called the design office and had the propeller checked, and sure enough, it was off by half an inch.
Finally, there is the issue of empowerment. The ability to contemplate a problem is often associated with the ability to do something about it. During the last, disastrous flight of the space shuttle Columbia, a team of structural engineers sought to get photos of the shuttle in space. In spite of Air Force willingness to provide such photographs, higher management did not want to ask for them. One reason, apparently, was that no way of fixing a seriously damaged shuttle in flight was known, so why bother to find out about it? When people feel they are powerless to act, they often appear powerless to think. But empowering people encourages them to look, to see, and to explore. Enrico Fermi, whose team built the first nuclear pile, called this “the will to think.” The will to think comes when workers expect their plans and efforts to be fruitful. Tie their hands and you close their eyes.
If these insights are correct, what can we do to encourage those who shape our organizational cultures to provide a good climate for information flow? It always amazes me to discover how many managers have no idea that things they do discourage information flow; more broadly still, they have no idea that information flow is faulty in their organization and have never bothered to improve it. Many case studies suggest, on the contrary, that overt efforts to improve information flow are often very useful. Hopefully, the active listening and other skills that NASA managers were taught will help avoid another Columbia tragedy. But even if not, the rest of us can do much to build loyalty, educate our employees, and give them the power to change things for the better.
This is an important set of ideas, and I applaud James R. Phimister, Vicki M. Bier, and Howard C. Kunreuther for writing about them in such an interesting way. Their article couldn’t be more timely. What counts as a precursor to failure is a topic that pops up in various literatures. I would add the following observations.
From their own description and from other work I’ve read, it appears to me that voluntary reporting of potential precursors is more effective than command-and-control approaches. Because of that, I’m unconvinced that “in some cases mandatory reporting may be preferable.” The problem is not requiring reporting per se, but that the way reporting is structured gives the reporter a very strong interest in shaping a report so that responsibility is deflected. So I think a next step in developing this work is to look closely at the problem of individual and organizational incentives to tell precursor stories in particular ways.
Along these lines, the conclusion that “it is the responsibility of the private sector to be an engaged partner” seems weak to me. I would add to the overall argument in “Flirting with Disaster” that managers, and the organizations they try to control, often see it as in their interests not to see precursors. The article hints at this in pointing out that “not actively trying to learn from [precursors] borders on neglect.” This is a point that could be developed productively.
Voluntary and mandatory reporting systems are set up as alternatives. But the authors see positives and negatives with each. Could we imagine a system that combines the best of both, creating a third way toward more safety?
Carbon sequestration
“The Case for Carbon Capture and Storage” (Issues, Fall 2005) paints a very rosy picture of the technology’s long-term potential and advances a vigorous argument for investing in projects under the assumption that CO2 levels need only be stabilized over the next 50 years. Jennie C. Stephens and Bob van der Zwaan argue that carbon storage will facilitate the deep reductions necessary to save the world from climate change just when aggressive reductions are required.
Carbon capture and storage (CCS) might be an option in the future when all the questions have been answered and problems fixed, but the world cannot wait 50 years before it tackles climate change as the authors suggest. We urgently need dramatic emissions reductions over the next few decades, on the order of 50% by mid-century, if we are to avoid the worst, irreversible impacts of climate change. Given the unresolved issues with CCS, it would be incredibly risky to assume that it will be possible to drive large reductions in carbon emissions with the technology.
We agree with the authors that there should be continued evaluation of the technology. However, we do not agree that it is clear at this point whether CCS will pan out as a part of the solution to climate change. Furthermore, the recent Intergovernmental Panel on Climate Change (IPCC) report on CCS makes it clear that the technology can be only a part of the solution.
In the end, everyone, including the recent IPCC Working Group, agrees that economics will determine whether CCS technology ever moves beyond the demonstration phase. However, the authors fail to mention the most logical economic driver for CCS technology: a mandatory cap-and-trade program similar to the one proposed by Sen. McCain or the European Union’s Emissions Trading Scheme being used to implement the Kyoto Protocol. Only tough mandatory caps like these will create the economic space necessary for the advancement of sequestration, which the IPCC report pegged at $25 per ton of CO2.
The authors gloss over a number of important environmental issues with this technology. For example, the promotion of coal gasification—an integral part of CCS—will boost mountaintop mining in the eastern United States by eliminating the preference for low-sulfur Western coal. And the use of oceans as a “natural storage system” would accelerate ocean acidification and further harm already failing marine ecosystems.
CCS is not currently a cost-competitive and safe way to achieve large-scale reductions in global warming pollution. It may become one in the future, if its long-term safety is proven. But until that time, it is much more prudent to aggressively promote renewable energy and energy efficiency and not pretend that a technology exists that will facilitate the burning of all the world’s reserves of coal and oil.
Energy research
In recent years, in the mass media as well as in technical and professional journals such as Issues, a deluge of articles about energy has appeared. Virtually without fail, each has contained the same panacea prescription for whatever magic-bullet approach to solving the energy problem it is pushing: government financial support in its various forms, be it direct subsidies, tax incentives, earmarked government grants, government programs, etc. These articles also never fail to appeal to national pride or to predict national economic doom in the international arena, because if such federal government support is not forthcoming in sufficiently massive amounts, we will fall behind other nations in saving the world with enlightened policies and technological prowess. In the Fall 2005 Issues, such an article appeared concerning carbon dioxide capture and sequestration (not to take issue with sequestration itself, as it seems to hold much promise).
Along with this cacophony has come almost universal criticism of Vice President Cheney’s energy task force and the recent energy legislation coming out of Congress. Doesn’t it occur to anyone how ludicrous it is to condemn the abysmal performance of government and at the same time lay all responsibility for solutions at the feet of that same government? If you were asked to nominate the groups least knowledgeable in these matters, most beholden to parochial interests, and least objective in what they believe, whom would you suggest? How about Congress and the general public? Yet these are precisely the groups targeted for all the nostrums.
Could it be that the mass confusion and lack of coherent action in this whole area are due to the community of technologists, economists, and political scientists, who are ostensibly the most qualified to develop an effective national strategy? They are too consumed with philosophical and ideological turf battles, not to mention mud-wrestling over government funds, to actually undertake the interdisciplinary interaction whose virtues they extol. But who knows? If the community could get its act together in some civilized way, it might convince Congress to do something intelligent that had the prospect of being constructive and effective.
It’s just a suggestion, but perhaps the community could produce some leadership in promoting ecumenical conclaves where the various disciplines try to suppress their egos and ideologies, talk to each other, and try to develop just such a national strategy of environmentally friendly energy independence. Heck, Issues could even consider devoting an issue to this topic. I suggest that the goal of such get-togethers might be to realistically assess the potential contributions, feasibility, risks, and economic viability of the various approaches to energy supply, with due attention to the estimated time frames and uncertainties involved. What is needed is a strategy that can realistically be implemented sometime within the next 25 to 50 years. Symbolism and turf battles won’t get it done.
This would seem to be a necessary first step before Congress has enough objective information to determine what government support would be most effective and, not to belabor the point, cost-effective. My own impression is that the government can be most useful in the area of regulation rather than as the first resort for funds. But then that’s just me.
IPM revisited
In “Integrated Pest Management: A National Goal?” (Issues, Fall 2005), Lester E. Ehler opens the door to further debate on whether IPM (integrated pest management) has “been implemented to any significant extent in U.S. agriculture.” Although IPM’s intent to reduce pesticide use is well-meaning, and although IPM has been promoted for more than 30 years by various private and public groups, it is questionable whether pesticide use has declined significantly for many crops.
On our own field crop farm in California, we continue to rely on farm chemicals as we have for the past 20 years. We monitor our crops for pests and spray as needed. Sometimes this works and we avoid sprays, which makes monitoring seem like a good approach, but our pesticide use hasn’t changed much over the years. In fact, our pesticide use has probably increased recently with the introduction of a new weed (perennial pepperweed) that is difficult to control.
If we are to reduce pesticide use, the current IPM approach seems too narrow. Now may be a good time to broaden our scope and focus on integrated crop management, which takes into account the whole farming system. This would include nutrient, soil, and water management; plant variety selection; and landscape effects on pests, as these all affect pest control needs on farms. A good place to start would be the American Society of Agronomy’s Certified Crop Advisor voluntary program, which ensures that participating members have experience and education in multiple disciplines and behave ethically. Ideally, crop advisors should not be able to profit from selling farm chemicals.
Having trained professionals who understand the whole cropping system and do not profit economically from selling chemicals will help provide growers with more options for alternative pest control strategies. However, there is also a need for more research on the basic biology of many of our crop pests. On our own farm, I frequently wonder where many of our major pests overwinter, what their natural enemies are, and whether they have alternative hosts. If we knew more about them, perhaps we could find weak links in their system to control them without the use of pesticides.
There is a great need to find and implement alternative pest management practices on farms that will reduce our dependency on farm chemicals. Pesticides are costly, both economically and in terms of public health and the environment. A good start toward reducing pesticide use would be to broaden our scope of pest control to include all aspects of crop production.