Forum – Winter 2011
Geoengineering research
In “The Need for Climate Engineering Research” (Issues, Fall 2010), Ken Caldeira and David W. Keith issued a strong call for geoengineering research, echoing earlier such calls. I completely agree with them that mitigation (reducing emissions of greenhouse gases that cause global warming) should be society’s first reaction to the threat of anthropogenic climate change. I also agree that even if mitigation efforts are ramped up soon, they may be insufficient, and society may be tempted to try to control the climate directly by producing a stratospheric aerosol cloud or brightening low marine clouds.
It will be a risk-risk decision that society may consider in the future: Is the risk of doing nothing in terms of advertent climate control greater than the risk of attempting to cool the planet? To be able to make an informed decision, we need much more information about those risks, and thus we need a research program. However, such a research program brings up serious ethical and governance issues.
Caldeira and Keith call for a solar radiation management research program in three phases: 1) computer and laboratory research, 2) “small-scale” field experiments, and 3) tests of deployable systems.
Computer research, using climate models, the type in which I have been engaged for the past four years, does not threaten the environment directly, but does it take away resources, in researchers’ time and computer time, that could otherwise be used more productively for the planet? A dedicated geoengineering research program with new, separate funds would remove this ethical problem. But does it create a slippery slope toward deployment? Laboratory studies bring up the same issue but also make some wonder whether nozzles and ship and airplane platforms may be developed for hostile purposes.
Field experiments and systems tests bring up fundamentally different issues. Is it ethical to actually pollute the atmosphere on purpose, even for a “good” purpose? And would injecting salt into marine clouds or sulfur into the stratosphere threaten to produce dangerous regional or global climate change? How large an emission would be acceptable for scientific purposes? Would a regional cloud-brightening experiment be OK if its emissions were less than, say, those of a typical large cargo ship that sails across the Pacific Ocean? Or does intention matter? The cargo ship was not trying to change climate. Or do two wrongs not make a right? Even if intentional pollution is small, is it acceptable? Detailed climate modeling will be needed to search for potential risks and to design field tests, but how much can we depend on the models? We will need to define how large an area, for how long, and how much material can be emitted. And how can this be enforced over the open ocean or in the stratosphere, where there are no laws or enforcement mechanisms? If there is a potential impact on global climate, how do we obtain informed consent from the entire planet?
The UK Solar Radiation Management Governance Initiative is just beginning to address these issues, but it is not obvious that it will succeed. In any case, fundamentally new international rules, observing systems, and enforcement mechanisms will be needed before we start spraying.
However, three important caveats are either omitted or not sufficiently emphasized in an otherwise excellent article. First, although the authors argue convincingly that the technological genie can be let out of the bottle carefully and then controlled, there is little evidence for this in the real world. Technologies, even those most obviously abhorred, once discovered, always end up being used. We have never displayed much self-restraint, and we nearly always underestimate the consequences (both positive and negative). As such, the authors’ cost estimates for some technologies, such as introducing sulfate particles into the stratosphere ($10 billion per year), are, by their own admission, far too low if risk factors and unintended consequences are included. Add the cost of the Amazon or the Ganges drying up, and the costs of tinkering with the stratosphere soar. Providing a number that omits the cost of risks creates a real danger that the public will gravitate toward a nonoptimal solution.
Second, their call to establish a government-funded research plan, although laudable and necessary, is probably off by an order of magnitude. A $5 million startup budget (half that once university overheads are subtracted) is a smaller sum than the price of certain collectable automobiles available today. We need at least $50 million to start, and we need it now.
Third, and perhaps most importantly, the authors, although they are keenly aware of the enormous political and social hurdles facing planetary engineering, make no mention of funding for socioeconomic research or for tracking and influencing public opinion. Failing to understand and shape the public’s views of climate change, and of geoengineering solutions to the most immediate problems of such change, will undermine transparency and equitable decisionmaking. The danger, of course, is that without such parallel efforts in the social dimension, solutions designed to help us all will end up mostly helping those who need it least.
Because the current patent system places considerable power in the hands of individual inventors, private companies, and universities, rather than in the hands of states or international governing bodies, it could be detrimental to geoengineering R&D in a few ways. [Although government funding agencies retain many rights over a patent if the inventor is a government employee (for example, in a national laboratory), they have very few rights if they fund extramural research.] First, if inventors refuse to license their patents, they could stifle the innovation process. This will be particularly problematic in the case of broadly worded geoengineering patents, which the U.S. Patent and Trademark Office has already begun to issue. Second, if inventors disagree with policymakers about the definition of a “climate emergency,” they could make independent decisions about whether and when to experiment with and deploy geoengineering interventions. On the one hand, they may refuse to deploy the technologies, arguing that we have not reached crisis levels. On the other hand (and more likely), inventors might decide independently and without authorization from a state or other relevant governing body to deploy a technology because Earth is in an “emergency” situation. Inventors would probably deploy the technology in an area with lax regulatory oversight, but the move would have global repercussions.
Luckily, these aren’t unprecedented problems, and we have tools to solve them. One option would be for the U.S. government to take advantage of its power to force inventors to license their patents. However, because it almost never uses this power, it must develop a detailed policy that outlines the circumstances under which it would require compulsory licenses for geoengineering patents. Another option would be to develop a special system to deal with these patents, similar to the one devised for atomic energy and weapons in the mid-20th century. The 1946 Atomic Energy Act divided inventions into three categories: unpatentable (those that were solely useful in “special nuclear material” or “atomic energy in an atomic weapon”), patentable by government (technology developed through federal research), and regular patents (all other technologies). One could imagine a similar system in the case of geoengineering, with criteria based on the potential dangers of the invention and the ability to reverse course. Regardless, in order to ensure the benefits of a U.S. government-funded R&D program, we must seriously consider whether the current patent system is suitable. To do otherwise would be tragically shortsighted.
Carbon dioxide removal technologies (see especially “Pursuing Geoengineering for Atmospheric Restoration,” by Robert B. Jackson and James Salzman, Issues, Summer 2010), including restoring and expanding forest cover and eventually scrubbing carbon from the atmosphere, offer an additional approach to slowing climate change and ocean acidification. Even if these technologies were fully implemented, however, there would be virtually no effect on the rising global average temperature until global emissions are dramatically reduced. Current fossil fuel emissions are roughly equivalent to the net carbon uptake of the Northern Hemisphere biosphere as it greens from March to September, and are more than five times the emissions from current deforestation. Ending deforestation will be challenge enough; sufficiently enhancing ocean and/or land uptake of carbon dioxide and scrubbing out enough to stabilize atmospheric composition without sharply cutting emissions are a distant dream, if possible at all.
Reducing incoming solar radiation is a second climate engineering approach. Jackson and Salzman apparently consider such approaches too dangerous because of potential unintended side effects, whereas Ken Caldeira and David W. Keith argue that it would be irresponsible not to research and develop potential approaches to solar radiation management to have in reserve in case of a climate emergency. Although there is agreement that reducing incoming solar radiation is the only technique that could rapidly bring down global average temperatures, waiting until the emergency is evident might well be too late to reverse its major effects, especially losses of biodiversity and ice sheets.
An alternative approach would be to start slowly, offsetting first one year’s warming influence and then the next and next, seeking to stabilize at current or slightly cooler conditions rather than allowing significant warming that then has to be suddenly reversed. Although geoengineering might be viewed as dangerous on its own, the world’s options are climate change with or without geoengineering (which Jackson and Salzman mention but omit from their analysis). Starting geoengineering on a regional basis (for example, in the Arctic, by resteering storm tracks) seems likely to me to have a lower risk of surprises, nonlinearities, and severe disruption than letting climate change continue unabated, even with mitigation and adaptation. But perhaps not; that is what research is for, and it is desperately needed.
Health care innovation
In “Where Are the Health Care Entrepreneurs?” (Issues, Fall 2010), David M. Cutler makes a cogent case for health care innovation. He correctly cites the fee-for-service business model as a challenging one, in which care is fragmented, and incentives drive caregivers to maximize treatments. The fee-for-service model often fails to keep the patient at the center of care.
As Cutler points out, Kaiser Permanente uses an integrated, technology-enabled care model. The centerpiece of our innovation is Kaiser Permanente HealthConnect, the world’s largest private electronic health record (EHR). KP HealthConnect connects 8.6 million people to their care teams and health information and enables the transformation of care delivery. Because Kaiser Permanente serves as the insurer and health care provider, important member information is available at every point of care, resulting in top-quality care and service delivery.
Cutler also highlights the need for preventive care, especially for chronic conditions. Our population care tools connect medical teams with organized information, resulting in health outcomes such as the 88% reduction in the risk of cardiac-related mortality within 90 days of a heart attack for our largest patient population.
Kaiser Permanente shares information within the organization to get these results, but it is far from “walled in.” Rather, we have established a model that we are now extending beyond our walls, effectively doing as Cutler suggests: bringing providers together to enhance quality and lower costs. In a medical data exchange pilot program, we helped clinicians from the U.S. Department of Veterans Affairs and Kaiser Permanente obtain a more comprehensive view of patients’ health using EHR information. Over many years, we have developed our Convergent Medical Terminology (CMT): a lexicon of clinician- and patient-friendly terminology, linked to U.S. and international interoperability standards. We recently opened access to make it available for use by a wide range of health information technology developers and users to speed the implementation of EHR systems and to foster an environment of collaboration.
Kaiser Permanente promotes technology and process innovation as well. Our Sidney R. Garfield Health Care Innovation Center is the only setting of its kind that brings together technology, facility design, nurses, doctors, and patients to brainstorm and test tools and programs for patient-centered care in a mock hospital, clinic, office, or home environment. As a result, we’ve introduced the Digital Operating Room of the Future and medication error reduction programs, and continue to test and implement disruptive technologies such as telemedicine, time-saving robots, remote monitoring, and handheld computer tablets—all designed to improve health and drive efficiency.
We’ve institutionalized innovation in many ways, including the Innovation Fund for Technology, which supports physicians and employees bringing important innovations from concept to operation. Many of our members are benefiting today from innovations, such as short message service text appointment reminders, that improve their experiences and reduce waste in the system.
An integrated model, powered by the innovative application of information technology and centered on patient needs, is the key to improving health care. As more providers adopt EHRs and work toward interoperability, we will begin to see the results experienced at Kaiser Permanente multiply across the entire country.
Regulating nuclear power worldwide
“Strengthening Global Nuclear Governance,” by Justin Alger and Trevor Findlay (Issues, Fall 2010) raises a number of important issues that merit serious discussion. Although I have no argument with the issues raised or their importance, I found the article to be written from a rather conventional perspective. It reflects the current focus on building only large plants, the assumption being that such facilities are also appropriate for all other countries to meet their electricity demands. I am not convinced.
Increasing demand and the implications of climate change are not restricted to developed countries, and rapidly escalating costs and robust infrastructure requirements are not restricted to developing countries. The real questions are: What are the electricity requirements, and how can these best be met? The projected demand in any given region may well be significant, especially if described in percentage terms, but the manner in which such demand can be addressed may be very different depending on a host of factors, several of which are raised in the article.
Countries or regions that would be very hard-pressed to commit to building large nuclear power plants (for cost and/or infrastructure reasons) may be better served to look at small nuclear facilities. In fact, such facilities might make sense in many instances in countries that already have large facilities.
Although small nuclear facilities would be more affordable, more rapidly installed, and would not overload the distribution system, there are obviously a host of issues to be addressed. The technologies of small nuclear plants are not nearly as well developed as those of the Gen III plants, there is no operating experience yet to demonstrate their claimed inherent safety, and regulators have only recently started to devote serious assessment time to such designs.
It goes without saying that the safety, security, nonproliferation, infrastructure, and human resource and governance issues that the article raises with respect to the conventional systems would also need to be addressed with small nuclear facilities. The size and relative simplicity of these designs may ease finding acceptable technical solutions, which in turn may help with the governance issues.
Developing countries may have somewhat of an advantage in being able to select technologies that best suit their needs rather than only buying into the concept of bigger is better. Adopting small nuclear facilities would involve a distributed delivery system with a number of production nodes spaced where they are needed. However, building broad public understanding and acceptance of the idea of numerous small facilities being installed close to the user communities could be at least as great a challenge as developing the regulatory expertise or the necessary safety culture. If such a route is chosen, developing countries may be in the forefront, because substantive discussions in developed countries on this distributed concept have yet to be seriously initiated. Are we missing out?
Alger and Findlay go into great detail on the barriers confronting developing countries in complying with the global nuclear governance system (nuclear safety, nuclear security, and nuclear nonproliferation). Unfortunately, they have also overstated the problem. They begin by setting up two straw men as nuclear-aspiring developing countries: Nigeria and the United Arab Emirates (UAE). However, Nigeria is nowhere close to building a nuclear reactor, and classifying the UAE as a developing country is a real stretch. The UAE’s per capita gross domestic product is $58,000, higher than that of Canada or the United States. The United Nations’ Human Development Report also considers the UAE to have “very high human development,” ranking it 32nd in the world. Moreover, the vast majority of new builds will occur in existing nuclear markets (primarily India and China), not in new entrants from the developing world.
If the concern is the introduction of nuclear energy in developing countries, a better approach would be to compare the current situation with the historical record in Argentina, Brazil, India, China, and South Korea. All built nuclear reactors in the 1950s, 1960s, and 1970s, when they were developing countries. The global nuclear governance system was also in an embryonic state. There was no World Association of Nuclear Operators or Nuclear Suppliers Group. In addition, there were no international treaties on nuclear safety, nuclear waste, the physical security of nuclear materials, or nuclear terrorism. Although there were clear problems in issues of weapons proliferation, reactor performance, and other matters, the fact is that there were no serious accidents in these countries during those years. To better make their case, Alger and Findlay should have explained why global nuclear governance is more important today than in previous decades.
This short reply is not aimed at diminishing the role that new and more powerful international treaties and organizations have played in ensuring a more safe, secure, and peaceful nuclear sector. Nor am I suggesting that there is no need to strengthen the regime (I have made similar recommendations elsewhere). Rather, I am arguing that Alger and Findlay need to be less ahistorical in their analysis.
A smarter look at the smart grid
I commend Marc Levinson (“Is the Smart Grid Really a Smart Idea?”, Issues, Fall 2010) for putting front and center something that is often lost in technical and energy policy discussions: that the smart grid should be subject to a cost/benefit test. The “if it’s good, we’ve gotta do it” mindset pervades the smart grid discussion. Levinson provides a useful reminder that one needs to compare benefits to costs to see if the smart grid is worth it.
This brings up a second point I’d guess he’d agree with: If the smart grid is a good idea, why won’t the market take care of it? Being worthwhile doesn’t by itself justify policy intervention, because marketplace success is our usual test for whether benefits exceed costs.
However, four market failures make smart grid policy worth considering. First, the absence of a way to charge for electricity based on when it is used creates enormous generation capacity costs. Levinson understates the severity of this problem. In many jurisdictions, 15% of electricity capacity is used for fewer than 60 hours out of the year, which is less than 1% of the time. To cover the cost of meeting that demand, electricity in those critical peak hours can cost upward of 50 times the normal price. Much of those costs could be avoided with smart grid–enabled real-time pricing.
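As a rough check on the arithmetic behind “less than 1% of the time” (assuming a standard 8,760-hour year):

\[
\frac{60 \ \text{hours}}{8{,}760 \ \text{hours}} \approx 0.7\%,
\]

so roughly 15% of generation capacity sits essentially idle for more than 99% of the year.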
A second problem is the production of greenhouse gases. Many cleaner electricity sources, particularly wind and solar, are subject to weather vagaries such as wind speed or cloud cover. By allowing utilities to match electricity use to its availability, a smart grid may be an important tool for mitigating climate change.
Third, utilities surprisingly lack information on outages, depending on users for reports and then having to detect problems on site. Building communications and intelligence into the distribution network could help reduce the duration and severity of blackouts.
Fourth, many energy-sector observers believe that consumers fail to recognize that savings from reduced energy use will more than compensate for spending up front on high-efficiency equipment and facilities. A smart grid will allow utilities or entrepreneurs to combine electricity sales with energy efficiency, offering low-cost energy services that pass along to consumers savings they did not know they could make.
Finally, Levinson’s concern regarding the little guy may be overstated. As Ahmad Faruqui of the Brattle Group has pointed out, it’s the wealthy whose air conditioners and pools are effectively subsidized by having peak electricity costs spread over the public at large. Moreover, a smart grid can enable utilities to pay users to reduce demand in those critical peak hours; the little guy can share in the savings.
In addition, residential use constitutes only around a third of electricity demand. Improving commercial and industrial efficiency, along with avoiding peak capacity, reducing carbon emissions, and improving reliability, may justify the costs of the smart grid. But, as Levinson reminds us, that cannot be treated as a foregone conclusion.
Storing used nuclear fuel
“Nuclear Waste Disposal: Showdown at Yucca Mountain” by Luther J. Carter, Lake H. Barrett, and Kenneth C. Rogers (Issues, Fall 2010) incorrectly suggests that the political difficulties associated with a specific facility, Yucca Mountain, automatically mean that a broader used-fuel management program cannot succeed. Although the nuclear industry supports the Yucca Mountain licensing process, it must be recognized that the program is bigger than Yucca Mountain itself and that there are advantages associated with ongoing efforts such as the development of a centralized interim storage facility.
Clearly, U.S. policy on the back end of the nuclear fuel cycle is not ideal, and the United States needs a path forward for the long-term disposition of high-level radioactive waste from civilian and defense programs. But Yucca Mountain, long a political football and to some extent still one, is not the linchpin for growth in the nuclear energy sector. Rather, electricity market fundamentals will determine whether new nuclear plants are built, as evidenced by site preparation activity under way in Georgia and South Carolina.
Several states have moratoria on nuclear plant construction because the government does not yet have a repository for used nuclear fuel, but a number of those states are reconsidering their bans. Alaska overturned its moratorium earlier this year. Industry’s safe and secure management of commercial reactor fuel is playing a role in this reconsideration by state legislatures.
Even as the administration’s Blue Ribbon Commission on America’s Nuclear Future examines a range of policy options, one certainty is that we will be securely storing used nuclear fuel for an extended period of time.
The nuclear energy industry supports a three-pronged, integrated used-fuel management strategy: 1) managed long-term storage of used fuel at centralized, volunteer locations; 2) research, development, and demonstration of advanced technology to recycle nuclear fuel; and 3) development of a permanent disposal facility.
Long-term storage is a proven strategic element that allows time to redesign the nuclear fuel cycle in a way that makes sense for decades to come. Meanwhile, the Nuclear Regulatory Commission’s approval of the final rulemaking on waste confidence represents an explicit acknowledgment by the industry’s regulator of the ongoing, safe, secure, and environmentally sound management of used fuel at plant sites and/or central facilities. Although used fuel is completely safe and secure at plant sites, indefinite onsite storage is unacceptable.
The Blue Ribbon Commission must take the next step and recommend forward-looking policy priorities for used-fuel management. To date, the commission has demonstrated an awareness of the importance and magnitude of its task. The challenge is to recommend a used-fuel management policy that can stand the test of time and enable the nation to take full advantage of the largest source of low-carbon electricity.
Improving early education
In “Transforming Education in the Primary Years” (Issues, Fall 2010), Lisa Guernsey and Sara Mead argue that we must invest in building a high-quality education system that starts at age three and extends through the third grade. They envision this as integral to a “new social contract” that “sets forth the kind of institutional arrangements that prompt society to share risks and responsibilities of our common civic and economic life and provide opportunity and security for our citizens.”
They discuss the current fragmentation of the early childhood education sector, its uneven, mediocre quality, and the systemic underperformance of primary education in the United States, and they argue that fixing and extending the primary years of children’s education is a critical first step. I believe that this argument takes an important first step for granted. We need to define what quality early education programs look like, how to build capacity to provide it at scale, what outcomes we should expect—or demand—in exchange for greater investment, and what measurement will ensure that we receive value for investment.
Having worked in this field for 10 years, I see scant evidence of consensus about meaningful early childhood quality indicators among policymakers, providers, schools of education that prepare early educators, or agencies that administer and oversee this fragmented sector.
Many efforts to improve the early childhood education sector have been incremental and transactional rather than transformational. Transactional leadership values stability and exchanges, whereas transformational leadership emphasizes values and volatility.
We know that evidence-based early interventions can build critical language, vocabulary, pre-literacy, and numeracy skills, and that when those skills are left unattended, reading and math difficulties typically follow in the primary grades. We also know that important social/emotional skills, such as attending to instruction, following teacher directions, learning to persist, and solving problems with words, can be taught to 3- and 4-year-old children and that having these skills dramatically improves their potential for success in the early years.
Yet, in advocacy for greater access to preschool, many still argue a false dichotomy: that early education is about either academics or developing children’s social/emotional skills, when research is clear that it must be both. In other situations, the primary purpose of early childhood centers is to provide care for children while their parents work and to provide employment for adults who have much in common with the families whose children are enrolled.
If we truly want to have a higher-performing educational continuum, we need to encourage more disruptive technologies in both early education and primary education to drive greater quality, while innovating to ensure that a higher-performing but diverse delivery system delivers quality outcomes throughout children’s educational experiences.
If, in the context of a “new social contract,” we could agree on an audacious, meaningful, and measurable goal, such as bringing all children to the normative range in language, vocabulary, pre-literacy, and numeracy skills before they enter kindergarten, we could make a stronger case for greater investment in early education and then truly begin to transform education in the primary years.
If early education is to deliver on its promise, this major change must include a reassessment of the skills involved in, and the value of, jobs teaching young children before they reach the kindergarten door. Every day across the country, approximately 2 million adults care for and educate nearly 12 million children between birth and age five. When educators are well-prepared, supported, and rewarded, they can form strong relationships that positively affect children’s success. Unfortunately, too many lack access to education, are poorly compensated, and work in environments that discourage effective teaching. Disturbingly, far too many practitioners live in persistent poverty and suffer untreated depression and other health problems that undermine their interactions with children. Job turnover in early education ranks among the highest of all occupations and is a driving force behind the mediocre quality of most early learning environments. Recent college graduates shy away from careers with young children, or seek positions in K-3 classrooms, knowing that the younger the student, the lower the status and pay of the teacher.
Long-held, deep-seated attitudes about the unskilled nature of early childhood teaching itself are evident in proposals to promote in-service training to the exclusion of higher education for early childhood teachers. Such positions reflect a culture of low expectations for these teachers, resulting in part from decades of predominantly least-cost policy approaches to retaining or expanding early education. Human capital strategies, whether based on more education and/or training for teachers, dominate investments in quality improvement, eclipsing other equally significant factors, such as better compensation and supportive work environments that facilitate appropriate teaching and recognize the contribution of adult well-being to children’s development and learning.
Those concerned about the environment reframed our thinking about industry, creating new jobs to protect and replenish our natural resources. Today, green jobs constitute one of the few growing employment sectors in the economy. A new vision for the early learning workforce is equally important to our country’s future. Revalued early childhood jobs could attract young people who are educationally successful and excited to invest in the next generation, confident that they will earn a professional salary; such jobs could transform the lives of the current early childhood workforce, predominantly low-income women of color, eager to improve their practice and livelihood, pursue their education, and advance on a rewarding career ladder. Such a change requires return-on-investment analyses that demonstrate the costs to children, their parents, teachers, and our nation if we continue to treat jobs with young children as unskilled and compensate them accordingly, even as we proclaim preschool education’s potential to address long-standing social inequities and secure global economic competitiveness.
Schools, particularly those serving our most vulnerable children, have resorted to overusing prescriptive curricula and isolated assessments to solve their problems. This is a slippery slope with far-reaching consequences. Prescriptive, scripted curricula force a pace that is unresponsive to the actual learning and understanding of the students. Teachers know they are not meeting the needs of their students but regularly report that they fear administrator reprisal if they are not on a certain page at a certain time. Prescriptive curricula were developed to support teachers who were new or struggling, not to replace the professional in the classroom.
Despite what we know about the importance of vocabulary development, oral language expression, and opportunities for children to represent their learning orally, in writing, and graphically and pictorially, prescriptive curricula predominantly value closed-ended questions and “right” answers. These curricula do not take into account the children’s culture, language, or prior experiences, all of which we know are important for engaging children and ensuring that they acquire information and knowledge. Instead of viewing the opportunity to ask children about what they have heard, learned, and understood as the strongest and most powerful form of assessment, teachers spend too much of their time pulling children aside for the assessment of isolated skills. This isolated skills assessment leads directly back to isolated skills instruction. It is a cycle that leads us far astray from the “developmentally appropriate formative assessments and benchmarks to monitor children’s progress … to inform instruction and identify gaps in children’s knowledge” espoused in the article.
In order to realize a new social contract for the primary years, let’s start by taking an active stance in favor of making the child, rather than the curricula, central to our concern and focus.
This is not a new idea. Guernsey and Mead do not include in their review the results of multiple attempts to capitalize on preschool’s success. The first of these, the National Follow Through Project, was initiated in the 1970s with mixed results. Over the next 30 years, four more attempts were made to show that the effects of preschool could be enhanced by focusing on kindergarten and the primary grades. Across all of them, results were mixed, limited, or showed no impact on children’s school success.
To ensure that this P-3 initiative is more successful than those of the past, we should carefully consider why past initiatives were unsuccessful. We also will need to set out some specific program guidelines and criteria for effective early education. These guidelines are needed to make our vision operational so that we can say: When this system is implemented in an effective way, this is what we will see in practice.
Guernsey and Mead begin to lay out some of these specifics, but as a field we are a long way from having established what the P-3 system should look like and, once it is implemented as intended, what we expect to see as the actual effects on student learning. We must conduct evaluations to assess whether our best-evidence practices actually result in increased student success. With a review of our past successes and (mostly) failures with P-3 and with a more detailed definition of what it should look like and accomplish, the P-3 movement will be much more likely to fulfill its promise.
Rethink technology assessment
Richard E. Sclove is one of the most creative voices promoting participatory technology assessment and is responsible for bringing the Danish consensus conference model to the United States. In “Reinventing Technology Assessment” (Issues, Fall 2010), Sclove articulates the many virtues of citizen engagement with technology assessment and calls for broad support of the Expert & Citizen Assessment of Science & Technology network (ECAST).
Although Sclove’s contribution to conceptualizing participatory technology assessment cannot be overstated, there are two general areas in his article that I believe demand careful consideration before we move headlong in the direction he proposes. First, we need to give serious thought to ways in which we might promote different thinking about technology among our leaders and lay citizens. Second, more attention to the challenges raised by previous efforts at participatory technology assessment is required before we institutionalize relatively unaltered versions of these initiatives.
Our culture is clouded by a pervasive scientism. This is the idea that when we consider developments in science and technology, matters of value (what is good or bad) and of social effects (who will be advantaged/who disadvantaged) can be separated from technical considerations. Instead, in real life, the social and the technical are inextricably linked. Until technical experts, policymakers, and citizens understand this, we will engage in futile efforts to create responsible science and technology policy. Advocates of participatory technology assessment should promote education programs that challenge reductive ideas about the science/technology–values/society relationship and seminars for policymakers that would lead them to reflect on their assumptions about the sharp distinction between knowledge and values.
With regard to the practical organization of participatory initiatives, we must establish rigorous methods for evaluating what works. Existing mechanisms of recruitment for consensus conferences, for example, are premised on a “blank slate” approach to participation. Organizers seek participants who have no deep commitments around the issues at stake. In the United States, where civic participation is anemic, this makes recruitment challenging. However, research I have done with collaborators shows that non–blank slate citizens are capable of being thoughtful and fair-minded. We should not exclude them.
To take another example, citing the case of the National Citizens Technology Forum (NCTF), Sclove advocates the virtues of using the Internet to expand the number and geographical location of participants in consensus conferences. However, although the Internet may provide a useful tool in the future, as my collaborators and I have shown, NCTF (which we were involved in organizing) participants often experienced the online portion of the forum as incoherent and chaotic. It was not truly deliberative. To draw on the promise of the Internet for participatory technology assessment will require carefully selected software and attention to integrating in-person and online deliberation.
I support the objectives behind ECAST. I do hope, however, that part and parcel of the initiative will be a national effort to change thinking about the nature of science and technology, and that ECAST will engage in rigorous evaluation of the different participatory assessment approaches it uses.
The future of biofuels
“The Dismal State of Biofuels Policy” (Issues, Fall 2010, by C. Ford Runge and Robbin S. Johnson) is essentially history, because it is all about corn ethanol, which has now peaked near the 15-billion-gallon mandate established by Congress. Corn ethanol will not grow further, so the real issue is not past policy but future policy with respect to other biofuel feedstocks. The hope for the future is biofuels produced from cellulosic feedstocks such as corn stover, switchgrass, miscanthus, and woody crops. These feedstocks can be converted directly into biofuels via a thermochemical process leading to green diesel, biogasoline, or even jet fuel. Congress has mandated 16 billion gallons of cellulosic biofuels by 2022. The problem is the huge uncertainty facing potential investors in these facilities. The major areas of uncertainty are:
- Market conditions—what will be the future price of oil?
- Feedstock availability and cost
- Conversion technology and cost
- Environmental effects of a large-scale industry, and
- Government policy
The cellulosic biofuel technologies become market-competitive at around $120 per barrel of crude oil. We are far from that today, so there is no market except the one created by government policy. Feedstock cost is another big issue. Early U.S. Department of Energy (DOE) estimates put the feedstock cost at around $30/ton. Today’s estimates are closer to $90/ton, triple the early DOE figures. There are no commercial cellulosic biofuel plants, so there is huge uncertainty about how well they will work and what the conversion cost will be. Although most assessments of environmental effects show environmental benefits of cellulosic biofuels, there remain unanswered questions regarding the environmental effects of an industry as large as that mandated by Congress. Finally, government policy is highly uncertain. There is a cellulosic biofuel subsidy on the books today, but it expires in 2012, before any significant amount of cellulosic biofuel will be produced. The Renewable Fuel Standard (RFS) established by Congress has off-ramps, meaning it does not provide the iron-clad guarantee of a market needed by investors in today’s environment.
Senator Richard Lugar has proposed a reverse auction process to support the industry. Potential investors would bid the price at which they would be willing to provide cellulosic biofuels over time, and the lowest-price bidders would receive contracts. Such a policy would reduce market and government policy uncertainty, leaving companies to deal with feedstock and conversion technology issues. It is this kind of forward-looking policy that we need to consider at this point.
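To make the mechanism concrete, here is a minimal, purely illustrative sketch of how a reverse auction might allocate awards. The producer names, prices, and volumes are hypothetical, and the allocation rule shown is only a reading of the general idea, not the mechanics of Senator Lugar’s actual proposal.

# Illustrative only: award contracts to the lowest-price bidders until a
# target volume of cellulosic biofuel is covered. All figures are hypothetical.

def award_reverse_auction(bids, target_gallons):
    """Each bid is a (bidder, price_per_gallon, gallons_offered) tuple."""
    awards = []
    remaining = target_gallons
    for bidder, price, gallons in sorted(bids, key=lambda bid: bid[1]):
        if remaining <= 0:
            break
        awarded = min(gallons, remaining)  # partial award for the marginal bidder
        awards.append((bidder, price, awarded))
        remaining -= awarded
    return awards

# Hypothetical bids: (bidder, $/gallon, gallons offered per year)
bids = [
    ("Producer A", 3.10, 40_000_000),
    ("Producer B", 2.75, 25_000_000),
    ("Producer C", 3.40, 60_000_000),
]
for bidder, price, gallons in award_reverse_auction(bids, target_gallons=80_000_000):
    print(f"{bidder}: {gallons:,} gallons at ${price:.2f}/gallon")

A real program would of course add contract duration, eligibility and verification rules, and payment terms; the point of the sketch is simply that competitive bidding, rather than a fixed subsidy, would set the level of support.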
The three policy changes proposed by Runge and Johnson are really old news. Changes in the RFS are not being considered. There are already proposals on the table, supported by the corn ethanol industry and many in Congress and the administration, to phase out the corn ethanol subsidy. There is also a proposal to end the import tariff on ethanol. Where we need to focus our attention is on cellulosic biofuels, which do not pose the challenges described by Runge and Johnson for corn ethanol. That is not to say they would be cheap, but if Congress wants increased energy security and reduced greenhouse gas emissions, cellulosic biofuels are where the focus should be, and, indeed, that is where it is today.