Forum – Spring 2017

The infrastructure challenge

In “Infrastructure and Democracy” (Issues, Winter 2017), Christopher Jones and David Reinecke remind us that infrastructures have historically been inaccessible to many people in the United States, particularly those living in poor and rural communities. By tracing the development of US railroad, electrical, and Internet networks, the authors show that many infrastructures are not democratic by design, but made accessible through citizen activism and organizing. In the nineteenth century, for example, Americans demanded railroad regulation. In the twentieth century, communities self-organized to extend electricity to unserved areas. Today, as problems associated with aging infrastructure (crumbling bridges and dams) mount, the federal government is poised to pursue infrastructure spending dependent on private investment. If projects are motivated by revenue rather than the public good, it seems likely that historical problems of equity and accountability could be repeated in terms of what is built (toll roads, not water pipes) and who is served (affluent urban areas, not poor and rural communities).

By analyzing the infrastructure problems of the past, the authors provide an illuminating and much-needed perspective on the present. But as I read, I began to wonder if the ambiguity of the first word in the article’s title—infrastructure—might be antithetical to more public access and accountability. When the word infrastructure was adopted in English from French in the early twentieth century, it was a specialized engineering term referring to the work required before railroad tracks could be laid: building roadbeds, bridges, embankments, and tunnels. After World War II, it was reimagined as a generic bureaucratic term, referring to projects of spatial integration, particularly supranational military coordination (NATO’s 1949 Common Infrastructure Programme) and international economic development. It was not until the 1970s, if general English language dictionaries are indicative, that the broad current usage of the word—physical and organizational structures that undergird a society or enterprise—became stabilized. Paradoxically, that same decade saw the decline of the ethos of state-led infrastructure management and universal service provision in favor of privatization.

Infrastructure now refers to all kinds of projects built for purposes that include transportation, communication, security, health, finance, and environmental management. It wasn’t always so all-encompassing. In fact, neither nineteenth-century railroads nor early-twentieth-century electrical networks were called infrastructure during those eras. That said, my concern is neither historical anachronism nor vague terminology, per se, but the fact that the word infrastructure has displaced some alternatives, such as social overhead capital and public works, that emphasize broad access and the public good rather than generating revenue. Jones and Reinecke rightly emphasize that communities must mobilize, make demands, and hold providers accountable in order to democratize infrastructures. Society might also re-democratize its terminology. After all, why should a private oil pipeline and a municipal water system be labeled with the same French engineering term? Maybe we should even disaggregate the single word infrastructure, replacing it with a more heterogeneous and specific group of terms that foreground what is to be built and who is to be served. I am particularly fond of public works.

Ashley Carse

Assistant Professor, Department of Human and Organizational Development
Vanderbilt University

Advancing clean energy

The Winter 2017 edition of Issues presents an array of articles addressing the expected clean energy transition. In “Inside the Energiewende: Policy and Complexity in the German Utility Industry,” Christine Sturm provides a thorough critique of that nation’s energy policy, showing the problems it has caused for utilities and the financial costs it has imposed on consumers. While I have no doubt that those problems are real, she does not give enough credit to the German government for its efforts to address them.

As Sturm points out, Germany has numerous policies pushing for a transition to a low-carbon energy system, but the one that gets the most attention and that has most contributed to rapid deployment of wind and solar facilities is the feed-in tariff, which requires utilities to connect any and all renewable electricity generators to their grid and to pay those systems a premium price for the electricity they generate, depending on which technology they use. Those premiums, though they have gone down since 2000, are lavish by US standards and have pushed up the costs of electricity to ordinary consumers quite dramatically.

However, Sturm fails to mention that the German government has, for just this reason, moved away from feed-in tariffs, revising the Renewable Energy Sources Act to replace feed-in tariffs with an auction system that it has been developing over the past couple of years. Renewable energy facilities that currently get the feed-in tariff payments will continue to receive them for a fixed period of years, but new systems will function under the auction system, which was intentionally designed to slow down the rapid renewables deployment rate. Some German environmental groups have greeted this change with scathing criticism, furious at the loss of the feed-in tariffs for future renewable energy deployment. Sturm’s article does not even mention the change, which will not solve all of the problems that German utilities face, but at least has begun to respond to some of their concerns. It’s enough to make me feel some sympathy for government officials.

Sturm closes with the comment that poets and thinkers should not tinker with large-scale technological systems—an attitude that misses a key point. If a system is not sustainable or has growing externalities, then it needs to change. Moreover, all such systems do change over time, and historically those changes have always been undirected, messy, chaotic, and sometimes violent affairs. Large firms in the energy system, left to their own devices, have shown little interest in making any but the most incremental changes to what they do, which is understandable given the immense capital investments they have in the existing system. The energy transition away from fossil fuels will not be quick, easy, or cheap, and everyone involved will make mistakes along the way. But I have much respect for government policies that try to push the system in a beneficial direction and move it along faster than incumbent actors might like.

Varun Sivaram’s article on energy technology lock-in and Kartikeya Singh’s article on solar energy in India give us more examples of how complex and difficult the clean energy transition will be and yet how progress can happen. All three articles make two important general points.

The first point is the simple reality of path dependency. The effects of new policies or of technological or business innovations depend greatly on the contexts in which they operate. Their success or failure will depend on the specific circumstances of the country and even locality in which they operate, from existing technologies to supporting infrastructure, from existing business structures to attitudes toward paying for energy. Rigid universal models will likely fail in this complicated world.

The second point is that government efforts to change large-scale technological systems will always produce unintended consequences. It is impossible to predict everything that will happen when a policy hits the complexity of the real world. The German energy system has experienced three large policy or contextual changes in rapid succession: German reunification, which was followed by sweeping European Union regulations that forced all EU countries to liberalize their electricity systems, followed by the advent of the modern feed-in tariff in the 2000 renewable energy law. It would be remarkable if these changes had not produced unexpected problems.

The hallmark of good policy, as the political scientist Edward Woodhouse has argued for years, is not that it gets things right the first time, but instead that policymakers learn and adapt as complex systems react to policy changes in unexpected ways. All of the articles in this section strengthen that point and show us the kinds of analysis that can make policy better.

Frank N. Laird

Josef Korbel School of International Studies
University of Denver

In “Unlocking Clean Energy,” Varun Sivaram makes a number of important observations about innovation, deployment, and the scale-up process. His analysis highlights the value of several well-described ideas about “technological lock-in” that can appear in any field when efforts to introduce new technologies also—intentionally or not—provide a tremendous boost to the emerging “darling” technologies of any one era, thus inhibiting the next wave of innovations. Clear examples of this exist in military designs, fossil-fuel power plants, automobiles, and, as Sivaram notes with thoughtful examples, the clean energy space.

The question is not “is this dynamic real?”—it certainly is—but what to do about it given that we need to rapidly scale the national and global clean energy industry to not just provide a growing share of new demand, but to rapidly eat into the greenhouse-gas-emitting generation base. And “rapidly” means just that: even in nations that have begun the transition, the transformation must proceed at more than 5% per year, a huge feat that must be maintained through the mid-century “climate witching hour.”

A number of strategies can assist in this joint task of preventing lock-in and accelerating the change; I will briefly highlight two that expand on Sivaram’s argument.

First, there is no better medicine than investing in, but also nurturing, “use-inspired” basic research and development (R&D). In work now almost two decades old, we found that underinvestment in energy research was not only chronic, but that, paradoxically, waves of publications and patents (two different, and admittedly imperfect, measures of innovation) often preceded new rounds of funding. This finding argues for an array of approaches including, among others, building unconventional collaborations (for example, solar and storage innovators working with computer science and behavioral social science researchers); granting not only prizes, but also market opportunities to novel technologies (sadly, often seen as the much-derided “picking winners”); and finding ways to build a more diverse and inclusive research community.

Second, the focus should be on where clean energy needs to be not in five or 10 years, but in 2050. The process of lock-in is due not just to the market advantage earned by the early entrant, but also to the way near-term goals obscure the ultimate objective. With a goal of 80% or more reduction in emissions by 2050, short-term transitions (for example, coal to gas without a strategy to then move rapidly off of gas, or conventional vehicles to hybrid vehicles instead of electric and hydrogen ones) can block subsequent ones. One valuable emerging tool is to develop and integrate energy (and water, and manufacturing decarbonization) planning models into R&D planning models.

In my laboratory, we have developed one such model of current and future power systems—SWITCH. The model can explore the cost and feasibility of generation, transmission, and storage options for the future electricity system. It identifies cost-effective investment decisions for meeting electricity demand, taking into account the existing grid as well as projections of future technological developments, renewable energy potential, fuel costs, and public policy. SWITCH uses time-synchronized load and renewable generation data to evaluate future capacity investments while ensuring that load is met and policy goals are reached at minimum cost. The model has been invaluable in working with researchers and governments around the world to understand decarbonization pathways where the mid-term objectives (for example, for 2020 and 2030) enable instead of hinder the long-term goals.
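To make the structure of such models concrete, here is a minimal, purely illustrative Python sketch of a least-cost capacity-planning optimization: it chooses how much of each technology to build and how to dispatch it so that demand is met at minimum cost under an emissions cap. The numbers, technologies, and three time slices are placeholders of my own, not SWITCH or its data; the real model adds transmission, storage, reserves, and thousands of time points.

import numpy as np
from scipy.optimize import linprog

# Three representative time slices, three candidate technologies. All numbers
# below are illustrative placeholders, not SWITCH inputs.
techs = ["solar", "wind", "gas"]
hours = 3
load = np.array([30.0, 50.0, 40.0])        # GW of demand in each slice
avail = np.array([[0.8, 0.0, 0.4],         # solar availability by slice
                  [0.3, 0.6, 0.5],         # wind availability by slice
                  [1.0, 1.0, 1.0]])        # gas (fully dispatchable)
cap_cost = np.array([60.0, 80.0, 40.0])    # annualized capital cost, $/kW-yr
var_cost = np.array([0.0, 0.0, 30.0])      # fuel and O&M, $/MWh
emis_rate = np.array([0.0, 0.0, 0.4])      # tCO2 per MWh generated
weight = 8760.0 / hours                    # hours of the year each slice represents
emis_cap = 2.0e7                           # allowed tCO2 per year (a stand-in policy goal)

n_tech = len(techs)
n = n_tech + n_tech * hours                # capacities (GW) + dispatch (GW) per tech/slice

def disp(i, t):
    """Index of the dispatch variable for technology i in time slice t."""
    return n_tech + i * hours + t

# Objective: annualized capacity cost plus weighted variable cost of dispatch.
c = np.zeros(n)
c[:n_tech] = cap_cost * 1e6                            # $/kW-yr -> $/GW-yr
for i in range(n_tech):
    for t in range(hours):
        c[disp(i, t)] = var_cost[i] * 1e3 * weight     # $ per GW dispatched in this slice

# Demand balance: total dispatch equals load in every slice.
A_eq = np.zeros((hours, n))
for t in range(hours):
    for i in range(n_tech):
        A_eq[t, disp(i, t)] = 1.0
b_eq = load.copy()

# Dispatch limited by availability times built capacity; annual emissions capped.
A_ub, b_ub = [], []
for i in range(n_tech):
    for t in range(hours):
        row = np.zeros(n)
        row[disp(i, t)] = 1.0
        row[i] = -avail[i, t]
        A_ub.append(row)
        b_ub.append(0.0)
emis_row = np.zeros(n)
for i in range(n_tech):
    for t in range(hours):
        emis_row[disp(i, t)] = emis_rate[i] * 1e3 * weight   # tCO2 per GW dispatched
A_ub.append(emis_row)
b_ub.append(emis_cap)

res = linprog(c, A_ub=np.array(A_ub), b_ub=np.array(b_ub),
              A_eq=A_eq, b_eq=b_eq, bounds=[(0, None)] * n)
assert res.success, res.message
for i, name in enumerate(techs):
    print(f"{name:5s} capacity to build: {res.x[i]:6.1f} GW")
print(f"total annualized system cost: ${res.fun / 1e9:.1f} billion per year")

Even at this toy scale, tightening the assumed emissions cap shifts the least-cost portfolio away from gas and toward wind and solar capacity, which is the kind of trade-off the full model traces across regions and decades.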

Models do not provide answers, but they clarify how investments in research and deployment can unintentionally prioritize near-term objectives over the true goal. Acting on those findings is the art of building incentives for innovation without inducing hesitation.

Daniel M. Kammen

Professor in the Energy and Resources Group, the Goldman School of Public Policy, and the Department of Nuclear Engineering
University of California
Science Envoy, US State Department

At this year’s World Economic Forum in Davos, many of us who work on low-carbon technologies came away brimming with excitement and optimism. Business and political leaders spoke of economic opportunity and job growth as benefits of combating climate change in solidarity with engineers and researchers, who have long advocated for clean energy. This Davos experience stands in contrast to some of the observations that Varun Sivaram makes in “Unlocking Clean Energy.”

While I fundamentally agree with Sivaram that investment in ideation and innovation of new technologies is critical for long-term decarbonization, leading technologies and the policy frameworks and investments that encourage these nascent incumbents need not pose a detriment to the next wave of innovation. In fact, I would argue that all of these critical pieces are needed to move the needle against climate change for two important reasons.

First, a leading technology signals progress and technological advancement to business and political leaders. The growing market for these technologies, such as solar photovoltaics, demonstrates to investors that this is a vibrant sector worthy of attention.

Second, having a winner enables us to focus on development and deployment of existing technologies, whose mass adoption is desperately needed now to fight global warming. Silicon photovoltaic modules have fallen from $4 per watt in 2008 to $0.65 per watt today. This substantial cost reduction—and the wide-scale deployment that follows—is in large part due to concentrated development. In the absence of a market winner, this would not have happened.
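For a sense of pace, here is a short calculation of the implied average annual price decline, assuming that "today" refers to 2017, the date of this issue (an assumption of mine, not a figure from the letter):

# Implied average annual rate of decline in module prices, using the two price
# points quoted above and assuming "today" means 2017, the date of this issue.
start_price, end_price = 4.00, 0.65   # $/W in 2008 and ~2017
years = 2017 - 2008
annual_decline = 1 - (end_price / start_price) ** (1 / years)
print(f"~{annual_decline:.0%} average price decline per year")   # roughly 18%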

Moreover, in free societies where innovation happens routinely with hard work, persistence, and oftentimes luck, successful innovators look beyond technology development. They effectively articulate new markets for their breakthroughs; that is, they creatively define how these emerging technologies are either solutions to as-yet-unaddressed problems and unmet needs, or are disruptive and superior to incumbent technologies. Case in point is solar again. With silicon photovoltaic modules now commoditized for rooftop solar applications, start-up companies may be better off focusing on installation technologies to reduce balance-of-systems cost, the development of storage technologies to overcome the intermittency of solar, or the creation of building-integrated transparent solar cells to increase energy efficiency and occupant comfort. Head-on competition for rooftop installs is still possible, but understandably harder given the ongoing societal and business benefits that silicon photovoltaics already provide.

My institution, the Andlinger Center for Energy and the Environment, is among others that are seeding a portfolio of future technologies to combat climate change and are collaborating with industry to bring them to market. Policies that intentionally or unintentionally discourage the adoption of newer and superior technologies must go by the wayside as countries race to fulfill the ambitious goals of the Paris Agreement on climate change. With the two-degree warming threshold looming, the question foremost on our minds should be how fast we can curb and neutralize emissions. Innovation in longer-term decarbonization technologies has to be part of the equation, but so do measures that enable immediate and large-scale deployment of incumbent technologies.

Yueh-Lin (Lynn) Loo

Director, Andlinger Center for Energy and the Environment
Theodora D. ’78 and William H. Walton III ’74 Professor in Engineering
Professor of Chemical and Biological Engineering
Princeton University

Varun Sivaram makes an important point about the risks of standardizing low-carbon energy systems on suboptimal platforms. As he argues, it is vital to link together innovation and deployment policy, so as to continually improve relatively immature technologies.

That said, there is some risk in his approach of making the best the enemy of the good. High-carbon energy systems are even more deeply locked-in than silicon solar photovoltaics or first-generation biofuels. Fossil fuels still power our civilization and, in doing so, sustain national governments, some of the biggest multinational companies, and many, many jobs. Any transformation of the energy system will need to reconfigure these institutions and interests so that enough of them gain enough from low-carbon energy innovation to support it rather than resist it.

That means that our thinking about breaking lock-in should extend beyond the relatively narrow technical approaches that Sivaram describes. The energy transition will involve building and sustaining new political constituencies and cultural norms as well as public and private R&D programs. Sometimes, this process may require compromises and “strange bedfellow” coalitions. As Sivaram points out, the technologies that dominate our current energy system, such as internal combustion engines, attained that status due to the political savvy and muscle of their champions as well as to their technically attractive features.

So while energy innovation policymakers should do all that they can to create protected niches for promising new technologies and bridges across technological generations to avoid lock-in, they may also need to live with lock-in if, at some point, less-than-perfect low-carbon energy technologies that are good enough from a climate protection point of view are able to capture hearts and minds as well as wallets.

David M. Hart

Professor, George Mason University
Senior Fellow, Information Technology and Innovation Foundation
Washington, DC

It is well known that the journey from laboratory research to full-scale deployment of any new energy technology can take 10 to 20 years. Advancing from one stage of the innovation chain to the next is often a near-death experience. At the fundamental research stage, graduate students move on to find jobs, and professors can lose grant funding or turn their interests in another direction. At the proof-of-concept stage, lack of know-how to build the integrated systems stymies even the passionate inventor. Getting resources for the scale-up needed to make the case to prospective investors requires more funding than is often available through traditional funding channels. Even technologies with lots of promise often can’t get the financial backing to compete with incumbent market participants. And shifting policies at state and national levels can change the playing field more quickly than the time needed to adjust to new business realities.

In fact, given all of the obstacles that good new ideas must overcome to swim upstream, it is all the more remarkable that we have as many innovations as we do. We should applaud those that have succeeded. But as Varun Sivaram points out, in order to provide low-cost, reliable, and environmentally sustainable energy to everyone, we need more and faster innovation.

Historically, the energy innovation ecosystem has been highly fragmented: by stage in the innovation chain, by preference for one type of energy or another, by incumbent versus new market entrants, by institutional constraints or monopolies. The reasons for such fragmentation are too numerous to name.

To meet the global energy challenge, we need to fix the energy innovation ecosystem. Achieving this will require a dynamic network of relationships spanning science, technology, finance, markets, and the realm of policymaking. And efforts must include academics, entrepreneurs, innovators, the venture capital community, start-ups, large energy companies, policymakers, and most importantly the wellspring of talented young people entering the workforce every year. A thriving energy innovation ecosystem would hone and vet the best ideas, draw new technologies out of the universities, get faculty and students working on the real problems that industry faces, assemble the capital needed at all stages of the innovation cycle, and help create a policy framework that generates market pull for new technologies.

How do we cultivate this thriving energy innovation ecosystem? Most importantly, we need to get the energy industry back to the innovation table. Energy technology requires financial resources and scale-up know-how that exist only in the industry. Venture capital plays a critical role, too, in de-risking emerging technologies, but it needs certainty that it will be rewarded for doing so. Universities and national laboratories are needed to spawn the next generation of science and technology innovation and to educate a workforce with the requisite knowledge and skills.

We propose a new approach to cultivating this thriving energy innovation ecosystem that will bring all of the right players to the table and align incentives for success. We can create topically focused consortia to bring to fruition promising new energy innovations. The consortia would support the portfolio of pre-commercial R&D activities needed to get these new technologies to market. The consortia can be cost-shared equally among industry, the government, and the venture community, and held accountable for results. Policymakers and financial institutions need a seat at the table, too, to anticipate and pave the way for new market entrants. Those private-sector participants that take advantage of opportunities emerging from these consortia will be positioned to thrive in the rapidly evolving energy landscape.

This is not a new idea. Models such as this have worked before, in such areas as protecting national security and nurturing the growth of the semiconductor industry. Much progress has been made. But we need more energy innovation, faster, and with more certainty of success. This is an idea whose time has come.

Sally M. Benson

Arun Majumdar

Co-Directors, Precourt Institute for Energy
Stanford University

Kartikeya Singh’s article, “Of Sun Gods and Solar Energy,” presents a compelling narrative of solar energy in India. The author correctly identifies the need for more effective and coherent policy midwives to assist the birthing of solar across the country. We take the article’s purpose to be highlighting the challenges and opportunities surrounding solar energy, and in the same spirit we identify a few other key aspects that were either missed or deserve more attention.

First, the cost of capital for consumers and entrepreneurs continues to be too high as banks remain reluctant to finance small-scale projects that do not promise high returns. Training banks in solar technology and service, bundling small projects together to reduce liabilities, and having the government or other highly regarded institutions provide guarantees can potentially ease the restraints on lending and open up new avenues for financing.

Second, as Singh demonstrated in the article, there remains a disproportionate focus on lighting solutions as opposed to understanding how a whole host of other services can be integrated with solar, including heating, cooking, refrigeration, entertainment, and even mobility. The ultimate potential of solar is in enhancing the real incomes of energy-poor consumers, and that requires much greater customization. Concurrently, several case studies have highlighted the various socioeconomic benefits of off-grid renewable energy, but very few have explored the impacts of rising incomes and the corresponding increase in energy demand and how they may be catered to.

Third, the negative impact of centralized grid energy is grossly underestimated. A key factor that is expected to shape India’s economic growth story in the coming two decades is the perceived latent potential of its rural consumers, who account for more than 60% of the population. The current government policy on mini and micro grids in India tentatively aims to achieve an installed capacity of 500 megawatts by 2022 from renewable energy (largely solar), which in the current scheme of reaching 100 gigawatts of solar capacity is a pittance and suggests a highly disproportionate focus on utility-scale and urban rooftop solar projects. Singh rightly identifies the need for policy coherence in supporting the diffusion of renewables, but poor coordination across various government departments introduces further lag. Coordination among the departments of agriculture, renewable energy, and labor, along with the relevant state-level departments, in aligning policy is both useful and necessary for achieving rapid energy access.

Fourth, Singh identifies the importance of having strong local networks that can address the perennial distrust that rural populations often harbor, a result of policies and programs that over the years have over-promised and under-delivered. Indeed, such policies continue to view rural consumers as passive recipients of welfare as opposed to active consumers in a marketplace. Social entrepreneurs across India are gradually capturing this market, but unless policy creates more incentives, the off-grid solar market will forever remain a niche.

In sum, it appears that the sun gods will continue to shine long and bright in India, but while some bask in their light, many will remain in the shade.

Benjamin Sovacool

Chaitanya Kotikalapudi
University of Sussex, United Kingdom

Electric vehicle prospects

In “Electric Vehicles: Climate Saviors, or Not?” (Issues, Winter 2017), Jack Barkenbus presents a misleading assessment of the greenhouse gas impacts of electric vehicles (EVs). The article is mired in the present, but energy transitions are about the future. What’s important about EVs is their role in a future low-carbon energy system. But even the treatment of current EV emissions is flawed.

First, large-scale energy transitions take several decades. Second, any meaningful effort to mitigate greenhouse gas emissions must substantially decarbonize electricity generation. It takes decades for new vehicle systems to overcome the market’s aversion to risk, reduce costs via scale economies and learning by doing, create diversity of choice across vehicle types and manufacturers, build a ubiquitous recharging infrastructure, and replace the existing stock of vehicles. Technological advances are also needed and, so far, are ahead of schedule, according to assessments by the US Environmental Protection Agency.

The National Research Council 2013 report Transitions to Alternative Vehicles and Fuels concluded that a very aggressive combination of policies might achieve an 80% reduction in greenhouse gas emissions over 2005 levels for light-duty vehicles by 2050. In the most intensive scenario for battery electric vehicles (BEVs), plug-in electric vehicles achieved a market share of 10% by 2030 and 40% by 2050, by which time other policies could reduce grid emissions by 80% also. The most successful scenarios also included hydrogen fuel cell vehicles, but that’s another story.

Regrettably, even the article’s evaluation of current EV impacts is flawed. Although the carbon intensity of electricity delivered to a BEV is relevant, so-called “well-to-wheels” emissions per mile is a superior metric, because it includes everything from primary energy production to the vehicle’s energy efficiency. Argonne National Laboratory’s GREET model compares “like to like” vehicles across a wide range of fuels and propulsion systems. Its well-to-wheels numbers rate a 2015 BEV using US grid average electricity at 201 grams of carbon dioxide per mile (gCO2/mile), less than half the 409 gCO2/mile of a comparable gasoline vehicle. BEVs powered by California’s grid average a 70% reduction compared with a conventional gasoline vehicle and a 59% reduction relative to a hybrid vehicle.

But most EV owners can purchase “green power” from their local utility. (Green power programs are audited to ensure that the renewably generated electricity truly displaces nonrenewable generation.) An EV using renewable electricity emits 1 gCO2/mile, a 99.8% reduction.
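As a quick check of the arithmetic in the preceding paragraphs, the short Python snippet below reproduces the quoted percentage reductions; the California-grid BEV and hybrid values it prints are implied by the stated percentages rather than quoted directly from GREET.

# Quick check of the well-to-wheels comparisons above. The 201, 409, 1, 70%,
# and 59% figures are quoted in the letter; the California-grid BEV and hybrid
# values printed below are implied by those percentages, not taken from GREET.
gasoline = 409.0                      # gCO2/mile, comparable gasoline vehicle
bev_us_grid = 201.0                   # gCO2/mile, BEV on the US average grid
bev_green_power = 1.0                 # gCO2/mile, BEV on audited renewable power

print(f"US-grid BEV cut vs gasoline:      {1 - bev_us_grid / gasoline:.0%}")   # ~51%
bev_ca_grid = gasoline * (1 - 0.70)   # implied ~123 gCO2/mile
hybrid = bev_ca_grid / (1 - 0.59)     # implied ~300 gCO2/mile for the hybrid
print(f"implied California-grid BEV:      {bev_ca_grid:.0f} gCO2/mile")
print(f"implied comparable hybrid:        {hybrid:.0f} gCO2/mile")
print(f"green-power BEV cut vs gasoline:  {1 - bev_green_power / gasoline:.1%}")  # ~99.8%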

With BEVs making up about 0.1% of vehicles in use today, current emissions by EVs are not entirely irrelevant. There are regional variations, so it makes sense to aim the strongest policies at regions with the cleanest grids and moderate temperatures, such as California, where roughly half of EVs are sold.

Today’s electric vehicles are cleaner than gasoline vehicles almost everywhere in the United States and the European Union. With green power, they can be 99.8% clean. And before EVs can become a large fraction of the vehicle stock, there is ample time to substantially decarbonize electricity generation. Every electric vehicle sold today is another small step toward a sustainable global energy system.

David L. Greene

Senior Fellow, Howard H. Baker Jr. Center for Public Policy
Research Professor, Department of Civil and Environmental Engineering
University of Tennessee, Knoxville

Jack Barkenbus’s article on electric vehicles (EVs) illustrates the importance of moving away from coal-fired generation of electricity. The electrification of transportation and the generation of electricity with wind and solar energy are both important. In a book that my colleagues and I coedited, Solar Powered Charging Infrastructure for Electric Vehicles: A Sustainable Development, we explore the concept of covering parking lots with solar panels to provide shaded parking and an infrastructure for charging EVs. Shaded parking has economic value on hot summer days because high temperatures can shorten battery life. A parked car also stays cooler when it is shaded, and this has social value.

The effort to improve urban air quality is one of the most important drivers of the transition to EVs and renewable electricity in numerous cities in California, as well as in London, Beijing, New Delhi, and many other cities in the United States and around the world. There are many urban communities where improving the quality of the air by reducing combustion processes is a very high priority. The emphasis is on the electrification of transportation and the addition of new wind and solar generating capacity to replace coal-fired power plants. Adding solar-powered charging stations in parking lots enables EVs to be charged while their drivers are at work or at an event. If 200 million parking spaces were covered with solar panels in the United States, approximately 25% of the electricity generated nationwide, based on 2014 levels, could be generated with solar energy.
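A rough back-of-envelope estimate suggests that the 25% figure is plausible. Every physical assumption in the short Python sketch below (canopy area per parking space, module power density, capacity factor, and roughly 4,100 terawatt-hours of US generation in 2014) is a round number of my own, not a value taken from the book or the letter.

# Back-of-envelope check of the parking-lot solar claim. Every assumption here
# (canopy area per space, module power density, capacity factor, and ~4,100 TWh
# of US generation in 2014) is my own round number, not a figure from the book.
spaces = 200e6
area_per_space_m2 = 15.0         # assumed usable canopy area per space
panel_density_kw_m2 = 0.18       # assumed module power density
capacity_factor = 0.19           # assumed US-average fixed-tilt capacity factor
us_generation_2014_twh = 4100.0  # approximate total US net generation in 2014

capacity_gw = spaces * area_per_space_m2 * panel_density_kw_m2 / 1e6
output_twh = capacity_gw * 8760 * capacity_factor / 1e3
share = output_twh / us_generation_2014_twh
print(f"~{capacity_gw:,.0f} GW installed, ~{output_twh:,.0f} TWh/yr, ~{share:.0%} of 2014 US generation")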

The American Lung Association’s report State of the Air 2016 documents that progress is being made in Los Angeles and many other California cities where there is a significant effort to electrify transportation. The Los Angeles area achieved its lowest level ever for year-round particle pollution, based on data from 2012, 2013, and 2014. For ozone, Los Angeles had its lowest number of unhealthy days ever. But problems remain. According to the report, 12 of the 25 most polluted cities failed to meet national air quality standards for annual particle pollution. Roughly 166 million people in the United States live in counties that experience unhealthful levels of particle or ozone pollution, or both.

Many sources describe even greater levels of particle pollution in Beijing and New Delhi. China is moving forward with the electrification of transportation, but the effort is new and air quality is still very poor in Beijing, Tianjin, and several other large cities. From December 30, 2016, through early January 2017, Beijing experienced a stretch of extremely bad air pollution, according to a report in the January 23, 2017, issue of Chemical and Engineering News. Transitioning to EVs and solar-generated electricity in large cities would improve urban air quality in these locations and reduce greenhouse-gas emissions.

State of the Air 2016 points out that climate change has increased the challenges to protecting public health because of wildfires and drought that impact air quality. Deaths from asthma, chronic obstructive pulmonary disease, and cardiovascular disease occur when air quality is poor. According to the World Health Organization, there are about 6.5 million deaths each year because of air pollution.

We have the science and technology to reduce greenhouse-gas emissions and improve urban air quality by transitioning to EVs, solar-powered charging infrastructure in parking lots, and renewable energy to generate electricity. This transition has already started and there is significant progress in some parts of the world, such as in Norway. Electric buses are being purchased and put into service in many large cities. When the benefits of better air quality and reduced greenhouse-gas emissions are considered, decision makers should take action to move forward with programs and policies that enhance the rate of this great transition. Individuals can do their part by leasing or purchasing an electric vehicle or adding solar panels to their home, or both.

Larry E. Erickson

Professor of Chemical Engineering
Director, Center for Hazardous Substance Research
Kansas State University
Manhattan, Kansas

Putting technology to work

In “A Technology-Based Growth Policy” (Issues, Winter 2017), Gregory Tassey calls for the science and technology policy community to make an effort not only to understand the central role of technology in a global economy but also to help translate its understanding into policy prescriptions needed to leverage productivity growth. This call comes at an auspicious time. The US economy continues to struggle to attain a structural—and hence long-lasting—recovery from the Great Recession. And such a call is not new to the policy arena. So one might reasonably ask: Why does an emphasis on a growth policy to leverage productivity growth seem to fall on deaf ears?

Perhaps there are numerous explanations, but let me concentrate on only one. Technology is indeed the core driver of long-run productivity growth, as Tassey artfully points out. But if today’s technology-focused additional investment dollar will have an impact only in the long run, then that dollar might garner greater political capital if allocated toward more visible short-run projects. In addition to political expediency, a more fundamental problem might be the difficulty explaining to congressional constituents the merits of investments in long-run growth policies relative to investments in short-run stabilization efforts.

The logic behind the importance of US research and development (R&D) intensity returning to levels that rival those of some European and Asian countries is subtler than simply keeping up with our global competitors (Tassey’s Figure 3). To draw from W. Brian Arthur’s The Nature of Technology, the importance of continued investments in R&D rests on the fact that new technologies are combinations of previous ones. A nation must continually grow its technical knowledge base because, as students of the technology revolutions that Tassey notes will attest, breakthrough technologies do not fall like manna from heaven. Rather, they have at their origin the accumulation of the knowledge base of prior technologies.

To whom is the nation to turn to enrich this knowledge base? Many policymakers have long known at least one answer to this question: small entrepreneurial firms. Policymakers need only remember President Jimmy Carter’s 1977 Presidential Domestic Policy Review System and his directive to Congress in 1979 in which he singled out the important role of small technology-based firms in our economy and thus in economic growth. There are volumes of academic research to support the growth-related role that small entrepreneurial firms play.

So, perhaps one possible step toward the type of R&D-based growth policy that Tassey is calling for should be increased efforts to stimulate R&D in small entrepreneurial firms that in turn will add to the evolution of a knowledge base on which subsequent technologies can be built. The tendency of venture capital in recent years to support mainly software, biotech, and services start-ups, for example, is now inhibiting entrepreneurs in other “hard” technology fields, such as energy, from scaling up for production. This could be a fruitful area for future policy discussions.

Albert N. Link

Virginia Batte Phillips Distinguished Professor
University of North Carolina at Greensboro

Gregory Tassey makes a convincing argument that R&D investment in technology-based productivity growth is critical for long-term economic competitiveness. He points out, correctly, that the science and technology community often struggles to effectively make the case for the variety and scale of R&D investment required for the challenges of the global tech-based economy. In his article, Tassey makes an invaluable contribution to addressing this problem by identifying and characterizing key categories of technology-related economic assets prone to market failures. The article should, however, be taken as a “call to action” for the science and technology community. More work needs to be done to translate understanding of particular technology innovation processes and systems, economic spillovers, and risks beyond just “general anecdotes and descriptions” into a practical and holistic economic growth strategy—one with targeted policies and a coherent evidence base, which can address specific market failures.

One reason so many economists fail to appreciate the central role of technology in economic growth is that the sources of productivity improvement and market failure identified by Tassey occur within economic “black boxes.” An advanced theory of technology-based productivity growth, which can underpin more effective evidence gathering and policy development, will require opening up these boxes. Economists and policymakers will need to work with scientists, technologists, systems engineers, and operations management researchers, among others, to integrate more detailed understanding of the complex systems nature of technology-based products, advanced manufacturing systems, and global value chain networks.

Tassey’s arguments are not only important; they are becoming increasingly urgent. As competing economies build comparative advantage, acquiring new capabilities to innovate high-tech products and develop ever more advanced manufacturing systems, countries such as the United States can no longer rely on the strength of their science and engineering research base to drive competitive productivity growth. The old twentieth-century model whereby technological innovation is driven by a small number of countries (those with elite research universities and major R&D-intensive corporations that dominate supply chains) is rapidly disappearing. The pace of technological innovation and increasing competition means advanced economies no longer have comfortably long “windows of opportunity” to translate new knowledge from research into manufacturing, or for supply chains and skills portfolios to reconfigure around high-value economic opportunities associated with emerging technology-based products.

In this new era, a high-quality research base driving innovation, together with monetary and fiscal policies stimulating demand, may just not be sufficient to compete in the global tech-based economy. The capability to rapidly translate novel emerging technology R&D into manufacturing, and the ability to coordinate the complex manufacturing systems into which these technologies diffuse, may become the critical factors for enabling national economic value capture. Tassey’s argument for a technology-based growth policy—focused on coordinated investment in technology platforms, innovation infrastructure, institutions, and human capital—is convincing, timely, and urgent.

Eoin D. O’Sullivan

Babbage Fellow of Technology & Innovation Policy
Director, Centre for Science, Technology & Innovation Policy, Department of Engineering
University of Cambridge
Cambridge, England

Watch what you write

In “Journalism under Attack” (Issues, Winter 2017), Keith Kloor describes how issue advocates unwilling to concede basic facts worked to delegitimize him for reporting the truth and correcting the record. The parallels he draws to the current political discourse and attacks on the media ring all too true.

I could write a similar article about the importance of speaking truth to power—as well as the eternal tendency to shoot the messenger—from a scientist’s perspective. Perhaps in this, scientists and serious journalists such as Kloor have much in common.

As a scientist who has often presented scientific information in a policy setting, most often on issues related to marine resources, I find that evidence is sometimes not only inconvenient but unwelcome. In some cases, that results in scientists becoming a target in a way not unlike those that Kloor describes concerning his reporting on the efficacy of vaccines or the impacts of genetically modified crops.

On contentious issues, there is a well-worn tactic of ascribing deep, dark motives to scientists (and journalists), suggesting bias and manipulation of the facts. Those making the accusations, of course, assume an unwarranted veil of objectivity and independence.

A case in point is one of the many battles over climate change concerning an alleged slowdown in the rate of global warming since 1998. Clear scientific evidence based on several separate studies and using multiple datasets (for example, from the National Oceanic and Atmospheric Administration; the University of California, Berkeley; and the United Kingdom’s national weather service, called the Met Office) demonstrates that no such slowdown occurred and that warming has continued apace. But global warming conspiracy theorists, including Lamar Smith (R-TX), chairman of the Science, Space, and Technology Committee in the US House of Representatives, regularly “discover” new plots by scientists, believing that if they find that one smoking gun, then climate science will fall like a house of cards.

Consider the source. Chairman Smith and his cohorts are closely aligned with the major industries responsible, according to the evidence, for much of that warming. So the idea that he, or they, are objective while scientists measuring climate are somehow conflicted seems at best odd and, less charitably, absurd. But Chairman Smith continues to use his powerful position to push his “alternative facts” in public discourse.

As attacks become stronger and more unreasonable, retreating into a protected space is appealing. But the response cannot be to shy away from issues. Facts still matter, especially when they are vociferously denied. Both scientists and journalists should continue to investigate emerging issues and present evidence and the interpretation of that evidence, and speak up loudly for peers who are subject to unfair treatment. That their results are challenged or even denigrated makes the job even more important to a broader public. How else can the “court of public opinion” even function?

“For all our outward differences, we, in fact, all share the same proud title, the most important office in a democracy: citizen,” President Obama said in his farewell address. In other words, it is we who hold the real power in our country. So let scientists and journalists continue to speak truth to power.

Andrew A. Rosenberg

Director, Center for Science and Democracy
Union of Concerned Scientists
Washington, DC

The philosopher’s view

In “Philosopher’s Corner: The End of Puzzle Solving” (Issues, Winter 2017), Robert Frodeman issues a challenge to scientists: switch from puzzle solving to problem solving. Whereas puzzles are defined by a disciplinary matrix, problems are presented to us by the world outside of academe. By insisting that scientists not only solve their own puzzles, but also address the world’s problems, Frodeman asserts that “the autonomy of science has been chipped away, and its status as a uniquely objective view on the world is widely questioned.” If what we experienced after the end of the Cold War was a gradual erosion of the place of science in society, the recent elections in the United States and the rise of populism across Europe throw these changes into sharp relief. As Kevin Finneran suggests in the same issue (“Editor’s Journal: Take a Deep Breath”), now is a time for self-reflection.

Although Frodeman lists several topics for reflection (gender policies, CRISPR, and the nature of impact, among others), all of which present interesting ethical, legal, and societal issues, I think we have a larger problem that deserves our full attention: the current reward system in academe is designed to encourage puzzle solving rather than problem solving. Engagement with policy issues in science and technology is treated as an add-on to the “real” work of scholarly publishing, or even as an unnecessary distraction. (Teaching, of course, is treated as a necessary evil.) Unless we restructure the academic reward system to encourage, rather than to punish, problem solving, scientists (and, yes, philosophers) will continue polishing the brass on the Titanic. It is less that we need “a new skill,” as Frodeman suggests, and more that we need a new goal. The end (telos) of puzzle solving needs to be replaced; and if we are now to pursue a different goal, we need to restructure the academic reward system to reflect—and to encourage—the change.

Britt Holbrook

Assistant Professor, Department of Humanities
New Jersey Institute of Technology

Technocracy Chinese style

The topic that Liu Yongmou brought up in “The Benefits of Technocracy in China” (Issues, Fall 2016) concerns many Chinese intellectuals. He found, in principle, certain similarities between technocracy and China’s current political system, and his argument that China’s political system is a “limited technocracy” is enlightening. Nevertheless, it is not quite appropriate to use technocracy, a Western concept, to describe the role of technocrats in decision making under China’s actual conditions.

Currently, although China doesn’t have a Western electoral system like that of capitalist countries, the Communist Party of China (CPC) has already formed a strict and effective mechanism for selecting and appointing officials to rule the country, the most significant feature of which is that it inherits the Confucian ideal of “exalting the virtuous and the capable.” Henri de Saint-Simon, whom Marx and Engels criticized as a “utopian socialist,” and Thorstein Veblen held socialist views closely related to technocracy, yet quite different from the CPC’s ruling system. On June 28, 2013, President Xi Jinping set five criteria for good officials in the new era: “faithful, serving the people, hard-working and pragmatic, responsible, and incorruptible.” Among them, the first is having faith in communism and Marxism and sticking to the Communist Party’s fundamental theories. The CPC and the government select high-level talent based more on political acumen and comprehensive skills, not favoring just those with scientific or engineering backgrounds. President Xi himself had an engineering background, but in order to build his comprehensive skills, he turned to the humanities as a postgraduate.

Historically, the rise of technocrats with engineering backgrounds reflected the CPC’s respect for knowledge and for intellectuals. It was indispensable to China’s industrialization, and it resulted from a shortage of humanities education in China’s history. During the Mao era (1949-76), the humanities in Chinese education were greatly affected by ideology. Back then, in desperate need of developing heavy industries, China followed the Soviet Union’s lead in specialized education and enrolled mostly science and engineering majors. During the 10 years of the Cultural Revolution, senior intellectuals from the Republic of China era were suppressed as “reactionary academic authorities.” As a result, the whole education system was paralyzed, triggering a severe shortage of talent. The new technocrats with good political backgrounds accumulated political capital, knowledge capital, and cultural capital. In the post-Mao era after the Cultural Revolution, Deng Xiaoping said that science and technology constitute the primary productive force, highlighting the need to build a knowledgeable and young leadership team. “Red engineers” who had a good political background, had received higher education, and had spent many years at the grassroots joined the Communist Party and were soon promoted. Names on this list include Jiang Zemin, Hu Jintao, and Wen Jiabao. Since the reform and opening up that began in 1978, the humanities have received renewed attention from the Chinese government and have been restored. The enrollment of students majoring in the humanities skyrocketed. Many of the graduates entered politics. The incumbent premier, Li Keqiang, was an outstanding law student at Peking University from 1978 to 1982. Compared with technocrats with engineering technical knowledge, Li represents a new generation of leadership, or as the American political scientist Robert D. Putnam put it in 1977, “those with economic technical knowledge.”

Now, after more than 30 years of reform of the national science and technology system, which began in 1985, the expert consulting system has become a prevalent practice in all kinds of government departments in China. A typical example is the National Office for Education Sciences Planning, made up of experts from different departments of the State Council. Also worth mentioning is the fact that the role of technocrats in social, political, and economic decision making is still influenced, to some extent, by ideology. Technocrats can play their fullest role only when their opinions are perfectly aligned with those of the CPC. Otherwise, their power will be weakened.

To conclude, although China’s current political decision-making system and Western technocracy share some similarities in terms of valuing experts and valuing knowledge, the two are fundamentally different. In recent years, the Chinese government has made innovation a basic strategy, attaching increasing importance to innovative talent in science and technology. With gradual social progress and the development of civil society, we believe, technocrats will play a big role in every aspect of Chinese society.

Zhihui Zhang

Associate Professor, Institute for History of Natural Sciences
Chinese Academy of Sciences
Beijing
