Forum – Spring 2011

Technology innovation: setting the right policies

In “Fighting Innovation Mercantilism” (Issues, Winter 2011), Stephen Ezell has identified a truly vexing problem: the proclivity of important countries (notably China) to stimulate domestic innovation by using a wide variety of subsidies, such as public grants, preferential government procurement, a sharply undervalued currency, and other techniques. Elements of “innovation mercantilism” are not particularly novel, but the current scale of these practices poses a distinct threat to U.S. leadership on the innovation frontier.

To be sure, from the early days of the Republic, the U.S. government has deployed an array of public policies to promote innovation; not only patents and copyrights, but bounties and land grants to promote canals and railroads, easements to build out electricity, telegraph and telephone networks, military outlays to lay the foundations for nuclear power, civilian aircraft, the Internet, and much more.

Using Ezell’s terminology, it’s overly simplistic to say that U.S. innovation supports have historically been “good”—benefiting both the United States and the world—while Chinese supports are “ugly”—benefiting China at the expense of other nations. However, two features distinguish contemporary Chinese policies.

First, Chinese subsidies are combined with less-than-energetic enforcement of intellectual property rights (IPRs) owned by foreign companies. In fact, China often requires foreign companies to form joint ventures with Chinese firms, and in other ways part with their technology jewels, as the price of admission to the Chinese market. Second, during the past five years, China’s sharply undervalued renminbi has enabled the nation to run huge trade surpluses, averaging more than $200 billion annually, and build a hoard of foreign exchange reserves approaching $3 trillion. A decade ago, the trade surpluses corresponded to exports of toys and textiles; increasingly, Chinese trade surpluses are now in areas such as sophisticated manufactures, electronics, and “green” machines (like wind turbines).

The burst of Chinese innovation mercantilism coincides, unhappily, with languishing U.S. support. Federal R&D outlays have declined from 1.3% of U.S. gross domestic product (GDP) in 2000 to 0.9% in 2007. Equally important, adverse features of the U.S. corporate tax system prompt U.S.-based multinationals not only to locate production abroad but also to consider moving their R&D centers offshore.

What should be done? I agree with many of the specifics in Ezell’s policy recommendations, but let me highlight three broad themes:

  • Instead of carping at U.S.-based multinationals over taxes and outsourcing, President Obama and Congress should listen to what business leaders prescribe for keeping innovation humming in the United States.
  • Any U.S. company that assembles the specifics on unfair subsidy or IPR practices by a foreign government should be warmly assisted by the U.S. Trade Representative in bringing an appropriate case, especially when high-tech products are at stake.
  • The United States should no longer tolerate trade deficits that exceed 2% of GDP year after year. Balanced trade, on a multilateral basis, should become a serious policy goal.


Reginald Jones Senior Fellow

Peterson Institute for International Economics

Washington, DC

Protection is not the long-term route to growth and competitiveness, as Stephen Ezell argues. Although trade protection has helped to incubate local steel industries, for instance, most protected or publicly owned steel industries have lagged behind global best practices and often led to high local steel prices. In the automotive industry, India combined trade barriers to protect its infant automotive sector with a ban on FDI to create local industries but could not close the cost and performance gap with global companies. India’s decision to remove both trade and investment barriers meant that productivity more than tripled in the 1990s, and some local players emerged as innovative global competitors. Protecting local producers usually comes at a cost to consumers. The high prices and limited growth of the Indian and Brazilian consumer electronics sectors can be attributed largely to the unintended consequences of policies such as Brazil’s information act that protected the nascent local computer industry, and India’s high, yet poorly enforced, national and state-level tariffs.

Ezell rightly argues, too, that overemphasizing exports is mistaken. Providing incentives for local export promotion can be very expensive. For instance, Brazilian state governments competing to host new automotive plants offered subsidies of more than $100,000 for each assembly job created, leading to overcapacity and very precarious financial conditions for Brazilian local governments. And in any case, manufacturing is not the sole answer to the global challenge of job creation.

Research by the McKinsey Global Institute (MGI, McKinsey & Company’s business and economics research arm) finds that promoting the competitiveness and growth of service sectors is likely to be much more effective for creating jobs. Productivity improvements are a key factor in all sectors, but most job growth has come from services. In high-income economies, service sectors accounted for all net job growth between 1995 and 2005. Even in middle-income countries, where industry contributes almost half of overall GDP growth, 85% of net new jobs came from service sectors.

Another message that emerges from MGI’s research is that, as your article suggests, an emphasis on local production in innovative sectors is not nearly as important as the impact of innovation on productivity in the broader economy. Innovative emerging sectors are too small to make a difference to economy-wide growth. In the case of semiconductors, the sector employs 0.5% or less of the total workforce even in mature developed economies and makes a limited direct contribution to GDP. But the sector’s innovation has contributed hugely to the information technology adoption that has improved business processes and boosted productivity in many other sectors—and in that way has made a difference for economy-wide growth. These benefits often don’t require local suppliers. In fact, policy efforts to protect local-sector growth can stall that growth if they increase costs and reduce the adoption and use of new technologies. For instance, low-tech green jobs in local services, such as improving building insulation and replacing obsolete heating and cooling equipment, have greater potential to generate jobs than does the development of renewable technology solutions.



McKinsey & Company

San Francisco, California

Stephen Ezell’s article captures an unhappy reality of our present world economy: that some governments are pursuing technology innovation policies that are deliberately designed to favor their domestic firms. Ezell highlights China as the contemporary archetype of purveyors of what he calls “ugly” technology innovation mercantilism—“ugly” in that the behavior hurts competing U.S. and international firms and workers. He rightly calls for U.S. government economic diplomats and trade negotiators to take aggressive multilateral, regional, and bilateral actions.

I argue that although Ezell is right to label these technology innovation mercantilist policies ugly, they pointedly fit his “bad” and even “self-destructive” categories, too, because they contradict both our long-term interests and theirs. The United States has built the world’s best technology innovation system by investing in public-good basic research in our national laboratories and universities. But the strength of U.S. technology innovation is not money alone. European scholars, searching for explanations for why the United States has emerged as the technology innovation center of the world, say that Americans integrate public research institution science with private enterprise technology market developers better than is done in Europe or anywhere else. U.S. contributions of new medicines, medical devices, clean energy, and information technologies are due to technology laws that encourage public research laboratories and universities to license patented technologies to private enterprises, whether an established large business or a small entrepreneurial venture, whether American or foreign. Many big European, Japanese, and Korean firms conduct their most innovative work at their U.S. R&D centers. Fueled by risk-tolerant capital markets, U.S. and international firms operating in the United States share patented technologies and collaborative know-how to get new products into the marketplace, first in the United States and then in other markets.

Nobody else has such an effective technology innovation system. Americans should not be shy about recommending our technology innovation system as a model for other countries. Studies consistently find that the most innovative companies keep their best technologies out of China and everywhere else where their intellectual property is not respected. Technology competitors and consumers suffer when the locally available technology is second-rate. Brazil’s Embraer became the world’s dominant midsized aircraft maker after its government opened the borders to international information technologies. Brazilian intellectual property and technology law reforms and public science and technology (S&T) investments are resulting in technology innovation unprecedented in Brazil. India’s people will get access to the newest innovative medicines when Indian policymakers and judges implement policies that encourage global innovators to sell their patented medicines in the country and that make the local biomedical S&T community another hub of global innovation. The vast Indian generic marketplace will not be diminished; rather, Indian generic makers will benefit from the global know-how entering their country. We should all participate, not just our trade negotiators, in dialogues with the S&T leaders in countries around the world, especially in developing countries, where policy choices are being made about S&T institution/market relationships that will encourage new dynamism to everybody’s benefit.



Creative and Innovative Economy Center

George Washington University Law School

Washington, DC

Climate Plan B

If one set out to assemble some of the worst possible policy responses to the threat of climate change, and to implement them with maximum opacity to the general public, one could not do much better than William B. Bonvillian’s “Plan B,” as elucidated in “Time for Climate Plan B” (Issues, Winter 2011).

Bonvillian’s plan is fundamentally undemocratic: The public, through its elected representatives, has repeatedly rejected greenhouse gas (GHG) emission controls, and polls show that the public is unwilling to pay for GHG reductions. Bonvillian’s plan is also fundamentally dishonest, hiding a GHG reduction agenda behind an energy policy façade. Americans want energy policy that offers affordable and abundant energy; Bonvillian’s plan would use government muscle to force consumers to buy more expensive energy, appliances, automobiles, and more.

Aside from being undemocratic, Bonvillian’s plan is a dog’s breakfast of failed economic thinking. His call for increased R&D spending flies in the face of what is well known to scholars: Government-funded R&D only displaces private R&D spending. As Terence Kealey puts it in The Economic Laws of Scientific Research, “… civil R&D is not only not additive, and not only displacive, it is actually disproportionately displacive of private funding of civil R&D.” It’s also unnecessary: Contrary to Bonvillian, there’s plenty of private R&D going on. According to the Energy Information Administration, the top 27 energy companies had revenues of $1.8 trillion in 2008. At Bonvillian’s estimate of energy sector R&D spending of 1% per annum, that’s $18 billion. Thus, Bonvillian’s support for President Obama’s desired $15 billion in annual government R&D spending would simply displace what’s already being spent.

The rest of Bonvillian’s plan rests on the “fatal conceit” that government planners can centrally plan energy markets. Thus, he wants more government subsidies and loan guarantees to pick winning and losing technologies. He wants more regulations that burden the private sector and retard economic growth. He wants more appliance standards that reduce consumer choice and increase the cost of appliances and automobiles. He wants more government mission creep, focusing the Department of Defense on energy conservation rather than actually defending the country. These are old, economically illogical, historically failed public policy approaches. This is not so much a Plan B, but a rerun of the big-government nonsense of the pre-Clinton era.

Rather than pouring market-distorting subsidies, tax credits, regulations, “performance standards,” and other such economically nonsensical things into an already bad economy with tragically high levels of unemployment, what we need to do is to take the “resilience option.” We should address threats of climate variability—manmade or natural—by increasing the resilience of our society, while revving up our economy through the use of free markets. We can do this best by eliminating subsidies to climatic risk-taking, streamlining environmental regulations, removing subsidies to all forms of energy, removing housing and zoning restrictions that make relocation harder, and making maximum use of free markets to deliver goods and services that are fully priced, incorporating the price of climatic risk. That is a true Plan B.


Resident Scholar

American Enterprise Institute

Washington, DC

Reducing access barriers

In “Reducing Barriers to Online Access for People with Disabilities” (Issues, Winter 2011), Jonathan Lazar and Paul Jaeger do an excellent job of raising a warning and calling for action. If anything, they understate the case, and the implications of their arguments should extend beyond regulation and procurement to research, standards, and policies shaping America’s digital future.

Lazar and Jaeger note that roughly 20% of the U.S. population has at least one disability. By age 45, most people face changes in their vision, hearing, or dexterity that affect their use of technology. Everyone will experience disability in their lifetime. An even larger proportion of the population has, at any given time, a limitation that is not typically tracked as a disability but that nevertheless affects their ability to leverage technology to achieve their full potential and live rich lives (for example, illness, injury, poverty, or mild impairment). We are also seeing a growing incidence of cognitive disorders that can affect and be affected by the use of technology. Further, everyone at some point experiences contextual disability (such as noisy environments, cognitive load from distractions, and glare from bright sunlight). A 2003 Forrester Research study suggests that 60% of adult computer users could benefit from accessibility features. Although the focus of Lazar and Jaeger is appropriately on those formally identified as having disabilities, the goal should be a world in which everyone is achieving their potential irrespective of individual differences.

Lazar and Jaeger note that although the Internet has clearly opened opportunities for people with disabilities, many Web sites are inherently problematic, depending on a given person’s set of disabilities and goals. This is an issue today, but it will become more of an issue tomorrow. It is clear that the digital future that is emerging will require even greater dependence on technology in order to fully engage with the world. This future can be the fulfillment of a dream, or it can be a nightmare.

To increase access to the wealth of information, communications, and services that are emerging, Lazar and Jaeger call for a more aggressive stance within federal and state governments. We can aim higher. We have the ability to create a digital world that adapts to each individual’s personal characteristics. Cloud computing, the processing power and intelligence that are evolving behind it, and the increasing ubiquity of wireless networks mean that most individuals will rarely if ever need to be isolated. The variety of devices available to the individual is increasing, more and more information about the world and how we can interact with it is available, and the palette of technologies that extend the range of natural user interactions and experiences is increasing ever more rapidly. Everyone should be able to appropriate the set of technologies that makes sense to accomplish their goals and extend their potential.

Government, academia, and industry should be working together, not just reacting to ensure that the digital world is accessible, but collaborating to create the infrastructure for a fully accessible digital future and to drive the innovation that embracing full diversity can unleash.


Director, User Experience

Microsoft Corporation

Redmond, Washington

Jonathan Lazar and Paul Jaeger effectively articulate the importance of accessible technology. I’d like to emphasize that the field of accessible technology is broad-reaching and a rich source of innovation.

The market for accessible technology extends far beyond people with severe disabilities. Naturally, people’s abilities vary widely. One person may experience a persistent disability, such as permanent vision loss. Another may experience vision strain at the end of a long working day. The value of making technology accessible is that it can be used by a broad set of people, in a way that meets their unique requirements. And that technology can adapt as a person’s abilities change, whether from changing health, aging, or merely being in an environment or situation that reduces vision, hearing, mobility, or speech or increases cognitive load. The market for accessible technology therefore expands to people with mild impairments, occasional difficulties, the aging population, and the mainstream population in various situations.

The technology industry should realize that a powerful outcome of making technology accessible is that it drives innovation in the computing field as a whole. The resulting innovations are core building blocks for new, exciting computing experiences. Take, for example, screen-reading software, which reads aloud information on the screen with a computer-generated voice. A person who is blind relies on the screen reader to interact with a computer, listen to documents, and browse the Web. Other groups of people also benefit from screen readers, such as people learning another language and people with dyslexia, because listening to information read aloud helps with language acquisition and comprehension. Yet another application of screen-reading technology is the growing trend of eyes-free computing, such as listening to driving directions or email, or interacting with entertainment devices, while driving a car.


This dynamic ecosystem of services and devices needs to be engineered so all the pieces work together. Our engineering approach at Microsoft is one of inclusive innovation. The principle behind inclusive innovation is that the entire ecosystem of products and technologies needs to be designed from the ground up to be usable for everyone. This will result in robust solutions that will benefit a broad population. To build accessible technology from the ground up requires dedication across the entire software development cycle. From product planners to the engineers, the teams need to incorporate accessibility into their fundamental approach and mindsets. At Microsoft, our accessibility initiatives include outreach, education, and research with public and private organizations. These collaborations are key to delivering accessible technology and reaching our goal of educating others who are creating technology solutions.


Senior Program Manager, Accessibility Business Unit

Microsoft Corporation

Redmond, Washington

No free energy

“Accelerating the Pace of Energy Change” (Issues, Winter 2011) by Steven E. Koonin and Avi M. Gopstein is a refreshingly frank look at the challenge we face to protect our climate’s and nation’s futures. We in the United States are likely to assume that as a nation we can accomplish anything if we have the will to do so. After all, we designed the nuclear bomb in less than 5 years and accomplished the goal of the Apollo program in less than 10. But these projects constructed a few items, albeit very complex ones, from scratch. As the article points out, the existing energy system is huge, even by U.S. government standards. It consists of an enormous capital investment in hardware, matched by a business strategy that generates a modest but reliable return on investment.

It’s tempting to hope that one or more technical innovations will be discovered to solve the problem: cheaper solar cells, economical means to convert grass into ethanol, or inexpensive CO2 sequestration. As an applied scientist, I enthusiastically endorse R&D to improve all potential contributors to our future energy supply and energy conservation. But if we follow the authors’ reasoning, technical innovations can contribute only a small part of the solution. Even after the benefits of an innovation are obvious, there will be a long delay before the capital structure catches up with it; that is, we must wait for existing equipment, which has already been paid for, to approach the end of its useful life and require replacement.

The alternative, investing in new equipment and infrastructure before the normal replacement cycle, is expensive, as is forcing the use of less economical alternative energy supplies. The money will not come from existing utility company profits, nor from current government revenues. It must be provided by citizens, either through increased taxes or increased energy costs. There is no free lunch or free green energy. It is time for our political leaders to tell us honestly that it’s going to cost us a lot to preserve the future for our grandchildren. It is also time to stop spending precious resources on illusions of green energy, like corn ethanol.


As the authors point out, essential ingredients for inducing energy companies to make changes are stability and predictability. Unfortunately, the U.S. Congress rarely commits itself even one year ahead. That matches poorly with energy investments whose useful life may be 50 years. The only alternative I can imagine is to formulate a long-term plan that receives sufficient public endorsement that future legislators are hesitant to abandon it. There are precedents; each is called a “third rail of American politics.” One requirement of such a plan is absolute honesty: If we agree to pay the cost of such a plan, we don’t want to be surprised later, except by savings we didn’t expect. Please don’t tell us about savings that may never appear and don’t assume that the economy will always remain at peak levels.


1032 Skylark Drive

La Jolla, California

Telling science stories

I see considerable irony in the fact that Meera Lee Sethi and Adam Briggle (“Making Stories Visible: The Task for Bioethics Commissions,” Issues, Winter 2011) begin their analysis of the role of narrative in explaining science with a story of their own: a story about David Rejeski’s childhood fascination with Captain Marvel, ham radio, and rockets. To do so mythologizes their human subject (Rejeski) just as surely as Craig Venter’s analogies serve, in the view of these authors, to tell us a fairy story about synthetic biology. We are invited here to see Venter as an evil genius bent on misleading the public by oversimplifying synthetic biology and downplaying its risks, while Rejeski comes across as the authentic superhero who can bring him to task for this transgression.

A scientific journal article is, in its own way, a narrative story, with a tendency to mythologize its subject: the experiment or study that it reports. Everyone working in science knows that research does not proceed as neatly, cleanly, or predictably as the tersely worded research publications that survive peer review tend to suggest. So it is not just “the public” (whoever they are) that needs stories to explain the complex nature of scientific truth. Scientists tell stories to one another all the time. The problem for the rest of us often amounts to deciding which stories we should believe. On this point I agree with Sethi and Briggle.

I also agree that there is money in synthetic biology, and that Venter and others can certainly smell it. What I am less certain of is whether Rejeski’s use of scary images from science fiction helps his credibility as a spokesperson for “the public.” He may hope that such images can scare regulators into fearing a panicked populace, thus pushing for more aggressive regulation, but this is a rhetoric that may be self-defeating to the extent that it suggests public fears are simply silly.

As someone who taught media studies for 20 years, I know how easy it is to mistake popular-culture images for what various publics are actually thinking. Worth noting in this context: Research by Michael Cobb and Jane Macoubrie at North Carolina State has suggested that Americans who have read Prey might be less fearful of nanotechnology than those who have not, a phenomenon probably attributable to the fact that science fiction fans tend to like science.

Indeed, Americans in general tend to like science, and I know of no hard evidence that they fear synthetic biology. They certainly do not fear nanotechnology, which in some ways, as Rejeski’s shop has helped publicize, perhaps they should. Science fiction is one of the few truly popular forums in which our hopes and our fears about new technology can be explored, but its significance should not be overstated. As someone who would like to see a stronger voice for various publics in making science policy, I believe we should think more carefully about how public opinion is actually formed, as well as how it is best consulted. Media content is not “what people think.”


School of Environmental and Public Affairs

Editor, Science Communication

University of Nevada, Las Vegas

Las Vegas, Nevada

Reversing urban blight

Michael Greenstone and Adam Looney present an excellent overview of how economists think about the household-level consequences of local job destruction (“Renewing Economically Distressed American Communities,” Issues, Winter 2011). During a deep recession, job destruction increases and job creation slows. Those who own homes in cities that specialize in declining industries will suffer from the double whammy of increased unemployment risk and declining home prices. Poverty rises in such depressed cities. In such a setting of bleak job prospects for young people, urban crime, unwed pregnancy rates, and school dropout rates will rise, and a culture of poverty is likely to emerge.

Empirical economists continue to try to identify effective public policies for reversing such blight. The broad set of policies can be divided into those aimed at helping the depressed place and those aimed at improving the quality of life of the people who live in the place. Greenstone and Looney sketch out three innovative proposals. The first is place-based, whereas the second and third are person-based.

I am least optimistic about the beneficial effects for depressed communities from introducing empowerment zones. Rents will already be quite low in these depressed areas. I am skeptical about whether a tax cut and grants would lure new economic activity to the area. It is more likely that the new tax haven would attract firms that would have located within the city’s boundaries anyway but now choose the specific community to take advantage of this tax break. The intellectual justification for luring firms does exist in the case of firms that offer sharp agglomeration benefits. In his own research, Greenstone (along with Moretti and Hornbeck) has identified cases of significant beneficial spillovers to other local industries from luring specific plants.

I have mixed feelings about the proposal to retrain displaced workers. James Heckman’s evaluation of the Job Training Partnership Act of the 1990s convinced me that the returns from such programs for adult workers are low. I wish this were not the case.

I am most optimistic about the potential benefits from the mobility bank. The United States consists of hundreds of local labor markets. From a macro perspective, we need young workers to move from depressed areas to booming areas. The mobility bank would help to finance the short-run costs of making such a move.

Although such a mobility bank helps the people, how can we help the depressed cities? Depressed cities feature low rents. New immigrants often seek out such communities. Utica, New York, has experienced an infusion of immigrants from Colombia and Somalia. The United States has a long history of immigrant success stories, and increased immigration might be one strategy for revitalizing these cities.

Housing demolition in blighted neighborhoods is a second strategy for reducing local poverty. Housing is highly durable. When Detroit was booming in the 1960s, it made sense to build houses there, but now Detroit has too many houses relative to local labor demand. Cheap housing can act as a poverty magnet. The mayor of Detroit recognizes this point and has instituted a policy of knocking down low-quality homes and building new green space.


Professor of Economics

University of California at Los Angeles

Los Angeles, California

Cite this Article

“Forum – Spring 2011.” Issues in Science and Technology 27, no. 3 (Spring 2011).
