Forum – Fall 2013
Energy for global development
In “Making Energy Access Meaningful,” Morgan Bazilian and Roger Pielke Jr. (Issues, Summer 2013) raise important questions about what we mean by access to electricity. As they note, the World Bank and its 14 partners, who designed the Global Tracking Framework (GTF) to measure progress toward the three goals of the Sustainable Energy for All (SE4ALL) initiative, have come up with a five-tier definition of access. They argue that even the GTF’s top Tier 5 household electricity access, which includes the ability to use high-power appliances, translates into a comparatively low level of household consumption, at just 2,121 kilowatt-hours (kWh) per year.
The multi-tier framework was developed precisely to counteract a tendency in some quarters to set the access bar too low by counting a solar lantern or a small solar home system as equivalent to 24/7 grid power. By differentiating among these solutions, the multi-tier framework allows off-grid solar to be acknowledged as a step toward but not the final destination of energy access.
A comparison of the minimum household electricity consumption of 2,121 kWh a year at Tier 5 with average household consumption shows it to be about right. It’s about half of the average household electricity consumption in Greece (3,512 kWh), Spain (3,706 kWh), Germany (3,788 kWh), South Korea (4,089 kWh), Bulgaria (4,650 kWh), and Japan (4,826 kWh) and nearly the same as that in Italy (2,527 kWh). It is about twice the level of power consumption of connected households in India (901 kWh) and China (1,150 kWh).
The developed-country figures cited by the authors refer to overall per-capita electricity consumption for residential and nonresidential purposes, including that for cooking, productive uses, and community requirements. This cannot be compared with GTF benchmarks for household residential electricity consumption. To reflect these other energy needs, the GTF report includes a framework for measuring access to energy for cooking. It also acknowledges the importance of access to energy for productive uses and community applications and commits to developing similar multi-tier frameworks for them as well.
I share the sense of justice that animates the authors’ contention that an electricity access goal of modest ambition would reflect a “poverty management” rather than a “development” approach. But their contention is misplaced. The level of ambition for SE4ALL goals in each country needs to be defined by governments, in consultation with civil society, entrepreneurs, development agencies, and the public. The GTF’s multi-tier framework for access seeks to facilitate this process so that each country can define its own electricity access targets.
The chasm between high ambition and present feasibility must be bridged by pragmatically designed intermediate steps. Such steps include interventions that would allow children to study at night, village businesses to stay open after dark, and rural clinics to provide basic services to those who heretofore have had none. These intermediate steps take us closer to the ultimate goal of full energy access. They are worth pursuing, and they do not exclude pursuit of the goals that lie beyond them.
Morgan Bazilian and Roger Pielke Jr. get the essential point right: that meaningful access to modern energy services must go beyond lighting to other productive uses, such as water pumping for irrigation. Like many observers, however, they seem daunted by the scale of investment required, which they estimate at $1 trillion to achieve a minimal level of energy access and 17 times more to reach a level comparable to that in South Africa or Bulgaria.
Those numbers are very large compared to the amount that might plausibly be available through conventional development assistance, but that is the wrong lens to use. Electricity is a commodity that even very poor people are willing to pay for. Indeed, they are already paying more than $37 billion a year for dirty, expensive fuels (kerosene for lighting and biomass for cooking), according to an International Finance Corp. report. With the right financing, solar energy in rural areas is cheaper than these sources or diesel fuel for generators. The availability of energy is a spur to economic development that can quickly become self-sustaining.
Providing access to modern energy services should thus not be seen as a burden for governments to bear, but as a multi-trillion-dollar business opportunity, and there is plenty of capital available for such investments, prudently made. The best way for the world, through national governments, “to ensure that the benefits of modern energy are available to all” is to create safe environments for private investment and to use limited public funds to reduce the political risks to investors and the cost of borrowing. Facilitating this pathway is one of the principal objectives of Sustainable Energy for All, the farsighted initiative launched by United Nations Secretary-General Ban Ki-moon and now co-led by World Bank President Jim Yong Kim.
The central message of the paper by Morgan Bazilian and Roger Pielke Jr., at least to me, is the explosion of demand and the diverse modes of energy consumption that are possible and should be anticipated as energy access (and in particular electricity access) is expanded. This is an important observation that ties together many stories: (1) the value of energy access for both basic services and economically productive activities; (2) the need to analyze and to plan around unmet demand, so that a small taste of energy services does not lead to unrealized expectations and then dissatisfaction; and (3) the complexity of building and planning an energy system capable of providing services reliably once the “access switch” has been turned on.
The question is not the value of energy access or the need for interactive “organic” planning of the emerging electricity systems to be as resilient as possible against surges in demand, technical problems in delivering electricity, or even the problems of managing the market (payment, theft, etc.). These are key issues, but all energy systems deal with them.
What is unresolved, and where readers of Bazilian and Pielke’s paper need to keep a watchful and innovative eye, is the set of tools that energy planners use to build adaptive energy networks. Although “simply” plugging into the grid may be the ideal (or in the past has been the aspirational goal), the record of the past decades is that this has not been technically, economically, or politically easy. National grids have not expanded effectively in many nations, in terms of coverage, service reliability, or cost, so new models are needed.
Micro- or mini-grids have gained tremendous popularity because, although they do require more local knowledge, they are faster to install, more flexible, and suffer less from the “tragedy of the commons,” where people simply tie in and feel (culturally or economically) less responsible for being good citizens. For example, walking through the Kibera slum in Nairobi in the early morning is an adventure in “energy fishing,” as people who have illegally tapped into the distribution system unhook their fishing lines from the overhead wires.
Some mini-grids work very well, providing service and responding to local needs. A great many, however, are in technical disrepair, or the tariff assessment and collection scheme is not working. The lesson from these mini-grids is not that they do not work or cannot provide the range of services (not just lighting, but energy for schools, clinics, and, in many eyes most critically, businesses) but that engagement, planning, and service quality must be brought to these systems.
In some of the mini-grids I have been involved in designing and assessing in Kenya and Nicaragua, the lessons have been spectacularly good. Community engagement in the planning and operation of these grids means that they have been able to flexibly respond to changes in demand, and formerly powerless villagers have ramped up their demands and expectations.
The key issue for groups such as Sustainable Energy for All, researchers, and nongovernmental organizations and public officials is to find ways to support this new mode of energy access. An important goal for the coming few years is to develop a true support system so that communities, utilities, and ministries can count on these mini-grids to deliver an expanding set of services. My laboratory at the University of California, Berkeley, is engaged in efforts ranging from publishing journal papers, to creating wikis, to running training efforts in off-grid and recently connected communities. What is needed is a large community working together on all of these issues, establishing goals for technology standards, building data sets on different market and tariff schemes, and compiling a list of new hardware needs to increase services and service quality in this burgeoning field.
Morgan Bazilian and Roger Pielke Jr. provide a valid long-term perspective on the energy needs of the developing world, but the critical question is what practical difference it would make to think in the more ambitious, longer-term way that they propose. We don’t do that for any of the other United Nations Millennium Development Goals, such as those for health, education, and water. Why, then, should we do so for energy? What investment choices or policies would we approach differently?
The authors suggest that it would make a big difference, but they do not provide the detail or analysis necessary to understand what changes would result. A related concern that they do not consider is that their effort to emphasize the enormity of the long-term challenge could make the task seem so daunting that it will discourage the development agencies from taking the smaller steps that must be taken now to put the developing nations on the path that will ultimately lead to an adequate energy supply. Thinking about the trillions of dollars that will be needed over many decades rather than the billions of dollars that can make a difference now could be paralyzing.
Marshaling the resources to make incremental improvements has been hard enough. For example, the effort to improve the performance of simple cookstoves has yielded good results but could use additional resources to fulfill its potential. Until this can be accomplished, is there any point in worrying about what it will cost to provide everyone with an electric stove?
One could approach this question from another direction by examining the most successful examples of electrification and energy modernization, such as China’s. Was its success the result of focusing on an ambitious long-term vision? Or did it progress one step at a time by addressing immediate needs and achievable goals, and then setting new goals that built on each accomplishment?
Making job training accountable
Louis S. Jacobson and Robert J. LaLonde (“Proposed: A Competition to Improve Workforce Training,” Issues, Summer 2013) have led the development of accountability systems since the 1970s. They began their work in the heyday of employment and training programs. At that time, their focus was on increasing the accountability of Comprehensive Employment and Training Act (CETA) programs. In those days, the state of the art in this field was to use short-term training programs to assist individuals in dealing with employment challenges. Jacobson and LaLonde realized that “second-chance” training programs such as CETA weren’t sufficient to confront the challenges that displaced workers and low-income individuals faced in finding employment. Instead, it was more important to provide individuals with an adequate first chance that prepared them to find and maintain stable employment in high-demand fields.
Since the days of CETA, we have experienced a shift away from short-term training programs and toward educational solutions for promoting employment. This shift has been driven mainly by skill-biased technological change in the economy. Over the past several decades, technology has been automating repetitive tasks and activities on the job. As a result, more and more of the jobs left to people are nonrepetitive and require skills beyond those produced by a high-school education. The resulting increase in entry-level skill requirements has made postsecondary education and training the gatekeepers for access to jobs that pay middle-class wages.
In truth, CETA never had the horsepower to accomplish its objectives. The $2 billion to $5 billion spent annually on CETA programs was minuscule compared to the more than $300 billion that is now spent just on postsecondary education.
The essence of Jacobson and LaLonde’s work on connecting education and training with labor market outcomes has always stretched the boundaries of the state of the art in accountability in education and training, and it is now receiving greater attention in the wider discussion. As they point out, the surest way to efficiency and maximum choice without interference in complex institutional and consumer-driven decisions is transparency in measured outcomes. Yet, although some progress has been made in enhancing the accountability and transparency of programs at for-profit institutions, this model needs to be expanded to all of postsecondary education and training.
All the necessary data for the implementation of college and career information systems already exist. This information just needs to be interconnected and made accessible to prospective students and trainees in a form that they can understand and use to make better decisions. This information also should be made available according to the program of study, for as Jacobson and LaLonde point out, there are vast differences in the costs and outcomes of different postsecondary career and technical education programs, and individuals are often not well equipped or sufficiently supported to make good choices in this regard. The Student Right to Know Before You Go Act introduced by Senators Ron Wyden (D-OR) and Marco Rubio (R-FL) is the legislative gold standard for making this type of information widely available to the public. In no small part, this effort is a result of Jacobson and LaLonde’s work in promoting transparency and accountability in educational outcomes.
Are there enough STEM workers?
Hal Salzman (“What Shortages? The Real Evidence About the STEM Workforce,” Issues, Summer 2013) ably punctures several myths about the scientific and engineering labor market, but his thoughtful analysis does not address whether U.S. students in general are learning science and math well enough to prepare them for good jobs. As Salzman points out, the claims of extensive science, technology, engineering, and mathematics (STEM) worker shortages are hard to reconcile with engineers and scientists experiencing only average wage growth and with the number of science and engineering graduates rising as fast as the number of job openings. Science Ph.D.s take nearly eight years of graduate study to complete their degrees, and then many languish for years in modestly paid postdoctoral positions, hoping for one of the few openings for university professorships. Another much-heralded shortage—of high-school math and science teachers—is apparently a problem of high attrition and is not due to low absolute numbers of newly qualified math and science teachers.
Still, engineers and scientists earn well over double the average level of earnings. Mean earnings for most science and engineering occupations are as high as the mean earnings of lawyers, although well short of the mean earnings of physicians. Thus, even though shortage claims about engineers and scientists lack convincing evidence, most individuals are likely to achieve high rates of return on their investments in attaining those degrees.
Salzman’s analysis leaves open a key question: Would doing a better job of teaching STEM subjects increase U.S. productivity growth? Here the research is less clear, especially for intermediate-level jobs that do not require a BA. Employers looking for workers who qualify as technically competent machinists or welders sometimes ground their complaints of shortages in terms of weak academic skills. Improving STEM education, especially through career and technical education programs and apprenticeships, might increase the job opportunities and enhance the careers of many young people.
For workers attaining advanced science and engineering degrees, the impact of an expanded supply is not entirely clear. It is no coincidence that Israel, boasting one of the largest numbers of scientists and engineers per capita and one of the highest shares of gross domestic product spent on R&D, is also at the forefront of innovation and high-technology entrepreneurship. But the connections may not be supply-driven. Instead, it may be that added R&D spending is creating the demand for an increased supply of STEM-trained workers, many of whom in turn spur innovation. By implication, a good way to encourage more young people to major in STEM subjects and potentially increase innovation is to raise both public and private R&D spending.
Finally, I want to lend my full support to Salzman’s position on immigration. Rather than focus visas on temporary workers who work for specific firms in the information technology industry, U.S. policies should promote a balance in terms of the skills of new immigrants on a permanent basis. Because the family unification strategy already attracts low-skill workers, the remaining programs should provide adequate places for medium- and high-skill immigrants.
The STEM workforce often inspires extreme assertions. In 2007, Microsoft’s Bill Gates testified that there is a “terrible shortfall in the visa supply for highly skilled scientists and engineers” and asserted that “it makes no sense to tell well-trained, highly skilled individuals—many of whom are educated at our top universities—that they are not welcome here.” Gates urged Congress to eliminate the 65,000-person-per-year H-1B visa cap. At the other extreme, University of California, Davis, Professor Norm Matloff says the “H-1B [program] is about cheap labor.” Foreign workers tied to a particular U.S. employer for up to six years, and hoping to be sponsored for an immigrant visa by their U.S. employer, are young, “loyal,” and cheaper than older U.S. workers.
Conventional wisdom, according to economist John Kenneth Galbraith, is “commonplace knowledge” that the public and experts believe to be true but may not be true. Galbraith used the term to highlight resistance to new ideas in economics, but it also applies to claims of labor shortages in agriculture, health care, and STEM occupations, where employers and blue-ribbon commissions warn of shortages and offer solutions that include admitting more guest workers and immigrants.
Hal Salzman and Lindsay Lowell are empirical sociologists who have examined the data and found that the conventional wisdom about STEM labor shortages is wrong. Consider three facts. First, the United States produces plenty of graduates with STEM degrees. Only one of every two graduates with STEM degrees is hired into a STEM job, and only two of three computer science graduates are hired in information technology. Second, STEM careers do not promise high lifetime earnings to high-ability students. STEM college courses require ability and discipline, but many STEM occupations offer neither financial nor employment stability, as advanced degree holders in science often fill a series of low-paid postdoc positions before getting “real jobs,” and many engineers face industry-wide layoffs that make re-employment in engineering difficult.
Third, labor markets work. Economics textbooks do not include chapters on “shortages.” Instead, they discuss how changes in prices bring the supply and demand for goods into balance, and changes in wages balance supply and demand in the labor market. Not all markets respond instantly, and labor markets may respond slowly if training is required, but people do respond to market signals, as is evident today in the rising number of U.S. petroleum engineers. On the other hand, jobs offering wages that hover around the minimum, few work-related benefits, and seasonal and uncertain work explain why over three-fourths of U.S. farm workers were born in lower-wage countries.
The United States is a nation of immigrants that has welcomed millions of newcomers to the land of opportunity. In recent decades, employers have asked for guest workers who lose their right to be in the United States if they lose their jobs. Whether phrased as overcoming brain or brawn shortages, admitting guest workers to fill labor shortages is government intervention that slows what would otherwise be normal labor market adjustments. In the case of agriculture, rising wages would probably reduce the demand for labor via mechanization and imports; in STEM, rising wages would attract and keep more U.S. workers in STEM occupations.
Hal Salzman’s article questions the prevailing view that America has a shortage of STEM workers. Even a narrow definition shows an annual supply of STEM graduates 50 to 70% greater than the demand. Similarly, labor market data are not compatible with shortages: STEM salaries have been stagnant since the 1990s, and unemployment has been rising.
The pervasive public view that the United States has a serious STEM shortage is due in part to employers’ very successful public relations campaign. It obviously would not be politically acceptable to argue for large numbers of temporary foreign workers (TFWs) to suppress wages or to avoid raising wages to overcome labor shortages (as Salzman shows petroleum engineers’ employers have done). TFW advocates obfuscate their real motives by arguing that foreign workers strengthen competitiveness, which they define as lowering labor costs by reducing wages rather than by increasing productivity and quality, which clearly would be better for workers and the nation.
Advocates buttress their case for more TFWs with such assertions as:
(1) Advocates assert that there are not enough qualified Americans, without admitting that serious efforts have not been made to recruit Americans or that employers resist raising wages or training domestic workers if TFWs are available. The presumption that successful business leaders know what they are talking about gives them unwarranted credibility on this issue.
(2) TFW advocates often cite the real advantages of immigration without revealing the sharp differences between indentured TFWs and immigrants who have the right to freely change employers and become citizens, which TFWs cannot do. TFW advocates rarely propose more legal permanent residents, and the companies hiring the most TFWs sponsor few of them for permanent residency.
(3) The quick exhaustion of annual H-1B quotas is commonly cited as evidence of STEM shortages, although all it really shows is a demand for indentured TFWs who can legally be paid below-market wages.
(4) The absence of reliable data, research, and agreed-upon definitions and methodologies for calculating shortages enables these questionable claims to persist despite fairly consistent evidence to the contrary.
With globalized labor markets and aging populations, TFWs and immigrants can boost productivity, growth, and living standards provided that they fill real shortages, receive market-based wages, and complement U.S. workers. It might even be in the national interest to temporarily suppress wage increases for highly skilled workers in short supply. But these decisions should be transparent and made on the basis of evidence, not interest-based assertions. Otherwise, the importation of foreign workers could depress wages, divert Americans from STEM careers, distort labor markets, undermine labor market institutions, and generate social conflict.
An independent professional advisory commission is needed to provide acceptable definitions, better data and research, and objective evidence-based recommendations to the President and Congress about the number and composition of temporary and permanent foreign workers to be admitted each year to promote value-added development. We can learn much about how to structure such a commission from other immigration nations, especially Canada, Australia, and the United Kingdom.
MOOCs plus
Thank you for publishing the William B. Bonvillian and Susan R. Singer article, “The Online Challenge to Higher Education” (Issues, Summer 2013). They provide needed perspectives that clarify the rapidly developing pace and place of online education in postsecondary education and, by extension, K-12. Cyber-enabled or online learning environments, built on carefully considered, evidence-based design principles drawn from the learning sciences, are places where new and promising learning ecologies can be found. As Bonvillian and Singer remind us, the structures, delivery, assumptions, and habits of education are changing.
Along with my colleagues, Sarah Michaels (Clark University), Brian Reiser (Northwestern University), and Cynthia Passmore (University of California, Davis), I am involved in the design of what Bonvillian and Singer call a “blended learning” environment. We call our blended model the Next Generation Science Exemplar PD System, or NGSX. Our particular environment is intended for K-12 teachers of science as well as pre-service science education students. NGSX participants have 24/7 access to a highly resourced Web-based platform that supports and supplements 30 hours of face-to-face learning sessions. In these face-to-face sessions, tablets, notebooks, or smartphones are used to connect to the NGSX Web platform and the instructional units contained in it. Each face-to-face session blends highly interactive physical and virtual space using the NGSX environment. Learning goals target strengthening teachers’ understanding of science content and the epistemic practices of science, such as making one’s thinking public, model-based reasoning, and using the language of science, so that teachers can begin to transfer both science knowledge and practices to their classrooms.
NGSX comes in response to at least two learner and learning realities of the 21st century. First, we access, visualize, learn, and share information in vastly different ways and for different purposes than we did even a decade ago. Second, the expectations for workforce skills and abilities to support our current knowledge-centric society have changed. As Bonvillian and Singer suggest, knowledge is no longer just acquired; it is acquired with the expectation that an action will follow, whether that is acumen in solving problems, reasoning from evidence, designing models, or undertaking critical assessment. The junction of this knowledge-to-action, action-to-knowledge interface provides fertile ground for envisioning various models of online learning (blended or not).
I agree with Bonvillian and Singer that science or the STEM subjects are a good bet for beginning this envisioning process. There the knowledge-to-action dynamic is more apparent, more visible than in the humanities or social sciences. Virtual learning environments can serve the aims of STEM well with promising functionalities for interactivity, visual representations, data collection and retrieval, model building, and space for “figuring things out.” At the same time, in our knowledge-centric world, the humanities, social and behavioral sciences, health sciences, and communication sciences along with other discipline areas are needed in these envisioning conversations. Correctly, Bonvillian and Singer write, “online technologies alone do not improve learning outcomes. Outcomes depend on how the technology is used and placed in a larger learning context.”
Governing geoengineering research
“Time for a Government Advisory Committee on Geoengineering Research” by David E. Winickoff and Mark B. Brown (Issues, Summer 2013) is a welcome contribution to the geoengineering debate. Not only does it provide a concise overview of the main issues raised generally and from a governance perspective; it is also one of the few articles that address the institutional side of geoengineering governance and elaborate a rationale for a specific proposal. The following comment is intended to provide food for thought for the further development of these ideas.
The proposal for a standing advisory body is made from a U.S. perspective, while acknowledging that geoengineering raises international issues. Adding international governance to the picture could facilitate acceptance, while an appropriate vertical division of labor between the national [and European Union (EU)] and international levels could address concerns about international micromanagement.
At all levels, academic and political discussion on geoengineering governance should be based on explicit objectives and criteria that any proposed governance arrangements are meant to pursue, balance, and fulfill. For instance, the Oxford Principles mentioned in the article do not include or address the established concept of a “precautionary approach.” From a different EU perspective, this approach might well be one of the values to be considered, not despite, but because of, the different connotations and implications it might have from the U.S. perspective. The potential need for tradeoffs between the criteria and objectives pursued also has to be considered.
Regarding the scope of potential governance, not all concepts currently discussed as geoengineering might require an elaborate governance structure or institutional anchoring at this stage. At the present stage of knowledge and existing governance, solar radiation management techniques deserve priority attention, including at the international level.
In terms of the mandate and output of the proposed committee, Winickoff and Brown stress the purely advisory function of the proposed committee at the national level. It is interesting that, in apparent contrast, the proposal envisages that the committee could “build norms” in the context of international communication and coordination. This would call for further clarification.
An advisory function makes sense in terms of preserving the political legitimacy of and responsibility for subsequent decisions. Scientific input should in principle be separate from political decisionmaking, as it is essentially a political decision whether pursuing climate protection can justify the potential and actual risks posed by geoengineering activities. This separation is not undermined by the authors’ argument that the input provided should take into account considerations that go beyond a mere balancing of safety and efficacy.
Basic or applied: Does it matter?
I have followed with interest the dialogue on the linear model of innovation between Venkatesh Narayanamurti and colleagues (“RIP: The Basic/Applied Research Dichotomy,” Issues, Winter 2013, and Forum, Summer 2013) and Neal Lane (Forum, Spring 2013). Both parties provide sociological insight into the obsolescence (empirical if not rhetorical) of the linear model, as well as into an alternative model for guiding national decisionmaking for research policy in the areas of energy, health, basic science, defense, and, more broadly, U.S. competitiveness. It has been refreshing to see what historically has been the most important question for research policy revisited and addressed so directly and thoughtfully. But the discussion thus far, while valuable sociologically (and mostly in a Kuhnian way, addressing discrepancies between the linear model and how innovation actually occurs in the lab), provides no new insight for research policy design and implementation.
This is a fair criticism for two reasons. First, both parties ignore the bulk of the academic literature on the sociological obsolescence of the linear model (see, for example, Nelson and Winter, Kline and Rosenberg, Rip, and Etzkowitz in Research Policy, Science & Public Policy, and Science, Technology, & Human Values). Second, both parties call for government “experiments” in organizing and managing scientists and engineers in ways that acknowledge the discovery/invention dynamic they promote, while ignoring the fact that such “experimentation” has long been underway in the national mission agencies.
One need only look at the National Science Foundation’s (NSF’s) Research Applied to National Needs program (implemented during the Nixon administration); the NSF Engineering Research Center (ERC) program’s three-plane strategic planning model (which is neither hierarchical nor linear despite how it sounds); more recent efforts by the National Institutes of Health to map biological systems at the nanoscale (such as its Nanomedicine Development Centers); NSF’s I-Corps and the Department of Energy’s (DOE’s) Energy Innovation Hub programs (both designed to create regional networks of innovation); and DOE’s Energy Frontier Research Centers (which employ a model very similar to the ERC’s).
Although different, what these (and other) U.S. research policies have in common is that they shun the linear model and attempt to coordinate diverse sets of scientists and engineers from across institutions, disciplines, and sectors to address difficult socioeconomic problems requiring nonhierarchical discovery and invention. Accordingly, the most important task for research policy scholars such as Narayanamurti et al. and Neal Lane becomes not moving from rational to descriptive models of innovation but rather developing a predictive understanding of when to support individual investigators versus boundary-spanning research centers or networks and, in the case of the latter, how to get the incentives right to ensure innovation in the national interest.